Battle of the gods! An inventory of Chinese "big model" players


Shulou (Shulou.com) 11/24 Report --

Since the beginning of the year, the global AI large-model industry has been in full bloom, with players advancing shoulder to shoulder. Tech giants such as Microsoft, Google, Meta and Baidu are racing to catch up with one another, opening the prelude to a large-model "arms race". With new challengers entering the field, the profile of large models keeps rising, and many AI companies have announced their entry, embracing a new round of productivity revolution.

In China, building ChatGPT-like products and a Chinese version of OpenAI, an unrivaled "sexy story", has ignited the entrepreneurial enthusiasm of Internet companies and tech heavyweights, and the "battle of the gods" of AI 2.0 has begun.

In February this year, Wang Huiwen, former co-founder of Meituan, posted an "AI hero list" and made a high-profile entry into the large-model arena. After that, former Sogou CEO Wang Xiaochuan, former JD.com AI head Zhou Bowen, Alibaba technology vice president Jia Yangqing, known as "the leading Chinese figure in the field of AI frameworks", Mobvoi founder Li Zhifei and other heavyweights rushed to join the entrepreneurial wave set off by ChatGPT.

There are also many entrepreneurial teams with glittering backgrounds, among which the Tsinghua contingent lives up to expectations in the first echelon. Zhipu AI founded by Professor Tang Jie of Tsinghua University, Lingxin Intelligence founded by Professor Huang Minlie, DeepLang Technology from the team of Sun Maosong and Liu Zhiyuan of Tsinghua's NLP laboratory, and Yang Zhilin's Recurrent AI have all been given hope by leading VCs of upending the industry, and the entrepreneurial train is picking up speed again.

Internet companies are unwilling to miss the next era. Baidu has taken the lead in launching "Wenxin Yiyan", while Huawei's "Pangu" large model and Tencent's "HunYuan" AI model are ready to go. In addition, Alibaba's DAMO Academy is reported to be preparing a ChatGPT-like product, ByteDance has been revealed to be laying out large models, and iFLYTEK is said to be bringing ChatGPT-like technology to its AI learning machine products in May this year.

A paradigm shift in productivity is under way: the era of computing power has quietly begun, and a revolution brought about by AIGC is unfolding. Battle of the gods! Who will win the final battle for the "Chinese version of ChatGPT"?

First, the startup camp: Wang Xiaochuan's team

According to public information, in June 2022 Wang Xiaochuan set up an artificial intelligence technology company, Beijing Baifang Zhongzhi Information Technology Partnership, of which he owns 80%. When media asked Wang Xiaochuan whether he was returning to entrepreneurship to build a large AI model, he admitted that he was "in rapid preparation".

"State ℃" learned from sources close to Wang Xiaochuan that Wang Xiaochuan is in intensive communication with some important people in industry and academia recently. the project has not yet been made public, and it is said that it has secured a start-up capital of US $100 million, known as the "Friends and relatives Wheel."

Wang Xiaochuan won a gold medal at the International Olympiad in Informatics in 1996 and entered Tsinghua University. After joining Sohu, he led the launch of Sogou Search in 2004 and, at 27, became the youngest senior executive in China's Internet industry. Starting in 2015, Wang Xiaochuan began to tell an AI story centered on "natural interaction + knowledge computing", launching products such as AI translators.

Sogou listed on the New York Stock Exchange in November 2017 and remained listed for more than four years. Its revenue peaked in 2019 at US$1.17 billion, with a net profit of US$105 million. Revenue then fell to US$837 million, with a loss of US$109 million. In the first two quarters of 2021, revenue fell by more than 44 per cent year on year.

In early October 2021, Sogou completed its privatization and delisted from the New York Stock Exchange. Earlier, on September 4, 2021, Sogou had announced the completion of its merger with Tencent; upon completion, Sogou became an indirect wholly-owned subsidiary of Tencent Holdings. On October 15, 2021, Wang Xiaochuan sent a farewell email to Sogou, stepping down as CEO and retiring on a high note.

With both technocratic ideals and commercial wisdom, Wang Xiaochuan is undoubtedly a dark horse on the big model track.

Lightyear Beyond

Wang Huiwen, former co-founder and senior vice president of Meituan, announced his entry onto the AI large-model track, which set off heated discussion in the tech and Internet industry.

On February 13, Wang Huiwen posted an "AI hero list" on social media, announcing his plan to create China's OpenAI and the establishment of Beijing Lightyear Beyond Technology Co., Ltd. (hereinafter "Lightyear Beyond"), putting up US$50 million of his own money for a 25% stake at a valuation of US$200 million. Wang Huiwen further said that the next round of financing already had US$230 million subscribed by top VCs.

On the evening of March 27, it was rumored that the company founded by Wang Huiwen would acquire Beijing-based OneFlow Technology in a share swap and build a Chinese version of ChatGPT on the basis of the existing team. According to public information, OneFlow Technology was founded in January 2017 and claims to be the only startup team in the world developing a general-purpose deep learning framework. Competitors to its product OneFlow include Baidu's PaddlePaddle, Google's TensorFlow and Facebook's PyTorch.

It was also reported that Wang Xing, founder and chairman of Meituan, will personally participate in the Series A financing of Wang Huiwen's AI startup and become a director. "The AI large model not only excites me with the huge productivity it will create, but also worries me about its future impact on the whole world," Wang Xing once wrote on WeChat Moments. "Lao Wang and I have been on the entrepreneurial road for nearly 20 years, and since he is determined to embrace this big wave, I must support it."

Wang Huiwen and Wang Xing co-founded Xiaonei (later renamed Renren) and then co-founded Meituan. When Wang Huiwen announced his retirement from Meituan in December 2020, Wang Xing said in an open letter: "Lao Wang and I are classmates and roommates with shared interests, partners who started businesses together, comrades-in-arms who fought side by side, and lifelong friends with whom I can exchange ideas and talk soul to soul."

Two years later, on the large-model track, the two have come together again to head for a new battlefield.

Project AI 2.0

Another high-profile entrant is AI heavyweight Kai-Fu Lee, who announced the establishment of the Project AI 2.0 company on WeChat Moments on the afternoon of March 19. The same day, the new company was officially listed on the Sinovation Ventures website.

It is understood that Project AI 2.0 is meant to be more than a Chinese version of ChatGPT. Kai-Fu Lee believes AI 2.0 is not just a high-powered chat tool, nor merely AIGC-style image and text generation; Co-pilot and the applications seen today are only the beginning of AI 2.0's capabilities.

He also revealed that the funding and computing power for Project AI 2.0 are gradually falling into place, that options in the new company will be overwhelmingly reserved for the new team, and that the first batch of top talent is being recruited across large models, multimodality, NLP (natural language processing), AI algorithm engineering and research, and distributed computing / infrastructure; candidates are welcome to put themselves forward.

Lanzhou Technology

On March 14 this year, Lanzhou Technology, a large-language-model player, completed a Pre-A+ round of financing led by Beijing Zhongguancun Science City Company, with Sidao Capital and Sinovation Ventures following on. This is the second round of financing Lanzhou Technology has secured within a year, bringing the cumulative total to hundreds of millions of yuan.

Along with the financing news came the language generation model launched by Lanzhou Technology: the Mengzi (Mencius) MChat controllable large model. It reportedly has ChatGPT-like capabilities; built on the original Lanzhou Technology Mengzi model plus N project-specific function engines, the resulting MChat can not only hold dialogues but also translate, generate text, and retrieve information.

Zhou Ming, founder of Lanzhou Technology, is a world-renowned NLP expert and one of the most accomplished Chinese researchers in the field. He was not only the first developer of a Chinese-English translation system in China, but also served in 2019 as president of ACL (the Association for Computational Linguistics), whose conference is among the highest-level venues in the AI field.

According to public data, Zhou Ming studied successively at Chongqing University, Harbin Institute of Technology and Tsinghua University, and served as an associate professor at Tsinghua University. Before starting his business, he long served at Microsoft Research Asia as head of the NLP group and vice president. Between 2012 and 2020, Zhou Ming published more papers in the NLP field than any other researcher in the world.

MiniMax

MiniMax, a somewhat mysterious artificial intelligence company founded in November 2021, has made few public statements since its inception. In the first half of 2022, Mingshi Capital led MiniMax's angel round of financing, according to official news from Mingshi Capital.

In fact, however, MiniMax is one of the strongest multimodal AI large-model startups in China, having built base model architectures for three modalities: text-to-visual, text-to-audio, and text-to-text. It may also be the first startup in China to possess large-model capabilities in all three modalities at once.

MiniMax's first consumer product, Glow, takes a different path from the encyclopedia-style ChatGPT. Glow is positioned as AI virtual-chat social software, focusing on conversation, companionship and other emotional functions, and currently has more than one million users. On the B side, MiniMax plans to open its API to the public this year.

Some media have reported that MiniMax's co-founder is Yan Junjie, former vice president of SenseTime and head of its general intelligence technology, who also served as vice president of the SenseTime Research Institute and built SenseTime's face recognition and smart-city technology systems. In the fields of deep learning and computer vision, Yan Junjie has published more than 100 papers at top conferences and in journals, with more than 10,000 Google Scholar citations.

At present, the MiniMax team exceeds 100 people. The company's core R&D members come from world-renowned universities and top technology companies, with world-class industrial and academic experience in natural language processing, speech, computer vision and computer graphics, a number of globally leading research achievements in artificial intelligence, and hundreds of invention patents worldwide. One third of the team members hold doctorates from the world's top technical laboratories.

Xianyuan Technology

On the evening of February 26, Zhou Bowen, former head of JD.com AI, published an AI hero post recruiting partners to build a Chinese version of ChatGPT, becoming another industry heavyweight to enter after Wang Huiwen and Wang Xiaochuan.

Zhou Bowen has long worked on speech, natural language processing, multimodal content understanding and generation, knowledge representation and reasoning, human-machine dialogue, trustworthy AI, and their applications in industry and digital intelligence. His natural-language representation mechanism combining self-attention with a multi-head mechanism later became one of the core ideas of the Transformer architecture. He also co-authored an NLP paper with Turing Award winner Yoshua Bengio that has been cited more than 2,000 times, and his total citations approach 15,000.
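As a quick aside for readers unfamiliar with the mechanism just mentioned, the snippet below is a minimal, generic sketch of multi-head self-attention as popularized by the Transformer, written in Python with NumPy. It is an illustration only, not Zhou Bowen's specific formulation; the function names, shapes and random weights are all invented for the example.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads):
    # X: (seq_len, d_model); Wq, Wk, Wv, Wo: (d_model, d_model).
    seq_len, d_model = X.shape
    d_head = d_model // num_heads

    def project_and_split(W):
        # Linear projection, then split the result into per-head slices.
        return (X @ W).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    Q, K, V = project_and_split(Wq), project_and_split(Wk), project_and_split(Wv)

    # Scaled dot-product attention, computed independently for each head.
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    weights = softmax(scores, axis=-1)
    heads = weights @ V                                    # (heads, seq, d_head)

    # Concatenate the heads and apply the output projection.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

# Toy usage: 5 tokens, model width 8, 2 heads; random inputs and weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv, Wo = (rng.normal(size=(8, 8)) for _ in range(4))
print(multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads=2).shape)  # (5, 8)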

After leaving JD.com at the end of 2021, Zhou Bowen moved to academia as Huiyan Chair Professor at Tsinghua University, a professor in the Department of Electronic Engineering and director of the department's Collaborative Interactive Intelligence Research Center. He is also the founder and chief scientist of Xianyuan Technology.

In the hero post, Zhou Bowen also described the differentiated path Xianyuan Technology will take in the ChatGPT wave: China's OpenAI needs to explore a new route, vertically integrating from its own foundation model down to applications, with a closed loop across the full user scenario, to achieve the "dual landing" of generative AI technology and business value.

On March 1 this year, Xianyuan Technology announced the completion of an angel round worth hundreds of millions of yuan, led by Qiming Venture Partners and followed by Matrix Partners.

Second, the academic camp: Zhipu AI

Zhipu AI was founded by Professor Tang Jie of the Department of Computer Science, Tsinghua University. Core members of the team took part in developing Wudao, a joint project of Tsinghua University and the Zhiyuan Research Institute, and have rich experience with large models.

In August 2022, GLM-130B, an ultra-large-scale pre-trained language model developed by Tsinghua University's Knowledge Engineering Laboratory together with Zhipu AI, was officially released and outperformed GPT-3 on several public evaluation sets.

Last September, Zhipu AI announced hundreds of millions of yuan in Series B financing, jointly led by Junlian Capital and Qiming Venture Partners, to be further invested in building a high-performance, broadly accessible hundred-billion-parameter model.

Tang Jie, a professor of computer science at Tsinghua University, has just been elected a 2023 AAAI Fellow and is also an ACM and IEEE Fellow. His research covers artificial intelligence, data mining, social networks, machine learning and knowledge graphs, and he has received the ACM SIGKDD Test-of-Time Award (best paper of the decade).

On the day GPT-4 was released, Professor Tang announced on Weibo that ChatGLM, a dialogue bot based on a model with hundreds of billions of parameters, had opened invitation-only internal testing. It is reported to be a hundred-billion-parameter Chinese-English language model with question-answering and dialogue capabilities.

Lingxin Intelligence

Another well-known NLP team from Tsinghua is the Conversational AI (CoAI) group of Professor Zhu Xiaoyan and her student Huang Minlie. The group has successively proposed pre-training models such as SentiLARE and StoryGPT.

Associate Professor Huang Minlie founded Lingxin Intelligence in 2021, focusing on the psychological counseling track and dedicated to building a "super-anthropomorphic" foundation model. In 2022 the team launched the "AI Utopia" dialogue robot, which supports custom AI personas for chat interaction. The team's core members come from Tsinghua University, Carnegie Mellon University, Google and other top universities and companies at home and abroad, and have rich commercialization experience.

As one of the domestic teams with the ability to train ultra-large-scale language models, Lingxin Intelligence positions itself as "China's Character.AI", aiming to provide emotionally accompanying text generation services. The company has completed a Pre-A round of financing.

Huang Minlie himself is a senior expert in natural language generation, an associate professor in the Department of Computer Science and Technology at Tsinghua University, a recipient of the National Science Fund for Distinguished Young Scholars, and deputy director of the Intelligent Technology and Systems Laboratory. His research focuses on artificial intelligence, deep learning, reinforcement learning, and natural language processing topics such as automatic question answering, dialogue systems, and emotional intelligence. He has published more than 100 CCF A/B papers at top or mainstream international conferences and journals including ACL, IJCAI, AAAI, EMNLP, KDD, ACM TOIS, TASLP and TKDE.

The team of Sun Maosong and Liu Zhiyuan from Tsinghua University's NLP Laboratory

In terms of the NLP discipline itself, the team of Sun Maosong and Liu Zhiyuan of Tsinghua is regarded as part of the backbone, coming from the Tsinghua Natural Language Processing and Social Humanities Computing Laboratory (THUNLP).

THUNLP is the earliest and one of the most influential research institutions in China to carry out NLP research. When the laboratory was established in the late 1970s, it was led by Huang Changning, a founding figure of Chinese NLP; Sun Maosong, an ACL Fellow, was his student, and Liu Zhiyuan was in turn Sun Maosong's student.

In terms of language models, Sun Maosong and Liu Zhiyuan's team adopted the pre-training paradigm in 2018, released the ERNIE language model (sharing its name with Baidu's contemporaneous ERNIE) in early 2019, and then developed the CPM model, the predecessor of the Zhiyuan Research Institute's Wudao Wenyuan.

At the end of 2021, the team released the "Zhiyuan Index" (CUGE), a benchmark for evaluating machine proficiency in Chinese, to assess and promote the development of Chinese NLP. Other representative works include "Jiuge" ("Nine Songs"), an AI poetry-writing model trained on 800,000 classical Chinese poems, and the language representation model ERNIE (sharing its name with Baidu's Wenxin model), which was comparable to BERT, then the best-performing model on such tasks.

DeepLang Technology

Sun Maosong, Liu Zhiyuan and their graduates have incubated a number of companies in the NLP and large-model field. Among them, DeepLang Technology (Shenyan Technology), founded in 2022, is committed to building an industrial-grade Chinese information processing engine on top of a self-developed Chinese large model.

In 2019, Tsinghua's Natural Language Processing Laboratory (THUNLP) developed WantWords, a reverse dictionary that finds words from semantic descriptions and can be used to assist writing. In November 2021, the app suddenly went viral on Weibo, overwhelming its servers several times. In May 2022, WantQuotes, a semantics-based retrieval system for famous quotations, was launched; as of August last year, WantWords and WantQuotes together had more than 2 million users.

This team is the predecessor of DeepLang AI. Several of the students who developed these applications have since graduated and, together with their mentor Professor Sun Maosong, founded the startup DeepLang Technology. The company received 10 million yuan in investment from a first-tier fund at its founding.

DeepLang's CEO has published nearly 30 papers in international journals on artificial intelligence and natural language processing and has applied for more than 10 patents. The core members of the team are mainly doctoral and master's graduates of Tsinghua University, with Professor Sun Maosong serving as chief scientist.

Recurrent AI

In this fierce large-model "arms race" there is no shortage of post-90s entrepreneurs with glittering resumes, foremost among them Yang Zhilin, co-founder of Recurrent AI.

Yang Zhilin studied computer science at Tsinghua University under IEEE Fellow Tang Jie, graduated with honors at the top of his class in 2015, and then pursued a PhD at Carnegie Mellon University's Language Technologies Institute (LTI), which ranks first in the world in natural language processing (NLP). He earned his doctorate under Ruslan Salakhutdinov, head of AI research at Apple, and William Cohen, chief scientist at Google.

During his doctorate, he collaborated with Turing Award winner Yoshua Bengio to release the question-answering dataset HotpotQA, and the XLNet and Transformer-XL papers he published as first author had a major impact on the NLP field, becoming among the most-cited papers of NeurIPS 2019 and ACL 2019, with Google Scholar citations exceeding 10,000.

This post-90s entrepreneur has not only an excellent academic resume but also a notable record in industry. In 2016, Yang Zhilin founded Recurrent AI, whose main business is building "sales tech" solutions using AI technologies such as NLP, speech, multimodality and large models. He also leads several AI projects with Zhipu AI and Tsinghua research teams; Huawei's Pangu model, for example, was jointly launched by Yang Zhilin's team and Huawei Cloud.

At present, Recurrent AI has completed Series B financing and has achieved revenue growth of more than 200% for three consecutive years.

Professor Qiu Xipeng's team at Fudan

On February 20, Professor Qiu Xipeng of the Natural Language Processing Laboratory of Fudan University and the Shanghai Artificial Intelligence Laboratory released MOSS, billed as "China's first conversational large language model". It is a ChatGPT-like conversational chatbot that can answer questions, generate dialogue, write code and more. According to Professor Qiu, the conversational language model MOSS has about 20 billion parameters.

Reportedly, future optimization of MOSS will focus on three aspects: the team will prepare higher-quality Chinese data, open up the interface so MOSS can talk with humans and collect more dialogue data, and further increase investment to expand the model's parameter scale.

Professor Qiu Xipeng, a senior scholar in natural language processing, led the development of the NLP toolkit FudanNLP, and his team's main research interests are natural language processing and deep learning. At present, Qiu Xipeng and his MOSS team of eight young Fudan students are working intensively on internal testing and iteration.

Third, the big-tech camp: Baidu

On March 16, Baidu launched invitation-only testing of Wenxin Yiyan, a generative AI product based on Baidu's new large language model, covering literary creation, commercial copywriting, mathematical reasoning, Chinese understanding and multimodal generation.

At present, Wenxin Yiyan's basic capabilities are solid: in literary creation it can summarize, analyze and continue content, and it supports retrieval augmentation. Judging from the current tests, however, there is no shortage of embarrassing failures, and the results are far from perfect.

Baidu has long committed to high-intensity, marathon-style R&D investment, concentrated mainly in the AI field. Since establishing its AI division in 2012, Baidu has invested more than 100 billion yuan in AI research and development over the past decade, making it one of the few major companies in the world able to support the training of large language models.

From computing power and algorithms to data, and from the high-end Kunlun chips to the PaddlePaddle deep learning framework, Baidu has key self-developed technologies at every layer, which gives Baidu Wenxin a substantial R&D barrier.

Huawei

In April 2021, the Pangu model was officially released. Among its components, the Pangu NLP model is the industry's first Chinese pre-trained model with hundreds of billions of parameters and has taken the leading position on the CLUE leaderboard. To train the large NLP model, the team used 40TB of text data containing a great deal of general knowledge and industry expertise.

On March 27, it was reported that the Huawei Pangu model will be released in April; benchmarked against ChatGPT, it too will reach hundred-billion-parameter scale. It is said to be the world's largest Chinese-language AI training model, jointly developed by Huawei and Pengcheng Laboratory, with Pengcheng Cloud Brain II providing the computing base.

Huawei's core figure here is Tian Qi, an IEEE Fellow, an academician of the International Eurasian Academy of Sciences, and chief scientist for artificial intelligence at Huawei Cloud. Before joining Huawei, Tian Qi was already a well-known AI scholar: he holds a Ph.D. from the University of Illinois at Urbana-Champaign and was a full professor in the Department of Computer Science at the University of Texas at San Antonio. After 17 years of university teaching, Tian Qi and his team brought academia's cutting-edge thinking and research results to the front line of industry.

Tencent

On the AI large-model front, Tencent says it is investing heavily in artificial intelligence and cloud infrastructure. Tencent's HunYuan AI model covers base models for NLP (natural language processing), CV (computer vision) and multimodality, as well as many industry and domain models, and the company has launched a trillion-parameter Chinese NLP pre-trained model.

In April 2022, Tencent disclosed the progress of its HunYuan model for the first time: integrating CV, NLP and multimodal understanding capabilities, HunYuan topped five authoritative datasets including MSR-VTT and MSVD.

In May 2022, HunYuan simultaneously topped three authoritative CLUE leaderboards, breaking three records at a stroke. At the end of last year, Tencent's HunYuan further launched China's first low-cost, deployable trillion-parameter NLP model and once again topped the CLUE natural language understanding leaderboard.

HunYuan is reportedly the industry's first trillion-parameter NLP model that can be applied directly in massive business scenarios. It can be trained in as little as one day on 256 cards, cutting the cost to one eighth of the original.

Alibaba DAMO Academy

On the AI large-model front, according to information released by the Alibaba Research Institute, the DAMO Academy launched the Chinese multimodal pre-trained model M6 project at the beginning of 2020 and released a base model with 300 million parameters in June of the same year; the parameter scale reached 10 billion in January 2021; in May 2021 a trillion-parameter model was officially put into use; and in October 2021 M6 was expanded to 10 trillion parameters, becoming the largest AI pre-trained model in the world at the time.

M6 was first released in March 2021; the trillion-parameter version was then trained on 480 GPUs, and the 10-trillion-parameter model was trained successfully in 10 days. Compared with a model of the same parameter scale such as GPT-3, its energy consumption was only 1%.

Alibaba Cloud has said that M6 has been used in more than 40 scenarios, with hundreds of millions of calls per day. Within Alibaba, M6's applications include, among others, clothing designed for brands by Rhino Intelligence and already on sale on Taobao, script writing for Tmall virtual anchors, and improving the search and content-understanding accuracy of platforms such as Taobao and Alipay. It is especially good at design, writing and question answering, and has promising applications in e-commerce, manufacturing, literature and art, and scientific research.

The person in charge of M6 is Yang Hongxia. According to public information, she graduated from Nankai University in 2007 with a bachelor's degree in statistics, then pursued a doctorate in the Department of Statistical Science at Duke University under Professor David Dunson. After completing her PhD, she joined the IBM Global Research and Development Center as a Watson researcher and later joined Yahoo as chief data scientist. Yang Hongxia has published hundreds of papers at top international conferences and in top journals in the field of artificial intelligence.

However, it was reported on March 22 that Yang Hongxia, the former leader of Alibaba's language generation model M6, has joined ByteDance AI Lab as head of R&D for language generation models.

ByteDance

According to media reports, ByteDance is laying out plans in large models, working on both language and image modalities. A person in charge of the relevant technology at ByteDance said that exploration in these areas is still at an early stage and not yet mature.

According to the report, the language large-model team is led by ByteDance's search department and currently numbers about a dozen people, while the image large-model team is led by an intelligent-creation team under the Product Development and Engineering Architecture department.

According to people familiar with the matter, ByteDance's language large-model team was set up this year and is exploring integration with downstream businesses such as search and advertising; it is expected to launch its large model in the middle of this year.

IFLYTEK

IFLYTEK launched a generative pre-trained large-model "1+N" program in December 2022, in which "1" is a general cognitive-intelligence large-model algorithm R&D and efficient training platform, and "N" is a set of specialized versions of the model for education, healthcare, human-computer interaction, office work, translation, industry and many other sectors.

According to public disclosures, iFLYTEK's ChatGPT-like technology will land in the company's AI learning machine products in May this year.

IFLYTEK's years of large-scale applications in education, healthcare, justice and other industries have accumulated more than 50TB of industry corpora and more than one billion user interactions per day. These not only provide abundant industry text corpora and user feedback data for training industry cognitive large models that reach human-expert level, but also guarantee real scenarios for innovative application development and pilot promotion based on large models.

Conclusion: In 1769, James Watt's improved steam engine in Britain opened the age of thermal power. Today, the surging demand for AI computing is forcing traditional computing architectures to accelerate innovation, and the era of computing power has quietly begun.

In the future, the influence of AI large-model technology may approach that of printing, the steam engine, electricity, chips and the Internet, potentially bringing a leap in the technological development of human society as a whole and a non-linear jump in the human economy.

Behind the attractive prospect, however, the difficulty of entry is rising exponentially.

First of all, high computing power means high cost, which sets an extremely high threshold for starting a business in the AI 2.0 era. Without strong financial backing it is impossible to bear the cost of AI training, which also means admission tickets are scarce: teams that are neither big companies nor star entrepreneurs have almost no hope of going the distance.

At the same time, AI 2.0 startups face more stringent technical requirements. Some investors told the "State ℃" column that there are at most ten domestic startups capable of building underlying models, whether for large language models or reinforcement learning, with strong enough technology and an engineering-product mindset for the foundation layer.

Finally, top technical talent for AI large models is an even scarcer resource. When the big names enter the field, the first battle they fight is over people; after all, very few people in China have truly led the development of deep learning and AI frameworks, and outstanding talent is all but impossible to find.

In this battle of the gods, who will ultimately stand out and lead the next era? For Chinese technology companies, there is still a long road of exploration and climbing ahead.
