2025-03-28 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)11/24 Report--
Today, many of the major contributors to Google's most groundbreaking papers have left, joining competitors such as OpenAI or starting their own companies. Has Google lost the battle over LLMs?
A few days ago, Google nearly faced a public relations crisis when Jacob Devlin, the lead author of BERT and a former Google employee who had moved to OpenAI, revealed that Bard had been trained on ChatGPT data.
Google was quick to deny it.
The controversy has also sparked a wider discussion: why are more and more top Google researchers moving to OpenAI, and can Google still win the LLM battle?
Zhihu respondent "A Pile of Waste Paper", a PhD from Rice University, argued that the real gap between Google and OpenAI is a data gap.
"OpenAI has an obsession with LLMs that companies like Google simply cannot match. Of course, the talent gap is only one aspect; the data gap, and the attitude toward data, is the key to its success. People can be hired back with money, and technology can be caught up on, but the data gap is one Google will not close in the short term."
As the paper at https://arxiv.org/abs/2303.10158 makes clear, a crucial part of ChatGPT's success is high-quality labeled data.
According to "A Pile of Waste Paper", OpenAI attaches astonishing importance to the quality of its data and labels, and is extremely strict in selecting annotators (who must pass exams); it even sends out follow-up questionnaires. It is this obsession that made the GPT models succeed, and it is something Google simply cannot match.
As hundreds of millions of users continue to provide new data to OpenAI, the gap between Google and OpenAI will only widen.
The concept of data-centric AI laid the foundation for OpenAI's success.
Zhang Junlin, a top respondent on natural language processing and deep learning topics, said that OpenAI has left everyone far behind.
That includes Google, whose understanding of how LLMs should develop clearly lags behind OpenAI's. The reality is that OpenAI has simply performed too well and pulled away from everyone, not just from domestic players. In my view, OpenAI is roughly half a year to a year ahead of Google and DeepMind abroad in concepts and related technology, and about two years ahead of China.
Zhihu respondent "Zhou Daodao" said that Google's recent stumbles, set against OpenAI's success, must have come as a huge shock to these top researchers.
In addition, OpenAI is said to give researchers more resources and a broader mandate than Google does. After all, OpenAI is more like a research institution, while Google treats AI more like a product development department.
The answer given by respondent "Chen Dabao" can only be called heartbreaking.
Future business history will record two classic and ironic cases:
1. Kodak invented the digital camera
2. Google invented the Transformer
Image source: respondent "Chen Dabao". Anonymous insiders also showed up in the comment section, to everyone's surprise.
One anonymous commenter got straight to the point.
Respondent "Flying Pig" points to the "innovator's dilemma" Google faces as a big company.
Another anonymous insider also weighed in.
Respondent "Bill" concluded that a big company like Google is de-humanized and policy-driven: everyone is just a cog in the machine.
Meanwhile, Insider has published a long article tallying the top talent that Google's artificial intelligence teams have lost over the years.
Where has all the talent gone? Google has contributed some of the most important research in the field of artificial intelligence, but the company has been slow to turn innovation into products.
As a result, top AI researchers are leaving for startups where they can generate more value and impact, such as OpenAI, Character.AI, DeepMind, Cohere, and Inceptive.
Google is already on the defensive and risks losing its leading position in artificial intelligence for good. The departure of top researchers has only made the problem worse.
Photo: Cohere co-founders Ivan Zhang, Aidan Gomez, and Nick Frosst. Google may be in defensive mode, but it didn't have to be: the company created much of the foundational technology powering products such as ChatGPT, and it released its research as open source, which makes OpenAI's rapid rise somewhat ironic.
Google has long hesitated to release a ChatGPT-like chatbot, fearing that the technology could damage its business's reputation.
Daniel De Freitas and Noam Shazeer, two researchers behind Google's large language model LaMDA, left the company, frustrated by its delay in releasing a ChatGPT-like chatbot.
Other former Google researchers likewise believe that in such an exciting era for artificial intelligence, startups give researchers ownership of their results and let them exert more value and influence themselves.
Here are some of the most eye-catching papers in artificial intelligence whose researchers have since left Google for other companies.
Sequence to Sequence Learning with Neural Networks. Published in 2014, this paper explores training neural networks to transform a sequence of words from one domain into a sequence in another, for example converting an English sentence into a French sentence.
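The encoder-decoder loop the paper describes can be sketched in a few lines of plain Python. The stub "model" below (the tiny phrase table and the encode/decode functions) is purely hypothetical illustration of the interface, not the paper's trained LSTM networks:

```python
# Toy sketch of the sequence-to-sequence idea: an encoder compresses the
# source sequence into a fixed context, and a decoder emits target tokens
# one at a time, each conditioned on that context and the tokens already
# generated, until it produces an end-of-sequence marker.

def encode(source_tokens):
    # Stand-in for an RNN encoder: fold the whole input into one "state".
    return tuple(source_tokens)

def decode_step(context, generated):
    # Stand-in for an RNN decoder step: pick the next target token given
    # the encoder context and everything produced so far. This toy rule
    # "translates" via a hand-written phrase table (made-up data).
    table = {("hello", "world"): ["bonjour", "monde", "<eos>"]}
    target = table.get(context, ["<eos>"])
    return target[len(generated)]

def translate(source_tokens, max_len=10):
    context = encode(source_tokens)
    out = []
    while len(out) < max_len:
        tok = decode_step(context, out)
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(translate(["hello", "world"]))  # -> ['bonjour', 'monde']
```

A real implementation replaces both stubs with recurrent networks trained end to end to maximize the probability of the target sequence given the source.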
Ilya Sutskever led the work on this paper. He left Google in 2015 after nearly three years as a research scientist. Sutskever co-founded OpenAI and continues to work there as its chief scientist.
Attention Is All You Need. This blockbuster Transformer paper has been cited more than 70,000 times. The Transformer is considered a breakthrough in natural language processing: it captures the nuances of context by looking at every word in a sentence at once and weighing each word's importance, helping the model understand meaning.
The "T" in ChatGPT stands for Transformer, which says something about the paper's importance.
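The word-weighting step at the heart of the paper is scaled dot-product attention: each position is scored against a query, the softmax of those scores gives the importance weights, and the output is a weighted mix of value vectors. A minimal plain-Python sketch (the 2-d toy vectors are made up for illustration):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention for a single query vector:
    # score each key, normalize scores into weights, mix the values.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

# Three toy 2-d "word" vectors; the query matches the first key most
# closely, so the first position receives the largest weight.
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out, weights = attention([1.0, 0.0], keys, values)
print(weights)
```

In the full Transformer this runs for every position's query in parallel, across multiple heads, which is what lets the model look at all words in a sentence at the same time.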
However, all eight authors of this paper, with the exception of Llion Jones, have left Google.
Ashish Vaswani left Google Brain (Google's deep learning research team) after five years to found Adept, which recently raised $350 million to build generative AI tools that help people use productivity software more effectively. He recently left Adept for a stealth startup.
Noam Shazeer is now the CEO of Character.AI.
Niki Parmar left Google Brain after five years to become co-founder and chief technology officer of Adept, but, like Vaswani, she recently left for a stealth startup.
Jakob Uszkoreit worked at Google for 13 years on neural networks and deep learning. He is now a co-founder of Inceptive, a startup that uses deep learning to design new therapeutics.
Aidan Gomez, co-founder and CEO of Cohere, has raised about $160 million to help developers integrate generative AI into their applications and websites. He was a researcher at Google Brain for a year and a half. Nick Frosst, his co-founder at Cohere, was a researcher at Google Brain for four years.
Lukasz Kaiser left Google Brain after more than seven years and joined OpenAI in 2021. OpenAI's GPT-4 report recently credited Kaiser as a core contributor for its long-context capability, which lets the chatbot hold a longer conversation before it forgets the context of the discussion.
Illia Polosukhin worked at Google Brain for three years on deep learning and natural language understanding. He left in 2017 to start Pagoda, a Web3 startup platform.
Towards a Human-like Open-Domain Chatbot. This paper introduces Google's early chatbot Meena and explores how chatbots learn to converse on different topics by studying data collected from public social media conversations. It also introduces a test Google created to evaluate chatbot performance.
The paper is another milestone in large language modeling: the authors argue that a large language model can produce human-like responses to questions without hard-coded training.
Daniel De Freitas, one of the authors, became co-founder and president of Character.AI after five years as a researcher at Google Brain.
His colleague at Character.AI, Romal Thoppilan, also contributed to the paper.
Photo: Romal Thoppilan (left) and Daniel De Freitas (right). LaMDA: Language Model for Dialogue Applications. LaMDA, short for Language Model for Dialogue Applications, is the foundation of the chatbot Bard. It was first demonstrated as Meena in 2020, but Google never released Meena to the public; a former employee of Google's AI research division explained that the company feared a public relations nightmare if the bot made harmful comments.
Several of the leading researchers behind LaMDA have left Google Brain.
Daniel De Freitas and Noam Shazeer, who founded Character.AI last year, recently raised about $200 million to create chatbots that converse in a variety of personas, from Musk to therapists to life coaches.
Romal Thoppilan worked at Google Brain for seven years before becoming a founding researcher at Character.AI.
Alicia Jin joined Character.AI as a research engineer near the end of 2022, after three years at Google Brain.
BERT. BERT (Bidirectional Encoder Representations from Transformers) is a natural language processing model based on the Transformer. It is pre-trained on two tasks: masked language modeling and next-sentence prediction. In other words, BERT tries to predict hidden, or "masked", words, forcing the model to learn more from the surrounding text in order to predict them better.
If you search for "can you buy medicine for someone else at the pharmacy", BERT understands that "someone else" is an important part of the query.
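The masked-language-modeling objective can be sketched as a preprocessing step: hide some input tokens and keep the originals as the labels the model must recover. The toy below (the function name, the seed, and the 30% rate are chosen for illustration) covers only the [MASK] case; real BERT also sometimes substitutes random tokens or leaves words unchanged:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    # Hide a fraction of the input tokens; record the originals as the
    # prediction targets for the masked positions.
    rng = random.Random(seed)
    masked, labels = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            labels[i] = tok
        else:
            masked.append(tok)
    return masked, labels

sent = "can you buy medicine for someone else at the pharmacy".split()
masked, labels = mask_tokens(sent, mask_rate=0.3)
print(masked)
print(labels)
```

Training then asks the model to fill each [MASK] back in from the surrounding context, which is what forces it to learn bidirectional representations.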
Google began using BERT in its search engine as early as 2019. It was one of the biggest improvements in search accuracy since the introduction of RankBrain, another machine learning system, in 2015.
Jacob Devlin, the lead author of the paper, joined OpenAI shortly before the launch of ChatGPT.
T5. The T5 paper's full title is "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". It builds on BERT and is well suited to translation and summarization tasks.
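The "unified text-to-text" idea is that every task, whether translation, summarization, or classification, is posed as plain text in and plain text out, distinguished only by a task prefix on the input. A minimal sketch (the helper function is illustrative; the prefix style follows the convention described in the T5 paper):

```python
# Sketch of T5's text-to-text framing: a (task, input, target) triple
# becomes a pair of plain strings, so one model with one objective can
# serve many tasks.

def format_example(task_prefix, input_text, target_text):
    # Prepend the task prefix so the model knows which task to perform.
    return f"{task_prefix}: {input_text}", target_text

src, tgt = format_example("translate English to French",
                          "That is good.", "C'est bon.")
print(src)  # translate English to French: That is good.
print(tgt)  # C'est bon.
```

Because every task shares the same string-to-string format, the same pre-trained model can be fine-tuned on any of them without task-specific heads.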
Colin Raffel, who led the paper, worked as a research scientist at Google Brain for about five years before leaving in 2021. He is currently an assistant professor at UNC Chapel Hill and spends one day a week as a researcher at Hugging Face, which announced in May 2022 that it had raised $100 million at a $2 billion valuation. Users can share large language models and datasets on Hugging Face.
Sharan Narang, another contributor to the T5 paper, left Google Brain in 2022 after four years there. He is now an AI researcher at Meta.
A Graph Placement Methodology for Fast Chip Design. This paper, led by Google scientists Azalia Mirhoseini and Anna Goldie, found that artificial intelligence can complete parts of the chip design process faster than human experts.
Another paper led by the pair, "Chip Placement with Deep Reinforcement Learning", provides a way to use AI in chip design to maximize performance while minimizing area and power consumption.
These findings helped Google design its TPU chips, which are built specifically for machine learning tasks.
Mirhoseini and Goldie both left Google in 2022 to join Anthropic, an OpenAI competitor that is developing its own large language models and a chatbot called Claude.
DeepMind. Mustafa Suleyman is a co-founder of DeepMind, an artificial intelligence laboratory acquired by Google in 2014, and served as the company's chief product officer. The lab developed AlphaGo, a machine learning program that beat world-champion Go professionals.
Alphabet, Google's parent company, recently announced in its fourth-quarter results that DeepMind's financials would be reported separately from its "Other Bets", signaling the importance of artificial intelligence to Google's future strategy. "Other Bets" is the umbrella term for the company's newer projects that have not yet reached profitability.
Suleyman has been an active advocate for making new artificial intelligence products safe. While at DeepMind, he set up a research unit called DeepMind Ethics & Society to study the real-world impact of artificial intelligence. In 2019, he was placed on leave from DeepMind over allegations of bullying employees; after the investigation, he moved to Google as a vice president.
Suleyman has been cited in many machine learning research papers. In February 2022, he co-founded the artificial intelligence startup Inflection with LinkedIn founder Reid Hoffman.
Many of the major contributors to Google's most groundbreaking artificial intelligence papers have left either to join competitors such as OpenAI or to start their own companies.
When OpenAI's ChatGPT launched at the end of 2022, Google CEO Sundar Pichai declared a "code red" internally, rallying the company to respond.
At the time, Pichai said Google would no doubt continue to hire top researchers in the field of artificial intelligence.
However, after losing so many capable players, can Google still regain its glory?
References:
https://www.businessinsider.com/google-ai-teams-brain-drain-researchers-leave-2023-3
https://www.zhihu.com/question/592975340/answer/2963265672
https://zhuanlan.zhihu.com/p/597586623
https://www.zhihu.com/question/592975340/answer/2964598555
This article comes from the WeChat official account: Xin Zhiyuan (ID: AI_era).