2025-04-06 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)11/24 Report--
Google, Facebook and Microsoft helped build the frameworks underpinning artificial intelligence, but smaller startups are now pushing the technology to the masses, forcing the tech giants to speed up their own AI development. According to sources, the surge in attention around ChatGPT is increasing internal pressure at Meta and Google, and may even lead them to set aside some potential safety concerns in order to move faster.
Three months before AI research company OpenAI launched its AI chatbot ChatGPT in November 2022, Meta, the parent company of Facebook, released a similar chatbot. But unlike ChatGPT, which became an immediate hit and attracted more than 1 million users within five days of launch, Meta's BlenderBot was "boring," said Yann LeCun, Meta's chief AI scientist.
ChatGPT is rapidly going mainstream as Microsoft works to incorporate it into its popular office software and to sell access to the tool to other companies. Microsoft recently invested billions of dollars in OpenAI, the company behind ChatGPT. According to interviews with six current and former Google and Meta employees, the surge of attention on ChatGPT is fuelling pressure inside the tech giants, including those two companies, to move faster, possibly shelving concerns over some potential safety issues.
At Meta, employees have recently shared internal memos urging the company to speed up its approval process for AI projects in order to take advantage of the latest technology, one person said. Google, which itself helped create some of the technology underpinning ChatGPT, recently declared a "code red" around launching AI products and proposed a so-called "green lane" to shorten the process of assessing and mitigating potential harms.
ChatGPT, along with text-to-image tools such as DALL-E 2 and Stable Diffusion, belongs to a class of so-called generative artificial intelligence (AIGC). These systems learn patterns from existing, human-created content and use them to create works of their own. The technology was pioneered at big tech companies such as Google, but those companies have grown more conservative in recent years, releasing only new models or demos while keeping the full product under wraps. Meanwhile, research labs like OpenAI have rapidly released their latest versions, raising questions about corporate offerings such as Google's language model LaMDA.
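The core idea of "learning patterns from human-created content, then generating new works" can be illustrated with a deliberately tiny stand-in. Real generative AI uses large neural networks trained on vast corpora; the bigram Markov chain below (a hypothetical toy, not any company's actual model) only demonstrates the concept of learning which elements follow which in training text and producing new sequences from those statistics.

```python
# Toy illustration of generative modeling: learn word-transition
# patterns from existing text, then generate new text from them.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn which words follow which (the "pattern" in the training data).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Produce a new word sequence by sampling learned transitions."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 5))
```

Every word the toy emits comes from the training corpus; it never invents vocabulary, only recombines observed patterns — which is also why such models can reproduce biases present in their training data.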
Tech giants have been skittish ever since failures like Microsoft's AI chatbot Tay. Less than a day after its release in 2016, Tay began making racist and otherwise inappropriate remarks, leading Microsoft to shut it down immediately. Meta defended BlenderBot but abandoned it after the bot made racist remarks. In November, Meta took down another AI tool, Galactica, just three days after launch amid criticism that it was inaccurate and sometimes biased.
"People feel that OpenAI's technology is newer, more exciting and has made fewer mistakes than that of these established companies, and for now it can escape a lot of criticism," said an employee in Google's AI division, referring to the public's willingness to accept ChatGPT with less scrutiny. Some of the top talent in AI has moved to startups with more flexible structures, such as OpenAI and Stability AI.
Some AI ethicists worry that the rush of big tech companies into the market could expose billions of people to potential harms, such as sharing inaccurate information, generating fake images or giving students the ability to cheat on school exams, before trust and safety experts have been able to study the risks. Others in the field share OpenAI's view that releasing tools to the public, usually nominally in a "testing period" after predictable risks have been mitigated, is the only way to assess real-world harms.
Joelle Pineau, managing director of Fundamental AI Research at Meta, said: "AI is moving incredibly fast, and we are constantly looking at this area and making sure we have an efficient review process, but the priority is to make the right decisions and release the AI models and products that best serve our community."
"We believe that AI is a fundamental and transformative technology that is very useful to individuals, businesses and communities," said Lily Lin, a Google spokeswoman. "We need to consider the broader social impact of these innovations. We continue to test our AI technology internally to make sure it is helpful and safe to use."
Frank Shaw, Microsoft's communications director, said the company will build additional safety measures as it uses AI tools such as DALL-E 2 in its products, and will continue to work with OpenAI. "Microsoft has worked for many years to both advance the AI field and publicly provide guidance on creating and using these technologies on our platforms in responsible and ethical ways," he said.
Mark Riedl, a computing professor and machine learning expert at the Georgia Institute of Technology in the US, says the technology behind ChatGPT is not necessarily better than what Google and Meta have developed, but OpenAI's practice of releasing its language models for public use has given it a huge advantage. "For the past two years they have been using a crowd of humans to provide feedback to GPT, such as flagging inappropriate or unsatisfactory answers, a process known as 'reinforcement learning from human feedback'," Riedl said.
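The feedback loop Riedl describes can be sketched at a very high level: human raters label candidate answers, a reward signal is learned from those labels, and the system then prefers higher-scoring answers. This is a drastically simplified toy, not OpenAI's actual method; the "reward model" here is just a word-weight table rather than a trained neural network, and all names in it are hypothetical.

```python
# Toy sketch of learning from human feedback: rater labels update a
# crude reward model, which is then used to rank candidate answers.
from collections import defaultdict

def update_reward_model(weights, answer, label):
    """Shift word weights up for approved answers, down for rejected ones."""
    delta = 1 if label == "good" else -1
    for word in answer.lower().split():
        weights[word] += delta

def score(weights, answer):
    """Reward = sum of learned word weights."""
    return sum(weights[w] for w in answer.lower().split())

weights = defaultdict(int)
# Simulated human feedback: raters reject an unhelpful reply
# and approve a helpful one.
update_reward_model(weights, "I cannot help with that", "bad")
update_reward_model(weights, "Here is a clear explanation", "good")

candidates = ["I cannot help with that", "Here is a clear explanation"]
best = max(candidates, key=lambda a: score(weights, a))
print(best)
```

In the real technique, the learned reward model is then used to fine-tune the language model itself with reinforcement learning, so that preferred answer styles become more likely in the first place.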
With technology stocks plummeting, Silicon Valley has suddenly become willing to accept more risk of reputational damage, and the tech giants are willing to risk speeding up the deployment of AI tools. When Google cut 12,000 jobs last week, its chief executive Sundar Pichai wrote that the company had undergone a rigorous review to focus on its highest priorities, twice mentioning its early investments in AI.
Ten years ago, Google was the undisputed leader in AI. It acquired the cutting-edge AI lab DeepMind in 2014 and open-sourced its machine learning software TensorFlow in 2015. By 2016, Pichai had pledged to turn Google into an "AI-first" company. The following year, Google released the Transformer, a key piece of software architecture that set off the current wave of generative AI.
Google has continued to introduce more advanced technology that pushes the whole field forward, with breakthroughs in language understanding that helped improve Google search. Yet within big tech companies, the system of checks and balances for vetting the ethical implications of cutting-edge AI is not as mature as those for privacy or data security. Typically, teams of AI researchers and engineers publish papers on their findings, integrate their technology into the company's existing infrastructure, or develop new products, a process that can clash with the teams charged with responsible AI, because product teams are under pressure to bring innovations to the public faster.
Google unveiled its AI principles in 2018 after employee protests over Project Maven, a contract to provide computer vision for Defense Department drones, and consumer backlash against a demo of Duplex, an AI system that could call restaurants and make reservations. Last August, Google began offering consumers limited access to LaMDA through its app AI Test Kitchen. Blake Lemoine, a former Google software engineer, has claimed he believes LaMDA is sentient, but the feature has still not been fully released to the public, something Google had planned to do by the end of 2022.
But the top AI talent behind these technological advances is getting restless. In the past year or so, many of Google's top AI researchers have left to found startups built around large language models, including Character.AI, Cohere, Adept, Inflection.AI and Inworld AI. There are also search startups using similar models to build chat interfaces, such as Neeva, run by former Google executive Sridhar Ramaswamy.
Noam Shazeer, founder of Character.AI, helped invent the Transformer and other core machine learning architectures. He says the flywheel effect of user data is priceless: the first time he applied user feedback at Character.AI, engagement rose by more than 30 percent. Character.AI lets anyone generate a chatbot based on a short description of a real or fictional character.
Nick Frosst, who worked at Google Brain for three years, says big companies like Google and Microsoft tend to focus on using AI to improve their vast existing business models. Frosst co-founded Cohere, a Toronto-based startup that builds large language models that can be customized to help businesses. His co-founder, Aidan Gomez, also helped develop the Transformer while working at Google. "The field is moving so fast that it doesn't surprise me that the leaders are small companies," Frosst said.
The AI field has gone through several hype cycles over the past decade, but enthusiasm for DALL-E and ChatGPT has reached new heights. Shortly after OpenAI released ChatGPT, tech influencers on Twitter began predicting that generative AI would spell the demise of Google search. ChatGPT delivers simple answers in an accessible way, without requiring users to dig through blue links. Besides, 25 years after its launch, Google's search interface has grown bloated, crowded with ads and with marketers trying to game the system.
"Because of their monopoly, Google has turned their once-incredible search experience into a spam-infested, SEO-driven hellscape," technologist Can Duruk wrote in his newsletter Margins.
Do AI consumer products have huge profit potential? On the anonymous app Blind, technologists have posted dozens of questions about whether Google can compete with ChatGPT. "If Google doesn't take action and start releasing similar apps, it will go down in history as the company that trained a whole generation of machine learning researchers and engineers, who then went on to deploy the technology at other companies," prominent research scientist David Ha wrote on Twitter. Ha recently left Google Brain to join Stability AI, the startup behind the open-source text-to-image model Stable Diffusion.
Google employees say the AI engineers who remain at the company share Ha's frustration. For years, employees have sent memos calling for chat AI to be added to search. But they also recognize that Google has good reasons not to rush to change its search product: if a chatbot answered questions directly through Google search, the company's liability could increase whenever an answer turned out to be harmful or plagiarized.
Chatbots like ChatGPT routinely make factual mistakes and often change their answers depending on how a question is asked. A former Google AI researcher said moving from providing a series of results linked directly to source material to having a chatbot give a single, authoritative answer would be a major shift, one that makes many people inside Google nervous. "Google doesn't want to take on the role, or the liability, of providing a single answer," the person said. Previous search updates, such as adding instant answers, were made slowly and carefully.
Within Google, however, some of the frustration with the AI safety process stems from a sense that cutting-edge technology is never released as a product for fear of bad publicity, for example if an AI model is found to be biased.
Meta employees have also had to contend with the company's worries about bad PR, according to a person familiar with its internal discussions. Before launching a new product or publishing research, Meta employees must answer questions about the potential risks of publicizing their work, including how it could be misinterpreted. Some projects are reviewed by public relations staff and internal compliance experts who ensure the company's products comply with its 2011 FTC agreement on the handling of user data.
For Timnit Gebru, executive director of the nonprofit Distributed AI Research Institute, the prospect of Google sidelining its responsible AI team does not necessarily signal a shift in power or in safety concerns, because the people warning of potential harms were never empowered in the first place.
In Gebru's view, Google was slow to release its AI tools because it lacked a strong enough business incentive to risk damaging its reputation. After the release of ChatGPT, however, Google may see a change in its ability to make money from these tools as consumer products, rather than merely using them to power search or online advertising. "Now they might see it as a threat to their core business, so maybe they should take a risk," Gebru said.
Rumman Chowdhury led Twitter's machine learning ethics team until last November, when Elon Musk disbanded it. She said she expects companies like Google to increasingly sideline internal critics and ethicists as they scramble to catch up with OpenAI. "We thought it was China pushing the United States to advance AI, but now it looks like it's the startups," she said.
© 2024 shulou.com SLNews company. All rights reserved.