2025-02-25 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 11/24 Report --
Although AI research company OpenAI has captured the public imagination with its chatbot ChatGPT, the technology may not ultimately change the balance of power among tech giants, according to a February 1 report. Tech giants such as Google stand to benefit more from so-called generative AI (AIGC) than startups like OpenAI.
(Image source: Pexels)
Google ceded the limelight to OpenAI after spending billions on AI research, development, and acquisitions. The upstart released the remarkable chatbot ChatGPT, and its public testing captured the public imagination. Google now worries that ChatGPT-style AI could reshape search, the cornerstone of its business.
But Google declared itself an "AI-first" company as early as 2017 and is likely to continue playing an important role in AI. Its investment in AI, which dates back to the early 2000s, could pay off handsomely and could even fuel rapid growth over the company's next 25 years. Google turns 25 this year.
OpenAI made natural language processing (NLP) models more capable by scaling them up and feeding them vast amounts of training data, drawn from free e-books, wiki pages, discussion boards, and fan fiction on the Internet. However, OpenAI did not invent the NLP architecture behind ChatGPT. GPT stands for "generative pre-trained transformer," and Google built its own transformer language model, BERT (Bidirectional Encoder Representations from Transformers), in 2018; it now uses it to enhance its search and translation capabilities.
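The "generative" part of GPT refers to producing text one token at a time, each conditioned on what came before. The toy sketch below illustrates only that autoregressive loop with a bigram lookup table "pre-trained" on a tiny corpus; real GPT models replace the table with a transformer network trained on web-scale text, and the corpus and function names here are illustrative inventions.

```python
import random

# Toy autoregressive text generation: a bigram "model" built by
# counting word pairs in a tiny corpus. This only demonstrates the
# generate-one-token-at-a-time loop, not the transformer architecture.

corpus = "the cat sat on the mat and the cat slept".split()

# "Pre-training": record which words follow which in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start: str, length: int = 5, seed: int = 0) -> str:
    """Extend `start` by sampling each next word from the bigram table."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no known continuation; stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Scaling this idea up (longer contexts instead of a single previous word, learned neural weights instead of counts) is what the "large" in large language model refers to.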
Nor did Google stop working on NLP after BERT. The company says it has developed an NLP chatbot called LaMDA that is more powerful than ChatGPT; one Google engineer even claimed it was sentient. Google's sister company DeepMind is also considering launching an AI chatbot called Sparrow later this year, and Google's AI image generator Imagen is said to rival OpenAI's DALL-E 2.
So far, Google has been cautious about making these AI tools available to the public. This is not particularly surprising: Google is much larger than OpenAI, and any AI mistake would cost it far more. Google believes generative AI is still an immature technology whose risks have yet to be assessed. The company fears being held liable if its AI tools infringe on someone's privacy or copyright, or fears triggering antitrust lawsuits if they entrench some form of monopoly.
But Google has also benefited a great deal from AI research and has begun offering new features built on it. The company's approach has been to run its foundational AI models in the background to better support its applications and services.
Reshaping search

Search is a good example. Google already uses the BERT language model to interpret the intent behind the keywords and phrases users type into the search bar. BERT is designed to infer meaning from text, but more advanced language models, such as generative AI models, could build search results the way ChatGPT generates text answers to user questions. Instead of a long list of loosely relevant links (accompanied by advertising), Google search could generate a complete narrative that directly answers the searcher's question, supplemented by pictures, videos, statistics, and links.
In fact, Google has been working on this generative approach to search for some time. Its Multitask Unified Model (MUM) is designed to handle complex or multifaceted search requests. For example, a user might say or type "What do I need to prepare for a trip to Xizang?" MUM would infer all the questions contained in the query and generate a multimedia package that addresses each of them, and more.
Google has another advantage: long experience crawling and cataloging the web and its vast amounts of content. The company may be able to give its models access to more and higher-quality training data, which could help it build impressive AI applications and services without generating erroneous or toxic content.
"I do think a large language model like ChatGPT could upend the web search business," said Andrew Ng, former head of the Google Brain team and founder and CEO of AI startup Landing AI. But he immediately added that turning search into a free-form conversation with an AI model will not be easy: how do you use the vast amount of content on the Internet to train a genuinely useful NLP model? How do you curate the training data? And how do you monetize it?
"The core driver of Google's business model is providing links and placing advertising next to the results," Ng continued. "So if Google simply provides answers directly through an AI model, what impact will that have on the advertising business model? I think these are very complex questions."
It may be easier for Google to integrate generative AI models into its productivity applications. The technology could, for example, help Gmail users draft messages, help Docs users write documents or presentations, or provide video-chat summaries for Meet users.
OpenAI's partnership with Microsoft

Until recently, the competition between OpenAI and Google looked like a battle between David and Goliath: a small, nimble company threatening a long-established tech giant with novel, disruptive technology. OpenAI may have decided to open ChatGPT to the public hoping to maintain that impression, improve its public image, and perhaps attract more investment capital.
The strategy seems to be working. OpenAI has established a distribution partnership with Microsoft, which recently announced an expanded partnership with an additional $10 billion investment. Microsoft now owns a large stake in OpenAI and plans to build its technology into the Bing search engine and its productivity applications.
Because of this, Mike Volpi, a partner at Index Ventures, argues that this is no longer a battle between David and Goliath. "OpenAI is no longer a small company; it's basically a subsidiary of Microsoft, and most of its money comes from Microsoft, so I think it's more appropriate to think of it as an extension of Microsoft," he said. "In this context, a lot of other things become very clear. Bing's share of the search advertising market is tiny compared with Google's, so Microsoft definitely wants to upend this market."
Cloud computing is crucial

Microsoft sees OpenAI as a way to offer advanced generative AI services through its Azure cloud platform, which is second only to Amazon's AWS in the cloud computing market. Microsoft may also use the partnership to bolster its claim that Azure is better suited than other cloud services to hosting advanced AI models.
Eric Boyd, vice president of the Azure AI platform, wrote in a blog post this month: "Azure provides best-in-class performance and scalability for compute-intensive AI training and inference workloads. This is why the world's leading AI companies, including OpenAI, Meta, and Hugging Face, continue to choose Azure to drive their AI innovation."
Google has been pushing its own cloud computing service, and the business is approaching profitability, but it still lags far behind Azure and Amazon's AWS. With OpenAI's services added to Azure, Google's catch-up effort may become even harder.
A more competitive AI world

It can be difficult to know what Google management thinks of OpenAI. In mid-December, Jeff Dean, head of Google's AI division, said Google would suffer more than a startup if it rushed out a production AI tool that was error-prone or produced toxic content. But the company is taking OpenAI's bid for dominance in the field seriously: Google has reportedly even brought back retired founders Larry Page and Sergey Brin to discuss how to counter OpenAI.
At a strategy meeting in December, Google said it would "recalibrate" the level of risk it is willing to take when unveiling new, innovative AI tools to the public.
In other words, ChatGPT may have kicked off a new AI race. Google now seems likely to launch a number of new AI features and products this year. Some of these tools will target developers: Google may build tools like GitHub Copilot, trained on millions or billions of lines of code, that infer a developer's intent and suggest new code.
Index Ventures' Volpi believes Google is likely to release natural language chatbots that look and behave like ChatGPT, that DeepMind may release its Sparrow chatbot, and that Google search may offer a chat-like experience for some types of searches.
By next year, OpenAI may look more like a forerunner in generative AI, and its models may not seem as novel as they do now. Ultimately, OpenAI may not significantly change the balance of power among the large technology companies. "I'm not sure it will change the current situation," Volpi said. "Google will launch its own NLP chat product, and Amazon will launch a similar tool. I'm sure Apple will keep improving Siri to do the same thing. In the end, my guess is you'll simply see more competition in this area."
Ordinary people can sit back, watch these companies fight it out, and enjoy the fruits of their research. But there are risks. After ChatGPT, the pace of AI development may accelerate, and as competition intensifies, secrecy is likely to increase: as large technology companies spend more on AI research and development, they may become reluctant to reveal details of their models and how they work in published research papers.
Percy Liang, director of Stanford University's Center for Research on Foundation Models, says normal collaboration and scientific openness in AI research have declined over the past year. It matters that technology companies disclose the details of their models, training methods, and policies so that other researchers can reproduce the models and their outputs to validate the research. OpenAI does publish papers on its new models, but Liang says the company does not always provide enough detail to explain its approach.
Liang explained: "For the latest OpenAI models, we have no idea what is behind them. Until recently, we didn't even know their size, what data they were trained on, or whether the model we accessed and tested through the API was the same one. Since then, they have posted only a short blog post explaining this; it is a small step in the right direction, but there is still a lack of transparency."
In Liang's view, the final output of an AI model is largely shaped by the downstream fine-tuning and filtering that humans impose. Developers must do this to prevent the model from producing incorrect, toxic, or biased text or images for certain users. For example, if you asked an early version of OpenAI's GPT-3 to write a short story about a doctor, the doctor was always male. By the time ChatGPT was released this bias had disappeared, possibly fixed with additional training. But this is just one example; there are probably hundreds of such problems to solve.
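The downstream filtering Liang describes can be as simple as checking generated text against a blocklist and regenerating or refusing when it fails. The sketch below is a minimal illustration of that pattern only; the generator stub, blocklist terms, and retry policy are hypothetical stand-ins, not OpenAI's actual safety pipeline.

```python
# Minimal sketch of post-generation output filtering, one kind of
# downstream safeguard applied on top of a language model. The
# generator and blocklist here are placeholders for illustration.

BLOCKLIST = {"toxic-word", "slur-example"}  # placeholder terms

def generate(prompt: str) -> str:
    # Stand-in for a real language model call.
    return f"Model answer to: {prompt}"

def filtered_generate(prompt: str, max_retries: int = 3) -> str:
    """Return a generation that passes the blocklist check,
    falling back to a refusal message after max_retries."""
    for _ in range(max_retries):
        text = generate(prompt)
        if not any(bad in text.lower() for bad in BLOCKLIST):
            return text
    return "Sorry, I can't help with that."

print(filtered_generate("write a short story about a doctor"))
```

Real systems layer several such mechanisms (reinforcement learning from human feedback, learned classifiers, prompt-level rules), which is exactly why, as Liang notes, the values encoded in them deserve public documentation.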
"OpenAI likes to talk about value alignment, saying these models will be consistent with human values so they won't do anything bad," says Liang. "But whose values are we talking about? What are these values? OpenAI does not publish details about them, which is another sign of an opaque decision-making process."
While the ChatGPT beta has been open to the public, OpenAI has voluntarily fixed many flaws. The company even launched a "bias bounty" program in which users could report incorrect or biased output in exchange for up to $500 worth of API credits. Google, by contrast, has been reluctant to release AI products and features with such flaws. But the accelerating pace of AI could pressure Google and the other big technology companies to launch new models before all the problems are resolved.
In this respect, the general public may be the loser, as generative AI tools add more fake, toxic, and biased content to our online lives.