

Google panics: papers now need approval before publication and products take priority. Has ChatGPT come out on top in the field of artificial intelligence?

2025-04-09 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 11/24 Report --

OpenAI has caught Google flat-footed. Is the open-source era of AI coming to an end?

The rapid progress of AI over the past decade has been driven by cooperation among universities, companies, and individual developers, filling the field with open-source code, data, and tutorials.

Google has long been a leader in the AI industry, publishing papers across natural language processing, computer vision, reinforcement learning, and other areas, and contributing many foundational models and architectures, such as Transformer, BERT, and PaLM.

But OpenAI broke the rules of the game. It not only used the Transformer to build ChatGPT, but also exploited the advantages of a startup: less constrained by law and public opinion, no obligation to disclose training data, model size, or architecture, and it even poached many employees from Google and other big companies. Google is losing ground.

Facing an OpenAI that refuses to play by the rules, Google could only take the hits.

According to anonymous sources, Jeff Dean, the head of Google AI, told an internal meeting in February this year:

Google will take advantage of its own AI discoveries, sharing papers only after the lab work has been turned into products.

Google, now on the defensive, may be hoping to shake off rival AI companies and better protect its core search business and share price.

But if AI turns toward monopoly and these big companies abandon the open-source spirit, will the field of artificial intelligence still produce miracles?

Google lost ground only because it was "too responsible"

For a company with billions of users like Google, even a small-scale experiment can affect millions of people and draw a storm of public criticism. This is why Google has long been reluctant to launch chatbots and has held to the bottom line of "responsible AI".

In 2015, Google Photos' image-classification feature misclassified a Black person as "Gorillas". Google immediately fell into a public relations crisis, quickly apologized, and promised to fix it.

Google's fix was blunt: it simply deleted the "Gorillas" label, and even removed categories such as poodle, chimpanzee, and monkey.

The result is that the image classifier no longer labels Black people as gorillas, but it cannot recognize real gorillas either.

Although Google has invested heavily in AI technology for many years, neural networks remain unexplainable black boxes: Google cannot fully guarantee a model's controllability once it goes into production, and safety testing takes longer. As a result, it lost the first-mover advantage.

In April, Google CEO Sundar Pichai still made it clear on "60 Minutes" that people need to be cautious about artificial intelligence, which can do great harm to society, for example by faking images and videos.

If Google chooses to be "less responsible", it is bound to attract more attention from regulators, artificial intelligence researchers and business leaders.

But Mustafa Suleyman, co-founder of DeepMind, says the reason is not that they are too cautious, but that they are unwilling to disrupt existing revenue streams and business models; they only begin to wake up when a real external threat appears.

And this threat has already come.

There is not much time left for Google

Since 2010, Google has been buying artificial intelligence startups and gradually integrating their technologies into its own products.

In 2013, Google recruited deep-learning pioneer and Turing Award winner Geoffrey Hinton (who has only just left). A year later, Google bought the startup DeepMind for $625 million.

Shortly after being appointed Google CEO, Pichai announced "AI first" as the company's basic strategy, aiming to integrate artificial intelligence technology into all of its products.

After years of sustained effort, Google's AI research teams have made many breakthroughs, but at the same time some smaller startups have also achieved results in the field.

OpenAI was founded to counterbalance large technology companies' monopolization of AI through acquisitions. With a small company's advantages, it faces less censorship and regulation and is more willing to put AI models into ordinary users' hands quickly.

As a result, the AI arms race is intensifying without supervision, and the "sense of responsibility" of large companies may gradually erode under competitive pressure.

Google executives have also had to embrace the prospect of artificial general intelligence advocated by DeepMind: AI that matches, and eventually surpasses, human intelligence.

On April 21, Pichai announced that Google Brain, which had been run by Jeff Dean, would merge with the previously acquired DeepMind under Demis Hassabis, DeepMind's co-founder and CEO, to accelerate Google's progress in artificial intelligence.

Hassabis said that within just a few years, AI may come closer to human-level intelligence than most artificial intelligence experts predict.

Google enters "war readiness"

According to interviews with 11 current and former Google employees by foreign media, Google has overhauled its artificial intelligence business in recent months. The main goals: launch products quickly, lower the bar for releasing experimental AI tools to small groups of users, and develop a new set of evaluation metrics and priorities in areas such as fairness.

Pichai stressed that Google's push to speed up research and development does not mean cutting corners.

"To ensure that general artificial intelligence is developed responsibly, we are creating a new unit dedicated to building more capable, safer, and more responsible systems."

Brian Kihoon Lee, a former Google Brain AI researcher let go in the mass layoffs in January, described the shift as Google moving from "peacetime" to "wartime": once competition turns fierce, everything changes. In wartime, competitors' market-share growth matters just as much.

Google established an internal governance structure and a comprehensive review process in 2018 and has conducted hundreds of reviews across product areas since then, said Brian Gabriel, a Google spokesman. Google will continue to apply that process to externally facing AI-based technologies; responsible AI development remains the company's top priority.

But the shifting criteria for deciding when AI products are ready to ship have unsettled employees. For example, after deciding to launch Bard, Google lowered the test-score threshold for experimental AI products, drawing opposition from internal staff.

In early 2023, Google announced around 20 policies for Bard, drawn up by two AI teams (Responsible Innovation and Responsible AI), which employees generally found clear and well thought out.

Other employees felt these standards were more of a show for the outside world; publishing the training data or open-sourcing the model would let users understand its capabilities far more clearly.

Publishing papers now requires approval

For employees, Google's decision to speed up research and development is a mixed blessing.

On the positive side, employees in non-research positions are generally optimistic that the decision will help Google regain the upper hand.

But for researchers, needing extra approval before publishing AI research could mean missing the first opportunities in the fast-moving field of generative AI.

There are also concerns that Google may quietly suppress controversial papers, such as the 2020 study on the dangers of large language models co-authored by Timnit Gebru and Margaret Mitchell, then leaders of Google's ethical AI team.

Over the past year, many top AI researchers have been poached by startups, partly because Google gave researchers' work too little support and too much scrutiny.

According to a former Google researcher, getting a paper approved may require repeated review by senior researchers. Google has promised many researchers that they could continue to take part in broader research topics in the field; publication restrictions may push yet another group of researchers out.

Should AI research and development slow down?

Google's acceleration comes just as discordant voices are calling on AI makers to slow down, arguing that the technology has already exceeded its inventors' expectations.

Deep-learning pioneer Geoffrey Hinton has left Google, warning that superintelligent AI is in danger of escaping human control.

Consumers are also beginning to understand the risks and limitations of large language models, such as AI's tendency to fabricate facts, something ChatGPT's fine-print disclaimer does not clearly state.

Downstream applications built on ChatGPT have exposed more problems. In a study conducted by Stanford professor Percy Liang, for example, only about 70 percent of the citations provided by New Bing were correct.

On May 4, the White House invited the chief executives of Google, OpenAI and Microsoft to meet to discuss public concerns about AI technology and how to regulate AI.

U.S. President Joe Biden made clear in the invitation that companies must ensure their AI products are safe before opening them to the public.

Reference:

https://www.washingtonpost.com/technology/2023/05/04/google-ai-stop-sharing-research

This article comes from the WeChat official account Xin Zhiyuan (ID: AI_era).
