

Worried that AI might drop a nuclear bomb on mankind, OpenAI is getting serious.


This time, science fiction is about to become reality.

Author | Lian Ran

Editor | Zheng Xuan

In the upcoming Hollywood sci-fi film The Creator, an artificial intelligence that once served humans detonates a nuclear bomb in Los Angeles.

In a twist more sci-fi than the movies, AI companies have begun to worry that such scenes could actually play out in the real world.

Recently, OpenAI said that, out of concern for the safety of its AI systems, it is setting up a dedicated team to address the "catastrophic risks" that frontier AI may pose, including nuclear threats.

In fact, OpenAI CEO Sam Altman has long worried that AI may pose an "extinction-level" threat to humanity, and he has called for stronger AI regulation on a number of occasions, including in testimony before the US Congress. A group of scientists, however, including Meta's chief AI scientist Yann LeCun, take a different view of AI regulation, arguing that AI's current capabilities are still limited and that premature regulation will not only benefit large companies but also stifle innovation.

This highlights the industry's continuing disagreement over how to regulate frontier AI. Premature regulation may constrain technological development, while a lack of regulation makes its risks hard to manage. How to strike a balance between a technology-first approach and preventive regulation, so that AI is both effective and safe and controllable, remains a difficult problem for the industry.

01. AI: frontier or danger?

Recently, OpenAI said in an update that the company is forming a new team, "Preparedness", to track, evaluate, and forecast the development of "frontier models" and to guard against so-called "catastrophic risks", including cybersecurity issues as well as chemical, nuclear, and biological threats.

Image source: OpenAI official website

The team will be led by Aleksander Madry, who is currently on leave from his post as director of MIT's Center for Deployable Machine Learning.

In addition, the team's tasks include developing and maintaining a "risk-informed development policy", which will spell out OpenAI's approach to building evaluation and monitoring tools for AI models, the company's risk-mitigation actions, and the governance structure that oversees the entire model-development process. The policy is intended to complement OpenAI's existing work on AI safety and to help maintain safety and alignment before and after deployment.

OpenAI says that managing the potentially catastrophic risks of frontier AI models requires answering key questions such as:

What is the risk of misuse of frontier AI models?

How can a robust framework be built for monitoring, evaluating, forecasting, and protecting against the dangerous capabilities of frontier AI models?

If frontier AI models are stolen, how might malicious actors take advantage of them?

OpenAI wrote in the update: "We believe that frontier AI models, which will exceed the capabilities of today's most advanced models, have the potential to benefit all of humanity. But they also pose increasingly severe risks."

Recently, OpenAI has repeatedly emphasized AI safety and has taken a series of actions at the corporate, public-opinion, and even political levels.

Earlier, on July 7, OpenAI announced the formation of a new team to explore ways to guide and control superintelligent AI, led by OpenAI co-founder and chief scientist Ilya Sutskever and alignment lead Jan Leike.

Sutskever and Leike have predicted that AI surpassing human intelligence could emerge within 10 years. Such AI, they say, will not necessarily be benevolent, so ways to control and restrict it need to be researched.

According to reports at the time, the team was given top priority and backed with 20 percent of the company's computing resources; its goal was to solve the core technical challenge of controlling superintelligent AI within the next four years.

To coincide with the launch of the Preparedness team, OpenAI also launched a challenge inviting outsiders to propose ways AI could be misused to cause harm in the real world. The top 10 submissions will receive a $25,000 prize and a position on the Preparedness team.

02. Worry that "AI may lead to human extinction"

OpenAI CEO Sam Altman has long worried that AI may lead to human extinction.

At a US congressional hearing on AI in May, Altman said that AI needs to be regulated, and that without strict regulatory standards for superintelligent AI there will be greater dangers over the next 20 years.

At the end of May, Altman signed a brief statement together with the CEOs of Google DeepMind and Anthropic and a number of prominent AI researchers, declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

At a San Francisco technology summit in June, Sam Altman said that when it comes to developing AI technology, "you shouldn't trust one company, and certainly not one person." He believes that the technology itself, its benefits, access to it, and its governance belong to all of humanity.

But others, represented by Elon Musk, accuse Altman of "calling for regulation" only to protect OpenAI's lead. Sam Altman responded at the time: "We believe there should be more regulation of large companies and proprietary models that exceed a certain high capability threshold, and less regulation of small startups and open-source models. We have seen the problems faced by countries that try to over-regulate technology, and that is not what we hope for."

He also said: "People are training models far larger than any model we have today. If certain capability thresholds are exceeded, I think there should be a certification process, along with external audits and safety testing. Moreover, such models should be reported to the government and should be subject to government oversight."

Contrary to Altman's view, on October 19, Meta's chief AI scientist Yann LeCun voiced his opposition to premature regulation of AI in an interview with the Financial Times.

Yann LeCun is a member of the US National Academy of Sciences, the National Academy of Engineering, and the French Academy of Sciences, and is known for his invention of convolutional networks and for his work on optical character recognition and computer vision using convolutional neural networks (CNNs).

In 2018, Yann LeCun received the Turing Award (often called the "Nobel Prize of computing") together with Yoshua Bengio and Geoffrey Hinton; the three are often referred to as the "godfathers of artificial intelligence" or the "godfathers of deep learning."

In the interview, LeCun took a broadly negative view of AI regulation, arguing that regulating AI models today is like regulating jet aircraft in 1925 (before they had been invented). Regulating AI prematurely, he argued, will only entrench the dominance of large technology companies and stifle competition.

"regulating the research and development of AI is incredibly counterproductive," Yann LeCun said, adding that the requirement to regulate AI stems from the "arrogance" or "sense of superiority" of leading technology companies that believe that only they can be trusted to develop AI safely. "they want to regulate under the guise of AI security. "

"but in fact, until we can design a system that can compete with cats in terms of learning ability, it is too early to debate the possible risks of AI," says Yann LeCun. The current generation of AI models is far from as powerful as some researchers claim. "they have no understanding of how the world works, they have neither the ability to plan nor the ability to really reason. "

In his view, OpenAI and Google DeepMind have been "overly optimistic" about the complexity of the problem; in reality, several "conceptual breakthroughs" will be needed before AI reaches human-level intelligence. And even then, AI could be kept in check by encoding "moral character" into such systems, just as laws are enacted today to regulate human behavior.

This article is from the WeChat official account Geek Park (ID: geekpark). Author: Lian Ran.
