
OpenAI predicts super-intelligent AI will emerge within a decade and is marshaling computing power to protect humanity.


Shulou (Shulou.com) 11/24 Report --

Beijing, July 6 (Beijing time) -- ChatGPT developer OpenAI said on Wednesday local time that it plans to invest more resources and set up a new research team to study how to keep AI safe for humans, with the ultimate goal of using AI to supervise AI.

OpenAI co-founder Ilya Sutskever and AI alignment lead Jan Leike wrote on the company's official blog: "The vast power of superintelligence could lead to the disempowerment of humanity or even human extinction. Currently, we do not have a solution for steering or controlling a potentially superintelligent AI and preventing it from going rogue."

The post predicts that super-intelligent AI, that is, systems smarter than humans, could emerge within the next decade. Humans will need more powerful techniques than exist today to control super-intelligent AI, which is why breakthroughs are needed in "AI alignment research" to ensure that AI remains beneficial to humans. AI alignment is the central problem of AI control: it requires that an AI's goals be consistent with human values and intentions.

OpenAI will devote 20 per cent of its computing power to solving this problem over the next four years, the authors write. In addition, the company will form a new "Superalignment" team to organize the work.

The team's goal is to build a roughly "human-level" AI-driven alignment researcher and then scale it with vast amounts of computing power. OpenAI said this means it will first train AI systems with human feedback, then train AI systems to assist humans in evaluating other AI systems, and finally train AI systems to do alignment research themselves.
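As an illustration of the first of those steps, training an AI system with human feedback, the following is a minimal, hypothetical sketch of learning a reward model from pairwise human preference labels in PyTorch. The tiny network, the random dummy data standing in for labeled responses, and the hyperparameters are all assumptions made for clarity; OpenAI has not published the Superalignment team's actual implementation.

# Illustrative sketch only: learn a reward model from pairwise human preferences.
# None of the specifics below come from OpenAI; they are placeholder choices.
import torch
import torch.nn as nn

torch.manual_seed(0)

class RewardModel(nn.Module):
    """Scores a response embedding; higher means the labelers preferred it."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# Dummy "human feedback": pairs of response embeddings where the first
# element of each pair was the one the human labeler preferred.
dim, n_pairs = 32, 256
preferred = torch.randn(n_pairs, dim) + 0.5   # stand-in for chosen responses
rejected = torch.randn(n_pairs, dim) - 0.5    # stand-in for rejected responses

model = RewardModel(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    opt.zero_grad()
    # Pairwise (Bradley-Terry style) loss: push preferred scores above rejected ones.
    loss = -torch.nn.functional.logsigmoid(model(preferred) - model(rejected)).mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    print(f"mean reward gap after training: {(model(preferred) - model(rejected)).mean():.3f}")

In a full human-feedback pipeline, a reward model of this kind would then be used to fine-tune the AI system itself, for example with a policy-optimization step; that later stage is omitted here.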

However, AI safety advocate Connor Leahy said OpenAI's plan is fundamentally flawed, because an initial human-level AI could run out of control and wreak havoc before it could be compelled to solve AI safety problems. "You have to solve alignment before you build human-level intelligence, otherwise by default you won't control it," he said in an interview. "I personally don't think this is a particularly good or safe plan."

The potential dangers of AI have long been a major concern for AI researchers and the public alike. In April, a group of AI industry leaders and experts signed an open letter calling for a six-month moratorium on developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. An Ipsos survey in May this year found that more than two-thirds of Americans worry about the possible negative effects of AI, and 61% believe AI could threaten human civilization.
