Musk's open letter calling for a pause on AI research draws fire over fake signatures and distorted research


On March 31, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and thousands of other AI researchers signed an open letter calling for a moratorium on more advanced AI technology. The letter, however, has been questioned by many experts and even by some of its signatories, with critics accusing it of fueling AI hype, carrying fake signatures, and distorting the research it cites.

The letter was written by the Future of Life Institute, a nonprofit whose mission is to "reduce global catastrophic and existential risk from powerful technologies." Specifically, the institute focuses on mitigating long-term "existential" risks to humanity, such as superintelligent AI. Musk is a supporter of the group and donated $10 million to it in 2015.

"A more powerful AI system should be developed only if we are convinced that the impact of AI is positive and the risks are manageable," the letter wrote. Therefore, we call on all AI labs to immediately suspend the training of AI systems that are more powerful than GPT-4 for at least 6 months. AI Labs and independent experts should use this time to jointly develop and implement a set of shared security protocols designed and developed by advanced AI, which are strictly audited and supervised by independent external experts. "

The letter also clarifies: "This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger, unpredictable black-box models with emergent capabilities." That is a reference to the AI race between large technology companies such as Microsoft and Google, which have released many new AI products over the past year.

Other prominent signatories include Emad Mostaque, chief executive of image-generator startup Stability AI; writer and historian Yuval Noah Harari; and Pinterest co-founder Evan Sharp. There are also signatures from employees of companies in the AI race, including Google's sister company DeepMind and Microsoft. No one from OpenAI, which developed and commercialized the GPT series of models, signed the letter.

Despite a verification process, the letter initially carried many false signatures, including ones impersonating OpenAI chief executive Sam Altman and Meta chief AI scientist Yann LeCun. The Future of Life Institute has since cleaned up the list and paused the display of additional signatures while it verifies each one.

However, the letter's release caused an uproar and drew scrutiny from many AI researchers, including some of the signatories themselves. Some signatories have walked back their positions, some celebrity signatures have proved to be fake, and a growing number of AI researchers and experts have publicly objected to the letter's framing and proposed approach.

Gary Marcus, a professor of psychology and neuroscience at New York University, said: "The letter isn't perfect, but the spirit is right." Meanwhile, Stability AI CEO Mostaque said on Twitter that OpenAI is a truly "open" AI company, "so I don't think a six-month pause on training is the best idea, and I don't agree with many of the points in the letter, but there are some interesting parts in it."

AI experts criticized the letter for furthering "AI hype" while failing to list, or call for, any concrete action against the AI harms that exist today. Some argued that it promotes a long-standing but somewhat unrealistic long-termist perspective, one that has been criticized as harmful and anti-democratic because it flatters the ultra-rich and gives them moral cover for ethically dubious behavior.

Emily M. Bender, a professor of linguistics at the University of Washington and a co-author of the first paper the letter cites, wrote on Twitter that the letter is "dripping with AI hype" and misuses her research. The letter states: "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research." Bender countered that her research specifically points to the threats posed by current large language models and their use in oppressive systems, which is far more concrete and urgent than the hypothetical future AI threat the open letter imagines.

Bender continued: "We wrote a whole paper in late 2020 pointing out the problems with this headlong rush to ever-larger language models without considering the risks. But the risks and harms have never been about 'AI being too powerful.' Instead, they are about the concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem."

Sasha Luccioni, a research scientist at AI startup Hugging Face, said in an interview: "The open letter is essentially misleading: it draws everyone's attention to the hypothesized power and harms of large language models and proposes very vague, largely ineffective remedies, instead of focusing on those harms and addressing them right now. For example, demanding more transparency around LLM training data and capabilities, or legislation governing where and when they can be used."

Arvind Narayanan, an associate professor of computer science at Princeton University, said the open letter is full of AI hype and "makes it harder to tackle real, already-occurring AI harms."

The open letter asks several questions: "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"

Narayanan called these questions "nonsense" and "ridiculous." Whether computers will replace humans and take over civilization is a very distant question, part of a long-termist mindset that distracts us from present problems. After all, AI is already being woven into people's jobs and reducing the demand for certain professions, not acting as a "nonhuman mind" that renders us "obsolete."

Narayanan added: "I think these are legitimate long-term concerns, but they have been invoked repeatedly to divert attention from present harms, including very real information security and safety risks! And addressing those security risks will require good-faith cooperation. Unfortunately, the hype in this letter, including its exaggeration of AI's capabilities and of existential risk, may lead to AI models being locked down even further, making it harder to address those risks."

However, many signatories defended the open letter. Yoshua Bengio, founder and scientific director of the research institute Mila, said the six-month moratorium is necessary so that governance bodies, including governments, can understand, audit, and verify AI systems and ensure they are safe for the public. He added that there is a dangerous concentration of power, that AI tools could destabilize democracy, and that "there is a conflict between democratic values and the way these tools are being developed."

The worst-case scenario is that humanity gradually loses control of civilization, said Max Tegmark, a physics professor at MIT's NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) and president of the Future of Life Institute. The risk, he said, is that control has already been ceded to "a group of unelected, powerful people at technology companies" who hold too much influence.

These remarks gestured at a broad future and a fear of losing control of civilization, but offered no concrete measures beyond the call for a six-month moratorium.

Timnit Gebru, a computer scientist and founder of the Distributed AI Research Institute, tweeted that it is ironic for the letter to call for a pause on training models more powerful than GPT-4 while failing to address the many concerns raised about GPT-4 itself.
