2025-02-27 Update From: SLTechnology News&Howtos
After the open letter calling for a pause on training AI systems more powerful than GPT-4, the previously silent Bengio published a long post explaining why he signed, while LeCun continued to insist that the current fears are simply unfounded and that there is no point in arguing with AI doomsayers.
The open letter has pushed the controversy over superintelligent AI into the public arena more fiercely than ever.
In the days since, the debate has raged on, and the two camps have grown increasingly polarized.
The number of signatures on the open letter has soared from just over 1,000 to nearly 16,000.
Facing the ensuing accusations and doubts, OpenAI hurried out a response in the early morning: it will resolutely protect AI safety and never "cut corners"!
Bengio, one of the three Turing Award-winning deep learning pioneers, wrote a long blog post yesterday explaining why he signed in favor of pausing the development of more powerful AI.
LeCun, another of the three, has repeatedly said that such a pause is unnecessary and that there is no point in arguing with AI doomsayers.
After days of sparring with netizens, he and Andrew Ng will appear in person tomorrow to explain to the public why they oppose a six-month pause on training more powerful AI.
LeCun: arguing with AI doomsayers is pointless
Today, LeCun tweeted his position once again: humans need machine intelligence.
"History has repeatedly shown that more intelligence brings more social and human well-being: better skills, literacy, education, creativity, culture, communication, and a freer flow of ideas."
"Human and machine intelligence are the driving forces of progress."
"Machine intelligence is a way to amplify human intelligence, just as machine tools amplify human strength."
In his view, worrying that AI will exterminate the human race is absurd.
"The only reason people obsess over AI risk is the myth that the moment you switch on a superintelligent system, humanity is doomed. That is absurd stupidity, rooted in a complete misunderstanding of how AI works."
Moreover, today's discussion of AI safety, like a discussion of airliner safety in 1890, is pure speculation.
"Until we have a basic design for, and demonstrations of, AI systems that are reliable at the level of human intelligence, it is premature to argue about their risks and safety mechanisms."
In the comments, some netizens asked him: have you listened to Hinton's interview?
"Hinton and I have been friends for 37 years, and we don't disagree on much," LeCun replied.
"He said an AI takeover is 'not inconceivable,' and I don't dispute that. But I believe the probability of such a thing is extremely low, and that it would be easy to prevent."
LeCun then shared posts from debates he had held years ago with Bengio, Russell, Zador and other heavyweights, offering them as examples of what a meaningful debate looks like.
Debating AI ethics and safety with extreme AI doomsayers, by contrast, is as pointless as debating evolution with creationists.
Some netizens thanked LeCun for sharing the debates, but argued that branding people "AI doomsday prophets" simply because they do not believe profit-driven companies will put the safety of humanity and the planet above profits is arbitrary and unfair.
LeCun also retweeted a paper, commenting: "For fear of unrest, the Ottoman Empire restricted the spread of printed books. As a result, it missed the Enlightenment, lost its scientific and technological leadership, and eventually its economic and military influence."
In his view, blocking AI from taking over our highly skilled jobs would be a mistake of the same kind.
Bengio: we need to press the pause button
Meanwhile Yoshua Bengio, the one of the three Turing Award winners who signed the open letter, broke several days of silence to explain for the first time why he joined FLI's call:
"A year ago I probably would not have signed such a letter. It is precisely because of the unexpected acceleration in AI development that we need to take a step back, and my views on these issues have changed."
Bengio then published a long article on his website detailing his reasons for signing: technological progress needs to slow down so that safety can be ensured and the collective well-being protected.
Bengio believes we have crossed a critical threshold: machines can now converse with us and even pass themselves off as human.
Moreover, since the launch of ChatGPT it has become clear that less cautious, less disciplined players enjoy a competitive advantage; if so, lowering one's standards of prudence and ethical oversight becomes an easier way to get ahead.
Facing fiercer commercial competition, OpenAI too may rush to develop these enormous AI systems, leaving behind the good habits of transparency and open science it cultivated over the past decade of AI research.
Bengio said that signing the letter would remind academia and industry that we must take the time to better understand these systems and to develop the necessary frameworks, at the national and international levels, to strengthen public protections.
Even if the letter's impact turns out to be modest, it will at least spark a society-wide discussion of the collective choices we will have to make in the coming years, including what we want to do with the powerful tools we are building.
As for short- and medium-term risks, it is easy to foresee public opinion being manipulated for political ends, above all through disinformation. The long-term risk, that systems drift from the goals their programmers set and cause harm to humans, is far harder to predict.
But the open letter is not meant to halt all AI research, nor does it imply that GPT-4 will become an autonomous AI and threaten humanity. The danger, rather, is that people with bad intentions, or simply unaware of the consequences of their actions, may use these tools in the coming years in ways that endanger humanity's future.
What we need now is to regulate these systems, improve the transparency and oversight of AI systems, and protect society. The risks and uncertainties are already so serious that our governance mechanisms must accelerate as well.
Society needs time to adapt to change, laws need time to pass, and regulatory frameworks need time to take effect. Bengio believes it is important to raise awareness quickly and to bring the issue into far more public debate.
Fundamentally, though, Bengio is optimistic about technology, which he believes will help us overcome the great challenges facing humanity. To meet those challenges, however, what is needed now is serious thought about how to adjust our societies, or even reshape them entirely.
"Should we give up? Of course not. There are always better ways to reduce harm to the future, and every step in the right direction helps."
Gary Marcus of New York University also voiced his support, even though the two had disagreed in the past.
Musk: the other side of the trolley problem
Musk, a backer of the open letter, said that those who oppose the pause amount to saying:
"Diverting to the other track is too roundabout and would slow AI down, so you might as well just run them over."
Some netizens spotted the catch: "If you stopped putting powerful AI in every Tesla, I would take your worries more seriously."
"Why over-regulate certain AI models rather than actual applications? Language models cannot say or choose what they 'want,' just as Tesla's FSD system cannot decide where to drive."
LeCun retweeted that post approvingly: "Yes, we should regulate applications, not research and development."
OpenAI hastens to clarify
Seeing AI safety become a target of criticism, OpenAI stepped forward to stress that it, too, takes AI safety very seriously.
In "Our approach to AI safety," a long post released yesterday, OpenAI lays out how it builds, deploys, and uses AI systems safely.
OpenAI says that powerful AI systems should undergo rigorous safety evaluation.
GPT-4 was tested rigorously before release: internal staff spent six months making the model safer and better aligned prior to launch.
In addition, external experts were brought in for feedback, and techniques such as reinforcement learning from human feedback (RLHF) were used to improve the model's behavior.
The post also lists other ways OpenAI works to keep AI safe:
- Learning from real-world use to improve safeguards
- Protecting children
- Respecting privacy
- Improving factual accuracy
Finally, OpenAI calls for "continued research and engagement," which reads as a direct response to the recent calls to pause development of more powerful AI:
"We believe that the practical way to address AI safety is to devote more time and resources to researching effective mitigations and alignment techniques, and to test them against real-world abuse."
Is a six-month pause long enough?
According to one survey, 69% of American adults strongly or somewhat support a six-month moratorium on development, while others consider it futile.
And those who worry that AI will destroy humanity and those who do not are split roughly half and half.
Reference:
https://yoshuabengio.org/2023/04/05/slowing-down-development-of-ai-systems-passing-the-turing-test/
https://twitter.com/GaryMarcus/status/1643754841221783553
This article comes from the WeChat official account Xin Zhiyuan (ID: AI_era).
© 2024 shulou.com SLNews company. All rights reserved.