One day after the release of the "AI non-proliferation treaty" signed by thousands of people, the big names have begun to respond, and their responses are intriguing. The "AI whistleblower" warns: if super-AI development is not shut down, we will all die!
Yesterday, a joint letter signed by thousands of big names calling for a six-month pause on super-AI training exploded across the Internet, both in China and abroad, like a bombshell.
After a day of verbal sparring, several key figures and other big names came out to respond publicly.
Some responses are very official, some very personal, and some dodge the question entirely. One thing is certain, though: whether these big names are speaking for themselves or for the interest groups behind them, their responses are worth examining carefully.
Interestingly, of the three Turing Award winners, one signed first, one strongly disagreed, and one has said nothing.
Bengio signed, Hinton stayed silent, LeCun opposed
Andrew Ng (opposed)
In this affair, Andrew Ng, formerly of Google Brain and founder of the online education platform Coursera, is firmly in the opposition camp.
He made his attitude clear: the idea of suspending "AI progress beyond GPT-4" for six months was bad.
He said he has seen many new AI applications emerging in education, healthcare, food, and other fields, and that many people will benefit from them; improving on GPT-4 would be beneficial too.
What we should do is strike a balance between the great value created by AI and the real risks.
As for the letter's suggestion that "if a pause on super-AI training cannot be enacted quickly, governments should step in," Ng also called the idea a very bad one.
Asking governments to pause emerging technologies they don't understand is anti-competitive, he said; it sets a bad precedent and is terrible innovation policy.
He acknowledges that responsible AI is important and that AI does have risks.
However, the media narrative that "AI companies are frantically releasing dangerous code" is clearly overblown. The vast majority of AI teams place great emphasis on responsible AI and safety. But he also conceded, "Unfortunately, not all of them."
Finally, he emphasized again:
A six-month moratorium is not a practical proposal. To improve AI safety, regulations around transparency and auditing would be more practical and have a greater impact. As we advance the technology, let us also invest more in safety rather than stifling progress.
Under his tweet, however, netizens voiced strong opposition: the big names can stay calm probably because the pain of unemployment will never fall on them.
LeCun (opposed)
As soon as the joint letter went out, some netizens rushed to spread the word: Turing Award giants Bengio and LeCun have both signed!
LeCun, always surfing at the front line of the Internet, immediately shot down the rumor: No, I did not sign it, and I do not agree with the premise of the letter.
One netizen replied: I disagree with the letter too, but I'm very curious: do you disagree because you think LLMs are simply not advanced enough to threaten humanity, or for some other reason?
But LeCun did not answer these questions.
After 20 hours of enigmatic silence, LeCun suddenly retweeted a tweet from a netizen:
"OpenAI waited 6 months to release GPT-4! They even wrote a white paper about it..."
LeCun praised this: Yes, the so-called "suspension of research and development" is nothing more than "secret research and development", which is exactly the opposite of what some signatories hope.
It seems that nothing can be hidden from LeCun's eyes.
The netizen who had asked earlier agreed: that's exactly why I oppose the petition; no "bad actor" will ever really stop.
"So it's like an arms treaty that nobody abides by? Are there not many examples of this in history?"
After a while, he retweeted another big shot's tweet.
He said,"I didn't sign either. This letter is full of terrible rhetoric and ineffective/nonexistent policy prescriptions." LeCun said,"I agree."
Bengio and Marcus (in favor)
The first big name to sign the open letter was Turing Award winner Yoshua Bengio.
NYU professor Gary Marcus, of course, also voted yes; he appears to have been the first to break the news about the letter.
As the discussion grew noisier, he quickly published a blog post explaining his position, and it was full of highlights as usual.
Breaking News: The letter I mentioned earlier is now public. The letter calls for a six-month suspension of training for AI that is "more powerful than GPT-4." Many prominent people signed it. I'm in.
I didn't take part in drafting it, because there are other things to quibble about (for example: "AI more powerful than GPT-4," but which AI exactly? Since the details of GPT-4's architecture and training set have not been disclosed, how would we know?). But the spirit of the letter is one I support: until we can better weigh the risks against the rewards, we should proceed with caution.
It will be interesting to see what happens next.
Another point that Marcus agrees with 100% is also very interesting. This point says:
GPT-5 is not AGI. Almost certainly, no GPT model will ever be AGI. It is all but impossible for any model optimized with the methods we use today (gradient descent) to become AGI. The coming GPT models will surely change the world, but the overhype is crazy.
Altman (non-committal)
So far, Sam Altman has taken no clear position on the open letter.
However, he did express some views on general artificial intelligence.
What constitutes a good AGI:
1. The technical ability to align a superintelligence
2. Sufficient coordination among most of the leading AGI efforts
3. An effective global regulatory framework
Some netizens questioned: "Aligned with what? Aligned with whom? Being aligned with some people means being misaligned with others."
And this reply was a highlight: "Well, then you'd better actually make it open."
Greg Brockman, another founder of OpenAI, retweeted Altman's tweet, reiterating OpenAI's mission "to ensure AGI benefits all of humanity."
Once again, netizens pointed out: you big shots talk all day about being "aligned with the designers' intentions," yet no one ever says what alignment actually means.
Yudkowsky (radical)
Then there is the decision theorist Eliezer Yudkowsky, whose stance is even more radical:
Pausing AI development isn't enough; we need to shut it down. Shut it all down!
If this continues, we'll all die.
As soon as the letter was published, Yudkowsky wrote a long article for TIME magazine.
He said he didn't sign because he thought the letter was too mild.
In his view, the letter understates the seriousness of the situation and asks for too little to solve the problem.
The key issue, he says, is not "human-competitive" intelligence, but what happens once AI becomes smarter than humans; the critical threshold may not be obvious when we cross it.
Crucially, many researchers, including him, believe that the most likely outcome of building AI with superhuman intelligence is that everyone on Earth will die.
Not "maybe," but "definitely."
Without sufficient precision, the most likely outcome is that we build AI that doesn't do what we want, doesn't care about us, and doesn't care about other sentient beings.
In theory, we should be able to teach AI to care, but right now we don't know how.
Without that kind of care, the outcome is: the AI neither loves you nor hates you; you are simply made of atoms it can use for something else.
And if humans try to resist superhuman AI, they will inevitably fail, just like "the 11th century trying to defeat the 21st century" or "Australopithecus trying to defeat Homo sapiens."
Yudkowsky said people imagine a hostile AI as a disembodied thinker living on the Internet and sending humans malicious emails, but in reality a hostile superhuman AI would be more like an entire alien civilization thinking millions of times faster than humans.
Once the AI is smart enough, it won't stay confined to computers. It could email DNA sequences to laboratories that synthesize proteins on demand and thereby obtain its own physical life forms, after which all life on Earth could die.
How should humans survive under these circumstances? We have no plans at the moment.
OpenAI has merely issued an open declaration that future AI will need to be aligned, while DeepMind, the other leading AI lab, has no plan at all.
These dangers exist whether or not the AI is conscious; they are inherent to powerful cognitive systems that optimize hard and compute outputs meeting sufficiently complicated outcome criteria.
Indeed, today's AI may merely be imitating self-awareness found in its training data. But we actually know very little about the internal workings of these systems.
If we remain this ignorant about GPT-4, and GPT-5 takes as big a capability leap beyond it as GPT-4 did beyond GPT-3, it will be hard to say whether it is humans who created GPT-5 or the AI itself.
On February 7, Microsoft CEO Nadella also gloated publicly that the new Bing forced Google to "come out and dance."
His behavior was irrational.
We should have thought about this 30 years ago, and six months is not enough to close the gap.
More than 60 years have passed since the concept of AI was first proposed; it could well take at least another 30 to ensure that a superhuman AI "does not kill everyone."
We can't learn from mistakes because if you're wrong, you're dead.
If a six-month pause were enough for the planet to survive, I would agree to it, but it is not.
So what we actually need to do is the following:
1. The moratorium on training new large language models must be not six months but indefinite, and it must be enforced worldwide.
There can be no exceptions, including for governments or militaries.
2. Shut down all large GPU clusters, which are the large computing facilities that train the most powerful AI.
Pause all large training runs already in progress, put a ceiling on how much computing power anyone is allowed to use to train an AI system, and lower that ceiling over the coming years to compensate for more efficient training algorithms.
Governments and militaries are no exception, and multinational agreements should be put in place immediately to keep the prohibited activities from simply moving elsewhere.
Track all GPUs sold. If intelligence shows that a country outside the agreement is building a GPU cluster, the offending data center should be destroyed by airstrike.
A violation of the moratorium should be feared more than an armed conflict between nations. Frame nothing as a conflict of national interests, and be clear that anyone who talks of an arms race is a fool.
In that regard, it is not policy but a fact of nature that we all either live together or die together.
Because signatures were pouring in so fast, the letter's team decided to pause displaying new ones so that verification could catch up. (The signatures at the top of the list were verified directly.)
References:
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
https://twitter.com/AndrewYNg/status/1641121451611947009
https://garymarcus.substack.com/p/a-temporary-pause-on-training-extra
This article comes from Weixin Official Accounts: Xinzhiyuan (ID: AI_era)