

Musk and thousands of others call for a moratorium on developing more powerful AI; cited artificial intelligence experts call it "insanity"

2025-02-28 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

On the morning of April 3, Beijing time, it was reported that an open letter co-signed by Elon Musk calling for an emergency pause in artificial intelligence research cited the work of four artificial intelligence experts, and those experts have expressed concern about how their research was used.

As of Friday local time, the open letter, dated March 22, had received more than 1,800 signatures. The letter called for a six-month pause in the development of systems "more powerful" than GPT-4, made by OpenAI, the Microsoft-backed artificial intelligence research company. GPT-4 can conduct human-like conversations, compose music and summarize lengthy documents.

Since ChatGPT, the predecessor of GPT-4, was launched last year, competitors have been racing to launch similar products.

Artificial intelligence systems with "human-competitive intelligence" pose profound risks to humanity, the letter said, citing 12 studies by experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind.

Since the letter's publication, civil society groups in the United States and the European Union have pressed lawmakers to rein in OpenAI's research. OpenAI did not immediately respond to a request for comment.

The open letter was issued by the Future of Life Institute, which is funded primarily by the Musk Foundation. Critics accuse the institute of prioritizing imagined apocalyptic scenarios over more immediate concerns about artificial intelligence, such as racist or sexist biases being programmed into machines.

The studies cited in the letter include the well-known paper on the dangers of "Stochastic Parrots", co-authored by Margaret Mitchell, who previously led artificial intelligence ethics research at Google and is now chief ethics scientist at the artificial intelligence company Hugging Face.

But Mitchell herself criticized the letter, saying it was unclear what would count as "more powerful than GPT-4". "The letter treats many questionable ideas as established facts and sets out a series of artificial intelligence priorities and narratives that benefit the supporters of the Future of Life Institute," she said. "Right now, ignoring active harms is a privilege we do not have."

Mitchell's co-authors Timnit Gebru and Emily M. Bender also criticized the letter on Twitter, calling some of its references to their work "insanity".

Max Tegmark, president of the Future of Life Institute, said the campaign was not an attempt to hamper OpenAI's corporate advantage.

"this is hilarious. I saw someone say, 'Elon Musk is trying to slow down competition,'" he said, adding that Musk was not involved in drafting the letter. "this is not about a company."

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with the letter's mention of her work. Last year she co-authored a research paper arguing that the already widespread use of artificial intelligence poses serious risks.

Her research argues that the use of artificial intelligence systems today can influence decision-making on climate change, nuclear war and other existential threats.

"artificial intelligence does not need to reach the human level to exacerbate these risks," she says. "

"some non-survival risks are also really important, but these risks are not taken seriously."

Asked to respond to those criticisms, Tegmark, the president of the Future of Life Institute, said both the short-term and long-term risks of artificial intelligence should be taken seriously.

"if we quote someone, it just means that we think they agree with it. That doesn't mean they approve of the letter, or that we agree with all their ideas," he said.

Another expert cited in the letter is Dan Hendrycks, director of the California-based Center for AI Safety. He stood by the letter and said it was sensible to consider black swan events, meaning events that appear unlikely but would have devastating consequences.

The letter also warned that generative artificial intelligence tools could be used to fill the Internet with "propaganda and lies".

Dori-Hacohen said it was "quite rich" for Musk to sign the letter, citing an increase in misinformation on Twitter after Musk acquired the platform, as documented by civil society groups such as Common Cause.

Twitter will soon introduce a new fee structure for access to its research data, which could hamper research on the subject.

"this directly affects my lab work, as well as the work of other people who study errors and false information," Dory-Haakorn said. "We are bound."

Musk and Twitter did not immediately respond to requests for comment.




© 2024 shulou.com SLNews company. All rights reserved.
