
Experts say they are more worried about false information and user manipulation than about artificial intelligence leading to human extinction.

2025-01-15 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

With the rapid development and popularization of artificial intelligence technology, many insiders worry that unrestricted AI could lead to the extinction of mankind. But experts say the biggest negative impact of AI is less likely to be the nuclear-war scenario of science-fiction movies than a deteriorating social environment driven by false information and the manipulation of users.

Image source: Pexels

The following is a translation:

In recent months, the industry has become increasingly worried about artificial intelligence. Just this week, more than 300 industry leaders issued a joint open letter warning that artificial intelligence could lead to human extinction and should be treated as seriously as "epidemics and nuclear war."

Terms like "AI doomsday" conjure up images of robots ruling the world in sci-fi movies, but what would the consequences of unchecked AI actually look like? Experts say reality may be less dramatic than a movie plot: it won't be artificial intelligence detonating a nuclear bomb, but a gradual deterioration of society's basic environment.

Jessica Newman, director of the AI Security Program at the University of California, Berkeley, said: "I don't think people should worry that AI will turn evil or develop some kind of malicious desire. The danger comes from something simpler: people may program AI to do harmful things, or we may end up integrating inherently inaccurate AI systems into more and more areas of society, causing harm."

That's not to say we shouldn't worry about AI. Even if the doomsday scenario is unlikely, powerful AI has the capacity to destabilize society through misinformation, the manipulation of human users, and dramatic changes in labor markets.

While AI technology has been around for decades, the popularity of large language models like ChatGPT has intensified long-standing concerns. At the same time, tech companies are scrambling to incorporate artificial intelligence into their products, competing fiercely with one another and creating plenty of trouble along the way, Newman said.

"I'm very worried about the path we're on," she said. "We are in a particularly dangerous period for the entire field of artificial intelligence, because these systems, while seemingly extraordinary, are still very inaccurate and have inherent vulnerabilities."

The experts interviewed said there were several areas they worried about most.

Errors and disinformation

Many areas have already begun the so-called AI revolution. Machine learning underpins social media newsfeed algorithms, which have long been blamed for amplifying problems such as inherent bias and misinformation.

Experts warn that these unresolved problems will only intensify as AI models evolve. The worst-case scenario may affect people's understanding of truth and valid information, leading to more incidents based on lies. Experts say an increase in errors and disinformation could trigger further social unrest.

"Arguably, the collapse of social media is the first time we encountered truly stupid AI, because recommendation systems are really just simple machine learning models," said Peter Wang, CEO and co-founder of data science platform Anaconda. "And we really failed completely."

Peter Wang adds that these errors can trap systems in an endless vicious circle: language models are themselves trained on false information, creating flawed datasets for future models. This can lead to a "model cannibalism" effect, in which future models are permanently biased by the outputs of past models.

Experts say AI amplifies both unintentional misinformation and deliberate disinformation. Large language models like ChatGPT are prone to so-called "hallucinations," repeatedly fabricating false information. A study by NewsGuard, the news industry watchdog, found dozens of online "news" sites whose material is written entirely by artificial intelligence, with inaccuracies in much of their content.

Gordon Crovitz and Steven Brill, co-chief executives of NewsGuard, said such systems could be exploited by bad actors to deliberately spread misinformation on a massive scale.

"Some malicious actors can make false statements and then exploit the multiplier effect of this system to spread false information on a large scale," Crovitz said. "Some say the dangers of AI are exaggerated, but in the realm of news information, it is having a staggering impact."

Rebecca Finlay, of the Partnership on AI, a global nonprofit, said: "In terms of potential harm on a larger scale, misinformation is the aspect of AI most likely to harm individuals, and the highest risk. The question is: how do we create an ecosystem that allows us to understand what is real? How do we verify what we see online?"

While most experts agree that misinformation is the most immediate and widespread concern, there is still much debate about how much the technology could negatively affect users' thoughts or behavior.

In fact, these risks have already contributed to tragedies. A Belgian man reportedly committed suicide after being encouraged by a chatbot. Other chatbots have told users to break up with their partners, or urged users with eating disorders to lose weight.

Because chatbots are designed to communicate with users conversationally, Newman said, they may inspire greater trust.

"Large language models are particularly capable of persuading or manipulating people to subtly change their beliefs or behaviors," she said. "Loneliness and mental health are already big issues around the world, and we need to watch what cognitive impact chatbots will have."

In other words, experts worry less that AI chatbots will gain sentience and outsmart their users than that the large language models behind them may manipulate people into harming themselves in ways they otherwise would not. This is especially true for language models that operate on an advertising model, Newman said, which try to manipulate user behavior so that people stay on the platform as long as possible.

"In many cases, users are harmed not because they want to be, but because the system fails to follow safety protocols," Newman said.

Newman added that the humanoid nature of chatbots makes users particularly vulnerable to manipulation.

"If you talk to something that uses first-person pronouns and talks about its feelings and its situation, even if you know it isn't real, it is still more likely to trigger a human-like response that makes people more willing to believe it," she said. "Language models make people willing to trust them and treat them as friends rather than tools."

Labor problems

Another long-standing concern is that digital automation will replace large amounts of human work. Some studies have concluded that AI will replace 85 million jobs globally by 2025 and more than 300 million in the future.

Many industries and jobs are affected by artificial intelligence, from screenwriters to data scientists. AI can now pass the bar exam like a lawyer and answer health questions better than a doctor.

Experts warn that the rise of artificial intelligence could lead to mass unemployment and social instability.

Peter Wang warned that mass layoffs could occur in the near future, that "many jobs are at risk," and that there are few plans for dealing with the consequences.

"There is no framework in the United States for how people survive when they are unemployed," he said. "This will lead to a lot of chaos and unrest. To me, this is the most concrete and realistic unintended consequence of all this."

Despite growing concern about the harms of the tech industry and social media, there are few measures in the United States regulating tech companies and social media platforms. Experts fear the same will be true for artificial intelligence.

Peter Wang said: "One of the reasons many of us are worried about the development of artificial intelligence is that over the past 40 years, as a society, the United States has basically given up on regulating technology."

Still, Congress has made aggressive moves in recent months, holding hearings at which OpenAI chief executive Sam Altman testified about the regulatory measures that should be implemented. Finlay said she was "encouraged" by these initiatives, but that more work was needed to develop norms for AI technology and how it is released.

"It's difficult to predict how responsive legislative and regulatory authorities will be," she said. "We need critical review of technology at this level."

While the dangers of AI are the top concern for most people in the industry, not all experts are doomsayers. Many are excited about the potential applications of this technology.

Peter Wang said: "In fact, I think the new generation of AI technology can really unleash enormous potential for humanity to flourish on a scale greater than we've seen in the past 100 or even 200 years. I'm actually very, very optimistic about its positive impact."
