
Viewpoint: don't be intimidated by AI's intelligence; the real danger is that it is overrated and abused.

2025-01-15 Update From: SLTechnology News & Howtos


Shulou (Shulou.com) 11/24 Report --

April 5 (Xinhua) -- Over the past six months, powerful new artificial intelligence (AI) tools have been spreading at an alarming rate: chatbots that can hold human-like conversations, coding bots that write software automatically, and image generators that conjure pictures out of nothing. So-called generative artificial intelligence (AIGC) has suddenly become omnipresent and ever more powerful.

Image source: Pexels. But last week, a backlash against the AI boom began to emerge. Thousands of technologists and scholars, led by Elon Musk, chief executive of Tesla and Twitter, signed an open letter warning of "profound risks to humanity" and calling for a six-month moratorium on the development of large AI language models.

At the same time, an AI research non-profit filed a complaint asking the Federal Trade Commission (FTC) to investigate OpenAI, the creator of ChatGPT, and to halt further commercial releases of its GPT-4 software.

Italian regulators have also moved to block ChatGPT entirely, on the grounds that it violates data privacy.

Perhaps it is understandable that there are calls for suspending or slowing down AI research. AI applications, which seemed incredible or even unfathomable a few years ago, are now rapidly infiltrating social media, schools, workplaces, and even politics. In the face of this dazzling change, some people have issued a pessimistic prediction that AI may eventually lead to the demise of human civilization.

The good news is that the hype and fear of the omnipotent AI may have been exaggerated. Although Google's Bard and Microsoft's Bing are impressive, they are still a long way from becoming Skynet.

The bad news is that concerns about the rapid evolution of AI have already become a reality. This is not because AI will become smarter than humans, but because humans are already using AI to suppress, exploit, and deceive one another in ways existing institutions are not prepared for. Moreover, the more powerful people believe AI to be, the more likely individuals and companies are to delegate to it tasks it cannot actually handle.

Setting aside those pessimistic doomsday predictions, two reports released last week offer a preliminary picture of AI's impact on the foreseeable future. The first, from the US investment bank Goldman Sachs, assesses the impact of AI on the economy and the labour market; the second, published by Europol, focuses on the potential criminal abuse of AI.

From an economic point of view, the latest AI trend is mainly to automate tasks that once could only be accomplished by human beings. Like power looms, mechanized assembly lines and ATMs, AIGC promises to do some types of work in a cheaper and more efficient way than humans.

But cheaper and more efficient does not always mean better, as anyone who has dealt with grocery store self-checkout machines, automated phone systems, or customer service chatbots can attest. Unlike previous waves of automation, AIGC can imitate humans and, in some cases, even impersonate them. This can enable widespread deception, or tempt employers into believing that AI can replace human workers even when it cannot.

A Goldman Sachs research analysis estimates that AIGC will change about 300 million jobs around the world, causing tens of millions of job losses but also driving significant economic growth. However, Goldman's estimates are not necessarily accurate; after all, the bank has a history of mispredictions. In 2016, it forecast that virtual reality headsets might become as ubiquitous as smartphones.

The most interesting part of Goldman's AI analysis is its breakdown by industry: which jobs are likely to be augmented by language models, and which may be replaced outright. Goldman's researchers rated white-collar tasks on a difficulty scale of 1 to 7, with "reviewing forms for completeness" at level 1, tasks that could plausibly be automated at level 4, and "ruling on a complex motion in court" at level 6. They conclude that administrative support and paralegal work are most likely to be replaced by AI, while professions such as management and software development are more likely to become more productive.

The report optimistically predicts that this generation of AI could eventually increase global GDP by 7 per cent as companies benefit from employees with AI skills. But Goldman Sachs also expects that along the way, about 7 per cent of Americans will see their occupations eliminated, and many more will have to learn the technology to stay employed. In other words, even if AIGC's overall impact is positive, the result could still be mass unemployment, with robots gradually replacing humans in offices and in daily life.

At the same time, many companies are so eager to take shortcuts that they automate tasks AI cannot yet handle, as when the technology site CNET automatically generated finance articles riddled with errors. And when AI goes wrong, marginalized groups tend to bear the brunt. Despite the excitement around ChatGPT and similar products, the developers of today's large language models have still not solved the problem of dataset bias, which has embedded racial bias into AI applications such as facial recognition and crime risk assessment algorithms. Just last week, another Black man was wrongfully arrested because of a facial recognition mismatch.

More worryingly, AIGC may in some cases be used to inflict deliberate harm. The Europol report details how AIGC can help people commit crimes such as fraud and cyber attacks.

For example, chatbots can generate text in specific styles and even imitate particular people's voices, which could make them a powerful tool for phishing scams. Language models' facility with writing software scripts could democratize the creation of malicious code. Their ability to provide personalized, contextual, step-by-step advice could turn them into an all-purpose guide for criminals who want to break into a home, blackmail someone, or build a pipe bomb. We have already seen synthetic images spread false narratives on social media, reviving concerns that deepfakes could distort election campaigns.

Notably, what makes language models vulnerable to abuse is not only their broad capabilities but also their fundamental gaps in understanding. Today's leading chatbots are trained to refuse when they detect attempts to use them for malicious purposes. But as Europol points out, "safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing." As a long list of documented tricks and loopholes demonstrates, self-awareness remains one of this technology's weak points.

Given all these risks, one need not fear an apocalyptic scenario to see the appeal of slowing the pace of AIGC and giving society more time to adapt. OpenAI itself was founded as a non-profit on the premise that AI could be built more responsibly, free of pressure to meet quarterly earnings targets.

But OpenAI is now leading a tight race, the tech giants are laying off their AI ethicists, and in any case the horse may have already left the barn. As the academic AI experts Sayash Kapoor and Arvind Narayanan have pointed out, the main driver of language model innovation today is not ever-larger models but the integration of existing models into a wide range of applications and tools. They argue that regulators should view AI tools through the lens of product safety and consumer protection, rather than trying to contain AI the way we contain nuclear weapons.

Perhaps the most important thing in the short term is for technologists, business leaders, and regulators to set aside the panic and the hype and develop a deeper understanding of AIGC's strengths and weaknesses, so they can be more cautious in adopting it. Whatever happens, AI's impact will be disruptive. But overestimating its capabilities will make that impact more harmful, not less.
