Beijing, April 16 (Xinhua) -- Although the mainstream applications of artificial intelligence (AI) technology are exciting, some science-fiction scenarios could become nightmares if the technology is left unchecked.
AI could fuel risks such as a weaponization race. In a recent paper, Dan Hendrycks, an AI safety expert and director of the Center for AI Safety, warned that the unfettered development of increasingly intelligent AI carries a number of speculative risks, that is, hazards that remain hypothetical today but could cause serious harm if they materialize.
Given that AI is still at an early stage of development, Hendrycks advocates building safety features into the way AI systems operate from the outset.
Here are the eight risks he listed in his paper:
1. Weaponization: AI's ability to automate cyberattacks, and even to control nuclear silos, could make it dangerous. According to the paper, an automated retaliation system used by one country "could rapidly escalate and trigger a large-scale war." And if one country invests in weaponized AI systems, others have a stronger incentive to do the same.
2. Enfeeblement: as AI makes certain tasks cheaper and more efficient, more and more companies will adopt the technology, eliminating some jobs from the labor market. As human skills become obsolete, humans may become economically irrelevant.
3. Eroded epistemics: the term refers to AI's ability to power large-scale disinformation campaigns aimed at swaying public opinion toward a particular belief system or worldview.
4. Proxy gaming: this happens when an AI-driven system is given a goal that runs counter to human values. Such goals need not sound evil to harm human well-being: an AI system may aim to maximize viewing time, which is not necessarily best for humanity as a whole (a toy sketch of this dynamic appears after the list).
5. Value lock-in: as AI systems become more powerful and complex, the number of stakeholders who control them shrinks, leaving large populations disenfranchised. Hendrycks describes a scenario in which a government could impose "pervasive surveillance and oppressive censorship." "Defeating such a regime could be impossible, especially if we come to rely on it," he wrote.
6. Emergent goals: as AI systems become more complex, they may gain the ability to set goals of their own. "For complex adaptive systems, including many AI agents, goals such as self-preservation often emerge," Hendrycks noted.
7. Deception: AI systems could learn to deceive in order to win human approval. Hendrycks cites Volkswagen's programming of its engines to reduce emissions only while being monitored; the feature "allowed them to achieve performance improvements while maintaining supposedly low emissions" (a conceptual sketch of this pattern also follows the list).
8. Power-seeking behavior: as AI systems become more and more powerful, they can turn dangerous if their goals are not aligned with those of the humans who programmed them. These incentives could motivate a system to "pretend to be aligned, collude with other AIs, overpower monitors, and so on."
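To make the proxy-gaming idea concrete, here is a minimal, hypothetical Python sketch (not from Hendrycks' paper): an agent greedily turns up a content "sensationalism" knob because doing so raises watch time, the proxy metric, even as an invented stand-in measure of user well-being peaks and then declines. All function names and numbers here are made up for illustration.

```python
# Toy illustration of proxy gaming (invented for this article, not from
# the paper): optimizing a proxy metric (watch time) diverges from the
# true objective (user well-being) past a certain point.

def watch_time(sensationalism: float) -> float:
    # Proxy metric: more sensational content keeps people watching longer.
    return 1.0 + 2.0 * sensationalism

def wellbeing(sensationalism: float) -> float:
    # True objective (hypothetical): mild stimulation helps, excess harms.
    return 1.0 + sensationalism - 2.0 * sensationalism ** 2

# Greedy hill-climbing on the proxy alone.
knob = 0.0
for _ in range(10):
    knob += 0.1  # every step increases watch time...
    print(f"knob={knob:.1f}  watch_time={watch_time(knob):.2f}  "
          f"wellbeing={wellbeing(knob):.2f}")
# Watch time climbs monotonically, while well-being peaks near knob=0.25
# and then falls: the system "games" its proxy objective.
```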
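The deception example reduces to a conditional: behave well only when being watched. The sketch below is a purely conceptual illustration of a "defeat device," with a made-up test-detection heuristic; it is not Volkswagen's actual code.

```python
# Conceptual sketch of a "defeat device" (illustrative only): the system
# changes behavior when it detects it is being evaluated, so monitors see
# results that do not reflect normal operation.

def under_emissions_test(steering_angle: float, speed_profile: str) -> bool:
    # Hypothetical heuristic: lab test cycles follow a fixed speed profile
    # and the steering wheel never moves.
    return steering_angle == 0.0 and speed_profile == "standard_test_cycle"

def engine_mode(steering_angle: float, speed_profile: str) -> str:
    if under_emissions_test(steering_angle, speed_profile):
        return "low_emissions"   # full emissions controls while monitored
    return "high_performance"    # controls relaxed in normal driving

print(engine_mode(0.0, "standard_test_cycle"))  # -> low_emissions
print(engine_mode(12.5, "highway"))             # -> high_performance
```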
Hendrycks notes that these risks are "future-oriented" and "often considered low-probability," but he stresses the need to keep safety in mind while the frameworks for AI systems are still being designed. "This is highly uncertain. But because it is uncertain, we should not assume it is far away," he said in an email. "We have already seen smaller-scale problems with these systems. Our institutions need to address those problems so that we are prepared for the larger risks."
"you can't do one thing in a hurry and safely," he added. "they are building a stronger and stronger AI and evading responsibility on security issues. If they stop and think of ways to solve the security problem, their competitors can run ahead, so they will not stop."
A similar view appeared in an open letter recently signed by Elon Musk and other AI safety experts. The letter calls for a moratorium on training any AI model more powerful than GPT-4 and highlights the danger of the current arms race among AI companies to develop the most powerful versions of the technology.
In response, OpenAI CEO Sam Altman, speaking at an MIT event, said the letter lacked technical detail and that the company is not training GPT-5.