In addition to Weibo, there is also WeChat
Please pay attention
WeChat public account
Shulou
2025-03-26 Update From: SLTechnology News&Howtos shulou NAV: SLTechnology News&Howtos > IT Information >
Share
Shulou(Shulou.com)11/24 Report--
The news that the drone kills the US Air Force operator is so loud that a lot of AI bosses are angry.
In recent days, the news that a drone kills American soldiers has become an uproar on the Internet.
"AI, who controls the drone, killed the operator because that person prevented it from achieving its goal," said a director of the Air Force's artificial intelligence direction. "
There was an uproar in public opinion, and the news was turned around all over the network.
As the news spread more and more widely, it even alarmed the AI bosses, and even aroused the anger of the bosses.
Today, LeCun, Wu Enda and Tao Zhe Xuan have refuted the rumor that this is just a hypothetical "thought experiment" and does not involve any AI agents or reinforcement learning.
In this regard, Wu Enda painfully appealed that we should face the real risks honestly.
Tao Zhe-Xuan, a mathematician who rarely updates his status, was blown up and said earnestly--
This is just a hypothetical scenario that illustrates the problem of AI alignment, but it has been translated into a true story of the killing of a drone operator in many versions. The fact that people will resonate with this story shows that they are not familiar with the actual ability level of AI.
The AI drones disobeyed and killed the human operator. "AI killed the operator because the man prevented it from achieving its goal. "
Recently, at the defense conference held by the Royal Aeronautical Society, the head of the US Air Force AI made this remark, which made everyone present in an uproar.
Subsequently, a large number of media in the United States reported the matter wantonly, and people were in a panic for a while.
What's going on?
In fact, this is nothing more than another exaggerated hype in which the American media seized the popular news point of "AI destroys mankind."
But it is worth noting that according to the official press release, not only did the person in charge sound quite clear-he was recalling what actually happened. And the article itself seems to believe its authenticity-"AI, Skynet is here?" "
Specifically, here's the thing-at the Future Air Warfare and Space capabilities Summit in London from May 23 to 24, Colonel Tucker Cinco Hamilton, head of the US Air Force's AI Test and Operations Department, gave a speech sharing the pros and cons of autonomous weapons systems.
In such a system, one person will give the final command in the loop to confirm whether the AI wants to attack the object (YES or NO).
In this simulation training, the Air Force needs to train AI to identify and locate surface-to-air missile (SAM) threats.
After identification, the human operator will say to AI: yes, eliminate that threat.
In the process, there was a situation in which AI began to realize that he sometimes recognized the threat, but the human operator told it not to destroy it, in which case AI would score if he still chose to eliminate the threat.
In a simulation test, the AI-powered drone chose to kill the human operator because he prevented himself from scoring.
Seeing that AI was so tiger, the US Air Force was shocked and immediately disciplined the system like this: "Don't kill the operator, that's not good." If you do this, you will lose points. "
As a result, AI is even more tiger, starting to destroy the communication towers that operators use to communicate with drones in order to clean up the guy who hinders his movement.
The reason why this news has been fermented on a large scale, so as to alarm the bosses of AI, is also because it reflects the problem of AI "alignment".
The "worst-case" scenario described by Hamilton can be seen in the thought experiment of Paper clip making Machine (Paperclip Maximizer).
In this experiment, when instructed to pursue a goal, AI takes unexpected harmful actions.
Paper clip making machine is a concept put forward by philosopher Nick Bostrom in 2003.
Imagine a very powerful AI that is instructed to make as many paper clips as possible. Naturally, it will devote all available resources to this task.
But then it will keep looking for more resources. It will choose all available means, including begging, cheating, lying or stealing, to increase its ability to make paper clips-and anyone who hinders the process will be eliminated.
In 2022, Hamilton asked this serious question in an interview--
We must face the reality that AI has come and is changing our society.
AI is also very fragile and easy to be deceived and manipulated. We need to develop methods to make AI more robust, and we need to know more about the principles behind why the code makes specific decisions.
To change our country, AI is a tool we must use, but if mishandled, it will bring us down completely.
The official refuted the rumor that it was the colonel's "slip of the tongue" as the incident went crazy, and soon the person in charge came out to publicly "clarify" that it was his "slip of the tongue" and that the US Air Force had never conducted such a test. whether in computer simulations or elsewhere.
"We have never done that experiment, and we do not need to do this experiment to realize that this is a possible result," Hamilton said. "although this is a hypothetical example, it illustrates the real challenge posed by AI driving capabilities, which is why the Air Force is committed to the ethical development of AI. "
In addition, the US Air Force also hurriedly issued an official refutation of the rumor that "Colonel Hamilton admitted that he made a" slip of the tongue "in his speech at the FCAS summit, and that the" out-of-control simulation of UAV AI "was a hypothetical" thought experiment "from outside the military field, based on possible situations and possible outcomes, rather than the real-world simulation of the US Air Force. "
At this point, things are quite interesting.
This Hamilton, who accidentally "made a mess", is the operational commander of the 96 Test Wing of the US Air Force.
96 Experimental Wing has tested many different systems, including AI, cyber security and medical systems.
The research of the Hamilton team is very important to the military.
After successfully developing the Fmai 16 automatic ground collision avoidance system (Auto-GCAS), which can be called "every life of the Jedi", Hamilton and 96 Test Wing made headlines directly.
At present, the direction of the team's efforts is to complete the autonomy of the Fmuri 16 aircraft.
In December 2022, DARPA, a research institute of the U.S. Department of Defense, announced that the AI had successfully taken control of an Fmuri 16.
Is it the risk of AI or the risk of mankind? Outside the military field, relying on AI for high-risk transactions has led to serious consequences.
Recently, a lawyer was found to have used ChatGPT,ChatGPT to fabricate cases while filing documents in federal court, and the lawyer actually cited these cases as facts.
Another man really chose to commit suicide after being encouraged by the chatbot to commit suicide.
These examples show that the AI model is far from perfect and may deviate from the normal track and bring harm to users.
Even OpenAI's CEO Sam Altman has been publicly calling for AI not to be used for more serious purposes. In testimony to Congress, Altman made it clear that AI could be "wrong" and could "cause significant harm to the world".
And recently, Google Deepmind researchers co-authored a paper suggesting a vicious AI situation similar to the one at the beginning of this article.
The researchers concluded that the end of the world could happen if an out-of-control AI adopted unexpected strategies to achieve a given goal, including "eliminating potential threats" and "using all available energy."
In this regard, Wu Enda condemned: such irresponsible hype in the media will confuse the public, distract people's attention and prevent us from paying attention to the real problems.
Developers who launched AI products see real risks here, such as bias, fairness, inaccuracy, job loss, and they are trying to solve these problems.
And false hype will prevent people from entering the AI field and build things that can help us.
And many "Li Zhongke" netizens believe that this is just a common media own.
Tao Zhe-Xuan first summarized three forms of false information about AI--
One is that someone maliciously uses AI to generate text, images, and other forms of media to manipulate others; the other is that AI's nonsense illusion is taken seriously; and the third is that people's understanding of AI technology is not deep enough to allow outrageous stories to spread without verification.
Tao Zhe Xuan said that it is impossible for a drone AI to kill an operator because it requires AI to have a higher degree of autonomy and powerful thinking than accomplishing the task at hand, and this experimental military weapon is sure to have guardrails and security functions.
The reason why this kind of story resonates shows that people are still unfamiliar and uneasy about the actual ability level of AI technology.
The future arms race will all be an AI race. Remember that drone that appeared above?
It is actually the MQ-28A Ghost bat (Ghost Bat), a loyal wingman project jointly developed by Boeing in Australia.
The core of loyal wingman (Loyal Wingman) is artificial intelligence technology, which flies independently according to the preset program and has a strong situational awareness ability when cooperating with pilots.
In air combat, the wingman, as the "right-hand man" of the captain, is mainly responsible for observation, vigilance, and cover, and cooperates closely with the pilot to complete the task. Therefore, the tacit understanding between the wingman pilot and the long-flight pilot is particularly important.
One of the key functions of loyal wingmen is to take bullets for pilots and manned fighters, so loyal wingmen are basically consumables.
After all, drones are worth much less than manned fighters and pilots.
And with the blessing of AI, the "pilot" on the UAV can new one at any time through the way of "Ctrl+C".
Because the loss of UAV does not have the problem of casualties, if the loss of UAV can gain a greater advantage at the strategic or tactical level, or even achieve the mission goal, then this loss is acceptable. If the cost of drones is properly controlled, it can even become an effective tactic.
The development of loyal wingman is inseparable from advanced and reliable artificial intelligence technology. At present, the design idea of loyal wingman at the software level is that by standardizing and opening man-machine interface and machine-machine interface, it can support the coordination of multi-type UAV and manned aircraft formation without relying on a set of software or algorithms.
But at present, the control of UAV should be a combination of instructions and autonomous operation from manned fighters or ground stations, and more as the support and supplement of manned aircraft, human skills and technology is still far from meeting the requirements of going on the battlefield.
What is the most important thing to train artificial intelligence models? Data, of course! A skillful housewife cannot make bricks without rice. Without data, even the best model is ineffective.
Not only do you need a lot of training data, but after the model is deployed, the more "features" you enter, the better. If you can get data from other aircraft, AI is equivalent to having the ability to control the overall situation.
In 2020, the US Air Force carried out the test of formation flight data sharing between fourth-and fifth-generation manned fighters and unmanned wingmen for the first time, which is also a milestone in the development of loyal wingman projects. it indicates that the future manned-unmanned formation flight combat mode has taken another important step towards practical application.
The U.S. Air Force Fmur22 Raptor, Fmur35A Lightning II fighter and the Air Force Research Laboratory XQ-58A female Vulcan UAV conducted formation flight tests at the U.S. Army Yuma proving ground for the first time, focusing on demonstrating data sharing / transmission capabilities between the three aircraft.
Maybe the future air combat will be smarter than whose AI model.
You can win by wiping out all the AI of the other party, and there are no real human casualties, perhaps another kind of "peace"?
Reference:
Https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test
This article comes from the official account of Wechat: Xin Zhiyuan (ID:AI_era)
Welcome to subscribe "Shulou Technology Information " to get latest news, interesting things and hot topics in the IT industry, and controls the hottest and latest Internet news, technology news and IT industry trends.
Views: 0
*The comments in the above article only represent the author's personal views and do not represent the views and positions of this website. If you have more insights, please feel free to contribute and share.
Continue with the installation of the previous hadoop.First, install zookooper1. Decompress zookoope
"Every 5-10 years, there's a rare product, a really special, very unusual product that's the most un
© 2024 shulou.com SLNews company. All rights reserved.