
The Economist: Artificial intelligence is upending traditional war, and a new arms race may begin


Shulou(Shulou.com)06/03 Report--

2019-10-09 18:04:41

Source: Economist

The two headline weapons of this year's parade, the DF-17 and DF-41, drew most of the attention, but the three unmanned combat formations were the more unusual sight.

Unmanned warfare is closely tied to the development of artificial intelligence. According to the Economist, the US Department of Defense declared in its first AI strategy document in February that artificial intelligence is "poised to change the character of the future battlefield." In the summer of 2018 the Pentagon established the Joint Artificial Intelligence Center (JAIC), and in March this year the National Security Commission on Artificial Intelligence held its first meeting.

In its 2020 budget request, the Pentagon allotted nearly $1 billion to artificial intelligence, and roughly four times as much to autonomous and self-driving capabilities.

In Vietnam, the US military first tried fighting a war "by algorithm."

Operation Igloo White, in 1970, can be seen as a rehearsal for future wars.

Navy warplanes dived over the jungle, dropping equipment into the canopy below.

Some of these devices were microphones, listening for guerrilla footsteps or truck ignitions; others were seismic detectors recording tiny vibrations in the ground. The most novel were olfactory sensors that could smell the ammonia in human urine.

Thousands of these electronic devices relayed the captured data to drones and on to computers. Within minutes, fighter-bombers would carpet-bomb the grid square specified by the algorithm.

The United States tried, unsuccessfully, to cut the Ho Chi Minh Trail running from Laos into Vietnam this way; the Vietnamese are even said to have trained monkeys to destroy the sensors. The operation cost the US military about $1 billion a year (about $7.3 billion today), or roughly $100,000 ($730,000 today) for every truck destroyed, yet it never effectively stopped the infiltration of Vietnamese forces.

But the allure of fighting wars "by algorithm" has not faded. The strategy of gathering data with sensors, processing it with ever more powerful algorithms, and acting on the results faster than the enemy has become central to the military thinking of the world's major powers. Today the progress of artificial intelligence (AI) has entrenched this concept more deeply than ever.

The rise of machines

A similar thing is happening in China, which hopes to lead the world in artificial intelligence by 2030, while Russian President Vladimir Putin famously declared that "whoever becomes the leader in this sphere will become the ruler of the world."

AI is a broad and vague term covering a range of technologies, from the rule-following systems of the 1950s to modern probability-based machine learning, in which computers learn to solve tasks themselves. Paradoxically, if the technology keeps developing at its current pace, the opacity of AI itself is likely to make modern warfare more complicated and confusing.

Deep learning, a particularly popular and effective machine-learning method, involves multi-layered neural networks loosely modeled on the brain. It has proved very good at tasks as varied as translation, image recognition and game playing (see chart).
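The "multi-layered" structure mentioned above can be illustrated with a minimal sketch: each layer multiplies its input by a weight matrix, adds a bias, and passes the result through a nonlinearity. The layer sizes and random weights here are arbitrary, purely for illustration.

```python
import numpy as np

def relu(x):
    # the standard nonlinearity between layers
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input vector through a stack of (weights, bias) layers."""
    for w, b in layers[:-1]:
        x = relu(w @ x + b)   # hidden layers apply the nonlinearity
    w, b = layers[-1]
    return w @ x + b          # linear output layer (e.g. class scores)

# a toy 2-layer network: 4 inputs -> 8 hidden units -> 2 output scores
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(2, 8)), np.zeros(2))]
scores = forward(np.array([1.0, 0.5, -0.3, 2.0]), layers)
print(scores.shape)  # (2,)
```

Real deep-learning systems stack dozens of such layers and learn the weights from data rather than drawing them at random.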

Michael Horowitz of the University of Pennsylvania compares AI to the internal combustion engine or electricity to illustrate its breadth of application. He divides its military uses into three categories: letting machines operate unsupervised; processing and interpreting large volumes of data; and assisting, or even conducting, the command and control of war.

On the battlefield, the attraction of automation is obvious: robots are cheaper, hardier and more expendable than humans. But a machine that moves, let alone fights, on a battlefield must be intelligent enough to carry out its task. An unintelligent drone will not survive long in combat; worse, a dumb armed robot is liable to cause accidents.

All of this requires AI to give machines the necessary skills: some simple, such as perception and navigation, and others more advanced, such as cooperating with other military units.

Intelligent machines combining these capabilities can do things no individual can. Kenneth Payne of King's College London says: "In simulated air combat, artificial-intelligence systems have already outperformed experienced military pilots."

In February, the Defense Advanced Research Projects Agency (DARPA), the Pentagon's blue-sky-thinking branch, conducted the latest test of a swarm of six drones capable of collaborating in a "high-threat" environment, even without human contact.

Nevertheless, the intelligence of most such systems is narrow and brittle: it performs one task well in a well-defined environment but fails easily in unfamiliar ones.

Today's automated weapons are therefore either cruise missiles that home in on radar emissions or rapid-fire guns defending ships and bases. Useful, but hardly revolutionary, and none of them uses the advanced machine-learning techniques that have emerged in recent years.

"Smart" weapons still need improving

Do not assume AI is fit only for battlefield grunt work. A robot, killer or otherwise, has to react to what it "sees."

But for many military platforms, such as spy planes and satellites, the point is to send back raw data, which becomes useful intelligence only after processing. And there is more of it than ever: in 2011 alone, some 11,000 American drones sent back more than 327,000 hours (37 years) of footage.

Most of that data never gets processed in time. Hence the second major military application of AI: crunching data. Stanford University's annual AI Index shows that in lab tests, algorithms surpassed humans at image classification by 2015, and nearly doubled their performance between 2015 and 2018 at a harder task, image segmentation, which involves picking out multiple objects from a single image.
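Benchmarks like those in the AI Index typically score detection and segmentation with intersection-over-union: how much a predicted region overlaps the true one. A minimal sketch for axis-aligned boxes, with made-up coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes:
    overlap area divided by the area of their union."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    iy = max(0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# two unit-offset 2x2 boxes share a 1x1 corner: IoU = 1 / (4 + 4 - 1)
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 0.142857...
```

A prediction usually counts as correct only when its IoU against the ground truth clears a threshold such as 0.5.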

Computer vision is far from perfect, however, and can be exploited. The human visual system shrugs off subtle changes that computer vision does not. In one study, for example, changing 0.04% of the pixels in an image of a panda, a change imperceptible to humans, caused the system to see a gibbon instead.
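The panda-to-gibbon trick is an adversarial example. A common construction (the fast gradient sign method) nudges every pixel by a tiny fixed amount in whichever direction most increases the classifier's error. The toy linear "classifier" and numbers below are illustrative only, not the study's actual model:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast-gradient-sign perturbation: shift each pixel by +/- eps
    in the direction that increases the classifier's score/loss."""
    return x + eps * np.sign(grad)

# toy linear "classifier": score = w . x, so the gradient w.r.t. x is w
w = np.array([0.2, -0.5, 0.1, 0.4])
x = np.array([0.9, 0.9, 0.9, 0.9])       # four "pixels"
x_adv = fgsm_perturb(x, w, eps=0.01)     # each pixel moves by at most 0.01
print(w @ x, w @ x_adv)                  # the score shifts despite the tiny edit
```

Because every pixel moves by at most `eps`, the perturbed image looks identical to a human while the model's output changes measurably.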

Despite these weaknesses, the Pentagon concluded in February 2017 that deep-learning algorithms "can perform at near-human levels." Accordingly, it set up the Algorithmic Warfare team, codenamed Project Maven, to identify objects and suspicious activity using deep learning and other techniques. Originally used to process footage from the campaign in Iraq, the project is now more widely applied. Its purpose is to produce "actionable" intelligence, which often ends with a missile strike or a special-forces raid.

An insider familiar with Project Maven says its benefits to analysts, in time saved and new insights, remain marginal for now: wide-angle cameras covering whole cities, for instance, throw up large numbers of false positives. "The nature of these systems is highly iterative," he says. AI is progressing rapidly, and Project Maven is just the tip of the iceberg.

Sean Corbett, a retired RAF major general who now works for Earth-i, says the company can apply machine-learning algorithms to imagery from a fleet of satellites to identify different types of military aircraft across dozens of bases with more than 98% accuracy (see main image). "The next clever bit," he says, "is developing an algorithm that can automatically tell what is normal." By watching those bases continuously, the software can distinguish routine deployments from abnormal activity and alert analysts to significant changes.

Algorithms, of course, are omnivorous and can be fed any kind of data, not just images. In December, Sir Alex Younger, head of Britain's intelligence agency MI6, said: "The combination of bulk data and modern analytics will make the modern world transparent." In 2012, a document leaked from the NSA, America's signals-intelligence agency, described a project, aptly codenamed "Skynet," that applied machine learning to Pakistani mobile-phone data to identify people who might be couriers for terrorist groups. Who, for instance, had travelled from Lahore to the border town of Peshawar in the past month, and turned off or swapped their handset more often than usual? "In the past, commanders asked questions and intelligence agencies gathered data to find the answers," says Sir Richard Barrons, who commanded Britain's joint forces until 2016 and is now retired. "Now the answers are already in the cloud."
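The Skynet-style query described above amounts to anomaly detection on behavioral features: compare a person's latest activity against their own historical baseline. A crude z-score sketch, with hypothetical counts (nothing here reflects the actual NSA system):

```python
from statistics import mean, stdev

def anomaly_score(history, latest):
    """How many standard deviations the latest count sits above the
    subject's own historical baseline (a simple z-score)."""
    mu, sigma = mean(history), stdev(history)
    return (latest - mu) / sigma if sigma else 0.0

# hypothetical: handset swaps per month over the past half-year
sim_swaps_per_month = [0, 1, 0, 0, 1, 0]
score = anomaly_score(sim_swaps_per_month, latest=5)
print(score > 3)  # well above baseline: flag for human review
```

Real systems combine many such features (travel patterns, call graphs, power-off events) and learn weights from labeled examples, but the per-feature logic is the same "unusual relative to this person's own history" comparison.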

In fact, the data in question are not always about the enemy. JAIC's first project was neither a weapon nor a spying tool: it worked with special forces to predict engine failures in their Black Hawk helicopters. The first version of the algorithm was delivered in April. Air-force tests on command-and-control and transport aircraft suggest such predictive maintenance could cut unplanned work by nearly a third, which could significantly reduce the $78 billion the Pentagon currently spends on maintenance.
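Predictive maintenance at its simplest means flagging an engine when its recent sensor readings drift away from its own long-run baseline. The heuristic and numbers below are a toy sketch, not JAIC's model:

```python
def maintenance_due(vibration_readings, window=3, threshold=1.2):
    """Flag an engine when its recent average vibration exceeds its
    long-run average by a fixed ratio (a toy heuristic)."""
    recent = sum(vibration_readings[-window:]) / window
    baseline = sum(vibration_readings) / len(vibration_readings)
    return recent > threshold * baseline

# hypothetical vibration sensor trace trending upward before a failure
readings = [1.0, 1.1, 1.0, 1.1, 1.6, 1.8, 2.0]
print(maintenance_due(readings))  # True
```

Production systems replace the fixed ratio with models trained on fleets of engines, but the payoff is the same: schedule the repair before the unplanned breakdown.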

How AI influences war decisions

Access to information, however, is only a prerequisite; the point is to make decisions from it. The third way AI is changing traditional war is therefore at the level of decision-making itself.

AI can inform decisions from the platoon level up to the head of state. Northern Arrow, a product of the Israeli AI firm UNIQAI, helps commanders plan operations by processing large amounts of data, typically including enemy positions, weapon types, terrain and weather.

In a traditional staff process, digesting the relevant maps and charts takes half a day to a day. The algorithm's data come from books and manuals, such as tank speeds at different gradients, and from interviews with experienced commanders. It then presents options to busy decision-makers, with the reasoning attached.

"expert system" platforms such as the Arrow of the North and CADET in the United States are much faster than human thinking. In a test, humans take 16 hours, while CADET takes only two minutes. However, they tend to adopt rule-following techniques that are simple and clear in algorithm. By historical standards, this is AI, but most expert systems use deterministic algorithms, that is, if the inputs are the same, the outputs are the same. This feeling is all too familiar to soldiers who have used the artillery firing table generated by the ENIAC, the world's first general-purpose computer.

In the real world, randomness often gets in the way of clean decisions, so many modern AI systems combine rule-following with randomness to handle more complex situations. DARPA's Real-time Adversarial Intelligence and Decision-making (RAID) software aims to predict the location, movements and even the likely emotional state of enemy forces five hours into the future. It relies on a form of game theory that shrinks the problem into a smaller game, reducing the computing power required.
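Shrinking a conflict to a small game makes it possible to search the whole game tree, as in classic minimax: the planner picks the move that maximizes its worst-case outcome against a best-responding opponent. The toy number game below is purely illustrative of that search, not RAID's actual algorithm:

```python
def minimax(state, depth, maximizing, moves, value):
    """Exhaustive minimax over a reduced game tree: alternate between
    our best move and the opponent's best counter, down to `depth`."""
    if depth == 0 or not moves(state):
        return value(state)
    results = (minimax(m, depth - 1, not maximizing, moves, value)
               for m in moves(state))
    return max(results) if maximizing else min(results)

# toy game: the state is a number, a move adds 1 or 2, we want it high
score = minimax(0, 2, True,
                moves=lambda s: [s + 1, s + 2],
                value=lambda s: s)
print(score)  # 3: our best opening (+2) followed by their best reply (+1)
```

The art, as with RAID, is entirely in the reduction: a game small enough to search exhaustively yet faithful enough that its best move says something about the real battlefield.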

In early tests between 2004 and 2008, RAID proved more accurate and faster than human professionals. In simulated two-hour battles in Baghdad, teams faced either RAID or human opponents, and often struggled to tell which was which. Boris Stilman, one of the software's designers, notes that the retired colonels playing the Iraqi insurgents became "so scared" of it that "they stopped talking to each other and used gestures instead." RAID is being developed further for use by the army.

The latest deep-learning systems are the most mysterious of all. In March 2016, AlphaGo, a deep-learning algorithm built by DeepMind, beat Lee Sedol, one of the world's best Go players. Experts were baffled by AlphaGo's creative moves during the match. The following month, China's Academy of Military Science held a seminar on the contest. As Elsa Kania, an expert on Chinese military innovation, puts it: "For Chinese military strategists, the lesson of AlphaGo's victory was that in Go, a game with parallels to wargaming, an artificial intelligence's strategies could be better than a human's."

Learning war from AI's game skills

In December 2018, AlphaStar, another DeepMind algorithm, defeated MaNa and TLO, two of the top human players of StarCraft II. Unlike Go, StarCraft II is a real-time rather than turn-based strategy game, with far more hidden information and freedom of movement. Many officers hope AI's flair for games will transfer to the battlefield. Michael Brown, director of the Defense Innovation Unit, the Pentagon body responsible for tapping commercial technology, says AI-enabled "strategic reasoning" is one of his organization's key research priorities.

But if an algorithm is too clever for humans to understand, it raises problems of law, ethics and trust. The laws of war require a series of judgments about proportionality (between civilian harm and military advantage, say) and necessity. An algorithm that cannot explain why it chose a target is unlikely to satisfy those rules. And even if it obeys the laws of war, humans may not trust what looks like a decision from a Magic 8-Ball. (Note: the Magic 8-Ball is a toy that, when shaken, returns one of 20 canned answers at random.)

Royal Air Force Commander Keith Dear asks: "What do we do when artificial intelligence is applied to military strategy, computes probabilistic inferences across many interactions, and recommends a course of action we don't understand?" He offers an example: in response to a Russian military incursion into Moldova, an AI might suggest funding an opera in Baku, a surreal strategy liable to baffle one's own forces, never mind the enemy. Yet it might follow from the AI grasping a chain of political events that commanders would not immediately notice.

Even so, he predicts that people will accept the trade-off between explainability and efficiency: "Even within the limitations of today's technology, AI may support, and eventually replace, real-world strategic decision-making through large-scale near-real-time simulation."

This is less far-fetched than it sounds. Sir Richard Barrons points out that Britain's Ministry of Defence has bought simulation software to model complex military environments, in effect a military version of Fortnite, the globally popular online game. Built by Improbable, a gaming company, and CAE, the Canadian avionics firm known for its flight simulators, it uses open standards, so anything from live weather data to secret intelligence can be loaded in. "With enough data, a mobile-data network and cloud computing to process it, it will revolutionize command and control," says Sir Richard. "It will be a single, integrated command tool running from the National Security Council down to tactical commanders."

Will the war eventually become unmanned?

Western governments insist that humans will stay "in the loop," supervising everything, but even their own officials are not convinced.

"We're already moving out of that loop, from tactical up to strategic decision-making," says Commander Dear. China, too, believes that future war will exceed the limits of human cognition, says Ms. Kania. What emerges would be not just automated weapons but an automated battlefield: once war begins, interlocking AI systems would pick out targets, from missile launchers to aircraft carriers, and plot fast, precise strikes to destroy them in the most efficient order.

The consequences of war at that tempo are unknown. In a recent essay for the Lawrence Livermore National Laboratory, Zachary Davis notes that precise, rapid strikes "could increase the risk of surprise attack, and so be destabilizing." Equally, AI could help defenders detect the signs of an impending attack and parry it. Or, like America's sensors scattered over the jungles of Vietnam in the 1960s, such schemes could end in costly, ill-conceived failure. But no great power dares risk falling behind its rivals, and at that level it is politics, not just technology, doing the work.

The Pentagon's AI spending is still a fraction of the $20 billion to $30 billion that large American technology firms invested in AI in 2016. Many US companies are happy to work with the military: Amazon and Microsoft, for example, are competing for a $10 billion Department of Defense cloud-computing contract. (Note: the Pentagon launched the $10 billion Joint Enterprise Defense Infrastructure (JEDI) cloud contract in 2018.) Others are warier: in June 2018, after pressure from 4,000 employees opposed to being involved in "the business of war," Google said it would let its Project Maven contract, worth $9 million, lapse at the end of that year.

China's demographic advantage also brings a data advantage. Robert Work, a former US deputy secretary of defense, warned in June that if data is the fuel of artificial intelligence, "China may have a structural advantage over the rest of the world." JAIC's director, Jack Shanahan, voiced his concern on August 30th: "What I don't want to see is a future where our potential adversaries have a fully AI-enabled force and we do not."

Source: https://www.toutiao.com/i6745742127205990924/
