2020-05-09 18:31:45
Fang Binxing
While driving economic and social innovation, artificial intelligence is also reshaping the future of human security. From a macro perspective, what is the relationship between security and a new technology? When a new technology meets the security field, there are two situations.
The first is applying the new technology in the service of security; we say the new technology empowers the security field. It can serve defensive behavior: big data, for example, can be used to analyze the network security situation, which counts as empowering defense. It can also serve offensive behavior: the emergence of quantum computers, for example, undermines existing cryptography, which is empowering attack.
The second is that every new technology brings new security problems of its own; we call these the technology's accompanying security problems. These also come in two kinds. In one, the immaturity of the new technology itself creates security problems for the technology; for example, mobile-network base stations gave rise to rogue base-station attacks. This is endogenous security. In the other, the problem has little impact on the technology itself but harms other areas; for example, the decentralization of blockchain is at root a defect of "lack of controllability" that barely affects the blockchain itself, but it challenges centralized management systems (for example, illegal Bitcoin proceeds cannot be confiscated). This is derivative security.
Artificial intelligence applied to the security field
First, artificial intelligence can empower defense. Machine-learning models offer a new way to build proactive network defenses: intelligent models take an active approach rather than the traditional passive, reactive one, and the predictive power of artificial intelligence together with the evolutionary capability of machine learning gives us means to resist complex network threats. The essential change is the ability to give early warning and take blocking measures before a cyber attack occurs. AI², a cybersecurity platform based on artificial intelligence developed at the Massachusetts Institute of Technology, uses AI methods to analyze cyber attacks and help security analysts find "a needle in a haystack." The AI² system first uses machine learning to scan data and activity autonomously and feeds its findings back to security analysts; the analysts mark which events are real attacks, and this feedback is incorporated into AI² for automatic analysis of new logs. In tests, the team found AI² to be about three times as accurate as the automated analysis tools in use today, greatly reducing the probability of false positives. AI² also continually generates new models during analysis, which means its prediction rate improves quickly: the more attacks the system detects and the more analyst feedback it receives, the more accurate its future predictions become. Reportedly, AI² has been trained on more than 360 million lines of log files and can detect 85% of attacks while alerting on suspicious behavior.
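The analyst-in-the-loop cycle described above can be sketched in a few lines. This is a minimal illustrative model, not MIT's actual AI² implementation: all names, the deviation-based anomaly score, and the threshold-update rule are assumptions chosen only to show the feedback shape (unsupervised scan, analyst labels, updated detector).

```python
def anomaly_scores(events, baseline):
    """Score each event by its absolute deviation from a learned baseline."""
    return [abs(e - baseline) for e in events]

def detection_cycle(events, labels_from_analyst, threshold):
    """One unsupervised-scan plus analyst-feedback iteration (illustrative).

    events: numeric features extracted from log lines
    labels_from_analyst: dict index -> True (real attack) / False (benign)
    Returns flagged indices and an updated threshold that tightens toward
    analyst-confirmed attacks, aiming to reduce future misses.
    """
    baseline = sum(events) / len(events)
    scores = anomaly_scores(events, baseline)
    flagged = [i for i, s in enumerate(scores) if s > threshold]

    # Incorporate analyst feedback: pull the threshold down toward the
    # lowest confirmed-attack score so similar attacks are caught next round.
    confirmed = [scores[i] for i, is_attack in labels_from_analyst.items() if is_attack]
    if confirmed:
        threshold = min(threshold, min(confirmed) * 0.9)
    return flagged, threshold

# Usage: one round over toy "log features"; index 4 is an obvious spike.
events = [1.0, 1.2, 0.9, 1.1, 9.5, 1.0]
flagged, new_threshold = detection_cycle(events, {4: True}, threshold=5.0)
```

The point of the sketch is the loop structure: each round's analyst labels change the detector that scores the next round's logs.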
Second, artificial intelligence can empower network attacks, which the industry calls automated or intelligent attacks: bots carry out computer attacks automatically, without human intervention. A series of major hacking incidents in recent years, including core-database leaks, intrusions into hundreds of millions of accounts, and the WannaCry ransomware, all bear the marks of automated attack. With automation tools, attackers can scan and probe vulnerabilities across large numbers of websites more efficiently and covertly in a short time; network-wide probing for 0-day/N-day vulnerabilities in particular will become more frequent and efficient. The powerful data mining and analysis capabilities of artificial intelligence, and the intelligent services built on them, are also exploited by hacker groups: aided by AI, automated attacks trend toward more human-like, more sophisticated behavior, and bots that imitate humans will be smarter, bolder, and harder to trace. Automated and intelligent attacks are steadily eroding the defensive lines of network security. The security industry clearly needs to pay sufficient attention, beginning with understanding the characteristics of automated attacks and taking timely countermeasures.
As early as 2013, DARPA launched the Cyber Grand Challenge (CGC), which aims to advance automated network attack and defense: identifying system vulnerabilities in real time, automatically completing patching and system defense, and ultimately achieving fully automatic network attack and defense. DARPA gave the teams two years to prepare, requiring each to first build a fully automated Cyber Reasoning System (CRS) that relies on artificial intelligence to support cyber attack and defense. A CRS must automatically mine vulnerabilities in the challenge binaries (CB) dynamically issued by the organizer and generate patches (defense), producing patched replacement binaries (Replacement CB, RCB); automatically generate exploits (attack), that is, proofs of vulnerability (PoV); and automatically generate intrusion detection system (IDS) rules. Within the 24 hours of the preliminary round on June 3, 2015, each team's CRS automatically downloaded the 131 CBs provided by the organizer, which contained known memory-handling vulnerabilities: 590 vulnerabilities in total, covering 53 different types in the Common Weakness Enumeration (CWE). Each CRS had to analyze the programs automatically, find the vulnerabilities, and submit its automatically generated RCBs and PoVs. In the end, every planted vulnerability was successfully discovered and patched by at least one CRS, and seven teams advanced to the final, held on August 4, 2016. The final added real-time confrontation between teams online and extended the evaluation to cyber-defense capability (a CRS automatically generating IDS rules).
Using the RCBs, PoVs, and IDS rules submitted by each CRS, the referee machine cross-verified them in an independent, isolated environment (attacking team B's RCB with team A's PoV) and scored the teams on overall attack performance, defense performance, functionality loss, and performance loss. In the end, Mayhem, from Carnegie Mellon University's ForAllSecure team, won the CGC championship and earned a place in the human DEF CON CTF. At DEF CON CTF on August 5-7, 2016, the Mayhem robot team competed against 14 top human CTF teams and at one point surpassed two human teams to rank 13th. That an automated attack system could stand on the DEF CON CTF field opens a new era of machine intelligence and automated attack and defense. It is clear that artificial intelligence is very powerful at empowering attack as well.
Security problems accompanying artificial intelligence
Artificial intelligence has its own fragility; adversarial examples, for instance, are an endogenous security problem of artificial intelligence. An adversarial example is a curious phenomenon of machine-learning models that exposes a weakness in AI algorithms: the attacker adds subtle perturbations to the source data that are hard for human senses to detect, yet cause the model to accept the input and make a wrong classification decision. A typical scene is the adversarial example against an image classifier, where carefully constructed changes, imperceptible to the naked eye, are superimposed on an image to make the classifier misjudge it. Adversarial examples exist not only in image recognition but also in other fields, such as speech and text. In network security there are similar attacks: an attacker may deceive an AI model by inserting perturbing operations into malicious code. For example, researchers have crafted malicious samples so that a classifier identifies software with malicious behavior as a benign variant, building an attack that automatically evades a PDF-malware classifier and thereby counters the application of machine learning to security. All such problems can lead to the same consequence: wrong decisions, judgments, and controls by the artificial intelligence system.
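The core of the evasion idea can be shown on the simplest possible model. The sketch below is illustrative, not any real malware classifier: for a linear detector score(x) = w·x + b, the gradient of the score with respect to the input is just w, so shifting every feature slightly against the sign of w lowers the score and slips a malicious sample under the decision boundary while each individual feature changes only a little.

```python
def score(w, x, b=0.0):
    """Linear detector: the sample is classified malicious when score > 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1 if v > 0 else -1 if v < 0 else 0

def evade(w, x, eps):
    """Shift each feature by at most eps against the gradient (which, for a
    linear model, is exactly w) to push the score below the boundary."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.5, -0.8, 0.3]        # detector weights (illustrative)
x = [0.9, 0.1, 0.8]         # a malicious sample: score(w, x) = 0.61 > 0
x_adv = evade(w, x, eps=0.5)  # small per-feature change evades detection
```

For deep models the same principle applies with the true gradient in place of w (the fast-gradient-sign method); the per-feature bound eps is what keeps the perturbation imperceptible.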
Artificial intelligence technology also faces great security challenges. At present, AI systems cannot go beyond their built-in scenarios or their understanding of a specific context. Within fixed rules, such as chess or a game, AI generally does not expose this weakness; but when the environmental data differs substantially from the environment in which the system was trained, when the actual application scenario changes, or when the change exceeds what the machine can understand, the AI system may immediately lose its ability to judge. The report "Artificial Intelligence: What Every Policymaker Needs to Know", released by the US think tank Center for a New American Security, shows that some weaknesses of artificial intelligence could have a huge impact on areas such as national security.
The failure of artificial intelligence can bring disaster to humans, which creates security problems. On May 7, 2016, a Tesla Model S in Autopilot mode crashed at 74 mph into a white tractor-trailer crossing a Florida highway; the speed limit on that stretch was 65 mph. The Model S passed under the trailer, its roof was completely sheared off, and its 40-year-old driver, Joshua Brown, was killed. Because the HD camera at the front of the car in Autopilot mode used a telephoto lens, when the white trailer entered its field of view the camera could see only the middle of the trailer suspended above the road, not the whole vehicle; moreover, the bright conditions that day (blue sky and white clouds) meant the Autopilot system could not recognize the obstacle as a truck, taking it instead for a cloud in the sky, so the automatic braking did not engage. The accident sparked controversy over the safety of self-driving cars. A defect in autonomous driving that leads to human casualties is a typical case of the derivative security of artificial intelligence.
People have already begun to pay attention to the security of artificial intelligence itself. Hawking put forward an AI "threat theory" during a question-and-answer session on Reddit in August 2015, and afterwards repeatedly stressed similar views in world-renowned journals. Bill Gates has said that humanity has made great progress in artificial intelligence, and that within the next 10 years robots will learn to drive and do housework, even outperforming humans in certain areas; but he also warned that "if artificial intelligence advances too fast, it may pose a threat to human beings in the future." Musk, the founder of Tesla, likewise predicted the future of intelligent robots at the Code Conference. He believes that future human life will be inseparable from virtual reality, whose advanced development will make it hard for people to tell reality from games; combined with the rapid development of artificial intelligence, human intelligence will stagnate. The gravest consequence would be robots surpassing humans to become the main operators of the real world, with human beings existing like pets in the eyes of the robots.
A scheme to prevent artificial intelligence actors from getting out of control
With the rapid development of artificial intelligence, AI actors are increasingly likely to become an important part of human life in the near future. Experts in related research fields have recognized that artificial intelligence carries great risks and have issued calls concerning AI safety design principles, standards, and ethics. But how should we design a device that keeps an AI system with the ability to act from getting out of control? What control functions and performance indicators should such a device have? What hardware and software configuration does it need? So far there are no research results.
Why might artificial intelligence harm human beings? The premise is an actor that has the ability to act and is operated by artificial intelligence. AI actors are autonomous hardware entities that perceive the external environment as input, make decisions through internal algorithms, and use their own drive mechanisms to interact with the physical world. Walking robots, self-driving cars, and AI weapons are all types of AI actors. An AI actor requires perception of the external environment, internal control logic, a motion drive, and autonomy (self-learning). The external environment includes the natural environment and the organisms in it; the internal control logic is the program prefabricated into the actor to generate its motion; the motion drive is the hardware that interacts with the physical world or changes the actor's spatial coordinates; and autonomy means the actor can set its own objective function or make independent decisions, rather than having its goals set by humans.
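The four components named above (perception, control logic, motion drive, autonomy) can be made concrete with a toy model. Everything here is a hypothetical illustration of the structure, not a real robotics API: the class names, the one-dimensional "position", and the lambda goal are assumptions.

```python
class AIActor:
    """Toy model of an AI actor: sense -> decide -> actuate."""

    def __init__(self, control_logic, autonomous=False):
        self.control_logic = control_logic  # prefabricated decision program
        self.autonomous = autonomous        # can it set its own goals?
        self.position = 0                   # spatial coordinate it can change

    def sense(self, environment):
        """Perceive the external environment (here: a single numeric reading)."""
        return environment

    def actuate(self, command):
        """Motion drive: interact with the physical world (move)."""
        self.position += command

    def step(self, environment):
        observation = self.sense(environment)
        command = self.control_logic(observation)
        self.actuate(command)
        return self.position

# A human-set goal (autonomous=False): move one unit whenever the
# sensed value is below 10, otherwise hold still.
actor = AIActor(control_logic=lambda obs: 1 if obs < 10 else 0)
```

In this framing, the article's safety question is exactly where `control_logic` comes from: prefabricated by humans, or chosen by the actor itself (`autonomous=True`).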
Under what circumstances would artificial intelligence harm human beings? Three conditions must be met at the same time. First, it must have the ability to act: AlphaGo is a game-playing program that cannot move, so it cannot harm humans. Second, it must have enough destructive kinetic energy to be harmful: a floor-sweeping robot has no destructive kinetic energy, so it will not harm humans. Third, it must have autonomy: a system that completely obeys humans will not take the initiative to harm them, although it may still harm them by accident.
Of these three conditions, mobility is already solved, and destructive robots already exist, which is a risk factor; the key question is autonomy. We should not be too confident that robots will never evolve to the point of harming humans, and should therefore constrain them in advance. The international safety standards for collaborative robots propose four kinds of constraints. The first is the safety-rated monitored stop: when something goes wrong, the robot can be brought to a stop. The second is hand guiding: if a robot can begin a task only when started by hand, it cannot set aggressive goals for itself. The third is speed and separation monitoring: the robot must slow down as it approaches a person. The fourth is power and force limiting: force must be reduced rapidly when the robot is close to a person. All of these serve to protect humanity.
We propose a method to prevent artificial intelligence actors from getting out of control: the AI safety hoop, shown in Figure 1. In the figure, the in-series module connects to the actor's decision system and drive, and the anti-tamper module destroys the actor if the device is forcibly removed, ensuring the device cannot be stripped from the actor. The core points of the AI safety hoop method are: ① the actor's drive must use active probing or passive monitoring to detect the presence of an authorized, authenticated, trusted control system (the AI safety hoop) and accept its full control; ② when the actor cannot detect such a trusted control system, it must stop all work; ③ speed and distance monitoring: when the distance between any dangerous part of the actor and a person falls below the safe distance, a protective stop is triggered, along with the safety-rated functions connected to the actor; ④ when the actor is out of control, the system can take remote control of it according to remote commands, so that it cannot harm humans or the harm is kept to a minimum; ⑤ the system identifies risks in the actor's behavior and raises an alarm when a risk is found, to further prevent harm from an out-of-control actor.
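The control points above amount to a watchdog wrapped around the actuator. The sketch below is a minimal illustration of that logic under stated assumptions: the class and field names are hypothetical, the heartbeat flag stands in for detecting the authorized trusted controller, and distance is a single number rather than real sensor data.

```python
SAFE_DISTANCE = 1.0  # metres; below this, trigger a protective stop (assumed value)

class SafetyHoop:
    """Toy gate that the actor's drive must consult before every command."""

    def __init__(self):
        self.heartbeat_ok = True  # point 1: trusted control system detected?
        self.alarm = False        # point 5: risk alarm state

    def check(self, distance_to_person, remote_stop=False):
        """Return True only if the drive may execute its next command."""
        if remote_stop:                 # point 4: remote takeover overrides all
            self.alarm = True
            return False
        if not self.heartbeat_ok:       # point 2: no trusted controller, halt
            return False
        if distance_to_person < SAFE_DISTANCE:  # point 3: protective stop
            self.alarm = True           # point 5: raise an alarm on risk
            return False
        return True

hoop = SafetyHoop()
hoop.check(2.0)   # person far away: command allowed
hoop.check(0.5)   # person too close: protective stop, alarm raised
```

The fail-safe design choice is that every branch except the last returns False: the drive moves only when all conditions positively hold, so a lost heartbeat or missing sensor reading defaults to stopping.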
Fig. 1 The AI safety hoop: a method of preventing and controlling AI actors
Concluding remarks
As a highly disruptive, strategic core technology, artificial intelligence has attracted great attention from industry, academia, and governments worldwide. Applying artificial intelligence in the security field is increasingly urgent; at the same time, the security problems of artificial intelligence itself must not be underestimated. The integrated, innovative development of the two is an important driving factor that cannot be ignored in the strategy of building a strong country.
Selected from the Communications of the Chinese Association for Artificial Intelligence, special topic on Artificial Intelligence and Security, Vol. 10, No. 4, 2020
Fang Binxing
Academician of the Chinese Academy of Engineering; Chief Scientist of China Electronics Corporation (CEC); Chairman of the Chinese Information Processing Society of China, the China Cyberspace Security Talent Education Alliance, and the China Cyberspace New Technology Security Innovation Alliance; Honorary Dean of the Cyberspace Institute of Advanced Technology at Guangzhou University; Chief Advisor of the School of Computer Science at Harbin Institute of Technology (Shenzhen); Director of the National Engineering Laboratory for Information Content Security Technology; and Director of the Key Laboratory of Trusted Distributed Computing and Services, Ministry of Education.
Source: https://www.toutiao.com/a6824790024593080846/