Source: http://blog.sina.com.cn/s/blog_cfa68e330102zo7u.html
"the pace of human creation of technology is accelerating, and the power of technology is growing at an exponential rate. Exponential growth is confusing, starting with tiny growth and then exploding at an incredible rate-if one does not pay close attention to its trends, this growth will be completely unexpected."
So wrote Ray Kurzweil, described by Inc. magazine as "the rightful heir to Thomas Edison," in his book The Singularity Is Near. The world-renowned inventor, holder of 13 honorary doctorates, paints a picture of a future society shaped by artificial intelligence.
Kurzweil argues that, driven by Moore's Law, technology will keep growing exponentially and that human society will reach the artificial-intelligence singularity around 2045. He further holds that biological human beings are, in essence, an algorithmic system running on a highly complex neural network, one that will eventually be replaced by more advanced algorithmic systems.
"Blind optimism may be the deadliest weapon of mass destruction." Pierrot Sgarufi said: "artificial intelligence is not a new concept, it originated in 1956 or more, but in the past, artificial intelligence did not develop rapidly because computer processing systems were not powerful enough."
Judging by how far artificial intelligence has actually been applied, current progress in autonomous driving seems to bear out Scaruffi's point. Looking back at the great transformations in human history, from the improvement of the steam engine to the invention of the internal combustion engine, it is not hard to see that transportation has always been at the forefront of applying advanced technology.
Tracing the technology back to its source, the recent surge in autonomous driving rests on the breakthrough Geoffrey Hinton achieved in deep learning in 2006. From that point on, neural-network-based deep learning could be applied in depth to computer vision, speech recognition, and behavioral decision-making, forming the software foundation of autonomous driving. There are no major technical obstacles left in the engineering of such systems, so the ceiling of autonomous driving still lies in the limitations of deep-learning-based AI itself.
At the same time, AI-based L4 automated driving has begun to enter the commercial stage: Google's Waymo, Baidu's Apollo, and GM's Cruise have demonstrated L4-level operation in limited domains, while Tesla's Autopilot has brought advanced driver assistance to the mass market.
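For reference, the "L4" terminology comes from the SAE J3016 classification of driving automation. The snippet below is only a quick paraphrase of those six levels for readers unfamiliar with them; the wording is ours, not the standard's official text.

```python
# SAE J3016 levels of driving automation, paraphrased for quick reference
# (informal summary, not the standard's official wording).
SAE_LEVELS = {
    0: "No automation: the human driver does everything.",
    1: "Driver assistance: steering OR speed is assisted (e.g., adaptive cruise).",
    2: "Partial automation: steering AND speed are assisted; the driver must supervise.",
    3: "Conditional automation: the system drives in limited conditions; the driver must take over on request.",
    4: "High automation: the system drives itself within a defined domain; no takeover is needed there.",
    5: "Full automation: the system drives itself everywhere, under all conditions.",
}

for level, description in SAE_LEVELS.items():
    print(f"L{level}: {description}")
```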
The Achilles' heel of driverless cars
In 2016, in the United States, a Tesla Model S operating on Autopilot crashed into a white semi-trailer, killing the driver in the first fatal accident involving a self-driving system.
Afterwards, analyzing the conditions at the accident site, several experts pointed out that in strong sunlight the camera-based image recognition system failed to detect the white truck crossing the road in time. In addition, the millimeter-wave radar was mounted low, and a typical millimeter-wave radar has a vertical field of view of less than ±5°; as the Tesla closed in on the side of the trailer, the radar beam passed underneath the trailer body, so the truck was missed and the collision occurred. After the accident, Tesla improved the driving system and revised the description of Autopilot on its official website.
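The "beam passed underneath the trailer" explanation is easy to make concrete with a little geometry. The sketch below is purely illustrative: the radar mounting height and the trailer's ground clearance are assumed values, not Tesla's actual specifications; only the ±5° vertical field of view comes from the analysis above.

```python
import math

# Illustrative geometry: a bumper-mounted radar with a narrow vertical field of
# view stops "seeing" the raised body of a crossing trailer once the car is
# close enough that the top of the beam passes underneath it.
# All dimensions below are assumed for illustration only.

RADAR_HEIGHT_M = 0.5          # assumed mounting height of the bumper radar
VERTICAL_HALF_FOV_DEG = 5.0   # "less than +/-5 degrees" per the analysis above
TRAILER_CLEARANCE_M = 1.2     # assumed ground clearance under the trailer body

def beam_top_height(distance_m: float) -> float:
    """Height of the upper edge of the radar beam at a given forward distance."""
    return RADAR_HEIGHT_M + distance_m * math.tan(math.radians(VERTICAL_HALF_FOV_DEG))

def trailer_in_beam(distance_m: float) -> bool:
    """True if the beam reaches the trailer body rather than passing under it."""
    return beam_top_height(distance_m) >= TRAILER_CLEARANCE_M

# The beam only climbs above the trailer's underbody beyond a critical distance.
critical_distance = (TRAILER_CLEARANCE_M - RADAR_HEIGHT_M) / math.tan(
    math.radians(VERTICAL_HALF_FOV_DEG)
)
print(f"Trailer leaves the radar beam within ~{critical_distance:.1f} m")  # ~8 m
for d in (30, 15, 10, 5):
    print(f"at {d:2d} m: trailer illuminated by radar -> {trailer_in_beam(d)}")
```

With these assumed numbers, the trailer body drops out of the beam once the car is within roughly eight meters, which is far too late to brake from highway speed.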
Safety is indeed the Achilles' heel blocking the full deployment of self-driving technology. So far, self-driving systems built with deep-learning-based AI have not truly solved the safety problems that arise when the computer "misreads" a scene.
Seen against the evolution of AI technology, the "intelligence" centered on deep learning is not real intelligence; it is a statistical "optimal solution" computed from big data and learning algorithms in the spirit of dynamic programming. Within this framework, solving the safety problem of self-driving means pushing the probability of an "unsafe" outcome below a red line defined by the human accident rate; only then does self-driving clear the minimum bar of acceptance for entering ordinary households.
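To make "a statistical optimal solution under dynamic programming" concrete, here is a deliberately tiny sketch: value iteration on a toy Markov decision process, a one-dimensional road with a hazard cell. Everything in it (states, rewards, slip probability) is invented for illustration; no production driving stack works on anything this simple.

```python
# Toy illustration of "optimal decision as a statistical computation":
# value iteration on a tiny Markov decision process -- a one-dimensional road
# with a hazard cell. All states, rewards, and probabilities are invented.

N_CELLS = 6            # road cells 0..5; cell 5 is the destination (terminal)
HAZARD = 3             # a hazard occupies cell 3 (terminal if entered)
GAMMA = 0.95           # discount factor
SLIP = 0.1             # chance that "fast" overshoots by one extra cell

ACTIONS = ("slow", "fast")   # slow: advance 1 cell; fast: advance 2 (may slip to 3)

def transitions(state, action):
    """Return (probability, next_state) pairs for taking `action` in `state`."""
    if action == "slow":
        return [(1.0, min(state + 1, N_CELLS - 1))]
    return [(1.0 - SLIP, min(state + 2, N_CELLS - 1)),
            (SLIP, min(state + 3, N_CELLS - 1))]

def reward(next_state):
    if next_state == HAZARD:
        return -10.0     # collision
    if next_state == N_CELLS - 1:
        return 1.0       # reached the destination
    return -0.1          # small per-step cost, encouraging progress

def value_iteration(sweeps=100):
    values = [0.0] * N_CELLS
    for _ in range(sweeps):
        new_values = list(values)
        for s in range(N_CELLS - 1):      # destination is terminal
            if s == HAZARD:               # hazard is terminal too
                continue
            new_values[s] = max(
                sum(p * (reward(s2) + GAMMA * values[s2])
                    for p, s2 in transitions(s, a))
                for a in ACTIONS)
        values = new_values
    return values

def greedy_policy(values):
    """Pick, in every non-terminal cell, the action with the best expected value."""
    return {s: max(ACTIONS, key=lambda a: sum(
                p * (reward(s2) + GAMMA * values[s2])
                for p, s2 in transitions(s, a)))
            for s in range(N_CELLS - 1) if s != HAZARD}

print(greedy_policy(value_iteration()))
# e.g. {0: 'slow', 1: 'slow', 2: 'fast', 4: 'slow'}
```

With these toy numbers the policy takes the cautious action while the slip risk could land it on the hazard, then "jumps" over the hazard where the expected value says so. The choice is optimal only in this statistical sense; nothing in the computation understands why cell 3 is dangerous, which is precisely the gap described above.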
In May this year, Zheng Nanning, an academician of the Chinese Academy of Engineering, gave a keynote speech on "intuitive AI and autonomous driving" at the sixth China Robot Summit in Ningbo. Academician Zheng argued that it is impossible to model every scenario within an algorithmic model; instead, one should "build human-like autonomous driving on a cognitive foundation, so that AI-based autonomous driving has a human-like decision-making mechanism and can cope with highly dynamic, strongly random changes in traffic scenes."
From the editor's point of view, building an algorithmic model on the human decision-making mechanism runs into the fact that, under current technology, AI cannot possess human-like "consciousness". On the one hand, human decisions draw on experience from many domains rather than a single decision mechanism learned in a fixed driving scene; on the other hand, perceptual factors dominate the decision process for most people. Algorithmic decision-making is 100% rational, yet in certain specific situations the rational decision is not the "optimal choice".
In the movie "Mechanical Enemy" (also known as "I, Robot"), Dale Spooner, starring will Smith, falls into the water with a little girl in a car accident. After calculation, the artificial intelligence robot chooses the more productive Dale Spooner to give up the little girl's life, and if a similar event happens in reality, rescuers as human beings will obviously give priority to saving the girl. Because this is the "optimal solution" under the constraint of human nature.
The "Singularity" of AI driving Technology under "AI Safety Trap"
Looking ahead, self-driving will at some point be applied to transportation across the board, and existing traffic rules and even road layouts will change accordingly. From the first applications of self-driving to the full arrival of the driverless era, people will live for a long time in a mixed era of "human + AI" driving, and during this process the corresponding laws and regulations will have to adapt as well.
If safety is the "ticket" that lets AI-based self-driving land, then fitting self-driving into the existing traffic system and its rules is a direct "game" between AI and human beings.
In essence, the evolution of AI self-driving is a process in which humans gradually hand driving over to AI on the premise of greater convenience and safety. In this process, humans retain ultimate authority over mobility while delegating control of the vehicle, and the safety that depends on it, to AI in order to free up human effort.
In this game, humans are deeply ambivalent. On the one hand, people hope AI will free them from driving and make travel more comfortable; on the other, they worry that under current technology AI's decisions bring safety hazards and moral hazards. The landing of self-driving is therefore not just a matter of technology, but a systematic adaptation of public acceptance and of traffic laws and regulations written for self-driving.
At the decision-making level, deep-learning-based AI will not have a "human-like" decision model for a long time, so what people can realistically expect of AI self-driving is, in essence, a driving aid with low safety risk. Paradoxically, progress in AI self-driving can push human drivers deeper into the "AI safety trap": on the one hand, an "inhuman" AI cannot genuinely guarantee the driver's safety; on the other, the steadily improving technology breeds complacency in drivers and creates new safety risks.
In the editor's opinion, the key to crossing the "AI safety trap" is whether we can accurately judge the singularity in the evolution of AI self-driving technology. Two criteria can be used to decide whether that technical singularity has been reached: first, AI acquires human-like analytical and decision-making ability, that is, genuinely independent thinking; second, the accident rate of deep-learning-based self-driving on real roads falls far below that of human driving.
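The second criterion is, at least in principle, a measurable statistical claim. The back-of-the-envelope sketch below estimates how many failure-free test miles would be needed before "far below human" could even be demonstrated; the human fatality rate used is an assumed order of magnitude (roughly one per hundred million miles is often cited for the US), not an official statistic.

```python
import math

# How many test miles with ZERO observed events are needed before the upper
# confidence bound on an autonomous fleet's event rate drops below a target?
# Under a Poisson model, zero events in m miles gives a one-sided upper bound
# of -ln(1 - confidence) / m on the rate (about 3/m at 95%: the "rule of three").

HUMAN_FATAL_RATE_PER_MILE = 1.0 / 100_000_000   # assumed: ~1 fatality per 100M miles

def miles_needed_zero_events(target_rate_per_mile: float, confidence: float = 0.95) -> float:
    """Failure-free miles required so the upper bound falls below the target rate."""
    return -math.log(1.0 - confidence) / target_rate_per_mile

# To support "no worse than an average human driver" at 95% confidence:
print(f"{miles_needed_zero_events(HUMAN_FATAL_RATE_PER_MILE):,.0f} miles")      # ~300 million
# To support "ten times safer than an average human driver":
print(f"{miles_needed_zero_events(HUMAN_FATAL_RATE_PER_MILE / 10):,.0f} miles")  # ~3 billion
```

Whatever the exact assumptions, the required mileage runs into the hundreds of millions or billions, which is why this criterion is so hard to verify from road testing alone.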
Moreover, from a practical point of view, software is an indispensable component of AI technology, and once vehicles are networked, an AI that holds control of a vehicle is also more exposed to attack by hackers. Beyond driving safety, therefore, network security is another problem self-driving genuinely has to solve.
So how long will it take before truly driverless cars actually arrive?
From the perspective of AI's development, since the deep-learning breakthrough of 2006, neural-network-based deep learning has advanced rapidly, and big data, learning algorithms, and computing power have become the field's three core elements. Of the three, computing power still depends on powerful computers as its logistical backbone; with Moore's Law faltering, however, the traditional semiconductor industry is gradually hitting a technical bottleneck, and progress in AI may face a new stagnation.
The failure of Moore's Law means that, at existing feature sizes, computing power is running into physical limits, while the growth of AI technology demands ever more compute. It is therefore foreseeable that the growth of AI will run into a new dilemma, and that a stall in AI's development will in turn limit its application to self-driving.
Given current AI technology and its room to grow, the landing of self-driving will inevitably proceed in two stages: commercial deployment in closed scenarios, and commercial deployment as a driver-assistance function. There is still a long way to go before truly intelligent self-driving becomes reality.
Conclusion:
Ray Kurzweil's The Singularity Is Near makes readers feel that the age of artificial intelligence is at hand, but as he also writes in the book, "people always overestimate what can be achieved in the short term and underestimate what can be achieved over the long term." We may still know little about the far-reaching impact real artificial intelligence will have on human society, but we should view the practical application of AI all the more rationally; that is the key to AI technology's lasting prosperity.