
Is it possible for artificial intelligence to surpass human beings in the future?



Epigraph: A person supplied with paper, pencil, and eraser, and subject to strict discipline, is in effect a universal Turing machine.

-- Alan Turing

Artificial intelligence, abbreviated AI, is a technical science that studies and develops the theories, methods, technologies, and application systems used to simulate, extend, and expand human intelligence.

Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine able to respond in ways similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since the birth of artificial intelligence, its theory and technology have grown increasingly mature and its fields of application have kept expanding. One can imagine that the technological products artificial intelligence brings in the future will be "containers" of human wisdom. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is not human intelligence, but it can think like a human and may one day surpass human intelligence.

When it comes to competition between artificial intelligence and human intelligence, one example springs to mind at once: AlphaGo, the Go-playing program developed by Google's DeepMind, which put on a spectacular show in 2016. It first beat Lee Sedol, a long-famous professional then ranked among the world's top ten, 4:1 in a five-game match. A few months later, playing fast online games one after another against the players then recognized as the strongest in the world, including China's "genius boy" Ke Jie, then ranked number one, it racked up some sixty consecutive wins, a perfect record apart from one draw caused by a technical glitch. Go had long been regarded as a domain where artificial intelligence could not defeat human beings; after that, it too was declared "occupied".

In the face of AlphaGo's victory, commentators in the scientific community split into two camps.

One camp, the "pessimists", believes that artificial intelligence is developing so fast that it already threatens human security: the "intelligent-weapons crisis", or even the Matrix-style scenario so common in fiction and science-fiction films in which robots dominate humanity, is on its way.

The other camp, the "optimists", believes that even if it can beat the strongest players in the field of Go, AlphaGo and the supercomputing programs it represents are still some way from real "artificial intelligence". For all its learning, memory, and computing power, AlphaGo remains a blank when it comes to "emotion" and "thought". Humans losing to AlphaGo at Go is no different from humans being unable to outrun a car. At least for now, artificial intelligence poses no great threat to human survival.

Which view is closer to reality? It is hard to say. But we can trace the development of artificial intelligence over recent decades and see what the historical record suggests.

Human imagination about artificial intelligence goes back a long way. The ancient Chinese text Liezi (in the "Tang Wen" chapter) records that a craftsman named Yan Shi in the Western Zhou dynasty built an "intelligent robot" that could not only talk but also sing and dance. Hero of Alexandria, the famous mathematician of ancient Greece, was also said to have built a machine resembling a "vending machine"; whether these accounts are anything more than legend is impossible to verify.

The first person in history to truly set out the principles behind artificial intelligence was the British mathematician Alan Mathison Turing. He analyzed the process of human calculation and reduced it to its simplest, most basic, and most definite operations, thereby describing the basic procedure of computation in a simple way. This simple method rests on an abstract concept of an automaton, and its conclusion is that the algorithmically computable functions are exactly the functions such automata can compute. This not only defined computation, it also connected computation with automata for the first time, with enormous influence on later generations. The "automaton" was later named the "Turing machine". Turing also proposed a method for testing whether a machine is intelligent, now commonly known as the "Turing test".
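The idea is easier to see in code. Here is a minimal sketch of a Turing machine in Python: a finite rule table driving a read/write head over an unbounded tape. The states, symbols, and the little unary-increment program are invented for illustration and are not taken from Turing's paper.

```python
# Minimal Turing machine simulator: a finite table of rules driving a head
# over an unbounded tape. States, symbols, and the sample program are
# illustrative only.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Run until a rule moves the machine into the 'halt' state (or steps run out)."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Example program: scan right over a block of 1s, append one more 1
# (unary increment), then halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_turing_machine("111", rules))  # -> 1111
```

Everything the machine "knows" lives in that rule table, which is exactly Turing's point: calculation reduced to a small set of definite, mechanical steps.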

In the Turing test, a human interrogator is separated from the two subjects (a human and a machine) and asks them arbitrary questions through some device such as a keyboard. If, after many rounds of testing, the machine leads the average interrogator to misjudge which is which more than 30% of the time, it passes the test and is considered to possess human intelligence.

With this thought experiment Turing argued convincingly that a "thinking machine" was possible, and the Turing test became the first serious proposal in the history of artificial intelligence.

The word "artificial intelligence" really appeared in 1956 (two years after Turing's death). A number of scholars from mathematics, psychology, neurology, computer science and electrical engineering and other fields gathered at Dartmouth College in the United States to discuss how to use computers to simulate human intelligence. According to the suggestion of computer scientist John Mc Carthy, this field is officially named "artificial intelligence". Two cognitive psychologists, Herbert Simon and Alan Newell, attended the historic meeting as representatives of the field of psychology. and the "logic theorist" they brought to the conference was the only artificial intelligence software that worked at the time. As a result, Simon, Newell and Dartmouth conference sponsors George McCarthy and Marvin Minsky are recognized as the founders of artificial intelligence, also known as the "father of artificial intelligence".

McCarthy and Minsky launched the conference with an ambitious goal: a dozen or so people working together for two months to design a machine with real intelligence. The years after Dartmouth were indeed a golden age for artificial intelligence. Using clunky transistor computers, researchers developed a series of astonishing AI applications that could solve algebra problems, prove geometric theorems, and learn and use English. These young researchers expressed considerable optimism both in private correspondence and in published papers. In 1970 Marvin Minsky declared: "In three to eight years we will have a machine with the general intelligence of an average human being."

It was also during this period that ELIZA, the first chatbot, was invented; it conversed with users by drawing on responses stored in its script. Unlike Siri or Microsoft's Xiaoice, the assistants on our phones today, ELIZA did not really understand what it was saying. It simply talked to humans in preset patterns, or rephrased the user's own statements back as questions.
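The trick can be sketched in a few lines of Python: a short list of pattern-and-response rules plus a fallback that bounces the user's statement back as a question. The patterns below are invented for illustration and are far cruder than the historical ELIZA script.

```python
# A toy ELIZA-style responder: canned patterns plus pronoun "reflection".
# The rules here are illustrative, not the historical DOCTOR script.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def reflect(text):
    """Swap first- and second-person words so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # Fallback: just bounce the statement back as a question.
    return f"Why do you say: {reflect(sentence)}?"

print(respond("I feel trapped by my job"))   # Why do you feel trapped by your job?
print(respond("The weather is nice today"))  # Why do you say: the weather is nice today?
```

There is no understanding anywhere in this loop, only string matching, which is why ELIZA could hold a conversation without knowing what it was talking about.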

Research and development in artificial intelligence soon hit a bottleneck. On the one hand, computer hardware could not keep up; on the other, scientists found that some seemingly simple tasks, such as recognizing a face or getting a robot to walk around a house on its own, were extremely difficult to achieve. They could build an AI that easily solved junior-high geometry problems, yet it could not steer its own feet out of a small room. The two robots in the famous Star Wars films of the late 1970s and 1980s more or less reflect what artificial intelligence looked like at the time: funny, loyal, and clumsy.

The two giants of artificial intelligence, McCarthy and Minsky, also disagreed. The artificial intelligence Minsky wanted was an AI that could truly understand human language, grasp the meaning of a story, and work no differently from the human brain, one that could even, like a human, make judgments not grounded in logical algorithms; in other words, an artificial intelligence with "perception". His camp came to be called the "scruffies". The other camp, represented by McCarthy, was called the "neats": they did not want robots to think the way humans do, only a "machine" that could solve problems by following established procedures.

But with rapid advances in computer technology and in the neuroscience of the human brain, a whole new way of thinking emerged in the 1980s: to achieve true intelligence, a machine must have a body; it needs to perceive, move, survive, and interact with the world. During this period both the United States and Japan produced a flood of entertainment built around giant robots, most famously the Transformers series and the shape-shifting lion series that our generation was obsessed with as children.

But whether it is Optimus Prime or Megatron, these giant robots from other planets differ from the artificial intelligence we are discussing in at least one respect: the "thoughts" and "emotions" in their heads are innate, not man-made.

Giving a machine real life is no easy thing. Still, as computer hardware has advanced, artificial intelligence has grown rapidly. According to Moore's Law (an observation by Intel co-founder Gordon Moore, whose core claim is that the number of transistors that can be packed onto an integrated circuit doubles roughly every 18 months to two years), the computing speed and memory capacity of computers keep doubling on a steady cadence. Any computer today is tens of millions of times faster than the machines McCarthy used in the 1950s. Faced with this explosion in computing power, many problems that once seemed unsolvable have been solved with ease.
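The compounding behind that figure is easy to check with a back-of-the-envelope calculation; the sixty-year span and the idealized doubling periods in the sketch below are assumptions chosen for illustration, not measurements.

```python
# Back-of-the-envelope growth implied by steady Moore's-law doubling.
# The span and doubling periods are illustrative assumptions; real
# hardware history is messier.

def moores_law_factor(years: float, doubling_period_years: float) -> float:
    """Return the growth factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    # From the mid-1950s machines of the Dartmouth era to the mid-2010s of AlphaGo.
    for period in (1.5, 2.0):
        factor = moores_law_factor(60, period)
        print(f"doubling every {period} years over 60 years -> x{factor:,.0f}")
```

Even under the slower two-year assumption, sixty years of doubling multiplies capacity about a billionfold, which is why "tens of millions of times faster" is, if anything, an understatement.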

On May 11, 1997, Deep Blue, a chess-playing supercomputer built by IBM, defeated world champion Garry Kasparov in a six-game match. It became a landmark in the progress of artificial intelligence, and people even made up jokes playing up how frightening artificial intelligence might become.

The Matrix, the 1999 film popular all over the world, more or less reflects people's mixture of worship and fear toward artificial intelligence. In it, the young hacker Neo discovers that the seemingly normal real world is actually controlled by a computer artificial-intelligence system called the Matrix, and that real humans have long since become slaves of artificial intelligence, immersed in nutrient fluid and used as biological batteries.

Yet for nearly two decades artificial intelligence has shown no hostility toward humans (or perhaps we are already under its control). In recent years it has come to be widely recognized that many of the problems AI sets out to solve have become research topics in mathematics, economics, and operations research. A shared mathematical language not only lets AI cooperate with other disciplines at a higher level, it also makes research results easier to evaluate and to prove; AI has become a more rigorous branch of science. Meanwhile, outside of science-fiction circles, the topic of "artificial intelligence ruling mankind" is rarely raised.

The arrival of AlphaGo, however, has added to people's worries, because it was designed to break through what was long a forbidden zone for machine players: choosing moves by something like intuition rather than exhaustive calculation, "thinking" the way humans do. So, given enough time, could a machine that truly passes the Turing test appear? And would such an artificial intelligence, able to crush human beings intellectually, really be content to serve us?

Speaking of which, one has to mention Isaac Asimov, the scientist and part-time popular-science writer who put forward the famous "Three Laws of Robotics" in his 1950 story collection I, Robot, namely:

The First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm.

The Second Law: a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

The Third Law: a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

On the surface the three laws sound like truisms, but look closely and you find they interlock logically, yoking artificial intelligence to a rule it cannot escape: protect yourself, but never at the cost of harming human beings. Looking back over the history of artificial intelligence, we can give a definite answer to the question: is it possible for artificial intelligence to surpass human beings in the future? Yes! Not merely possible but very likely, and as hardware advances that day will come soon. Do we therefore need to be on guard against artificial intelligence? No! As long as the three laws of robotics hold, it will not be able to get out of hand.
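That interlocking is, at bottom, a strict priority ordering, which a toy sketch can make explicit; the "action" model below is entirely made up for illustration and claims nothing about how a real robot would be built.

```python
# Toy encoding of the Three Laws as a strict priority check.
# The Action fields are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False          # would violate the First Law directly
    allows_human_harm: bool = False    # the First Law's inaction clause
    ordered_by_human: bool = False     # relevant to the Second Law
    endangers_self: bool = False       # relevant to the Third Law

def permitted(action: Action) -> bool:
    # First Law dominates everything else.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey orders, which we already know do not break the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation counts only when the higher laws are silent.
    return not action.endangers_self

print(permitted(Action(ordered_by_human=True, endangers_self=True)))  # True: an order outranks self-preservation
print(permitted(Action(harms_human=True, ordered_by_human=True)))     # False: the First Law vetoes the order
```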

And if one day the robots manage to crack the three laws, well, good luck to us all.

Sources: compiled from Gossip Psychology: I Know What You Are Thinking and the Baidu Encyclopedia entry on artificial intelligence. Images from the internet; copyright belongs to the original authors. Editor: Zhang Runxin. ★ About the author ★

An Xiaoliang, who writes online as Andes Chenfeng, is a nationally certified Level II psychological counselor without formal training in the field and a not-quite-professional online literary critic. A prison guard by profession, he enjoys psychology and writing and hopes his efforts will help more people understand psychology scientifically. His book, I Know What You Are Thinking: Gossip Psychology (Tsinghua University Press), takes the great psychologists of history as its protagonists, tracing the whole history of psychology through their achievements and their lives, interspersed with interesting psychological facts suitable for general readers, all in plain, humorous language that lets readers who have never encountered psychology follow its development and take an interest in it. This article comes from the WeChat official account Origin Reading (ID: tupydread); author: An Xiaoliang; editor: Zhang Runxin.
