A brief history of AI: some of the wishes made by various countries back then have yet to come true
Introduction: Recently, Internet heavyweights such as Jack Ma, Pony Ma (Ma Huateng), and Robin Li appeared and spoke at the 2018 World Artificial Intelligence Conference. They offered their own views on the current state and future of artificial intelligence, which sparked heated discussion among netizens. Some felt the speeches were full of substance; others took a different view, arguing that we do not really understand artificial intelligence, let alone predict its future.
Looking back at history, you will find that artificial intelligence has always been a favorite topic of computer and Internet leaders at home and abroad. Some of their views and predictions have become part of everyday life, while others never came true. This article reviews the important stages in the development of artificial intelligence, along with memorable remarks from the field's leading figures.
Author: Qian Gang
This article is excerpted from A Brief History of Silicon Valley: The Road to Artificial Intelligence. Please contact us for reprint permission.
Before reviewing the history of artificial intelligence, let's look at a popular definition. The most widely accepted definition of artificial intelligence is a machine that can think and act as rationally as a human being. "Acting" is understood broadly as deciding on and carrying out actions, not merely physical movement.
Artificial intelligence is divided into two categories: strong and weak. Strong artificial intelligence is an intelligent machine capable of reasoning and problem solving; it is a machine with perception and self-awareness. Strong AI is further divided into two kinds: humanlike artificial intelligence, that is, machines that think and reason the way humans do; and non-humanlike artificial intelligence, that is, machines whose perception, awareness, and mode of reasoning differ from ours. Proponents of the weak-AI view hold that it is impossible to build intelligent machines that can genuinely reason and solve problems: machines that merely look intelligent possess neither real intelligence nor autonomous consciousness.
The core problem of artificial intelligence is to give machines and software humanlike capabilities for knowledge, learning, and reasoning. Concretely, this means making artificial intelligence reach or exceed human levels of skill and efficiency in work based on reasoning and analysis. Computer hardware has now reached levels unimaginable in the past, paving the way for artificial intelligence. Indeed, in proving mathematical theorems, in chess, and even in stock investing, artificial intelligence has already surpassed human beings.
At present, the main tool for studying and realizing artificial intelligence is the computer, so the history of artificial intelligence is intertwined with the history of computer science and technology. But artificial intelligence also draws on many other disciplines: information theory, cybernetics, automation, bionics, biology, psychology, mathematical logic, linguistics, medicine, and philosophy.
01 Early artificial intelligence
The earliest "artificial intelligence" appears in the myths and legends of our ancestors: both ancient Greece and ancient China have stories of intelligence being granted to mechanical devices. In 1863, Samuel Butler's essay "Darwin among the Machines" explored the possibility that machines might evolve intelligence through natural selection.
Artificial intelligence rests on a basic assumption: that the human thinking process can be simulated by machines. This is a form of formal reasoning. In antiquity, Aristotle's formal logic and Euclid's Elements were models of formal reasoning.
In the 17th century, the European philosopher-mathematicians Leibniz and Descartes tried to turn thought into mathematics. Leibniz proposed a universal language of reasoning that would reduce argument to calculation, so that disputes among philosophers could be settled by logic. These early philosophers already understood that formal reasoning depends on a formal language system.
At the beginning of the 20th century, mathematical logic made great progress. Hilbert posed a fundamental question: can all mathematical reasoning be formalized? The question was soon answered by Gödel's incompleteness theorem: any consistent formal system rich enough to express arithmetic is incomplete. Gödel also pointed out that, within certain constraints, any form of mathematical reasoning can be reduced to mechanical steps.
In 1936, the 24-year-old English mathematician Alan Turing proposed the famous Turing machine model, a complete formal model of computation. In 1945 he published a series of papers on the design of electronic digital computers.
As computer technology improved, people began to have the technical means to realize artificial intelligence. The earliest artificial intelligence used electronic networks to simulate human neurons. The excitation levels of such a network are only "1" and "0", with no intermediate states. Wiener's cybernetics described the control and stability of electronic networks; Claude Shannon's information theory described how digital signals can implement logical functions; and Turing's theory of computation proved that binary digital signals suffice to describe any form of computation. Together these laid a solid foundation for artificial intelligence.
The first scholars to propose neural networks were Walter Pitts and Warren McCulloch. They analyzed idealized artificial neural networks and showed how they could perform simple logical operations. Marvin Minsky, one of the founders of artificial intelligence theory, was their student, then only 24 years old. In 1951, Minsky and Dean Edmonds built the first neural network machine, SNARC.
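To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts style threshold unit in Python (a standard textbook formulation offered as an illustration; the weights and thresholds below are chosen for this example, not taken from the original machines). The unit fires, outputting "1", when the weighted sum of its binary inputs reaches a threshold, which is already enough to implement simple logic gates:

```python
def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Simple logic gates expressed as threshold units:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```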
In 1950, Turing published an epoch-making paper, "Computing Machinery and Intelligence", which argued that it was possible to build machines with true intelligence. Turing also gave an operational definition of intelligence: a machine is intelligent if it can pass the Turing test. The test goes like this: if a machine, in conversation, cannot be distinguished from a human, then the machine is intelligent. Turing's work laid a solid foundation for artificial intelligence.
In 1951, Christopher Strachey wrote the first checkers program, and a chess program soon followed. By the mid-1950s, chess programs had reached the level of amateur players. Game-playing programs have always served as a standard by which to measure the progress of artificial intelligence.
In the mid-1950s, scientists began to manipulate symbols with machines. In 1955, Allen Newell and Herbert Simon developed the Logic Theorist program, which proved 38 of the first 52 theorems in Principia Mathematica. Some of its proofs were even more elegant than the originals.
In 1956, the first academic conference on artificial intelligence was held at Dartmouth College, organized by Minsky, John McCarthy, Shannon, and others. McCarthy coined the term "artificial intelligence" at the conference. Participants included Ray Solomonoff, Oliver Selfridge, Arthur Samuel, Newell, and Simon, all of whom would later make important contributions to artificial intelligence research. The conference marked the establishment of artificial intelligence as a discipline.
McCarthy, the father of artificial intelligence and inventor of the LISP language, gave the first definition of artificial intelligence at Dartmouth: making machines behave in ways that would be called intelligent if a human behaved that way. McCarthy's definition, while popular, is not comprehensive.
02 The first golden age
The years after Dartmouth were an era of rapid development for artificial intelligence. During this period, people built programs that displayed intelligence: solving algebra problems, proving geometric theorems, learning and using English. At the time, most people could hardly believe that machines could do such things. The scholars behind these programs believed that fully intelligent machines would appear within 20 years, and the U.S. Department of Defense's Advanced Research Projects Agency (ARPA) allocated large research grants to these projects.
The most influential of these early artificial intelligence programs were based on search-driven reasoning, notably Newell and Simon's General Problem Solver and Herbert Gelernter's geometry theorem prover.
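As an illustration of search-based reasoning, here is a minimal sketch of breadth-first search over a tiny state space (the puzzle is invented for this example; the General Problem Solver itself used means-ends analysis and heuristics rather than blind search):

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Return a shortest list of states from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy puzzle: reach 13 from 1 using the moves "add 1" and "double".
print(bfs(1, 13, lambda n: [n + 1, n * 2] if n <= 13 else []))
# -> [1, 2, 3, 6, 12, 13]
```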
An important goal of artificial intelligence is to let computers communicate with humans in natural language. An early success was Daniel Bobrow's program STUDENT, which could solve high-school algebra word problems.
Soon afterwards, someone developed a chat program that conversed in English, and some users who chatted with it believed they were talking to a human. In fact, the program had no idea what it was saying; it simply replied according to fixed patterns and grammar rules.
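Here is a minimal sketch of such a pattern-matching chat program, in the spirit of the programs described above (the rules are invented for illustration and are not any historical program's actual script):

```python
import re

# Each rule pairs an input pattern with a canned response template.
# The program has no understanding; it matches fixed patterns and
# echoes fragments of the user's input back.
RULES = [
    (r"I am (.*)",      "Why do you say you are {0}?"),
    (r"I feel (.*)",    "How long have you felt {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r"(.*)",           "Please go on."),  # fallback rule
]

def reply(text):
    for pattern, template in RULES:
        m = re.match(pattern, text.strip(), re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(reply("I am sad"))             # -> Why do you say you are sad?
print(reply("My mother called me"))  # -> Tell me more about your family.
```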
In 1958, Newell and Simon predicted: "Within ten years a digital computer will be the world's chess champion," and "within ten years a digital computer will discover and prove an important new mathematical theorem."
In 1965, Simon said: "Machines will be capable, within twenty years, of doing any work a man can do."
At the time, U.S. government funding for artificial intelligence research came almost without conditions. MIT, Carnegie Mellon University, Stanford University, and the University of Edinburgh in the UK were the centers of artificial intelligence research.
03 The difficult 1970s
In the early 1970s, artificial intelligence hit a bottleneck. Even the best artificial intelligence programs could only solve the simplest versions of the problems they addressed; in many people's eyes, artificial intelligence was just a "toy".
In 1976, Hans Moravec formulated what became known as Moravec's paradox: problems that are hard for humans, such as proving theorems, are relatively easy for computer programs, while tasks that seem trivially simple, such as recognizing a face, are extremely hard for programs to accomplish. Faced with this paradox, the artificial intelligence experts of the day were at a loss.
Computing power was another bottleneck. The memory and speed of the computers of that era were insufficient for any practical artificial intelligence problem. Moravec estimated that computers were still millions of times too weak for the demands of artificial intelligence.
In 1972, Richard Karp proved a discouraging result: a large class of problems is NP-complete, and for these the best known algorithms take time that grows exponentially with the size of the input. Except for the smallest instances, solving them would take practically forever. In other words, artificial intelligence might never be of practical value.
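To see why exponential growth is fatal in practice, compare a polynomial-time and an exponential-time algorithm on an input of size $n = 100$, assuming a machine that executes $10^9$ steps per second:

$$
n^3 = 10^6 \ \text{steps} \approx 10^{-3}\ \text{seconds},
\qquad
2^n = 2^{100} \approx 1.3 \times 10^{30}\ \text{steps} \approx 4 \times 10^{13}\ \text{years}.
$$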
With no substantial progress to show, governments gradually cut off funding for artificial intelligence research. By 1974 it was difficult to find government money for artificial intelligence projects.
Meanwhile, experts from other fields also began to criticize artificial intelligence. Some philosophers claimed that Gödel's incompleteness theorem shows that formal systems (such as computer programs) can never judge the truth of certain statements. Others argued that human reasoning involves only a small amount of "symbol processing", most of it being concrete, intuitive, and subconscious. Still others pointed out that a program does not understand the symbols it manipulates (the problem of intentionality), and that if symbols mean nothing to a machine, the machine cannot be said to be thinking.
Artificial intelligence researchers did not take criticism from other fields very seriously, but computational complexity and the problem of giving programs common sense were issues they had to face.
In 1976, Joseph Weizenbaum published the monograph Computer Power and Human Reason, which argued that the misuse of artificial intelligence could diminish the value of human life.
As early as 1958, McCarthy had proposed introducing logic into artificial intelligence. In 1963, J. Alan Robinson discovered the key algorithms for mechanized reasoning: resolution and unification. In the late 1960s, however, McCarthy found that implementing logical reasoning directly this way had enormous computational complexity: even proving very simple theorems could require an astronomical number of steps. In the 1970s, Robert Kowalski at the University of Edinburgh, together with Alain Colmerauer and Philippe Roussel in Marseille, developed the logic programming language Prolog.
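Here is a minimal sketch of the unification step at the heart of resolution (a simplified illustration written for this article; it omits the occurs check and other refinements of Robinson's actual algorithm). Terms are nested tuples; following the Prolog convention, strings beginning with an uppercase letter are variables and everything else is a constant:

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until reaching an unbound term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure.
    Note: omits the occurs check for brevity."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# unify father(X, tom) with father(bob, Y):
print(unify(("father", "X", "tom"), ("father", "bob", "Y")))
# -> {'X': 'bob', 'Y': 'tom'}
```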
04 The prosperous 1980s
In the 1980s, a kind of artificial intelligence program called the expert system began to be adopted by companies, and knowledge processing soon became the mainstream of artificial intelligence. An expert system answers questions or solves problems in a specific domain by applying a set of logical rules derived from expert knowledge. The earliest expert systems were developed by Edward Feigenbaum's team: DENDRAL (1965) could infer chemical structures from spectrometer readings, and MYCIN (1972) could diagnose blood infections.
Because an expert system works within a very narrow domain of knowledge, it sidesteps the common-sense problem, and because it is simple to implement and modify, it found a wide range of applications. Through expert systems, people could finally see the practicality of such programs: artificial intelligence had at last become useful.
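Here is a toy sketch of how such a rule-based expert system works (the rules below are invented for illustration; they are not MYCIN's or XCON's actual knowledge base). The engine repeatedly fires any rule whose premises are all established facts:

```python
# Each rule: if all premises are known facts, conclude the consequent.
RULES = [
    ({"fever", "infection_site_blood"}, "suspect_bacteremia"),
    ({"suspect_bacteremia", "gram_negative"}, "suggest_antibiotic_A"),
]

def forward_chain(facts, rules):
    """Fire rules whose premises all hold, until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "infection_site_blood", "gram_negative"}, RULES))
# -> includes 'suspect_bacteremia' and 'suggest_antibiotic_A'
```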
In 1980, Carnegie Mellon University designed the expert system XCON for Digital Equipment Corporation, and it was a great success, saving the company $40 million a year. Companies all over the world began developing and deploying expert systems; by 1985, artificial intelligence had attracted more than a billion dollars of corporate investment. An industry sprang up to support expert systems, including hardware companies such as Symbolics and LISP Machines Inc. and software companies such as IntelliCorp and Aion.
The power of an expert system comes from the expert knowledge it stores. The experience of the 1970s had taught people that intelligent behavior is closely tied to knowledge processing, and knowledge-base systems and knowledge engineering became the main research directions of artificial intelligence in the 1980s.
In 1981, Japan's Ministry of International Trade and Industry allocated $850 million to support research and development of the fifth-generation computer. The goal was to build a machine that could converse with people, translate languages, interpret images, and reason like a human. Prolog was chosen as the project's main programming language.
Other countries responded in kind. Britain launched the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (MCC) to fund large-scale projects in artificial intelligence and information technology. DARPA organized the Strategic Computing Initiative and began investing heavily in artificial intelligence.
In 1982, the physicist John Hopfield showed that a new type of neural network, now called the Hopfield network, could learn and process information in an entirely new way. Around the same time, David Rumelhart popularized backpropagation, a method for training neural networks. These discoveries led to the commercial success of neural networks in the 1990s, when they were widely used in optical character recognition and speech recognition software.
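Here is a minimal sketch of a Hopfield network (the standard textbook formulation, offered as an illustration of the idea, not Hopfield's own code). Patterns of +1/-1 values are stored with a Hebbian rule, and a corrupted pattern is recalled by iterating threshold updates until the state settles:

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product rule; zero the diagonal (no self-connections)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, steps=10):
    """Synchronously update units until the state stops changing."""
    s = state.copy()
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern, one bit flipped
print(recall(W, noisy))                 # -> [ 1 -1  1 -1  1 -1]
```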
05 Hitting bottom again
In 1987, market demand for specialized artificial intelligence hardware suddenly collapsed. Personal computers had been steadily improving and now outperformed the expensive LISP machines made by Symbolics and other vendors. The artificial intelligence hardware makers lost their reason to exist, and an industry worth half a billion dollars disappeared almost overnight.
Some once-successful expert systems proved expensive to maintain: they were hard to upgrade, hard to use, and fell victim to the various problems that had been exposed years before. The usefulness of expert systems turned out to be limited to certain narrow contexts.
In the late 1980s, the Strategic Computing Initiative slashed funding for artificial intelligence. DARPA's new leadership believed that artificial intelligence was not "the next wave" and directed funding toward projects that seemed more likely to bear fruit.
The "Fifth Generation computer Engineering" was not realized until 1991. Some of these goals, such as "talking to humans", were not achieved until 2010.
06 Prosperity again
In the late 1980s, some scholars proposed an entirely new approach to artificial intelligence. They believed that to achieve true intelligence, a machine must be able to perceive, move, survive, and interact with the world. These sensorimotor skills, they argued, are essential to higher-level abilities such as common sense, whereas abstract reasoning is the least essential and least interesting of human skills. They advocated building intelligence "from the bottom up".
In the 1990s, artificial intelligence finally achieved some of its earliest goals and was successfully applied across the technology industry. These achievements were due mainly to improvements in computer performance.
On May 11, 1997, Deep Blue defeated the world chess champion Garry Kasparov.
In 2005, a robot car developed by Stanford University autonomously drove 131 miles along a desert trail to win the DARPA Grand Challenge.
In 2009, the Blue Brain Project successfully simulated part of a mouse brain.
In 2011, IBM's Watson competed on the quiz show Jeopardy! and defeated its human champions in the final episode.
These achievements, however, were not due to any revolution in paradigm; they were sophisticated applications of existing engineering techniques. It is now widely recognized that many of the problems artificial intelligence must solve have become topics in mathematics and operations research. This shared mathematical language lets artificial intelligence collaborate with other disciplines at a higher level, and makes its results easier to evaluate and prove. Artificial intelligence has become a rigorous branch of science.
Today, computing power has grown to unprecedented levels. In principle, the development of artificial intelligence is almost unlimited, and it will affect our daily lives and our values as profoundly as the Internet does today.
About the author: Qian Gang works at Texas Instruments on the development and research of semiconductor technology and semiconductor devices. A popular author on ScienceNet, his writing has received more than 10 million views online. Qian Gang mainly writes essays on history, science, and technology; his major works include the American history chronicle The American Past and A Brief History of Silicon Valley.
This article is excerpted from A Brief History of Silicon Valley: The Road to Artificial Intelligence, published here with the publisher's authorization.
Source: https://www.cnblogs.com/DicksonJYL/p/9711418.html