

Four Basic Questions of General Artificial Intelligence

2025-01-30 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/03 Report --

https://mp.weixin.qq.com/s/AqyYSItYAiscJFdJ6bcQnw

Written by Wang Pei (Department of Computer and Information Science, Temple University, USA)

Like many other fields of research, the basic questions of artificial intelligence can be reduced to four: "What to do?", "Can it be done?", "How to do it?", and "Should it be done?" Below is my brief analysis of each.

What to do?

Can it be done?

How to do it?

Should it be done?

1

Research Goals

In "What Exactly Are You Talking About When You Talk About Artificial Intelligence?" (click "Read the original text" at the end of this article; the same applies below), I listed five different research goals pursued under the name "artificial intelligence". The currently popular definition, "artificial intelligence means using computers to solve problems that only the human brain can solve", belongs to the "capability school". The advantage of this school is that it is easy to understand and yields direct results; the disadvantage is that it draws the circle so large that work formerly called "automation" or "computer applications" is now relabeled "artificial intelligence". Because this definition makes AI cover a large number of completely different systems, it is unlikely that a unified theory of artificial intelligence can be established within its scope.

In "Artificial Intelligence Lost: Is High Computer Skill Equal to High Intelligence?" I introduced the concept of "general artificial intelligence" and its ups and downs in history. Today the term appears more and more in all kinds of discussions, but there are still many misunderstandings about its meaning. For example, a common practice is to call a general-purpose system "strong artificial intelligence" and a special-purpose system "weak artificial intelligence". This distinction has some basis (the goal of the former is much higher than that of the latter), but the difference between the two is not in strength of capability (dedicated systems often far exceed humans in capability) but in scope of application and working principle. The terminology therefore leads people to mistake a difference in "quality" between the two for a difference in "quantity", and to imagine that integrating various dedicated systems would yield a general-purpose system.

Even among current general artificial intelligence researchers, the exact research goals differ. Some try to simulate the structure of the brain as faithfully as possible, some try to replace humans in as many fields as possible, and some (including me) try to make computers follow essentially the same "laws of thought" as humans do.

Some readers may feel that researchers in this field are not even clear about their basic concepts and goals. What they fail to realize is that an accurate characterization of many complex phenomena cannot be given at the beginning of a study; it is itself part of the research results, so it is unrealistic to demand that everyone "unify their thinking" first. On the other hand, the view that there is no need to argue about the definition of "intelligence", and that one can simply follow intuitive usage, is precisely an important source of the confusion in this field, and is equally undesirable. Many disputes in artificial intelligence research can be traced back to different understandings of "intelligence", and these cannot be settled by dictionaries, authorities, or opinion polls. When research goals differ, the answers to the other related questions differ as well. The lack of consensus on this issue means precisely that we should take care to distinguish different research goals and avoid making blanket assertions about "artificial intelligence".

2

Possibility of Success

The possibility of success has been controversial ever since "artificial intelligence" and "thinking machines" became objects of study. With the success of technologies such as deep learning, the feasibility of artificial intelligence in the popular sense (reaching or exceeding human ability on a specific problem) is no longer in question, but the feasibility of general artificial intelligence is still widely doubted.

On this question, no affirmative argument is likely to win general public acceptance even long after the academic community considers the matter settled; some will insist, for instance, that no matter what a system has done, it still has no "soul". I will therefore only briefly explain why the existing negative arguments are untenable. There are several different cases.

One kind of "artificial intelligence is impossible" assertion arises from a misunderstanding of the field's research goals, and so attacks a straw man. People holding this attitude tend to assume that the goal of the field is to make computers identical to human beings in every respect. It is not difficult to find evidence that this is impossible; the problem is that I do not know of any researcher who actually pursues that goal. In fact, those who study artificial intelligence (including general artificial intelligence) hold only that computers can be similar to the human brain in certain respects and to certain degrees. Many researchers believe that behind the many phenomena of "human intelligence" lies a more general mechanism of "intelligence", and that "artificial intelligence" is another way of realizing this mechanism. On this view, even a fully realized artificial intelligence would not behave exactly like human intelligence. Assertions of this kind that "artificial intelligence is impossible" therefore have no bearing on research in the field.

By contrast, another kind of "artificial intelligence is impossible" argument deserves attention, because it points directly at apparent "dead ends" of artificial intelligence technology. Common examples include "computers must follow programs, so they cannot be flexible or creative", "computers can only manipulate symbols by their form, but cannot grasp their meaning", and "there are truths that humans can discover but computers never can". I will not discuss them in detail here (see my earlier articles), but only point out a problem they share: each of these arguments targets a specific intelligent technique or use of computers, yet states its conclusion about "artificial intelligence" or "computers" in general, thereby exaggerating its scope of application. Such discussions benefit the development of artificial intelligence, because they provide reference points for the research and development of new theories and techniques. Unfortunately, many people still take them as limits on the heights that any artificial intelligence research can reach.

Ironically, in recent discussions of the limits of artificial intelligence, many assertions of the form "artificial intelligence will never be able to..." come instead from "artificial intelligence experts". This, too, stems from mainstream artificial intelligence's avoidance of the "big problems" after its setbacks. Many people have studied "artificial intelligence" for years while focusing only on implementing a particular function or solving a particular problem, so when they say "no one knows how to achieve general intelligence", what they actually mean is "I don't know how to do it, the celebrities I follow don't know how to do it, and other people's work is not worth attention because they haven't succeeded yet." Since there is reason to regard general intelligent systems and dedicated systems as very different fields, the authority on the former of people who made their names in the latter is actually quite limited; and "it hasn't been done yet" and "it can never be done" are obviously not the same thing.

In short, general artificial intelligence should at least be regarded as possible for now, because there is no sufficiently strong argument against it.

3

Paths to Realization

Since the implementation of dedicated systems varies from problem to problem, I will discuss only general artificial intelligence here, and only a few common viewpoints, leaving the introduction of my own research approach to other articles.

Among those who believe general artificial intelligence is possible, the technology currently considered most promising is of course deep learning. Whenever a new use of deep learning emerges, someone announces that "this marks another step toward general artificial intelligence", as if one could simply continue in this direction. In "Will Deep Neural Networks Produce Human Intelligence?" I explained the difficulties of achieving general intelligence through deep learning and related machine learning techniques. One point to add here: some people think deep learning is already "general" because the technique can be applied in many different fields. But that is not what "general artificial intelligence" means. A deep neural network can indeed be trained to play Go or to recognize photos, but the same network cannot do both at once. Since previous machine learning research has basically aimed at "approximating a single function", extending it to multiple objectives (especially objectives not considered at design time) is by no means easy, because it requires a fundamental change in the whole research paradigm. Today there is no complete roadmap for achieving general artificial intelligence through deep learning, and those who believe in this possibility tend to extrapolate naively from existing results.

Another idea is to integrate various dedicated "modules" within a "framework" so that they work together as a general-purpose system. This is a natural idea, and many people are trying it, but the road is far less smooth than it appears. Any artificial intelligence textbook mentions hundreds of algorithms or designs, each with its own use. Implementing them all in the same computer system is possible in principle, but deciding when to use which tool probably itself requires general intelligence, not to mention that the theoretical presuppositions of these tools often conflict with one another, so they cannot coordinate. Another big problem is that the division of cognitive functions largely follows the traditions of psychology (reasoning, learning, memory, association, perception, motor control, language, emotion, consciousness, and so on), even though the relations among them are obviously very close. If intelligence is indeed like Lushan, "a ridge when viewed from the front, a peak when viewed from the side", it is natural to depict different "ridges" and "peaks" from different angles and distances; but if the aim is to build a model of the whole mountain, it would be wrong to construct these "ridges" and "peaks" separately and then "assemble" them, because these "components" are different aspects of the same object, not different parts of it.

Some try to achieve a unified reproduction of the cognitive functions by building a model more "faithful to the human brain". As I said in "Will Deep Neural Networks Produce Human Intelligence?", the biggest problem with this approach is not its difficulty but its necessity. If we regard intelligence as a cognitive mechanism that can be realized in different ways, there is no reason to think the human brain is the only way to realize it, although it is certainly the way most familiar to us. The model closest to the implementation details of the human brain is not necessarily the most appropriate model for artificial intelligence, however valuable such a model may be for brain science.

In short, technologies effective for other purposes do not necessarily contribute much to general artificial intelligence, because the goals and constraints here are very different. In choosing a technical route, we should proceed from the characteristics of intelligence while taking into account the realistic conditions of computer systems.

4

Ethical Choices

In the end, even if we find a way to build thinking machines, it does not follow that we must build them. Many celebrities have called for artificial intelligence research to slow down or even stop, fearing that human beings will lose their status as the "spirit of all creatures", with all that this would entail. I have already responded to such alarms in "Is Artificial Intelligence Dangerous?", so here I will only add a few points.

First of all, the safety guarantees offered by many "artificial intelligence experts" often concern only the systems they have built or can imagine, which lack the adaptability, flexibility, autonomy, creativity, and other characteristics that general artificial intelligence systems may have; what they are talking about is thus essentially a different problem. Precisely because of these characteristics, the ethical and moral problems raised by general artificial intelligence differ fundamentally from those of traditional technologies and require different solutions.

As an adaptive system, one of the biggest features of a general artificial intelligence is that its behavior depends not only on its design (the innate factor) but also on its experience (the acquired factor). Control of such a system must therefore be achieved through the experience that shapes it, much as society constrains an individual. We cannot expect artificial intelligence workers to design systems that never make mistakes, nor can we expect research on artificial intelligence safety to rule out all dangers in advance. On the other hand, research on such systems can greatly enrich our understanding of adaptive systems (including humans and animals) and extend the scope of pedagogy and sociology (and even economics and law) to cover intelligent machines.

As with the other problems, fear of artificial intelligence often comes from a misunderstanding of its research goals. Many people think that a "general artificial intelligence" will surpass human beings in every field and become nearly omniscient, so that the appearance of such a system will create a "singularity" in human history, after which development will be beyond our control or even our understanding. So far I have seen no evidence strong enough to make me believe this conclusion. I do think general artificial intelligence can be built, and that such a system will have cognitive functions very similar to those of humans. But this does not mean that computers will fully match or exceed human problem-solving ability, because the behavior of an adaptive system depends on its experience, and an artificial intelligence system will never have exactly the same experience as a human being. The specific capabilities of humans and machines will therefore overlap, yet there will still be problems humans can solve and machines cannot. As I argued in "Artificial Intelligence Lost: Is High Computer Skill Equal to High Intelligence?", general "intelligence" and dedicated "skills" are not the same thing: different forms of intelligence, human or artificial, are similar in the former but not necessarily comparable in the latter, just as it is impossible to say who is smarter among Zhuge Liang, Leonardo da Vinci, and Mozart. This also means that the working principles of a general artificial intelligence remain understandable, and that its behavior can be controlled by influencing its experience, even though its speed may be high, its storage capacity large, and its experience very different from ours, so that its specific behavior may not be easy to explain or predict.

In a word, the legitimacy of artificial intelligence research comes not only from humanity's long-standing desire to understand the general laws of thought, but also from society's actual demand for complex information processing. The research also poses new challenges, which we must neither take lightly nor fear blindly. To avoid the dangers artificial intelligence may bring, one should at least first understand what artificial intelligence is. Assertions that "AI will inevitably lead to disaster" tend to fail on exactly this point, ending up tilting at windmills while leaving us defenseless against far more likely dangers. Unless we have sufficient evidence that a technology (including artificial intelligence in any of its senses) will indeed do more harm than good, we have good reason to continue this exploration, while rejecting cheap guarantees and preparing to respond as appropriately as possible to the technology's consequences.
