In addition to Weibo, there is also WeChat
Please pay attention
WeChat public account
Shulou
2025-01-14 Update From: SLTechnology News&Howtos shulou NAV: SLTechnology News&Howtos > Internet Technology >
Share
Shulou(Shulou.com)06/02 Report--
Author: bluemin2020-06-22 10:23 introduction: this article focuses on whether AI needs to have the same attribute of stupidity as human beings. I believe it will give you some thinking and inspiration.
Like people, it seems to be the ultimate goal of AI development.
However, as we all know, human beings have both wisdom and stupidity. If AI wants to be humanoid, does it also need to have these two sides?
From the Turing test at the beginning to test whether AI can be born like a human, when human beings complete the calculation and other tasks, the calculated answers may have a certain error rate and take a certain time to complete the calculation. When AI, which can immediately say the correct answer, also imitates the calculation process like human beings, humans and AI can not distinguish. This is one aspect of AI imitating human stupidity, thus realizing "humanoid".
In terms of human growth and evolution, the mistakes or stupidity made by human beings at the beginning of their own growth and evolution will pave the way for future experience and wisdom. Thus, seemingly useless "stupidity" may offer another possibility for AI to become smarter.
This article focuses on whether AI needs to have the same attribute of stupidity as human beings. I believe it will give you some thinking and inspiration.
one。 Stupid or wisdom
We usually seem to know what it means to judge a person to be smart. On the other hand, when you label a person as "stupid", the question arises. What does it mean? For example, does stupidity mean a lack of wisdom in a zero-sum manner, or does stupidity occupy its own space and live next to wisdom as a parallel equivalent?
In view of this heavy problem, we might as well do a thinking experiment. Suppose we have a bucket full of wisdom, and we regard intelligence as a tangible thing. We can pour it into a bucket at hand, or we can pour it out of it. Let the bucket full of wisdom fall to the ground, what is left?
One answer is that the bucket is now completely empty and there is nothing in it, that is to say, the bucket has become empty. Another answer is that the wisdom in the bucket is emptied, leaving a wreckage of stupidity.
In other words, once you lose your so-called intelligence, the rest is stupidity.
This is a seemingly esoteric discussion, but as you will see later, this view has a profound impact on many important things, especially for the development and rise of artificial intelligence.
Can wisdom exist without stupidity? Or in a practical sense, if wisdom and stupidity coexist, must there always be a certain amount of stupidity? Some people assert that wisdom and stupidity are Zen-like yin and yang. From this point of view, unless you also have a foolish appearance that can be used as a yardstick, you will not be able to understand the nature of wisdom.
It is said that with the passage of time, human beings become smarter and smarter, thus lowering their level of stupidity. You may think that wisdom and stupidity are playing a zero-sum game, that is, as your wisdom goes up, your level of stupidity is falling (again, if your stupidity goes up, it means your wisdom goes down). Can human beings achieve 100% intelligence and 0% stupidity? Or no matter how hard we try to be 100% smart, we are doomed to be some stupid?
Going back to the bucket analogy, some people will say that people will never be completely and completely smart, and completely get rid of stupidity. There is always some stupidity sitting upright in that bucket. If you are smart and work hard, although there is still some stupidity left in the bucket, you may be able to reduce your stupidity.
two。 Is stupidity good or bad for wisdom?
You may take it for granted that any stupidity is a bad thing, so we must always try to "close" it or avoid it. But what we need to ask is whether the simple idea of classifying stupidity as "bad" and intelligence as "good" is likely to leave out something more complex. You might argue that sometimes being stupid on a limited scale may provide a way to improve your wisdom.
When you were a child, suppose you foolishly tripped over your foot. After doing so, you will realize that the reason for the trip is that you did not lift your foot carefully. From then on, you will begin to pay more attention to how to walk and become smarter when walking. Perhaps in your later years, when you walk along a narrow path, you can try to avoid falling off the side of the road, in part because the early life lessons caused by stupidity have become part of your wisdom.
Of course, stupidity can also get us into trouble. Although you have learned to walk carefully from your stupidity, if one day you plan to swagger on the edge of the Grand Canyon, you will fall into the abyss. Is it wise to walk like this on the edge of a cliff? Apparently not.
Therefore, we should note that stupidity can be either a friend or an enemy, and which one in any particular situation and at any particular moment depends on the wisdom part. You might imagine that there is an eternal struggle between wisdom and stupidity.
On the other hand, you may also think that smart and stupid are partners and pull each other together, so this is not a special battle. because it's more like a subtle dance or game around which side has the upper hand and how to coordinate and help each other.
three。 It's time to think about artificial stupidity.
Every day we hear about how the advent of artificial intelligence will change our lives. Artificial intelligence technology is being loaded into our smartphones, refrigerators, cars and so on. If we are going to incorporate artificial intelligence into the objects we use, it raises the question: do we need to consider that there is yang in yin, especially whether we need to fully understand artificial stupidity?
Most people can't help laughing when they hear or see the word "artificial stupidity". They think that the mention of such a thing must be a joke of someone in the know. Admittedly, the combination of artificial and stupid seems stupid in itself.
However, by reviewing previous discussions about the role of wisdom and stupidity in human beings, you can reshape your point of view and may find that whenever you have a discussion about wisdom, you inevitably need to consider the role of stupidity.
Some people suggest that we should use another way to express artificial stupidity to reduce snickering, grandiose phrases including artificial mental retardation, artificial human nature, artificial dullness, etc., but these words have not gone deep into my heart. Please tolerate me to use the term "artificial stupidity" for the time being, and firmly believe that it is not stupid to discuss "artificial stupidity".
Indeed, you can argue that it is foolish to discuss artificial stupidity, because you do not want or do not accept the understanding that stupidity exists in the real world and therefore in the virtual world in which we try to reproduce intelligent computer systems. in this way, you will ignore or ignore the stupidity that is originally the other half of the equation.
In short, some people say that true artificial intelligence needs to combine what we now think of as "smart" or good artificial intelligence with artificial stupidity (no cover-up), and this combination must be achieved in a smart way.
There is a view that by incorporating artificial stupidity into artificial intelligence, you will fundamentally irreparably introduce stupidity into artificial intelligence systems, which may eventually make AI stupid. In fact, we can deal with the direct knee-jerk reaction of many people to this concept by refuting this view.
Of course, if you introduce stupidity in a stupid way, you are likely to harm the AI system and make it stupid. On the other hand, understanding how humans work and introducing folly into deliberate situations may eventually help AI improve his wisdom (think of the story of tripping over his foot as a child).
So far, AI is far from reaching the level of human intelligence, and perhaps the only way to achieve true and complete artificial intelligence is to integrate artificial stupidity into artificial intelligence; therefore, once we keep our distance from artificial stupidity or treat it as an outcast, we will be in prison, and human-like intelligent AI will be out of reach. In other words, if we exclude artificial stupidity from our minds, we may prevent AI from reaching its peak. This is like a heavy blow and so counterintuitive that it often hinders human beings from developing AI in this direction.
The reality, however, is that there are growing signs that revealing and using the importance of artificial stupidity (or whatever it should be called) is of great benefit to the development of AI.
I conclude that the birth of a true self-driving car may involve the inclusion of artificial stupidity in the system. Isn't that shocking? Maybe so. Now we might as well analyze the matter.
four。 Benefit from artificial stupidity
When it comes to real self-driving cars, I will focus on L4 and L5 self-driving cars. These two levels of self-driving cars, driven by artificial intelligence systems, are unnecessary and usually not equipped with human drivers. Artificial intelligence is responsible for all driving operations, and all human passengers are passengers.
With regard to the topic of artificial stupidity, it is necessary to quickly review the history of terminology. In the 1950s, Alan Turing, a famous mathematician and pioneer in computer science, proposed the well-known AI Turing test.
In short, the Turing test is a test that allows you to interact with a computer system equipped with artificial intelligence and with another person at the same time, and you are not told in advance which is AI and which is human (assuming they are all hidden from sight), requiring you to determine which is AI and which is human by talking to each other. If you can't tell the two participants apart, you can declare that AI passed the test. In this sense, AI cannot be distinguished from human participants and must be treated equally in intelligent interactions.
In fact, the original Turing test had a sharp turn. If the questioner is "cunning", ask the two participants to calculate pi to the percentile. Presumably, AI will tell you the answer very well and easily in the blink of an eye, and it is very accurate and completely correct. But it is difficult for humans to do this, and if you work hard with paper and pencil, it will take a long time to answer this question, and the answer is likely to be wrong in the end.
Turing is aware of this and believes that by asking such arithmetic questions, he can basically unravel the mystery of artificial intelligence. Then he took another step, which some thought opened Pandora's box and hinted that AI might deliberately mistakenly answer arithmetic questions. In short, AI can try to fool the questioner by acting like a human, such as deliberately giving the wrong answer and spending roughly the same time as a human manual calculation.
The Leibner Prize (Loebner Prize), established in the early 1990s, is a competition similar to the Turing test, which gives winners generous cash rewards. In this competition, artificial intelligence systems are often added with similar human errors to deceive questioners into judging AI as human. But there is controversy behind this, and there is not too much discussion here. In 1991, the Economist published a classic article about the competition. Once again, it is ironic that stupidity is actually introduced to depict that something is smart. This brief historical experience paves the way for the next element of this discussion.
Let's sum up the topic of artificial stupidity into two main aspects or two definitions:
1) artificial stupidity is to purposefully incorporate humanoid stupidity into artificial intelligence systems, which is to make AI look more like human beings, not to improve AI itself, but to shape human perception of AI and make it look smarter.
2) artificial stupidity is an admission of countless human shortcomings, and this "stupidity" may integrate into AI or go hand in hand with AI in a joint way, which, if handled properly, may improve AI performance.
With regard to the first definition, I would like to clarify a common misunderstanding that the assumption involved is flawed, that is, computers may deliberately miscalculate certain problems. Some people scream in shock and disdain that there may be hints that computers may deliberately seek miscalculations, such as calculating pi in an inaccurate way! In fact, this definition does not necessarily mean this. This is probably because the computer may correctly calculate pi to 1/1000 digits, and then choose to fine-tune some numbers to answer the adjusted recorded answers, which are completed in the blink of an eye. Then wait for the same amount of time as the human manual calculation before displaying the results. In this way, there is already a correct answer inside the computer, but it only shows that there seems to be a wrong answer.
This situation is undoubtedly disadvantageous to those who rely on the data reported by the computer, but please note that this is very different from the data that the computer actually miscalculated! In fact, there is still a lot to say about the nuances between the two, but for now, let's continue with the previous analysis. Both aspects of artificial stupidity can be applied to real self-driving cars, but doing so will bring some problems and are worth considering.
five。 Artificial stupidity and real driverless cars
Nowadays, self-driving cars on trial on public roads have become famous for their excellent driving attainments. Generally speaking, driverless cars are like a young novice driver, timid and hesitant about driving tasks. When you see a driverless car, you will find that it usually tries to create a larger buffer zone between it and the car in front of it, and tries to follow our empirical rules of keeping distance when we are beginners to drive a license. Human drivers seldom consider the safe zone of distance and often like to squeeze other cars aside, which is easy to cause danger to themselves.
Here is another example of practicing driving in a self-driving car. When you see a stop sign, the driverless car usually stalls completely. It will wait to see if the lane is safe and then proceed cautiously. However, in the human world, no one will completely stall at the stop sign, and rolling parking is now the norm. You can conclude that the human way of driving is reckless and a little stupid. Because there is not enough distance between your car and the car in front, this will increase the chance of a rear-end collision. Not turning off completely at the stop sign will increase the risk of colliding with another car or pedestrian.
In the way of the Turing test, you can stand on the sidewalk and watch the cars passing you, just observe their driving behavior, you can determine which are AI-driven cars and which are human-driven cars. Does that sound familiar? This is probably the case, because it is similar to the problem of arithmetic accuracy raised earlier.
How to solve this problem? One way is to introduce the artificial stupidity of the above definition. First, you can let the AI in the car deliberately shorten the distance buffer to make it look like a human being is driving. Similarly, you can set AI to scroll through the stop sign, all of which is easy to deploy. In this way, when looking at cars that have been driven, it is difficult to tell whether AI or human beings are driving, because they both make the same mistakes in the process of driving. This seems to solve a problem, because it concerns our human perception of whether the self-driving car AI is smart or not.
But wait a minute, isn't this turning AI into a riskier "driver"? Are we trying to increase the incidence of dangerous accidents caused by human driving? Rationally speaking, no.
Let's turn to the second definition of artificial stupidity, that is, by incorporating these "stupid" driving methods into the AI system in a substantial way, so that AI can take these factors into account when driving a car, while at the same time having a strong sense of avoiding them or carefully applying them at special times. Instead of letting AI drive blindly in the wrong way, it is better to develop AI so that it has a strong ability to deal with the shortcomings of human driving, discover these shortcomings and make good use of them in necessary situations, so that AI can become a truly safe "driver".
six。 Conclusion
One of the most obvious bottlenecks of artificial intelligence today is that it does not have any common sense reasoning ability and does not have the ability of human integrated reasoning (many people call this kind of artificial intelligence artificial general intelligence or AGI). Therefore, some people think that today's AI is closer to the "artificial stupidity" side than the real "artificial intelligence" side. If humans have dualistic wisdom and stupidity, then AI systems may need to have similar duality in order to demonstrate human intelligence (but there are voices that AI may not have to follow in the footsteps).
We are putting so-called AI self-driving cars on the road, but AI is unconscious, and not all parts can have perception. Will self-driving cars succeed only if they can climb the smart ladder further? No one knows at present, but this is by no means a stupid question.
Https://medium.com/@lance.eliot/why-artificial-stupidity-could-be-crucial-to-ai-self-driving-cars-2b4955f92321
Https://www.leiphone.com/news/202006/hevNEYG5HHXR90g0.html
Welcome to subscribe "Shulou Technology Information " to get latest news, interesting things and hot topics in the IT industry, and controls the hottest and latest Internet news, technology news and IT industry trends.
Views: 0
*The comments in the above article only represent the author's personal views and do not represent the views and positions of this website. If you have more insights, please feel free to contribute and share.
Continue with the installation of the previous hadoop.First, install zookooper1. Decompress zookoope
"Every 5-10 years, there's a rare product, a really special, very unusual product that's the most un
© 2024 shulou.com SLNews company. All rights reserved.