
The Next Frontier of Artificial Intelligence: Interpretability

2025-04-06 Update From: SLTechnology News&Howtos



2019-07-01 22:56:51

Uncertainty is a feature of AI, not a bug.

The billion-dollar question

For more than 50 years, computers have been a part of our lives. They started out as huge devices used by big corporations, and their latest iteration is the small smartphone in our pockets.

Throughout our history of interacting with computers, we have mainly used them as a means of extending our capabilities. They enable us to write more effectively, edit photos and videos more easily, and access information instantly.

In doing so, they follow the old machine paradigm: a structure that operates reliably and exists solely to make a particular task in our lives easier. From carriages to engines, from cars to computers, all of these things were created to help us solve specific problems.

Although we humans like to complain about the ways our machines fail to serve us, from broken-down carriages of the past to modern computers showing the dreaded blue screen of death, the truth is that most of the time these machines work very well.

Consider how often your car breaks down. If you are unlucky, perhaps once every three months. Or consider how often you have to restart your computer. Once a week? Every other week? Either way, that is a tiny fraction of its operating time.

More importantly, there is a clear dividing line between a working machine and a non-working one. Your car either moves or it doesn't, and you know very well which it is. The same goes for whether your computer is running: there are no blurred boundaries.

This makes our interaction with machines very simple. What we need from them is utility. Our cars should take us from point A to point B; either they can or they cannot. Our computers should help us write, edit photos, or surf the Internet; either they do or they don't. As users, we can tell immediately which it is.

More importantly, we never stop to consider whether there might be some intermediate state, an uncertain state in which things both work and don't work. Of course, there may be some problem with our car's engine that we ignore, but the car will happily carry on until the problem becomes serious enough to break it down. Or some unknown virus may be festering in our computer's operating system, but we won't realize it until things start to fail badly.

Our interaction with machines is based on this basic binary mode of operation. They are either "on" or "off": either they do not work at all, or they are 100% reliable.

But something has changed in the past few years. Our machines can now operate in a different mode, a new state in which things are not simply "working" or "broken". They have entered the realm of uncertainty, territory hitherto untouched by computers and, until now, an exclusive privilege of human beings.

Uncertainty

What is uncertainty? It is a property of particular parts of the world. When you think about tomorrow's weather, there is a certain degree of uncertainty involved. The stock market is notorious for the uncertainty it brings. Even supposedly simple questions carry their own uncertainty: is this information really correct? Is this really the coffee machine I want to buy? Am I really sick, or is it just an allergy?

Until recently, we posed these questions only to ourselves or to another person. In most cases, we would not accept a simple yes-or-no answer. Even if we did, we would (or should) ask for some reasoning. Sure enough, sometimes we defer to a higher authority: a doctor, a lawyer, or a priest. But even they are not exempt from justifying their opinions on the topics we bring to them.

Why do we refuse to accept unjustified opinions? Because we know that an inherent degree of uncertainty lies behind every point of view, and because the answers to these questions become stepping stones for further decisions. We want a solid foundation on which to build our lives, which is why we need to have confidence in what others tell us.

Trust

Trust takes a lot of effort, but it's easy to lose.

The only way a human being can escape the need to justify their views to others is to cultivate such incredible trust that their views go unquestioned. This allows them to avoid justifying every answer, because their track record of being right in the past speaks for them.

Trust takes a lot of effort to build, but it's easy to lose. This is a well-known fact of human nature; losing trust in those who have demonstrably led us astray is a very useful defence mechanism against people who try to abuse their authority.

Interpretability

They are wrong, because any entity tasked with making choices under uncertainty has the power to affect human lives; that entity then becomes part of our social ecosystem and so gains the ability to change our world, for good or for ill.

How does this relate to computers? As mentioned above, they are now moving into the realm of uncertainty. They are asked to express opinions or make decisions that involve it. We ask Amazon to find us the right product at the right price. We ask Google to give us accurate information, and Facebook to give us the latest news. Our algorithms monitor our health, estimate the likelihood that someone will reoffend, or scan a pile of résumés to find the right candidate.

We know there is no foolproof way to meet these demands, and we certainly do not require humans to make 100% correct decisions. But we would never trust human experts with all these decisions if we could not scrutinize the rigor and reliability of their reasoning.

In fact, we would consider it an outright offence to our democracy if one person were put in charge of all these decisions, acting only according to their own will and never being asked to explain their actions. It would be even more absurd if providing any kind of explanation were beyond their mental capacity. We would certainly never allow a person to do this, would we? Would we?

So why do we allow machines to? Curiously, until recently this question never even crossed our minds. Remember how we were used to thinking about our machines as limited beings that always ran between two states, "on" and "off"?

Remember that we never expected any kind of in-between state, a Schrödinger's box in which our machines would be both on and off at the same time, or, better yet, in which they would enter what we humans experience as uncertainty: the last frontier of machine intelligence.

Yet when we decided to use algorithms to make these important decisions for us, we decided to use them to model parts of the world with inherent uncertainty.

We used to think that before we faced this moral dilemma, we would first need to create a machine with human mental abilities, a true general AI. That is why we were happy to put our algorithms in the role of human experts: we never thought there would be a problem.

Even today, people talk about machines as cold computing entities that make impartial decisions and are therefore better than humans. They can never be unfair or biased, the argument goes, because they have no feelings and experience no prejudice. Unfortunately, such talk also comes from people at the forefront of data science today.

The truth is that they are wrong.

This is not about privacy, dataset balancing, fairness, or whatever other buzzword currently occupies the machine learning community. They are wrong, because any entity tasked with making choices under uncertainty has the power to affect human lives; that entity then becomes part of our social ecosystem and so gains the ability to change our world, for good or for ill.

This is where we find ourselves now that Google and the others have gained such a prominent place in our society. Nothing reminded us that under the hood these algorithms can fail by design: a feature, not a bug. We did not notice until the problems began to appear.

Any decision that involves uncertainty, any uncertain part of the world, cannot match the reliability we are used to from machines. The weather may be predicted with 90% accuracy. Maybe 99%. Maybe 99.9999999%. It doesn't matter. There will always be errors, and those errors are the doorway to uncertainty.

When are you right? When are you wrong? When can I trust you? These are the questions every rational person should put to a machine before asking it anything else. If there were a human at the receiving end, we certainly would.

But we didn't. Captivated by our long history of interacting with machines that are clearly "on" or "off", we let our guard down. We allowed ourselves to be ruled by mechanical "oracles": machines presumed to answer all our questions truthfully.

Scrutiny and acceptance

Would you be satisfied if your doctor said they scored 95% on their medical school exams? That, unfortunately, they can offer no further justification for their professional advice? Would you take them at their word?

How do we get out of this mess? How do we make the public understand that the machines they are dealing with are not, and never will be, perfectly reliable? How do we deal with the consequences of that understanding? And how do we go back and help intelligent machines regain their rightful place in our world, this time in a more responsible way?

There is only one way to deal with this uncertainty. As human beings, we constantly defend our decisions and beliefs, and the strength (or weakness) of our arguments determines how seriously others take us. We should hold our algorithms to the same high standard.

Consider the way we currently create intelligent machines. We define a problem. It can be simple and clear: do you see a cat or a dog in this picture? Then we collect a large number of pictures of cats and dogs, show them repeatedly to our algorithm, asking it our question and telling it whenever it answers correctly. Then we test its newly acquired knowledge on a set of pictures it has never seen before. We measure the percentage of correct answers and happily report this as the accuracy of our algorithm. Finally, we deploy the algorithm in a real-world system, where it tries to answer the same question posed by different humans.
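The pipeline described above can be sketched in a few lines. The code below is a minimal toy illustration, not any real vision system: it stands in for the classifier with a nearest-centroid rule over made-up two-number "cat" and "dog" feature vectors, measures accuracy on a held-out set, and then reports that single number, exactly the kind of headline figure the text describes.

```python
import math

# Invented 2-D feature vectors (say, ear length and snout length) with labels.
train = [((1.0, 1.2), "cat"), ((0.9, 1.0), "cat"), ((1.1, 0.9), "cat"),
         ((3.0, 3.2), "dog"), ((2.8, 3.1), "dog"), ((3.2, 2.9), "dog")]
test  = [((1.0, 1.1), "cat"), ((3.1, 3.0), "dog"), ((2.9, 3.3), "dog")]

def centroid(points):
    # Mean of each coordinate across the given points.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# "Training": one centroid per class, computed from the labelled examples.
centroids = {label: centroid([x for x, y in train if y == label])
             for label in {"cat", "dog"}}

def predict(x):
    # Answer with the class whose centroid is nearest in Euclidean distance.
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))

# "Testing": fraction of held-out examples answered correctly.
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"accuracy = {accuracy:.0%}")  # the single number we happily report
```

The crucial point of the passage is what this sketch omits: nothing in it explains *why* any particular answer was given, only how often answers were right.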

If all we want is for our smartphone to automatically tag our dog in our photos, this may be quite innocent; it may even be cute when the dog is mistaken for our neighbor's cat. But when we are talking about more serious applications, things get ugly.

Suppose that after some tests you walk into the doctor's office and the doctor tells you that you have cancer. You would ask: "How did you come to this conclusion?"

Would you be satisfied if your doctor said they scored 95% on their medical school exams? That, unfortunately, they can offer no further justification for their professional advice? Would you take them at their word?

Wouldn't you rather the doctor said: "Your tests showed this and that, which is why I believe you have cancer"? Even if you don't fully understand everything they are saying, wouldn't you trust that person more?
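One way to give an algorithm the doctor's "your tests showed this and that" voice is to have it report which inputs drove its decision. The sketch below is a hypothetical illustration, not a real diagnostic model: every name, weight, and threshold is invented. It scores made-up test results with a simple linear rule, where each input's contribution (weight × value) can be shown alongside the verdict.

```python
# Hypothetical risk score: weights, threshold, and inputs are all invented.
weights = {"marker_A": 2.0, "marker_B": 1.5, "marker_C": -1.0}
threshold = 3.0

def explain(tests):
    # Per-input contribution to the score: weight * measured value.
    contributions = {name: weights[name] * tests[name] for name in weights}
    score = sum(contributions.values())
    verdict = "elevated risk" if score > threshold else "normal"
    return verdict, contributions

tests = {"marker_A": 1.8, "marker_B": 0.9, "marker_C": 1.2}
verdict, contributions = explain(tests)

print(f"verdict: {verdict}")
# Largest contributions first: the "because" behind the verdict.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

Linear contributions like these are the simplest form of feature attribution; more elaborate explanation methods build on the same idea, but even this much already answers "why", not just "how often".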

As scientists and practitioners, this is the standard we should hold ourselves to.

Moving forward

We can no longer consider ourselves above the moral standards that bind every member of a functioning society.

Nor are we. Whether we like it or not, the wider public will soon begin to hold our algorithms to those same standards. In some areas, they already have.

We can continue to deny this fact. We can insist that machine learning algorithms do not need to explain themselves, that they are somehow above that (just as priests once were!), and that we should focus only on making them perform better.

We can, but then we should not complain when the public turns against us, when rules and regulations come our way. We can protest them as much as we want; we can rail against backward regulators who cannot see progress. Because we are scientists, we can consider ourselves intellectually superior. We can look down on the society we are supposed to serve.

Or we can accept the fact that we are not there yet. We can acknowledge the public's fears. We can focus on how our algorithms can genuinely improve human lives, and steer away from those that do more harm than good. We can no longer consider ourselves above the moral standards that bind every member of a functioning society.

This is what happens when you leave academia and enter the real world.

Source: https://www.toutiao.com/i6708708718810235403/
