2025-01-19 Update · From: SLTechnology News&Howtos
The full text is about 1,925 words; expected reading time: 4 minutes.
Photo Source: unsplash.com/@multamedia
Are you curious about how machine learning models work? What happens inside them, and can they be trusted?
This article provides an overview of what explainable artificial intelligence (XAI) is and why it is needed. After reading it, you should understand the motivation for XAI and be able to judge whether to apply it to your ML projects or products.
What is XAI?
Explainable artificial intelligence (XAI) is a fairly new field within machine learning. Researchers in this field try to develop methods that explain the decision-making process behind machine learning (ML) models.
XAI has many research branches, but broadly they either try to explain the results of a complex black-box model after the fact, or try to build interpretability directly into the model architecture. The first approach is widely used by researchers: it ignores the internal workings of the model under examination and only tries to explain what the ML model does. This is called model-agnostic XAI.
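To make the model-agnostic idea concrete, here is a minimal sketch of one such technique, permutation importance: shuffle one input feature at a time and measure how much the model's error grows, using only calls to the model's prediction function. The model, dataset, and feature indices below are invented purely for illustration.

```python
import random

# Hypothetical "black box": we only call predict(), never inspect its internals.
# (Here it secretly depends on feature 0 far more than on feature 1.)
def predict(x):
    return 3.0 * x[0] + 0.2 * x[1]

def mse(model, X, y):
    """Mean squared error of the model on dataset (X, y)."""
    return sum((model(x) - yi) ** 2 for x, yi in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Average increase in error after shuffling one feature column."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    increases = []
    for _ in range(n_repeats):
        col = [x[feature] for x in X]
        rng.shuffle(col)                      # break the feature-target link
        X_perm = [list(x) for x in X]         # copy rows, then overwrite column
        for row, v in zip(X_perm, col):
            row[feature] = v
        increases.append(mse(model, X_perm, y) - baseline)
    return sum(increases) / n_repeats

# Toy dataset labeled by the model itself, so the baseline error is zero.
rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [predict(x) for x in X]

imp0 = permutation_importance(predict, X, y, feature=0)
imp1 = permutation_importance(predict, X, y, feature=1)
print(imp0 > imp1)  # feature 0 should matter far more than feature 1
```

Note that nothing in `permutation_importance` depends on what kind of model `predict` is; a deep network could be dropped in unchanged, which is exactly what "model-agnostic" means.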
Why do I need XAI?
An example illustrates the point. With the current development of deep learning (DL), it is typical for a DL model to have millions of parameters. Compare that with a simple linear regression model, in use for decades, to appreciate how complex DL models have become.
DL models have had a huge impact on many industries, yet many of them are still used as black-box systems. That is a problem, especially when these models make decisions with significant social impact.
What problem did XAI solve?
XAI tries to answer three kinds of questions: why, when, and how. If you run into questions built on these three keywords while developing an ML product, you may need to turn to XAI. Here are some typical questions encountered in ML projects:
Why does the ML model make such a prediction?
When can we trust this model's predictions?
When will this model make a wrong prediction?
What can be done to correct the mistakes in this model?
To answer these questions, you need to integrate XAI models and concepts into your ML projects and products.
When do you need XAI?
In almost every ML project you will, at some point, need to explain the model's decision-making process to customers or colleagues. XAI is especially important in ML applications whose decisions directly affect people's lives and carry a large social impact.
Some may argue that, in most cases, the final decision is made by a human. But many experts rely on complex ML systems to help them decide, and if the ML system cannot explain how it reaches its decisions, it is difficult for those experts to trust it, and that carries risk.
What are the common use cases of XAI?
Currently, there are XAI use cases in every area where AI/ML is applied. Rather than list them all, this article uses two examples, drawn from medicine and finance, to illustrate why XAI is needed and how it helps when a model's decisions seriously affect people's lives.
Use case 1: why do medical ML applications need XAI?
Suppose a patient goes to the hospital to be checked for epilepsy. The doctor feeds MRI images of the patient's brain into a complex ML model, and the resulting report diagnoses epilepsy with 85% confidence. Here are some questions the doctor may ask:
How can I trust the report of this model?
On the basis of which characteristics of the MRI image did the model make this decision?
Does the way the ML model reached this decision make sense to me? How do I find out what its decision-making process was?
What if the report is wrong or the decision-making process of the model is not accurate?
And there are many more questions. Clearly, unless the model's decision-making process can be presented and verified, doctors will not trust its results.
Epilepsy detection system in medical application
Use case 2: why do financial ML applications need XAI?
Suppose someone applies to a financial institution for a housing loan. The institution uses a complex ML model that takes information such as the customer's household size and financial history and produces a report on whether the customer is eligible for a loan.
If the customer is unlucky, the system decides that he or she is not eligible. The question is whether the business people using the system can trust the model's decision. This is the same problem as in the previous example. Here are some questions those business people might ask:
If the client asks why his / her loan application is rejected, how should I answer it?
Can the ML model explain and justify its decision-making process so that we can report it to our customers?
Under what circumstances does this model fail to make a correct prediction? Will we lose a loyal customer by trusting the model's decision?
And there are many more. If a company uses complex ML models whose decisions significantly affect its customers, many such questions will arise.
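One common way to answer the "why was this application rejected?" question is to use an inherently interpretable model, such as a linear scorer, whose prediction decomposes exactly into per-feature contributions. The features, weights, and baseline values below are hypothetical, chosen purely to illustrate the idea:

```python
# Hypothetical linear loan-scoring model: score >= 0 means "approve".
# For a linear model, each feature's contribution to the score is simply
# weight * (value - baseline), giving an exact per-applicant explanation.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9}
BASELINE = {"income": 0.5, "debt_ratio": 0.3, "late_payments": 0.0}  # "average" applicant
BIAS = 0.1

def score(applicant):
    """Total score: bias plus the sum of all feature contributions."""
    return BIAS + sum(WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS)

def explain(applicant):
    """Per-feature contributions; they sum exactly to score - BIAS."""
    contribs = {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: kv[1])  # most harmful first

applicant = {"income": 0.4, "debt_ratio": 0.7, "late_payments": 2.0}
print(f"score = {score(applicant):.2f}")  # negative score -> rejected
for feature, contribution in explain(applicant):
    print(f"{feature:>15}: {contribution:+.2f}")
```

With this breakdown, a loan officer can tell the applicant which factors hurt the application most (here, the history of late payments), which is exactly the kind of report the questions above ask for. Black-box models need additional machinery (such as model-agnostic attribution methods) to produce a comparable decomposition.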
Loan models used in financial applications
What is the future of XAI?
The future of XAI is difficult to predict because it is a fairly new area of AI/ML, and many researchers are actively working on new XAI models. Still, based on current research trends and industry demand, some predictions can be made. In a few years, as XAI matures across industries, the following may happen:
ML models will be able to explain their own results (think of the robots in Westworld that can run an "analysis" on themselves).
More interpretable models will emerge, with which users can interact to modify (or improve) the results.
Because an interpretable model reveals how it makes decisions, users may be able to inject their own knowledge into it.
Where can I learn more about XAI?
There are many materials available online. The Interpretable Machine Learning book (https://christophm.github.io/interpretable-ml-book/) is a general overview of current XAI methods and a good introduction if you are not yet familiar with the area. The Defense Advanced Research Projects Agency (DARPA) has released a roadmap (https://www.darpa.mil/attachments/XAIProgramUpdate.pdf) on XAI, showing its work plans for different XAI models and methods and how to apply XAI to ML models.
Original article: https://www.toutiao.com/a6728565544158495246/
© 2024 shulou.com SLNews company. All rights reserved.