2025-02-22 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)11/24 Report--
Causal AI has been remarkably popular over the past two years. Anyone following the AI field will have noticed a growing chorus: causal AI will be the next generation of trusted AI technology, and the "causal revolution" will kick off the next wave of AI.
Articles beating the drum for causal AI mostly follow the same argumentative template:
If you want to write about causal AI, you can't write only about causal AI.
You have to write about the "alchemy" of deep learning and people's yearning for trusted AI.
You have to write about the lineup of scientists behind it, about strong artificial intelligence and machine cognition.
You write about AI learning the magic of causal reasoning, and about an industry setting off a causal revolution.
You write about the technical origin story of an AI company and the bright future of AI commercialization.
Objectively speaking, academia and industry have long explored the next generation of trusted AI, and combining causal inference with machine learning is indeed one of the important directions. But the chain of reasoning that runs from causal AI's technical merits to its industrial prospects is convoluted and full of leaps.
What role does causal AI play in the trusted AI technology stack? How exactly does causal AI improve the interpretability of AI? What is causal AI's commercial potential? These questions tend to be glossed over with grand words such as the Turing Award, trusted AI, artificial general intelligence, and the causal revolution, and it seems almost impolite to press further. But can't scientists be wrong?
Teaching AI to learn causality may be a long road, but human evolution engraved it into our DNA tens of thousands of years ago. Consider a gunman who fires first and then paints the bull's-eye around the bullet hole. An audience that saw this happen would find it hard to believe he is a "sharpshooter," because a target painted after the fact has no causal relationship with the accuracy of the gun.
By the same token, it should give us pause when a scientist or company first fires the shot of causal AI, then paints the target of "trusted AI" around it, and finally announces that it is leading the next generation of the industry.
How does trusted AI actually win hearts, and what exactly did the causal AI shot hit? Let's take a close look at the target-painting game behind the causal AI craze.
Premise of the story: a target called trusted AI

Today, if you type the keyword "causal AI" into a search engine, the most heavily weighted articles and associated search terms are all about "the next generation of trusted AI."
This is an interesting phenomenon, because causal AI is nothing new: causal inference and the concept of causal AI have existed throughout the past decade, alongside the previous generation of hard-to-explain, hard-to-trust AI dominated by deep learning. If research on causality never stopped, why has it become fashionable only in recent years?
As early as 2011, Judea Pearl, Turing Award winner and father of Bayesian networks, predicted a new bottleneck in the development of artificial intelligence and urged the field to pay more attention to causal inference. The Book of Why: The New Science of Cause and Effect, co-authored by Pearl, is one of the few popular causal AI books currently on the market. Pearl himself has advocated for causal AI on many platforms; a well-known example in China is his talk "New Thinking on Causal Science, Data Science and Artificial Intelligence" at the second Beijing Zhiyuan (BAAI) Conference in June 2020, in which he argued that data science is shifting from a data-centered paradigm to a science-centered one, and that a "causal revolution" is now sweeping every research field.
Obviously, we can draw a conclusion: causal AI, as a technical concept, became popular because it hit the target of "next-generation trusted AI." In other words, what people are essentially looking forward to is a new kind of AI, and causal inference is only one of the ways to get there.
So it is worth explaining just how much causal AI contributes to achieving trusted AI.
Deep-learning-based AI has long been criticized for its unexplainable "black box" problem, so trusted AI, meaning AI that is interpretable, robust, secure and controllable, has become what the industry calls for.
In his book on building trustworthy AI (Rebooting AI), Gary Marcus summarizes three big pits that keep AI out of touch with reality; causal AI is part of the rammed earth needed to climb out of them:
The first is the "credulity pit." Humans tend to judge machine ability through the lens of human cognition, and so credulously assume machines have human-like intelligence, which leads to mistrust and disappointment when real AI falls short. For example, an AI system that cannot make sensible, common-sense decisions and responses gets mocked as "artificial stupidity." Machines would be wiser if they could reason about their actions and their consequences, and causal reasoning is a prerequisite for machines to acquire common sense.
The second is the "illusory progress pit": assuming that academic progress on AI benchmarks transfers to similar tasks in reality. The real world is complex and uncertain, and traditional machine learning algorithms excel at finding correlations, but correlational mechanisms are unreliable and unstable. Causal structure, by contrast, is stable and invariant, which helps an AI model understand and adapt to a changing real world.
The third is the "robustness pit." Deep learning is limited by its algorithms and data and lacks generalization and robustness, which blocks AI from landing in low-fault-tolerance, risk-sensitive fields such as self-driving and medical diagnosis. An unexplainable black-box model is opaque and cannot explain its reasoning to the user: why this decision? Did factor A lead to outcome B? What would happen if we did not do C? Causal AI can explain the results it produces, making the model more convincing.
To cross this "AI gap," we need new strategies for developing the next generation of trusted AI. The main pillars of trusted AI are robustness, interpretability, privacy protection and fairness.
It is not hard to see that achieving truly trusted AI requires a wide range of software and hardware technologies, pushed forward jointly by government, industry, academia, research and applications.
Take the "Initiative to Promote Trusted Artificial Intelligence" launched at the WAIC Trusted AI Forum in 2021 as an example. Its joint issuers include: the China Academy of Information and Communications Technology, the Sino-British Research Center for AI Ethics and Governance, the Institute of Automation of the Chinese Academy of Sciences, JD.com Exploration Research Institute, Ant Group, the Shanghai Artificial Intelligence Industry Association, the BRICS Future Network Research Institute China Branch, Huawei, Black Sesame Intelligence, Aigeng Technology, Fusu Technology, Eye Technology, Ruilai Wisdom, Yidu Yun, Vladivostok Technology, Insight Technology, the University of Science and Technology of China, Tsinghua University, Fudan University, Shanghai Jiaotong University, Zhejiang University, and Wuhan University.
At the technical level, next-generation trusted AI requires the combined use of data, chips, privacy-preserving computation and blockchain; at the governance level, laws, regulations and industry alliances are needed to realize global co-governance, standardized development and sound ethics for trusted AI.
So rather than saying "causal AI is very important to trusted AI," it is more accurate to say that the arrival and development of trusted AI has given causal AI its moment on stage.
What is the strength of the causal AI shot?

You may ask: trusted AI needs so many technologies, blockchain, federated learning, computing chips and more, yet only the "causal revolution" is said to set off "the next wave of AI," so surely causal AI must be the stronger shot that hits the bull's-eye of trusted AI?
Authors who champion causal AI like to quote the judgments of a few AI scientists. Besides the familiar Pearl, both Yoshua Bengio and Yann LeCun of the "big three" of artificial intelligence have publicly said that causal reasoning is an important way to improve the generalization of machine learning and deep learning.
At the same time, the history of AI has never lacked factions and teams exploring a goal in parallel, with many of those routes failing before one produces the key "evolutionary" breakthrough. The famous "AI pendulum" has swung back and forth between connectionism and symbolism, each leading the field for over a decade at a time. The point is that a few scientists championing their own research direction does not necessarily start an AI wave or an era.
We can also find scientists who are cautious about causal AI's future. Some researchers, for example, argue that a main direction for AI should not be to over-imitate human causal reasoning (which is itself flawed), but to focus on prediction, action and imagination.
Other scholars note that causal discovery algorithms are complex and hard to scale to scenarios with many features, and that causal explanations are often unavailable, or too complex for the affected people to understand. A paper from the Georgia Institute of Technology proposes the concept of "explainability pitfalls" (EPs), arguing that explanations from an AI system do not always bring positive results and may also have negative effects.
In addition, causal analysis is not enough to solve algorithmic fairness. Causal discovery depends on large amounts of statistics, and the underlying statistical assumption is that future human behavior will be consistent with past behavior. If structural biases and unfairness exist in the first place, for example if white applicants have more education than black applicants, or men have higher loan approval rates than women, then even a model built with causal analysis that considers only the causal variables relevant to whether an individual can repay a loan may still produce biased results.
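To see why a causal model alone does not guarantee fairness, here is a small illustrative simulation. All group names, numbers and the decision rule below are assumptions invented for this sketch, not data from the article: a lender approves loans using only the causally repayment-relevant variable (education), yet a structural education gap between groups still produces divergent approval rates.

```python
import random

random.seed(2)

def sample(group_a):
    """Draw one applicant. The education gap encodes historical bias."""
    education = random.gauss(14 if group_a else 10, 2)   # structural gap between groups
    repay_prob = min(max((education - 6) / 12, 0), 1)    # repayment is caused by education
    return education, repay_prob

def approval_rate(group_a, n=10000):
    """The 'causal' model approves when predicted repayment chance > 50%.
    It never looks at group membership, only the causal variable."""
    approved = 0
    for _ in range(n):
        _, repay_prob = sample(group_a)
        if repay_prob > 0.5:
            approved += 1
    return approved / n

print(f"group A approval rate: {approval_rate(True):.2f}")
print(f"group B approval rate: {approval_rate(False):.2f}")
```

Even though the model conditions only on the causal driver of repayment, the pre-existing structural gap flows straight through it, which is exactly the limitation described above.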
As for Pearl's "ladder of causation," he believes machines can climb step by step from the first rung, association, through the second rung, intervention, to the third rung, counterfactual reasoning, and eventually achieve "strong artificial intelligence." Put simply, for a machine to have free will it must have a causal model of the world, be able to interact with its environment, and reflect on the basis of a memory system. This is a distant, maybe-in-our-lifetime goal that requires parallel breakthroughs across many technologies.
Generally speaking, the combination of causal reasoning and deep learning, as one of the key technologies of next-generation trusted AI, is still just beginning. Many people are waving flags, many are exploring, many are watching; commercialization is still some distance away, and it is too early to talk about an "industrial revolution" or "cognitive intelligence."
Enough of the metaphysics of "strong artificial intelligence." At present, the key research directions of causal AI mainly come down to two things:
One is causal discovery (Causal Discovery): mining the causal relationships between variables in data, so that a model can give more stable and reliable explanations. The other is causal effect estimation (Causal Effect Estimation): evaluating the impact of causal variables on outcome variables, in order to improve the accuracy of AI prediction and decision-making.
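To make the second task concrete, here is a minimal, self-contained sketch of causal effect estimation via backdoor adjustment on synthetic data. All variable names and numbers are illustrative assumptions for this sketch; real toolkits wrap the same idea in far more general machinery.

```python
import random

random.seed(0)

# Synthetic data: confounder z affects both treatment t and outcome y.
# The true causal effect of t on y is 2.0; naive correlation overstates it.
n = 20000
data = []
for _ in range(n):
    z = random.random()                               # confounder in [0, 1)
    t = 1 if random.random() < 0.2 + 0.6 * z else 0   # treated more often when z is high
    y = 2.0 * t + 3.0 * z + random.gauss(0, 0.5)      # true effect of t is 2.0
    data.append((z, t, y))

def mean(xs):
    return sum(xs) / len(xs)

# Naive estimate: difference in mean outcome between treated and untreated.
naive = mean([y for z, t, y in data if t == 1]) - mean([y for z, t, y in data if t == 0])

# Backdoor adjustment: stratify on the confounder, then average the
# per-stratum treatment effects, weighted by stratum size.
strata = 10
effects, weights = [], []
for s in range(strata):
    lo, hi = s / strata, (s + 1) / strata
    treated = [y for z, t, y in data if lo <= z < hi and t == 1]
    control = [y for z, t, y in data if lo <= z < hi and t == 0]
    if treated and control:
        effects.append(mean(treated) - mean(control))
        weights.append(len(treated) + len(control))
adjusted = sum(e * w for e, w in zip(effects, weights)) / sum(weights)

print(f"naive estimate:    {naive:.2f}")     # biased upward by the confounder
print(f"adjusted estimate: {adjusted:.2f}")  # close to the true effect 2.0
```

The naive difference in means confuses the confounder's influence with the treatment's; adjusting for the confounder recovers an estimate near the true effect, which is the core of what causal effect estimation tries to do.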
Both serve one purpose: helping AI "win trust."
As mentioned earlier, the fundamental driver behind next-generation trusted AI is the trust crisis between humans and AI. Causal AI works by "letting me explain," so that AI can be candid with humans and thereby "win trust."
Deep learning suffers from two kinds of unexplainability. One is unexplainability in principle, the familiar "black box"; the other is semantic unexplainability, meaning the agent's semantic understanding is not good enough, which makes the model unstable and unreliable. Causal AI addresses the second problem.
What are the consequences of semantic unexplainability? A famous experiment trains an image classifier to tell whether the animal in an image is a husky or a wolf.
If in the training data most huskies stand on grass or in woods while most wolves stand in snow, the model's accuracy may be very high. But when the test data puts the huskies in snow and the wolves on grass, accuracy can plummet. In other words, there is a strong correlation between background and foreground object, and the AI judges by correlation rather than causation; under sample bias or confounding, its results are no longer stable or robust.
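The failure mode above can be reproduced with a toy simulation. The features, probabilities and the trivial one-rule "classifier" below are all illustrative assumptions, not the actual experiment: the learner picks the background feature because it fits the training data slightly better than the causal one, and then collapses when the test backgrounds flip.

```python
import random

random.seed(1)

def make_dataset(n, p_snow_given_wolf, p_snow_given_husky):
    """Each sample: (pointy_ears, snowy_background, label). Label 1 = wolf.
    pointy_ears is the (noisy) causal feature; the background is incidental."""
    data = []
    for _ in range(n):
        is_wolf = random.random() < 0.5
        ears = is_wolf if random.random() < 0.9 else (not is_wolf)  # 90% informative
        snow = random.random() < (p_snow_given_wolf if is_wolf else p_snow_given_husky)
        data.append((int(ears), int(snow), int(is_wolf)))
    return data

# Training set: wolves almost always in snow. Test set: backgrounds flipped.
train = make_dataset(5000, p_snow_given_wolf=0.95, p_snow_given_husky=0.05)
test_flipped = make_dataset(5000, p_snow_given_wolf=0.05, p_snow_given_husky=0.95)

def accuracy(data, feature_index):
    return sum(1 for row in data if row[feature_index] == row[2]) / len(data)

# A one-rule "classifier": pick whichever single feature best fits the training data.
best_feature = max([0, 1], key=lambda i: accuracy(train, i))
print("chosen feature:", "background" if best_feature == 1 else "ears")
print(f"train accuracy: {accuracy(train, best_feature):.2f}")
print(f"flipped-test accuracy: {accuracy(test_flipped, best_feature):.2f}")
```

The spurious background feature wins on the training set, yet the causal feature (ears) would have kept roughly its 90% accuracy on the flipped test set, which is exactly the instability the article describes.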
Imagine that someone holds up a photo or video of you: if the bank's or government's AI does not understand causality and identifies you not by your actual appearance but by spurious correlations, then criminals could easily impersonate you and fool the AI system simply by changing the background.
Causal reasoning is different. Causality is not as "fragile" as spurious correlation; it is invariant and reliable, so the results of causal inference are invariant too. For example, researchers at Microsoft Research Asia used a Causal Semantic Generative model (CSG) to make AI predict "wolf" from the causal appearance features of the animal itself, so that even when the background changes, misclassifications are greatly reduced.
In essence, then, causality is about eliminating the spurious correlations caused by semantic unexplainability and finding the invariant causal relationships, so that AI can leap over the "three pits" and accelerate its industrialization.
Is the gun important, or the target?

By now it is clear that developing trusted AI has become an industry consensus, and causal AI became popular because of its potential on interpretability. But can we conclude that a company with outstanding causal AI technology can usher in the next AI era?
In other words, if we take the gun of "causal AI" and go looking for targets, can it promise an AI algorithm company a bright future?
Anyone who has lived through a "hammer looking for nails" phase will probably give an uncertain answer. History has told us countless times that for a technology to industrialize, become mainstream and bring change usually takes three steps.
The first step: technological innovation. Technological innovation drives business, which is what people mean by "looking for nails with a hammer." For AI, the enterprises with high success rates usually first possess a powerful "hammer" (or "gun"); otherwise, even after finding plenty of application scenarios and industry pain points, when it comes time to show the technology they discover the "hammer" cannot solve the problem at all, which amounts to "trying to take a shortcut and ending up on a detour."
Someone from a machine-vision algorithm company once told us about a business opportunity in which the client wanted machine-vision inspection to replace manual inspection. When the AI company visited and evaluated the site, it found the client's work tower was very tall: having a robot climb the stairs to patrol while maintaining high stability is, for now, an impossible task. The nail was right there, but the hammer in hand simply could not come down on it. Conversely, if you hold a solid "hammer" like causal AI for trusted AI, what if there is a breakthrough? Even if you cannot find a nail, you can still crack nuts and smash stones, and at worst it is good for self-defense.
So the first task of a technology enterprise is to get the hammer of technology firmly in hand; this is why Google, Amazon, Microsoft, BATH and other technology companies invest in basic laboratory research. At present, causal AI research is still in its infancy, with many technical challenges awaiting breakthroughs, such as determining edge directions within causal equivalence classes, controlling the false discovery rate on high-dimensional data, and detecting hidden variables in incompletely observed data. These are also where AI enterprises can widen the gap through causal AI.
The second step: coordinated development of the industry chain. As one of the trusted AI technologies, causal AI cannot develop without the support of the AI industry chain. Future algorithms, for example, will need to be deployed on hardware terminals, which requires the participation of ISV service providers, developers, hardware manufacturers, algorithm marketplaces, industry customers and so on. A gap anywhere in this chain can stall the commercialization of causal AI.
Trusted AI has attracted a large number of universities, high-tech enterprises, governments and industry organizations. Some publish professional consulting reports, some explore technology ethics, and some pursue industrial deployment. Causal AI, as a more specialized, academia-led technology trend, clearly still lacks sufficient strength and persuasiveness when it comes to industry-chain-wide collaborative innovation.
The third step: scaled transformation of results. For a technology to be widely used in industry, it must be industrializable, scalable and automatable; only then can AI's R&D costs be diluted, the mainstream market expanded rapidly, and revenue grow exponentially, which is what Geoffrey Moore, the father of high-tech marketing, called the "tornado." Machine vision is an example: the "four little dragons of CV" enjoyed rapid growth and high valuations during machine vision's boom.
Here lies the problem for startups in the causal AI field: once leading technology enterprises open up and integrate a technology, its technical barriers and acquisition costs fall sharply. If in the future large numbers of enterprises can get causal inference capabilities directly from a big player's deep learning platform, that will directly undercut the business potential of pure algorithm companies: selling algorithms will be intercepted by platform enterprises, and the value of being a technology integrator is limited.
At present, AI research teams at Microsoft Research, Google, Alibaba, Tencent, Huawei, and even more vertical players such as Kuaishou and Du Xiaoman are all working on causal AI. Microsoft, for example, has released DoWhy, a software library that provides a programming interface to common causal inference methods.
This is undoubtedly good news for industries pursuing intelligence, since it means causal AI applications will keep getting easier. But if an AI company's core competence is just the causal AI algorithm, it is in a dangerous position.
By 2022, we had long stopped hearing the question "should we use AI?" In its place, pig farmers in Yunnan, water stations in Fujian, photovoltaic plants in Ningxia and docks in Zhejiang are all asking, "how do we use AI?"
This may itself be a parable: once AI is truly integrated into industry, there is no longer any need to keep arguing over the logic, feasibility and value of the technology. Trusted AI has reached the stage where "actions speak louder than words." Whether causal AI will keep breaking out as an independent technical concept, or be folded into trusted AI capabilities and released to industry under a new name, will be a question worth watching in the coming year.
However many twists the process takes, AI keeps progressing, upgrading and growing. Whether it is next-generation trusted AI or a causal AI wave, both show the strong vitality of AI in helping improve the quality and efficiency of the economy and people's livelihoods, which is why AI is always worth looking forward to.
This article comes from the WeChat official account "brain polar body" (ID: unity007); author: Tibetan Fox.