Ever since AI entered public view as an applied technology, it has been dogged by harsh criticism on humanistic grounds. As a technology built on enormous amounts of data, AI can serve as an efficiency tool precisely because that data is a dense concentration of human experience.
But "human experience" is itself imperfect, and the experience that accumulates into massive datasets often carries bias. Had AI been trained in the age of Columbus, it would have become a staunch supporter of geocentrism.
After Li Feifei left Google and returned to Stanford, her first project, HAI (the Stanford Human-Centered AI Institute), set out precisely to address the gap between AI and humanism.
If AI were a person, would it be a "rich white male"?
The first thing to understand is what keeps AI from being "human-centered".
At present, from the standpoint of humanism and fairness, the two most widely acknowledged problems in AI are the "White Guy Problem" and the "Sea of Dudes".
The so-called White Guy Problem refers to racially discriminatory behavior produced by algorithm-driven AI. Google's automatic image tagging once labeled photos of Black people as gorillas, and Hewlett-Packard's camera software failed to detect dark-skinned faces. Crime-prediction software has even rated the offending risk of Black defendants at more than twice that of whites.
The Sea of Dudes, meanwhile, refers to the severe gender imbalance among AI practitioners. At NIPS 2015, women made up only 13.7% of attendees, and Li Feifei has noted that male authors are cited a hundred times more often than female authors.
In the words of the New York Times, the combination of the two has given AI the values of a "rich white man", exactly matching those of the business owners who champion technological supremacy.
As a result, the use of AI is likely to undermine long-standing efforts to promote racial and gender equality.
Even as egalitarians keep pushing for equal pay for men and women, computer scientists at Carnegie Mellon University found last year that Google's ad-targeting system was more likely to show ads for high-paying jobs to male users.
When police departments across the United States run predictive policing programs, data-driven risk assessment tools send officers disproportionately into neighborhoods of color, quietly deepening the bias against, and the labeling of, particular groups.
What is frightening is this: when a woman is discriminated against in the workplace, she can at least speak up about her situation. But when AI drives everything in silence, women may not even know they are inside a chain of discrimination. If they never see the job advertisement, they never learn that the high-paying position preferred men.
And when the AI industry itself is full of "rich white men", they are naturally unlikely to notice such problems inside the algorithmic black box. In the end, everything runs under rules shaped by human prejudice, yet none of the groups those rules govern can see the rules' true face.
An ambitious billion-dollar goal
The HAI project Li Feifei leads at Stanford has roughly three goals: first, to advance the next generation of AI science (with an emphasis on brain science and cognitive science); second, to study and forecast AI's impact on human society and life; and third, to design and practice human-centered AI technologies and applications.
Stated this way, "human-centered" sounds rather hollow. But piecing together Stanford's public materials with Li Feifei's speeches, we can roughly summarize what HAI intends to do.
The first is to bring more diverse perspectives and interdisciplinary thinking into AI research.
Chief among these is supporting women and people of color in AI research. The Stanford-backed "Black in AI" project, for example, calls on people of color to engage with current AI research and with the discrimination AI brings.
At the same time, HAI continuously tracks the impact of AI applications across different fields.
HAI has also, for the first time, invited people from all walks of life (education, industry, the arts, and so on) to take part, hoping they will voice their views together and, above all, give technology developers feedback on how AI affects their fields, so as to weigh the technology's future direction.
As for advancing the next generation of AI science, that part is easy to understand: it mainly means helping researchers set research directions and pushing for more interpretable AI, among other things.
What is interesting, though, is that HAI, for all its political correctness and pedigree, has not won unanimous public support. The media pointed out in particular that of the institute's 121 faculty members, more than 100 are white and only 30% are women.
In other words, HAI has gathered a group of rich white men who are trying to raise a billion dollars to correct, from a humanistic standpoint, artificial intelligence's "rich white male" values.
Under the gears: how should we look at AI beyond business efficiency?
Whatever the mixed reviews of HAI, the fairness problems AI creates have indeed begun to affect ordinary people's lives.
As mentioned above, an algorithm that overestimates the offending risk of people of color is, by the same token, underestimating the risk of whites. If law enforcement relies on such a flawed algorithm, the innocent may be wronged while actual offenders slip through.
(Illustration: a person of color rated "high-risk" while a white defendant with several prior offenses is rated "low-risk")
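What such misestimation looks like concretely can be shown with a simple audit that compares error rates across groups. A minimal sketch, using invented data purely for illustration (this is not the actual crime-prediction dataset or model):

```python
# Compare false positive rates (innocent people flagged "high risk")
# across two groups. All records below are hypothetical.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, actually_reoffended) pairs."""
    false_pos = sum(1 for pred, actual in records if pred and not actual)
    negatives = sum(1 for _, actual in records if not actual)
    return false_pos / negatives if negatives else 0.0

# Invented audit data: (model said "high risk", person actually reoffended)
group_a = [(True, False), (True, False), (True, True), (False, False)]
group_b = [(False, False), (False, False), (True, True), (False, False)]

print(f"Group A false positive rate: {false_positive_rate(group_a):.0%}")  # 67%
print(f"Group B false positive rate: {false_positive_rate(group_b):.0%}")  # 0%
# A model can look accurate overall while its errors fall almost
# entirely on one group; only a per-group audit makes that visible.
```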
Another example is the controversy Amazon caused two years ago: users found that when the system decided whether goods qualified for same-day delivery, entering a zip code from a Black neighborhood made the service unavailable.
This kind of bias is appearing in more and more services: AI-based loan risk control, AI-driven insurance audits, and so on. The result is that the more vulnerable a group is, the more likely the algorithm is to marginalize it; once marginalized, it finds resources and help harder to obtain, tilts further toward disadvantage, and may even be pushed toward crime, which in turn aggravates the algorithm's discrimination. As the sketch below illustrates, this is a feedback loop.
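A toy simulation makes the loop visible. Everything here (the scoring rule, the threshold, the update factors) is an invented assumption for illustration, not a model of any real lender or insurer:

```python
# A risk score denies resources to whoever starts out disadvantaged,
# which worsens their position, which raises their score next round.

def simulate(initial_resources, rounds=5, threshold=0.5):
    """resources in [0, 1]; the score treats poverty itself as risk."""
    resources = initial_resources
    for _ in range(rounds):
        risk = 1.0 - resources                     # fewer resources -> "riskier"
        if risk > threshold:
            resources *= 0.8                       # denied loan/coverage: fall behind
        else:
            resources = min(1.0, resources * 1.1)  # approved: pull further ahead
    return resources

print(f"Well-off applicant:   {simulate(0.7):.2f}")  # climbs to 1.00
print(f"Vulnerable applicant: {simulate(0.4):.2f}")  # driven down to 0.13
```

Two applicants separated by a modest initial gap end up at opposite extremes, because each round's decision feeds the next round's score.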
Seen in this light, many of HAI's strategies deserve our careful consideration.
For example, when we focus on the efficiency of industrial AI, should we also weigh its impact on the industry beyond efficiency? When AI serves retail chains with strong IT foundations, helping them read customers' minds ever more precisely, are the small but fine neighborhood shops being forgotten and squeezed by the trend until they finally leave the stage?
Or again: do we have a responsibility to listen to more people than just those who develop and pay for the technology? AI's developers and buyers may understand clearly how it drives our lives, but do those caught up in the gears also have the right to know how the gears work?
More importantly, should we push as hard as we can for transparency in the AI black box, so that when problems appear we can fix them inside the technical mechanism itself?
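What that transparency buys, concretely: if a model's internals can be inspected, a proxy for a protected attribute (like the zip codes in the Amazon case above) can be caught by a routine audit. A minimal sketch with hypothetical feature names and weights:

```python
# With a transparent model, the learned weights can be read directly,
# so a feature doing suspicious amounts of work is easy to spot.
# Feature names and weight values here are hypothetical.

weights = {
    "income": 0.30,
    "repayment_history": 0.45,
    "zip_code_group": 1.20,  # dominates the score: likely a proxy for race
}

def audit(weights, limit=1.0):
    """Flag features whose learned weight exceeds a sanity threshold."""
    return [name for name, w in weights.items() if abs(w) > limit]

for feature in audit(weights):
    print(f"Suspicious feature: {feature} (weight {weights[feature]})")
```

With an opaque model, this check is impossible; with an inspectable one, the problem surfaces as soon as someone looks.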
In the past we always felt that technology was a story between developers and applications. Now it seems AI has become a proposition for the whole world.
And a proposition for the whole world deserves the whole world's participation.