2020-06-24 20:11:58
Author | Twilight
Editor | Jiang Baoshang
A lengthy open letter signed by roughly 1,700 scientists, urging Springer Nature not to publish an artificial intelligence study, went viral today on Reddit's machine learning forum.
Many of the 1,700 signatories are scholars at universities such as MIT, NYU, and Harvard, or researchers at companies such as Google, DeepMind, and Microsoft. They include, for example, Cathy O'Neil, author of Weapons of Math Destruction, and Dr. Rumman Chowdhury, a global lead for Responsible AI.
The study being boycotted is a paper titled "A Deep Neural Network Model to Predict Criminality Using Image Processing".
The paper's authors, researchers at Harrisburg University in the United States, claim to have developed automated facial recognition software that can predict whether someone is likely to become a criminal. They say the software is 80% accurate and can predict whether someone is a criminal based solely on a photo of his or her face. The authors also stress that the model has no racial bias.
Describing the work, one of the authors, Sadeghian, said: "We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection. This study shows that these tools can extract features from an image that are highly predictive of criminality, which demonstrates just how powerful they are."
1. The outcry stems from long-standing concerns about fairness in AI
In fact, the "boycott by 1,700 scientists" is the product of long-standing concern about fairness problems in AI and the threats the technology can pose. Face generation and face upsampling models, for example, are frequently criticized because their outputs skew toward white faces. Yann LeCun weighed in on Twitter on June 23:
When the data is biased, the ML system is biased too. This face upsampling system makes everyone look white because the network was pretrained on FlickrFaceHQ (FFHQ), which contains mostly photos of white people.
In a widely shared example, a human can easily recognize the pixelated input face as Barack Obama, yet the algorithm reconstructs it as the face of a white man.
Returning to the boycotted paper: although the authors claim to "predict whether someone is a criminal based solely on facial images" with "80% accuracy and no racial bias", the claim of "no racial bias" conflates algorithmic bias with social bias. The following sentence captures the problem of social bias well:
Since the category of "criminality" is itself racially biased, it is impossible to develop a system that can predict or identify "criminality" without racial bias.
Reddit users also weighed in on the incident:
That's why the movie Minority Report resonates: pre-crime prediction makes no sense. Even a strong predictive signal is not, by itself, grounds for action; otherwise we would already be using these algorithms to beat the stock market. We don't arrest adults over 21 for drinking, we arrest them for drunk driving, even though drinking at 21 may be a strong indicator that someone might drive drunk.
The model may have no "bias" relative to its training data, but the training data itself carries inherent bias. Algorithms built on today's data will only perpetuate the existing chain of unfair bias.
In the United States, the Fifth Amendment provides that "no person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a Grand Jury." This study is so bad that it goes beyond data bias and infringes on our liberties.
Perhaps, as a machine learning community, we first need to solve a terminology problem so that we can better communicate the difference between algorithmic bias (which, depending on the intended outcome, may or may not be a problem) and biased social perceptions.
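The distinction these comments draw, between a model that is merely faithful to its training data and training data that encodes a biased labeling process, can be made concrete with a small simulation. The sketch below uses entirely synthetic data and made-up rates (it is not based on the paper or on any real dataset): two hypothetical groups offend at exactly the same rate, but one group is policed more heavily, so its members are arrested, and therefore labeled "criminal", more often; a classifier trained on those arrest labels then reproduces the disparity.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)                # 0 = group A, 1 = group B (hypothetical groups)
offended = rng.random(n) < 0.05                   # identical true offending rate in both groups
policing = np.where(group == 1, 0.80, 0.40)       # assumed: group B is watched twice as closely
arrested = offended & (rng.random(n) < policing)  # recorded arrests = the labels a model trains on

# Train on group membership alone, a stand-in for any feature correlated with group
# (such as appearance), against the recorded-arrest labels.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
print("Predicted 'criminality' score, group A:", model.predict_proba([[0]])[0, 1])
print("Predicted 'criminality' score, group B:", model.predict_proba([[1]])[0, 1])

The model is "unbiased relative to its data" in the sense that it matches the recorded arrest rates almost perfectly, yet it scores group B roughly twice as "criminal" as group A even though the underlying behavior is identical: the bias lives in the labels, not in the optimizer.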
2. A summary of the petition
Despite its length, the petition lays out clearly many of the issues in the ongoing discussion of fairness in ML.
Petition address: https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16
Here is a brief overview of the petition:
Dear Springer Nature Editorial Board,
We write to you as expert researchers and practitioners across technical, scientific, and humanistic fields, including statistics, machine learning and artificial intelligence, law, sociology, history, communication studies, and anthropology. Together, we are deeply concerned about the forthcoming paper "A Deep Neural Network Model to Predict Criminality Using Image Processing". According to recent reports, the paper is due to appear in the Springer Nature Research Book Series: Transactions on Computational Science and Computational Intelligence.
We urge:
The review committee publicly rescind the offer to publish this specific study and explain the criteria used to evaluate it.
Springer Nature issue a statement condemning the use of criminal justice statistics to predict criminality, and acknowledge its past role in incentivizing such harmful scholarship.
All publishers refrain from publishing similar studies in the future.
Community organizers and Black scholars have been at the forefront of resistance against the use of AI technologies by law enforcement, with a particular focus on facial recognition. Yet even as industry and academia devote significant resources to building "fair, accountable and transparent" machine learning and AI practices, these voices remain marginalized.
Part of machine learning's appeal is that it is highly malleable: correlations that are useful for prediction or detection can be rationalized after the fact by any number of plausible-sounding causal mechanisms.
How such studies are ultimately framed and interpreted, however, depends largely on the political economy of data science and the context in which it is deployed. Machine learning applications are not neutral: machine learning techniques, and the datasets they use, usually inherit dominant cultural beliefs about the world. These technologies reflect the motivations and viewpoints of those in the privileged position of developing machine learning models, as well as the data those models rely on.
The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas that normalize social hierarchies and legitimize violence against marginalized groups.
Such research does not require deliberate malice or racial prejudice on the part of the researchers. Rather, it is the expected by-product of any field that evaluates the quality of its research almost exclusively on the basis of "predictive performance".
The petition appears to have had an immediate effect: Springer Nature stated on Twitter that it will not publish the paper, though it is not yet clear how it will respond to the petition's specific demands.
Unlike in China, where facial recognition has quietly spread into every corner of daily life, facial recognition in the United States has long been in an awkward position against the backdrop of racism.
A 2018 study found that face recognition error rates for Black women were as high as 21%, while the error rate for white men was below 1%, a gap that is politically explosive in the United States.
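Disparities like these only surface when a system is evaluated disaggregated by group, that is, when the error rate is computed separately for each demographic group rather than reporting a single overall accuracy. A minimal sketch of such an evaluation, using made-up predictions and group labels rather than the data behind the figures above:

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
groups = np.array(["group_a"] * (n // 2) + ["group_b"] * (n // 2))  # hypothetical demographic groups
y_true = rng.integers(0, 2, size=n)
# Simulated classifier that errs on 1% of group_a samples and 20% of group_b samples.
wrong = np.where(groups == "group_a", rng.random(n) < 0.01, rng.random(n) < 0.20)
y_pred = np.where(wrong, 1 - y_true, y_true)

print("Overall accuracy:", np.mean(y_pred == y_true))   # about 0.895, hides the gap
for g in np.unique(groups):
    mask = groups == g
    print(g, "error rate:", np.mean(y_pred[mask] != y_true[mask]))

A single overall accuracy of roughly 89.5% would look respectable while hiding a twenty-fold gap in error rate between the two groups.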
On June 8 this year, IBM announced that it would no longer offer any facial recognition or facial analysis software.
The American Civil Liberties Union (ACLU) has called facial recognition "probably the most dangerous surveillance technology in history" and has repeatedly written to the US government asking that Amazon stop providing its Rekognition face recognition technology to the police.
In the United States, recorded crime rates are higher in Black communities. Even if a technology claims to be unbiased, it cannot eliminate the social prejudices rooted in these group differences. If "criminality recognition" technology, once deployed, flags Black people as criminals more often than white people, it will inevitably touch the raw nerve of racial conflict in America again, much as the recent death of George Floyd did.
Second, the algorithm's accuracy is itself controversial. Amazon's image recognition system Rekognition reportedly once misidentified 28 members of Congress as criminals. And even if the "criminality recognition" study really achieves 80% accuracy, nobody wants to be part of the other 20%; being wrongly labeled a criminal carries very serious consequences.
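The "80% accuracy" figure is also less reassuring than it sounds once base rates are taken into account. A back-of-the-envelope calculation, using an assumed 1% prevalence of actual offenders among the people screened (an illustrative number, not one from the paper), shows that most of the people such a system flags would be innocent:

population = 100_000
prevalence = 0.01          # assumed share of actual offenders among those screened (illustrative)
sensitivity = 0.80         # P(flagged | offender), matching the claimed 80% accuracy
specificity = 0.80         # P(not flagged | non-offender)

offenders = population * prevalence
non_offenders = population - offenders
true_positives = offenders * sensitivity              # 800 offenders correctly flagged
false_positives = non_offenders * (1 - specificity)   # 19,800 innocent people flagged

flagged = true_positives + false_positives
print("People flagged:", int(flagged))                                   # 20,600
print("Share of flagged people who are offenders:", true_positives / flagged)  # about 0.039

Under these assumptions, only about 4% of the people the system flags would actually be offenders; everyone else would be wrongly accused.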
Finally, "criminality recognition" is far from an established science. A face is at best a one-sided clue, and however high the accuracy on a small dataset, that accuracy cannot guarantee reliability in real-world use.
References:
https://www.reddit.com/r/MachineLearning/comments/heiyqq/dr_a_letter_urging_springer_nature_not_to_publish/
https://twitter.com/SpringerNature/status/1275477365196566528
https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16
https://web.archive.org/web/20200506013352/https://harrisburgu.edu/hu-facial-recognition-software-identifies-potential-criminals/
https://www.toutiao.com/i6841885989133091342/