

A Year of AI Development: Growing Technology Scandals and Boycotts


Author | AI Now Institute

Translator | Raku

Editor | Jane

[Introduction] On October 2nd, the AI Now Institute at New York University held its fourth annual AI Now Symposium at NYU's Skirball Theatre. The symposium brought industry organizers, scholars, and lawyers to the stage to discuss their work. It centered on five hot topics, each touching on the negative impacts of AI and the growing tide of resistance to them.

Five hot topics:

Facial and emotional recognition

From "AI bias" to justice

Cities, monitoring, borders

Labor force, workers' organizing, and AI

The impact of AI on climate

The first panel discussed the use of AI in policing and border control; the second featured tenant organizers from Brooklyn who fought their landlord's plan to install a facial recognition system in their buildings; the third featured civil rights lawyers suing the state of Michigan over its use of a flawed and biased algorithm; and the last panel focused on blue-collar tech workers, from Amazon warehouse staff to gig-economy drivers, who described their organizing efforts and major achievements over the past year.

AI Now co-founders Kate Crawford and Meredith Whittaker opened with a brief talk summarizing the critical moments of the year. The four panels then took the stage; excerpts from their discussions follow.

1. Facial and emotional recognition

In 2019, companies and governments stepped up the rollout of facial recognition in public housing, hiring, and city streets. Some US airlines now even use it in place of boarding passes, claiming this is more convenient.

Similarly, emotion recognition is being used more widely to "read" our inner feelings by interpreting facial micro-expressions. As psychologist Lisa Feldman Barrett has shown in an extensive review of the research, this kind of AI physiognomy has no reliable scientific basis. Yet it is already being used in classrooms and job interviews, often without people's knowledge.

For example, documents obtained by Georgetown Law's Center on Privacy & Technology show that the FBI and ICE have been quietly searching driver's license databases, running facial recognition scans on millions of photos without individuals' consent or authorization from state or federal lawmakers.

This year, though, voters and lawmakers began to push back, after scholars and organizers such as the ACLU's Kade Crockford, Evan Selinger of the Rochester Institute of Technology, and Woodrow Hartzog of Northeastern University called for strict limits on the use of facial recognition. The Ninth Circuit Court of Appeals recently ruled that Facebook could be sued for running facial recognition on users' photos without permission, calling it an invasion of privacy.

San Francisco signed the first facial recognition ban in May, thanks to leaders of groups such as Media Justice, and two other cities have since banned the technology as well. There is now a presidential candidate promising a nationwide ban, musicians are demanding that facial recognition be kept out of music festivals, and a federal bill called the No Biometric Barriers to Housing Act targets facial recognition in public housing.

Facial recognition is faring no better in Europe. A British parliamentary committee has called for a halt to facial recognition trials until a legal framework is in place, and the Brussels police's use of these AI tools was recently found to be illegal.

Of course, these changes require a great deal of work. And to be clear, the issue is not one of perfecting the technology or eliminating its bias. Given the racial and income disparities in who gets surveilled, tracked, and arrested, even a highly accurate facial recognition system can do harm. As Kate Crawford recently wrote in Nature, debiasing these systems is not the point: they are "dangerous when they fail and harmful when they work".

2. From "AI bias" to justice

This year also saw an important shift: from a narrow focus on "de-biasing" AI at the technical level to a substantive focus on justice.

A number of disturbing events contributed to this shift.

For example, former Michigan governor Rick Snyder, a one-time tech executive who also presided over the Flint water crisis, decided to install a statewide automated decision system called MiDAS, designed to automatically flag workers suspected of unemployment benefits fraud. To cut costs, the state deployed MiDAS and fired its entire fraud detection unit. But it turned out that the MiDAS system was wrong 93% of the time: more than 40,000 residents were falsely accused, leading to many bankruptcies and even suicides. And MiDAS was only one piece of a set of austerity measures designed to scapegoat the poor.
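Taken together, those two figures hint at the scale of the system's reach. A quick back-of-the-envelope check (my own arithmetic, inferred from the numbers above, not an official MiDAS statistic):

```python
# Rough arithmetic implied by the figures above (my own inference,
# not official MiDAS statistics): if ~93% of automated fraud findings
# were wrong and ~40,000 residents were falsely accused, the system
# must have flagged on the order of 43,000 people in total.
error_rate = 0.93
wrongly_accused = 40_000
total_flagged = wrongly_accused / error_rate
print(f"claims flagged as fraud: ~{total_flagged:,.0f}")  # ~43,000
```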

AI Now's Director of Policy Research, Rashida Richardson, led a case study of the link between everyday police work and predictive policing software. She and her team found that in many police departments across the United States, predictive policing systems may be relying on records produced by racist and corrupt policing.

Obviously, in such cases, correcting for bias is not a matter of deleting variables from a dataset; what needs to change is how the police produce the data in the first place. Kristian Lum, a researcher at the Human Rights Data Analysis Group, made the same point in her groundbreaking work on how algorithms amplify discrimination in policing.
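A toy simulation makes the feedback loop concrete. The sketch below is my own illustration of the general mechanism, not Lum's model or any deployed system: patrols are allocated according to past recorded incidents, but incidents only enter the records where patrols are present, so an initial skew in the records sustains itself even when the two districts' true crime rates are identical.

```python
# Toy model of a biased-data feedback loop in predictive policing
# (an illustration of the general mechanism, not any real system).
import random

random.seed(0)
TRUE_CRIME_RATE = 0.10                          # identical in both districts
records = {"district_A": 60, "district_B": 40}  # skewed historical records

for day in range(100):
    total = sum(records.values())
    shares = {d: n / total for d, n in records.items()}  # patrols follow records
    for district, patrol_share in shares.items():
        # Both districts generate the same number of true incidents...
        incidents = sum(random.random() < TRUE_CRIME_RATE for _ in range(100))
        # ...but only incidents that patrols witness enter the records.
        records[district] += round(incidents * patrol_share)

share_a = records["district_A"] / sum(records.values())
print(f"district_A's share of recorded crime: {share_a:.2f}")  # stays near 0.60
```

Despite equal true crime rates, the records keep "confirming" the initial 60/40 skew: the data reflects where police looked, not where crime occurred, which is why deleting a variable cannot fix data produced this way.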

Recently, Kate Crawford and artist and researcher Trevor Paglen examined the politics of classification in their exhibition Training Humans, the first major art exhibition devoted to the training data used to create machine learning systems. The project surveys the history and logic of AI training sets, from Woody Bledsoe's first experiments in 1963 to the most famous and widely used benchmarks, such as Labeled Faces in the Wild and ImageNet.

In September, millions of people uploaded their photos to ImageNet Roulette to see how an AI trained on ImageNet would classify them. And this is a question with real stakes: ImageNet is the canonical object recognition dataset, and it has done more to shape the AI industry than any other dataset.

While some of ImageNet's categories are strange or even funny, the dataset is also full of deeply problematic classifications, many of them racist and misogynist. ImageNet Roulette gave people an interface to see how an AI system built on those categories classifies them. Crawford and Paglen also published an investigative essay examining several benchmark training sets, showing how their classificatory architectures are constructed.
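For readers who want to see the mechanism concretely, here is a minimal sketch using a standard ImageNet-pretrained model from torchvision (assumes torchvision >= 0.13; "photo.jpg" is a hypothetical local image). This is not Imagenet Roulette's actual model, which drew on ImageNet's "person" labels, but it shows the same core point: a classifier can only ever describe an image through the categories its training set defined.

```python
# Minimal sketch: an ImageNet-trained classifier can only name what it
# sees using the category vocabulary fixed by its training set.
import torch
from torchvision.models import resnet18, ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT      # ImageNet-1k pretrained weights
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()       # matching resize/normalize pipeline

batch = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Whatever is in the photo, the model must describe it with one of the
# 1,000 category names baked into the training set.
top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {float(p):.3f}")
```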

This is one reason why art combined with research can be more influential than either alone: it makes us ask who gets to define the categories we are placed in, and what the consequences are.

3. Cities, monitoring, borders

Questions of power, classification, and control came to the fore this year in the large-scale deployment of corporate surveillance systems across the United States. Take Amazon's Ring, for example: a surveillance camera and doorbell system designed to let people monitor their homes and neighborhoods around the clock.

Amazon has been partnering with more than 400 police departments to promote Ring, hoping police will persuade residents to buy the system, rather like turning officers into door-to-door salesmen for a security product.

As part of the deal, Amazon gets ongoing access to video footage, and police can request the surveillance video they want at any time. The company has also filed a patent on facial recognition for the platform, suggesting it wants to compare people captured on camera against a "database of suspicious persons", effectively building a privatized, nationwide home surveillance system.

And Ring is far from the only problem. As scholars such as Burcu Baykurt, Molly Sauter, and AI Now researcher Ben Green have shown, the techno-utopian rhetoric of the "smart city" masks deeper injustices and inequalities.

Residential communities are pushing back too. In August, San Diego residents protested the installation of "smart" streetlights. In June, students and parents in Lockport, New York protested the school district's use of a facial recognition system capable of tracking and mapping any student or teacher at any time.

These tools are most abusively deployed at the southern border of the United States, where ICE and Customs and Border Protection are rolling out AI systems. Currently, 52,000 migrants are held in prisons, detention centers, or other confinement, while 40,000 people wait homeless on the Mexican side of the border. Seven children have died in ICE custody in the past year, and many more face inadequate food and medical care. The conditions are horrific.

From an important report by the advocacy group Mijente, we know that companies such as Amazon and Palantir are providing the backbone infrastructure for ICE's deportation efforts. In opposition, more than 2,000 students at dozens of universities have signed a pledge not to work for Palantir, and protests have been held almost weekly at the headquarters of technology companies with ICE contracts.

4. Labor force, workers' organizing, and AI

Structural discrimination by race, class, and gender is on full display when we examine the state of diversity in the AI field.

In April, AI Now released Discriminating Systems, a study led by AI Now postdoctoral researcher Sarah Myers West. It documents a feedback loop between the discriminatory culture inside the AI field and the bias and distortion embedded in AI systems. The findings are alarming: just as the AI industry established itself as a nexus of wealth and power, it grew more homogeneous. The field plainly has a systemic problem.

But more and more people are calling for change. Among the first to demand accountability was Arwa Mboya, a Kenyan graduate student. From the Google walkout to Riot Games to Microsoft employees confronting their CEO, we have seen a series of strikes and protests at technology companies, all demanding an end to racial and gender inequality at work.

AI Now co-founder Meredith Whittaker left Google earlier this year. She had grown increasingly alarmed by the industry's direction, with things getting worse rather than better, and she and her colleagues turned their attention to the misuse and abuse of AI in the workplace.

AI platforms for managing workers are also a growing problem. From Uber to Amazon warehouses, these vast automated platforms direct employee behavior, set performance targets, and determine pay, leaving workers with little control. Earlier this year, for example, Uber cut drivers' pay without explanation or warning, quietly pushing the change through an update to its platform.

Fortunately, these workers have also won some major victories. Rideshare workers in California won a huge victory with AB 5, a bill that requires app-based companies to provide drivers with the full protections of employment. Compared with the status quo, this is an enormous change.

5. The impact of AI on climate

Underlying all of these problems is the climate.

AI consumes enormous amounts of energy and natural resources. Emma Strubell, a researcher at the University of Massachusetts Amherst, published a paper earlier this year revealing the huge carbon footprint of training AI systems. Her team showed that creating a single AI model for natural language processing can emit more than 600,000 pounds of carbon dioxide, roughly the carbon emitted by 125 round-trip flights between New York and Beijing.
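As a sanity check on that equivalence, the arithmetic looks roughly like this. The paper's headline figure was about 626,000 lbs of CO2-equivalent for a model trained with neural architecture search; the per-flight number below is inferred from the comparison itself, not a figure reported in the paper:

```python
# Back-of-the-envelope check of the flight equivalence cited above.
# The per-passenger round-trip figure is an assumption inferred from
# the comparison, not a number from Strubell et al.'s paper.
training_emissions_lbs = 626_000      # NLP model trained with architecture search
lbs_per_roundtrip_ny_beijing = 5_000  # assumed per-passenger round trip
flights = training_emissions_lbs / lbs_per_roundtrip_ny_beijing
print(f"~{flights:.0f} round-trip flights")  # ~125
```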

The carbon footprint of AI at scale is usually hidden behind abstractions like "the cloud". In fact, the world's computing infrastructure is estimated to emit as much carbon as the aviation industry, a significant share of global emissions.

A growing wave of protest

Resistance to the misuse of AI is becoming more and more visible.

It is clear that the problems raised by artificial intelligence are social, cultural, and political, not merely technical. These problems have long histories, from criminal justice to workers' rights to racial and gender equality.

The misuse of AI technology in 2019 reminds us that we still have a chance to decide what kinds of AI we will accept and how to hold them accountable. The organizations, legislation, and victories represented in the visual timeline capture some of the key moments of the past year in resisting AI's negative impacts and unaccountable technological power. The list is not exhaustive, but it is a snapshot of some of the ways workers, organizers, and researchers are actively resisting the harms of AI.

Original links:

https://medium.com/@AINowInstitute/ai-in-2019-a-year-in-review-c1eba5107127

https://www.toutiao.com/i6749828780862210568/
