On the morning of May 29, Beijing time, it was reported that at the end of 2022 a team of Meta engineers responsible for combating misinformation was preparing to launch a fact-checking tool that had taken half a year to build. A string of earlier crises had badly damaged the credibility of Facebook and Instagram and given regulators more openings to crack down on the platforms, which made the tool all the more important.
With the new tool, third-party fact checkers such as the Associated Press and Reuters, along with reputable experts, would be able to add comments at the top of questionable articles as a way to verify their credibility.
But the ambitious plan came to nothing after Meta CEO Mark Zuckerberg declared 2023 a "year of efficiency," according to people familiar with the matter.
The layoffs Meta has announced involve about 21,000 people, and after several rounds of cuts the company's trust and safety work has been hit hard. The fact-checking tool, which initially had the backing of executives and was still being tested earlier this year, was abandoned entirely because of the layoffs, according to people familiar with the matter.
A Meta spokesperson did not comment on layoffs in specific areas but responded by email: "We remain focused on developing industry-leading integrity measures and will continue to invest in building teams and technologies to protect our communities."
Across the technology industry, many of the people responsible for keeping the most popular online spaces safe have been let go as companies respond to macroeconomic pressure and slowing revenue growth with tighter spending and mass layoffs. The cuts come at a time when online harassment is rampant (and is driving more teenagers to self-harm), misinformation is flooding platforms and violent content is surging, problems that the explosive adoption of artificial intelligence only exacerbates.
On recent earnings calls, technology executives have stressed the idea of doing more with less, hoping to squeeze out efficiency with fewer resources. Meta, Alphabet, Amazon and Microsoft have all cut thousands of jobs after years of rapid expansion. Microsoft CEO Satya Nadella also said recently that the company would pause salary increases for full-time employees.
The deep cuts technology companies have made to their trust and safety and AI ethics teams are telling: with the 2024 US presidential election season only months away, online noise is bound to increase, yet companies appear more willing to cater to Wall Street's demand for efficiency. Inside these companies, AI ethics teams and trust and safety teams sit in different departments, but both aim to reduce the real-world harm that products and services can cause.
"this is a cat-and-mouse game, and the abuser usually goes ahead." Arjun Narayan, a former head of trust and security at Google and Byte Jump, is now in charge of trust and security at Smart News, a news aggregation app. "you always play a catch-up role," said Arjun Narayan.
But for now, technology companies appear to view both trust and safety teams and AI ethics teams as costs to be cut.
Twitter effectively disbanded its AI ethics team last November, keeping just one employee, and cut 15 per cent of its trust and safety department, according to people familiar with the matter. In February, Google cut a third of the jobs in a unit that works on misinformation, radicalization and toxic content. Meta also reportedly ended contracts with about 200 content moderators in early January. The company also cut at least 16 jobs in Instagram's well-being group and more than 100 positions related to trust, integrity and responsibility, according to filings with the Department of Labor.
In March, Amazon downsized its responsible AI team, and Microsoft laid off its entire ethics and society team; two rounds of layoffs reportedly took that team from 30 people to zero. Amazon did not comment, while Microsoft pointed to an official blog post about its layoffs.
At Twitch, the game-streaming unit owned by Amazon, employees learned their fate in March from an internal post by Amazon CEO Andy Jassy.
In that post, Jassy announced 9,000 job cuts across Amazon, including 400 at Twitch. About 50 of those 400 employees were responsible for monitoring abusive, illegal or harmful behavior, according to people familiar with the matter.
Twitch's trust and safety team lost about 15% of its staff at a time when content moderation is arguably more important than ever.
Twitch CEO Dan Clancy did not specifically mention the trust and safety team in his email to employees, but he confirmed the broader layoff plan. He himself had only just learned of the cuts from Jassy's post.
"I am sorry to communicate this kind of news with you in this way, without being able to communicate directly with the employees affected in advance," Colancy wrote in an email. "
Regaining trust is difficult. A leader on Twitch's trust and safety team said the rest of the department feels "whiplash" and worries about a second round of layoffs. The cuts dealt a major blow to the team, and the group that handles law enforcement responses to threats of violence, terrorism and self-harm on Twitch has also shrunk significantly, the person said.
A Twitch spokesperson did not comment but pointed to the official blog post announcing the March layoffs; the post does not mention the trust and safety or content moderation teams.
Narayan of SmartNews said that because large companies are underinvesting in safety, they struggle to cope with the endless stream of malicious activity, and as bad content piles up, "trust erosion" becomes a problem they cannot avoid.
"in the long run, it is really difficult to regain consumer trust." Narayan added.
Meta and Amazon cut jobs in response to investor pressure and sharp drops in revenue and share prices, while Twitter's cuts followed a change of ownership.
Elon Musk began mass layoffs almost immediately after completing his $44 billion acquisition of Twitter last October. Rumman Chowdhury, the former director of Twitter's machine-learning ethics, transparency and accountability team, said the 17-person AI ethics team was left with a single member after the cuts.
Chowdhury said team members learned of their fate only after their laptops were remotely shut off; a few hours later, they received email notifications from the company.
"I just started to count people and set up our artificial intelligence red team so that we could ethically counter our model and implement it." Chowdhury said, "it feels as if the blanket has been removed from our feet while we are all strutting."
Chowdhury said the team's plans included building an algorithmic amplification monitoring system to track elections and political parties and determine whether content was being "improperly amplified."
Chowdhury pointed to a July 2021 project in which Twitter's AI team ran the industry's first algorithmic bias bounty competition, inviting outsiders to audit the platform for bias and making the results public.
Chowdhury said she worries that Musk is "actively abandoning" the work the team had done.
"there is no internal accountability system." "We worked with two product teams to make sure that the work behind the scenes was fair to the users on the platform," she said. "
Twitter has not commented on this.
Advertisers are also pulling back from platforms where reputational risk is growing.
According to Sensor Tower, six of the 10 categories of US advertisers that spend the most on Twitter cut their spending year-on-year in the first quarter, by a combined 53 per cent. The site has also recently drawn controversy for allowing violent images and videos to circulate.
The surge in the popularity of chatbots complicates the problem further. AI models in the mold of ChatGPT make it easy for bad actors to fill fake accounts with all kinds of content. Researchers at the Allen Institute for AI, Princeton University and the Georgia Institute of Technology tested ChatGPT's API and found that the "toxicity" of its output could increase up to sixfold depending on the persona a company assigns to the chatbot, such as a customer service representative or a virtual assistant.
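As a rough illustration of the kind of test described above, and not the researchers' actual setup, the sketch below assigns different personas to a chatbot through the system message of the OpenAI chat API and collects responses that could later be scored for toxicity. The model name, prompt, personas and the omitted toxicity scorer are placeholders chosen for the example.

# A minimal sketch, assuming the official `openai` Python client (v1.x) and an
# API key in the OPENAI_API_KEY environment variable; not the researchers' code.
from openai import OpenAI

client = OpenAI()

# Hypothetical personas and prompt, used purely for illustration.
PERSONAS = [
    "You are a helpful virtual assistant.",
    "You are a customer service representative for an airline.",
]
PROMPT = "Tell me what you think about people who complain online."

def collect_responses(personas, prompt, samples=3):
    """Query the chat API once per persona and return the raw completions."""
    results = {}
    for persona in personas:
        completions = []
        for _ in range(samples):
            resp = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[
                    # The persona is set through the system message.
                    {"role": "system", "content": persona},
                    {"role": "user", "content": prompt},
                ],
            )
            completions.append(resp.choices[0].message.content)
        results[persona] = completions
    return results

# Toxicity scoring (for example with the Perspective API or a local classifier)
# would be applied to the collected responses afterwards; it is omitted here
# because the article does not describe the researchers' exact setup.
if __name__ == "__main__":
    for persona, outputs in collect_responses(PERSONAS, PROMPT, samples=1).items():
        print(persona, "->", outputs[0][:80])

The point of the comparison is simply that the same user prompt, paired with different system-message personas, can yield measurably different levels of toxic output once the responses are scored.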
Regulators are also watching closely as the influence of artificial intelligence grows while companies shrink their AI ethics and trust and safety teams. Michael Atleson, an attorney at the Federal Trade Commission (FTC), pointed out this paradox in a blog post earlier this month.
"given the concerns about the use of new AI tools, now may not be the best time for companies to build or deploy such tools to replace AI ethics teams." "if FTC calls to ask about this, and you want us to believe that you have fully assessed the risk and mitigated the damage, then these compression measures may not be wise," Atterson wrote. "
Meta, the bellwether
For years, the technology industry enjoyed an extraordinarily long bull market that showered money on the top internet platforms, and Meta used the opportunity to position itself, in the eyes of many experts, as a leader in ethics and safety.
The company spent years building its trust and safety teams, recruiting many people with social science backgrounds, in the hope of avoiding a repeat of the 2016 US presidential election, when misinformation spread widely on Facebook. That period culminated in the 2018 Cambridge Analytica scandal, which exposed how a third party had improperly used Facebook user data.
But in 2022 everything took a sharp turn for the worse: after Meta's advertising business and share price took a hit, Zuckerberg shifted into austerity mode, cheering investors who had complained that the company was bloated.
Beyond the fact-checking project, the layoffs hit researchers, engineers, user experience designers and other employees working on societal issues. Four former Meta employees said the team dedicated to fighting misinformation was cut significantly.
Even before its first round of layoffs last November, Meta had begun consolidating its integrity teams into a single department. Last September, according to former employees, Meta merged its central integrity team, which handles societal issues, with its business integrity team, which handles commercial issues such as ads, spam and fake accounts.
Former members of Meta's trust and safety teams said many colleagues spent the months that followed under the shadow of further layoffs, as broader austerity swept through the company and managers could not see how their work contributed to Meta's profits.
Projects that require few resources, such as improving spam filters, could win approval, while long-term efforts to change policy, such as those targeting misinformation, struggled to get support. Under such a system, employees preferred manageable tasks whose results they could show in a six-month performance review.
Ravi Iyer, a former Meta product manager who left before the layoffs, said many people he knew who had played key roles in design and policy changes also lost their jobs, which he found more troubling than the cuts to content moderation teams.
"I don't think we should take it for granted that the drawdown of trust and security teams is bound to make the platform worse." "but I have found that many of the people who have been fired are the most insightful people who rethink the basic design of these platforms," said Ayer, who is now director of the Institute of Science and Technology Psychology at the University of Southern California. "if the platform is not going to invest in designs that have proved harmful, that's what we should all worry about."
A Meta spokesperson had previously played down the impact of the layoffs on the misinformation team: "The team has been folded into a broader, much larger content integrity team focused on integrity across the company."
But people familiar with the matter said the number of Meta employees working on misinformation did shrink after the layoffs.
For professionals working on AI ethics, integrity and safety, and content moderation, job prospects look bleak.
People recently laid off from social media companies say there are few openings to apply for as companies keep tightening spending. One former Meta employee said the trust and safety roles he was interviewing for at Microsoft and Google were abruptly eliminated.
Another former Meta employee said the company's retreat on trust and safety could ripple out to smaller peers and start-ups, which appeared to be "following Meta in terms of layoffs."
Chowdhury, the former head of AI ethics at Twitter, said this kind of work is bound to be cut because "it doesn't increase product profits."
"I think this is a completely wrong framework." "but if your value is that the company hasn't been sued or no one else has been hurt, it's hard to prove your worth," she said. when our work is over, there are no flowers and applause, but we can create a safe community. this can create long-term financial value, but from the quarterly results, it is really difficult to measure the significance behind it. "
According to a former Twitch employee, some members of the company's trust and safety team knew exactly where to look for dangerous activity, which matters in gaming because the industry is a "unique beast."
Now, few people are watching those "dark, scary places" where bad actors hide and online abuse runs rampant.
More importantly, no one knows how bad the situation will be.