On the morning of April 20, Beijing time, it was reported that shortly before Google launched its artificial intelligence chatbot Bard to the public in March, it had asked internal employees to test the new tool.
According to screenshots of the internal discussions, one employee concluded that Bard was a "pathological liar." Another employee called it "embarrassing." One employee wrote that when asked how to land a plane, Bard routinely gave advice that would lead to a crash; another said its advice on diving "could lead to serious injury or death."
Google launched Bard anyway. According to several current and former employees and internal Google documents, the company, long a highly trusted internet search giant, is serving low-quality information through Bard in order to keep pace with competitors, while giving low priority to its ethical commitments. In 2021, Google pledged to double its team researching artificial intelligence ethics and to devote more resources to assessing the technology's potential harms. But after rival OpenAI's chatbot ChatGPT debuted to wide popularity in November 2022, Google rushed to build generative artificial intelligence into its most important products.
For the technology itself, this means a markedly faster pace of development, one that may have far-reaching consequences for society. Current and former Google employees say the technology ethics team the company promised to strengthen now has little voice and low morale. Staff responsible for the safety and ethical implications of new products have been told not to get in the way of, or try to kill, any generative artificial intelligence tools in development, they said.
Google's goal is to revitalize its maturing search business around this cutting-edge technology, get ahead of Microsoft-backed OpenAI, push generative artificial intelligence applications into tens of millions of phones and homes around the world, and win the race.
"the ethics of artificial intelligence have taken a back seat," said Meredith Whittaker, chairman of Signal Foundation, an advocacy group and a former Google manager. "if the ethics of technology cannot be put above profits and growth, they will not work in the end."
In response, Google says responsible artificial intelligence remains a top priority. "We are continuing to invest in the teams that apply our artificial intelligence principles to our technology," said Brian Gabriel, a Google spokesman. Even so, the teams working on responsible artificial intelligence lost at least three positions, including technical governance and project managers, in a round of layoffs in January that affected about 12,000 employees at Google and its parent company.
For years, Google has led much of the research underpinning today's artificial intelligence, yet when ChatGPT launched, the company had not yet built user-friendly generative artificial intelligence into its products. In the past, Google employees said, the company carefully weighed its capabilities and the ethical considerations before applying new technologies to search and other flagship products.
Last December, however, Google's senior management declared a "code red" and changed its appetite for risk. Google's leaders decided that as long as new products were positioned as "experiments," the public would forgive their flaws, employees said. But Google still needed to get its technology ethics teams on board. That month, Jen Gennai, head of artificial intelligence governance, convened a meeting of the responsible innovation team, which is charged with upholding Google's artificial intelligence principles.
Gennai suggested that Google might need to make some compromises in order to speed up product launches. The company scores products in several important categories to gauge whether they are ready for public release. In some areas, such as child safety, engineers must still resolve 100 percent of potential problems. But in other areas, she said at the meeting, Google might not have time to wait. "Fairness may not be, and it doesn't need to be, at 99," she said; in that respect, "we may only need to reach 80, 85 percent, or higher" to meet the needs of a product launch. "Fairness" here refers to reducing bias in the product.
In February, a Google employee wrote in an internal message group: "Bard is worse than useless: please don't release this product." Nearly 7,000 people viewed the message, and many of them agreed that Bard's answers to simple factual questions were contradictory or even flagrantly wrong. The following month, however, Gennai overruled a risk assessment submitted by members of her team, according to people familiar with the matter. The report had concluded that Bard was not ready and could cause harm. Soon afterward, Google opened Bard to the public, calling it an "experiment."
In a statement, Gennai said the decision was not hers alone. After the team made its assessment, she "listed the potential risks and escalated the analysis" to senior managers in product, research, and the business lines. That group then judged that "we could move forward with a limited experimental launch, including continued pre-training, stronger guardrails, and appropriate disclaimers," she said.
Silicon Valley as a whole is still wrestling with the tension between competitive pressure and safety. Researchers building artificial intelligence outnumber those focused on its safety by 30 to 1, the Center for Humane Technology said in a recent presentation, which suggests it is often difficult to raise concerns about artificial intelligence inside large organizations.
As artificial intelligence develops at an accelerating pace, anxiety about its effects on society is growing. Large language models, the technology behind ChatGPT and Bard, ingest enormous amounts of text from news reports, social media posts, and other internet sources; that text is then used to train software to predict and generate content in response to a user's prompt or question. This means that, by their nature, these products are liable to produce offensive, harmful, or inaccurate statements.
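To make the mechanism concrete, here is a minimal, purely illustrative sketch, our own addition rather than anything from Google or OpenAI. It assumes the open source Hugging Face transformers library and the small public GPT-2 checkpoint, and shows the basic idea described above: given a prompt, the model repeatedly predicts a likely next token and appends it to produce a continuation, with no guarantee the result is accurate.

```python
# Illustrative sketch only (assumes the open source Hugging Face "transformers"
# package and the small public GPT-2 model, not the systems behind Bard or ChatGPT).
from transformers import pipeline

# Load a small pretrained language model wrapped in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models are trained on internet text so that they can"

# The model extends the prompt one predicted token at a time; because it only
# models which words are statistically likely, nothing guarantees the output
# is accurate, safe, or unbiased.
result = generator(prompt, max_new_tokens=25, do_sample=False)
print(result[0]["generated_text"])
```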
Nevertheless, ChatGPT was a sensation when it debuted, and by the beginning of this year there was no turning back. In February, Google began rolling out a series of generative artificial intelligence products, launching the Bard chatbot and then upgrading its video platform, YouTube. Google said at the time that YouTube creators would soon be able to make virtual changes in their videos and use generative artificial intelligence to create "fantastic movie scenes." Two weeks later, Google announced new artificial intelligence features for Google Cloud, demonstrating how services such as Google Docs and Slides could automatically create presentations, sales training documents, and emails. The same day, Google said it would integrate generative artificial intelligence into its health care product line. Employees say they worry that this pace of development leaves Google too little time to study the technology's potential dangers.
Whether cutting-edge artificial intelligence should be developed in an ethical way has long been debated inside Google. Over the past several years, errors in the company's products have drawn considerable attention; in one embarrassing incident in 2015, for example, Google Photos mislabeled a picture of a Black software developer and his friend as "gorillas."
Three years later, Google said it had not fixed the underlying artificial intelligence technology but had instead removed all results for the search terms "gorilla," "chimpanzee," and "monkey." According to Google, "a group of experts from different backgrounds" evaluated that solution. The company also set up an artificial intelligence ethics unit to conduct proactive research aimed at making artificial intelligence fairer to users.
But according to a number of current and former employees, an important turning point was the departure of artificial intelligence researchers Timnit Gebru and Margaret Mitchell, who together led Google's artificial intelligence ethics team. They left in December 2020 and February 2021, respectively, after a dispute over the fairness of Google's artificial intelligence research. Samy Bengio, the computer scientist who oversaw their work, and several other researchers left for Google's competitors over the following years.
After the controversy, Google took steps to restore its public reputation. The responsible AI team was reorganized under Marian Croak, then a vice president of engineering, who promised to double the size of the artificial intelligence ethics team and strengthen its ties to the rest of the company.
Even after the public announcements, however, some found it difficult to work on artificial intelligence ethics at Google. One former employee said they asked to work on fairness in machine learning but were repeatedly discouraged from doing so, to the point that it affected their performance review. Managers protested that the artificial intelligence ethics work was getting in the way of their "real work," the former employee said.
Employees who continue to work on artificial intelligence ethics at Google face a dilemma: how to keep their jobs while continuing to pursue research in the field. Nyalleng Moorosi, a former Google researcher who is now a senior fellow at the Distributed AI Research Institute founded by Gebru, says artificial intelligence ethics roles exist precisely to show that a technology may not be ready for large-scale release and that the company needs to slow down.
So far, according to two employees, Google's artificial intelligence ethics reviews of products and features have been almost entirely voluntary, apart from the reviews applied to research papers and to the company's handling of customer collaborations and product launches. Research involving biometrics, identification, or children must be reviewed by Gennai's team for "sensitive topics," but other projects are not necessarily required to be. Some employees nonetheless consult the artificial intelligence ethics team without being required to.
Still, when employees on Google's product and engineering teams try to explain why the company's artificial intelligence has been slow to reach the market, they tend to point to the public's concerns about artificial intelligence ethics. Some at Google believe new technology should be put in the public's hands more quickly so it can be improved through public feedback.
Another former employee said that before the code red, it could be difficult for Google engineers even to get access to the company's most advanced artificial intelligence models. Engineers would often brainstorm about what the technology could do by experimenting with other companies' generative artificial intelligence models, and then study how to build the capability inside the company, the former employee said.
"I definitely saw the positive changes that the code red and OpenAI's prodding brought to Google," said Gaurav Nemade, a former Google product manager who worked on the company's chatbot efforts until 2020. "Can they really become the leaders and challenge OpenAI at its own game?" Recent developments, such as Samsung's reported consideration of replacing Google with Microsoft's Bing, which now uses ChatGPT technology, as the default search engine on its devices, show how much the first-mover advantage in technology matters.
Some Google employees say they believe the company has done adequate safety checks on its latest generative artificial intelligence products and that Bard is safer than rival chatbots. But the urgent priority now is shipping generative artificial intelligence products quickly, so further appeals to technology ethics go nowhere, they say.
The teams developing the new artificial intelligence features are currently walled off, making it hard for rank-and-file Google employees to see the full picture of the company's artificial intelligence work. In the past, employees could voice concerns openly through company email groups and internal channels, but Google has since introduced a set of community guidelines that restrict such discussion, citing the need to reduce toxicity. Several employees said they saw the restrictions as a way of policing speech.
"it brings frustration," says Mr Mitchell. our feeling is, what on earth are we doing? " Even if Google does not have a clear directive to stop the work of technological ethics, in the current atmosphere, employees engaged in this kind of work will clearly feel that they are not supported and may eventually reduce their workload. " When Google management openly discusses the ethics of technology, they tend to talk about a hypothetical future in which omnipotent technology cannot be controlled by humans, rather than dealing with everyday scenarios that are already potentially harmful. This practice has also been criticized by some people in the industry as a form of marketing.
El-Mahdi El-Mhamdi, a former research scientist at Google, left the company in February over its refusal to confront artificial intelligence ethics head-on. He said a paper he co-authored late last year showed that, mathematically, foundational artificial intelligence models cannot be large-scale, robust, and secure at the same time.
He said Google used his employment to call the research he was involved in into question. Rather than have to defend his academic work on those terms, he gave up his employment relationship with Google and kept his academic affiliation instead. "If you want to stay at Google, you have to serve the entire system, not conflict with it," he said.