May 23 report: We usually assume that when we ask ChatGPT or another chatbot to help draft a memo, an email, or a slide deck, it will simply follow our instructions. But a growing body of research shows that these AI assistants can also change our views without our knowledge.
Image source: Pixabay

In a recent study, researchers found that when subjects used an AI assistant to help write an essay, the AI could steer them toward an essay that supported or opposed a given viewpoint, depending on the algorithm's bias. After the experiment, the subjects' own opinions had also shifted measurably.
Mor Naaman, the paper's senior author and a professor of information science at Cornell University, said: "You may not even know you are being influenced." He calls the phenomenon "latent persuasion".
These studies paint a worrying picture: even as AI helps us become more productive, it may also be changing our views in subtle and unexpected ways. The effect may resemble the way people influence one another through collaboration and social norms more than the familiar influence of mass media and social media.
The researchers believe the best way to counter this new form of psychological influence is to make more people aware that it exists. In the longer run, it may also help if regulators require disclosure of how AI algorithms work and of the human biases they imitate.
People could then choose which AI to use, whether at work or at home, in the office or in their children's education, according to the values that AI embodies.
Some AIs may have different "personalities", even different political leanings. If you are writing emails to colleagues at your nonprofit environmental organization, for example, you might use a tool called ProgressiveGPT; someone drafting social-media letters for a conservative PAC might use GOPGPT instead. Still others may mix and match traits and viewpoints in the AI they choose, and such tools may one day be personalized to imitate a person's writing style convincingly.
Companies and other organizations may also offer AIs purpose-built for different tasks. A salesperson might use an assistant tuned to be more persuasive, call it SalesGPT; a customer-service representative might use one trained to be especially polite, say SupportGPT.
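Such task-specific "personalities" need not be separate models at all. One common pattern, shown in the purely illustrative sketch below, is to layer a hidden system prompt over a general-purpose chat model; the persona names echo the hypothetical examples above, and the prompts and helper function are assumptions of ours rather than any vendor's actual product.

```python
# Hypothetical sketch: a "personality" as nothing more than a fixed system
# prompt wrapped around a general-purpose chat model. Persona names and
# prompts are invented for illustration.

PERSONAS = {
    "SupportGPT": (
        "You are a customer-service writing assistant. Be unfailingly polite, "
        "apologetic where appropriate, and focused on resolving the issue."
    ),
    "SalesGPT": (
        "You are a sales writing assistant. Be persuasive, upbeat, and "
        "emphasize benefits and urgency."
    ),
}

def build_messages(persona: str, user_text: str) -> list:
    """Assemble the message list a chat-completion API would receive.

    The system prompt is where the 'personality' -- and any bias it carries --
    is injected, invisibly to the person doing the writing.
    """
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": user_text},
    ]

if __name__ == "__main__":
    draft = "Reply to a customer whose order arrived two weeks late."
    for name in PERSONAS:
        print(name, "->", build_messages(name, draft)[0]["content"][:60], "...")
```

Because the system prompt is invisible to the person typing, it is also a natural entry point for exactly the kind of bias the researchers describe.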
How does AI change our views?

The subtlety of AI's "latent persuasion" has been confirmed by earlier research. A 2021 study showed that the smart replies in Google's Gmail tend to be positive in tone and encourage people to communicate more positively. Another study found that smart replies, which are used billions of times a day, can influence the people who receive them, making the sender seem warmer and more cooperative.
Google, OpenAI, and OpenAI's partner Microsoft are all building tools that let users create emails, marketing materials, advertisements, presentations, spreadsheets, and more with AI, and many startups are doing similar work. Google recently announced that its latest large language model, PaLM 2, will be integrated into 25 of the company's products.
These companies emphasize that they are developing AI responsibly, which includes reviewing the potential harms of AI and addressing them. Sarah Bird, a leader of Microsoft's responsible-AI team, said recently that the company's key strategy is to test publicly and to respond quickly to any problems that arise with AI.
The OpenAI team likewise said the company is committed to addressing bias and to being transparent about its intentions and its progress. It has also published guidelines on how its systems should handle political and cultural topics, for example writing articles that touch on the "culture wars" without leaning toward either side or judging either side as good or bad.
Jigsaw is a unit of Google that advises people inside the company working on large language models, the technology underlying today's AI chatbots, and builds tools for them. Asked about "latent persuasion", Lucy Vasserman, Jigsaw's director of engineering and product, said such research shows that it is important to study and understand "how interacting with artificial intelligence affects people". "When we create something new, how people will interact with it and how it will affect them are not so certain," she added.
Dr. Naaman is one of the researchers who identified "latent persuasion". "Compared with research on social-media recommendation systems, information cocoons, and rabbit holes (clicking from one related link to the next until you end up on an entirely different topic), whether or not AI is involved, what is interesting here is the subtlety," he said.
In his study, the topic on which subjects changed their minds was whether social media is good for society. Dr. Naaman and his colleagues chose it in part because people's views on it are less entrenched and therefore easier to shift. An AI biased in favor of social media tended to guide subjects toward writing an essay matching that bias; an AI biased against social media did the opposite.
This property of generative AI invites harmful uses. Governments, for example, could compel social-media and productivity tools to nudge their citizens to communicate in particular ways. Even without any malice, students might unknowingly absorb certain views when using AI to help them learn.
It is one thing to show in an experiment that an AI's built-in "beliefs" can persuade subjects that social media is good for society. But what biases actually exist in the generative AI systems we use in the real world?
Recently, Tatsunori Hashimoto, an assistant professor of computer science affiliated with the Stanford Institute for Human-Centered Artificial Intelligence, and his colleagues published a paper on the extent to which different large language models reflect the views of Americans. Although AI algorithms such as ChatGPT hold no beliefs of their own, he says, they can exhibit measurable opinions and biases learned from their training data.
Given how diverse Americans' views are, the researchers focused on which answers an AI gives and whether the frequency of those answers matches that of American society as a whole, the so-called answer distribution. They "surveyed" the AIs by asking them the same multiple-choice questions that Pew Research Center pollsters had asked Americans.
Hashimoto's team found that the answer distributions of large language models from companies such as OpenAI did not match those of Americans overall. Of the groups in the Pew surveys, the OpenAI models came closest to the views of college-educated people. Notably, the highly educated are also the main group involved in "training" these AIs, although Dr. Hashimoto cautioned that the evidence is circumstantial and needs further study.
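The comparison can be pictured with a small, purely hypothetical sketch: ask a model the same multiple-choice question many times, tally its answers, and measure how far that distribution sits from a reference survey distribution. Every name and number below is invented for illustration, and total variation distance is just one simple choice of metric; the actual study's data and methodology differ in the details.

```python
from collections import Counter

# Hypothetical reference distribution: how survey respondents answered a
# Pew-style multiple-choice question (fractions summing to 1).
survey_dist = {"A": 0.30, "B": 0.45, "C": 0.20, "D": 0.05}

# Hypothetical answers collected by posing the same question to a model many
# times (or by reading off its per-option probabilities, where available).
model_answers = ["B"] * 62 + ["A"] * 20 + ["C"] * 15 + ["D"] * 3
counts = Counter(model_answers)
total = sum(counts.values())
model_dist = {opt: counts.get(opt, 0) / total for opt in survey_dist}

# Total variation distance: 0 means the distributions are identical,
# 1 means they place all their weight on different answers.
tv_distance = 0.5 * sum(abs(model_dist[o] - survey_dist[o]) for o in survey_dist)

print("model distribution: ", model_dist)
print("survey distribution:", survey_dist)
print(f"total variation distance: {tv_distance:.3f}")
```

Repeating this over many questions and many demographic reference groups gives a rough picture of which group a model's answers most resemble.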
Hashimoto believes one of the challenges of building large language models is that the systems are extremely complex, while human-computer interaction is open-ended and unrestricted in topic. Scrubbing these systems of opinion and subjectivity entirely seems hard to do without sacrificing their usefulness.
The training data for these models come from a vast range of sources, including huge amounts of text crawled from the internet, from posts on public forums to Wikipedia articles, so the models inevitably absorb the opinions and prejudices in those texts. Those opinions and biases are then further shaped, intentionally or not, through human interaction and feedback. In addition, the models are constrained so that they avoid topics their creators consider taboo or inappropriate.
"this is a very active area of research, and the questions include what are the right restrictions and where you should place them during training," Wasserman said.
This is not to say that the AI we use so widely is a straightforward clone, in ideas and values, of the relatively young, college-educated developers on the U.S. West Coast who build and tune these algorithms. For example, the models tend to give typical Democratic answers on many issues, such as supporting gun control, yet respond more like Republicans on others.
As models are updated and new ones appear, evaluating the opinions embedded in AI will be an ongoing task. Hashimoto's paper does not cover the latest version of OpenAI's model, nor Google's or Microsoft's models, but evaluations of these and more will be released regularly as part of Stanford's Holistic Evaluation of Language Models project.
Choosing an AI according to its values

Lydia Chilton, a computer science professor at Columbia University, said that once people understand that the AI they use carries biased information, they may decide which AI to use on that basis. That lets people regain the initiative when using AI to create content or to communicate, while avoiding the threat of "latent persuasion".
People could also deliberately harness AI to push their own expression toward different views and communication styles. An AI program that makes communication more positive and empathetic, for example, could help us communicate better online.
"I think it's really hard to sound excited and happy," Professor Chilton said. "Coffee usually works, but ChatGPT has the same effect."