Shulou (Shulou.com) 11/24 Report --
Not long ago, Google launched an internal "dogfooding" campaign, asking all employees to spend two to four hours a week helping to test and improve its new AI search chatbot, Bard.
Bard's release came shortly after Microsoft rolled out a new version of its Bing search engine built on the technology behind the ChatGPT chatbot, which lets users hold multi-turn conversations on almost any topic. Questions about Google emerged after Bard was found to have given a wrong answer, and as more and more people test the new Bing, its chatbot has run into problems of its own, such as a tendency toward aggressive behavior.
AI chatbots like Bard and ChatGPT imitate human conversation because they are trained on text written by humans, which explains why Bing's responses sometimes seem emotional and unpredictable. After all, a robot trained to behave like a human can easily make human-like mistakes.
These chatbots do most of their initial learning by ingesting large amounts of training data. Beyond that, Jack Krawczyk, product director of the Bard project, told employees in a memo that Google's research had found that adding high-quality responses to user queries "significantly" improved the quality of its AI model.
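To make the idea concrete, here is a minimal sketch, assuming a simple JSON Lines format, of how human-written, high-quality responses might be collected as training examples. The record fields, file name, and sample content are illustrative assumptions, not Google's actual pipeline.

```python
# A minimal sketch (not Google's actual pipeline) of collecting
# high-quality human-written responses as fine-tuning examples.
# All field names here are illustrative assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingExample:
    query: str      # the user's original question
    response: str   # a high-quality, human-written answer
    source: str     # e.g. "employee_rewrite" (hypothetical label)

examples = [
    TrainingExample(
        query="What should I look for when buying a used bicycle?",
        response="Check the frame for cracks, test the brakes and "
                 "gears, and ask about the bike's service history.",
        source="employee_rewrite",
    ),
]

# Serialize to JSON Lines, a common format for fine-tuning datasets.
with open("sft_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(asdict(ex)) + "\n")
```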
According to several AI experts who have done extensive research on large language models, Google employees may be writing high-quality responses for Bard in order to improve the model.
Krawczyk asked employees to query Bard about areas they know well, such as a favorite hobby. They were then asked to evaluate Bard's answers to make sure they were what one would expect and of reasonable length and structure. If an answer was too human-like, factually inaccurate, or nonsensical, employees could rewrite it and submit the rewrite for training Bard's model.
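A review pass like that could produce records along the lines of the following sketch. The schema, the criteria fields, and the sample content are assumptions for illustration, not Google's internal tooling.

```python
# A hedged sketch of the kind of record an employee review pass might
# produce: rate an answer against the stated criteria and attach a
# rewrite when it falls short. The schema is an assumption.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnswerReview:
    query: str
    bard_answer: str
    reasonable_length: bool
    well_structured: bool
    factually_correct: bool
    rewrite: Optional[str] = None  # supplied only if the answer fails

def needs_rewrite(review: AnswerReview) -> bool:
    """An answer is sent back for rewriting if any criterion fails."""
    return not (review.reasonable_length
                and review.well_structured
                and review.factually_correct)

review = AnswerReview(
    query="How do I start birdwatching?",
    bard_answer="Birds are great. I love them so much!",
    reasonable_length=False,
    well_structured=False,
    factually_correct=True,
    rewrite="Start with a field guide and binoculars, then visit a "
            "local park at dawn, when birds are most active.",
)
assert needs_rewrite(review)  # this answer goes back for a rewrite
```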
Vered Shwartz, an assistant professor of computer science at the University of British Columbia, said Google could use a combination of supervised learning and reinforcement learning to keep improving Bard.
Supervised learning is the first step: researchers feed human-written queries and answers to the chatbot until it learns how to respond like a human. On that foundation, Google could build a reinforcement learning model and train it on answers written by Google employees, helping it understand which values the company wants Bard's answers to reflect, including their structure, tone, and other qualities.
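The two stages can be illustrated with a deliberately tiny, self-contained toy. Here the "supervised" model is just a lookup table of human-written answers and the "reward model" is a hand-written scoring function; real systems train large neural networks for both, so this is only a sketch of the control flow, not of the actual algorithms.

```python
# Stage 1: "supervised learning" as a trivial memorizing model
# built from human-written query/answer pairs.
sft_data = {
    "capital of france": "The capital of France is Paris.",
}

def sft_model(query: str) -> str:
    # Return the human-written answer if the query has been seen.
    return sft_data.get(query.lower(), "I'm not sure about that.")

# Stage 2: a stand-in "reward model" encoding preferences about
# structure and tone, as learned from employee-written answers.
def reward(answer: str) -> float:
    score = 0.0
    if answer.endswith("."):        # prefers complete sentences
        score += 1.0
    if 20 <= len(answer) <= 300:    # prefers reasonable length
        score += 1.0
    if "!" not in answer:           # prefers a calm tone
        score += 0.5
    return score

candidates = [
    sft_model("capital of france"),
    "PARIS!!!",
]
# A real RL step would update the model toward high-reward answers;
# here we simply pick the best-scoring candidate.
best = max(candidates, key=reward)
print(best)  # -> "The capital of France is Paris."
```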
The reinforcement learning model reviews the answers Bard gives, rejecting inappropriate ones and validating qualified ones, until the chatbot understands how it should behave. In essence, the "right" answers from Google employees steer the model toward improvement.
A reinforcement learning model can also teach Bard to deliver information without expressing emotions or otherwise pretending to be human. The first model learns basic writing skills, while the second steers the machine toward answering questions in the desired way.
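In spirit, that filtering step might look like the sketch below. A production system would use a trained classifier rather than a phrase list; the phrases here are toy assumptions.

```python
# A minimal sketch of the filtering idea: screen out answers that
# express emotion or pretend to be human, and keep the rest as
# "qualified" examples. The phrase list is a toy assumption.
DISALLOWED_PHRASES = [
    "i feel", "i'm sad", "i am sad", "as a person", "i'm human",
]

def is_qualified(answer: str) -> bool:
    """Reject answers that claim feelings or humanity."""
    lowered = answer.lower()
    return not any(phrase in lowered for phrase in DISALLOWED_PHRASES)

answers = [
    "The Eiffel Tower is 330 meters tall.",
    "I feel lonely when nobody asks me questions.",
]
qualified = [a for a in answers if is_qualified(a)]
print(qualified)  # only the factual answer survives the filter
```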
Zhou Yu, a computer science professor at Columbia University, said that with enough good answers to analyze, the reinforcement learning model can learn which answers are appropriate and which are not.
Ensuring factual accuracy
Google has long been cautious about launching a chatbot, possibly because of the potential impact on its search profits and concerns about accuracy. Google asked employees to make Bard refuse to answer questions seeking advice on sensitive topics such as finance or health, because the risk of giving a wrong answer is too high.
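In code, that refusal behavior could be approximated with a sketch like the one below, where the topic keywords and the deflection message are illustrative assumptions.

```python
# A hedged sketch of refusing advice on sensitive topics: detect
# queries that ask for finance or health advice and return a
# deflection instead of an answer. Keywords are assumptions.
SENSITIVE_TOPICS = {
    "finance": ["invest", "stock", "loan", "mortgage"],
    "health": ["diagnose", "medication", "symptom", "treatment"],
}

def answer_or_refuse(query: str) -> str:
    lowered = query.lower()
    for topic, keywords in SENSITIVE_TOPICS.items():
        if any(kw in lowered for kw in keywords):
            return (f"I can't give {topic} advice. Please consult "
                    f"a qualified professional.")
    return "(normal model answer would go here)"

print(answer_or_refuse("Which stock should I invest in?"))
# -> "I can't give finance advice. Please consult a qualified professional."
```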
The AI field has been working hard on the problem of factual accuracy; OpenAI released an update in January to improve the accuracy of ChatGPT's conversations across a variety of topics. At a conference on chatbots and AI in San Francisco this month, Dario Amodei, chief executive of Anthropic, said he believes chatbots will stop fabricating facts as the models improve.
While such training helps improve the quality of the answers chatbots generate, Shwartz said she does not think it will completely solve the problem of factual accuracy. Both Bard and ChatGPT have a tendency to "hallucinate," the industry's term for chatbots fabricating facts: they pull content from web pages and sometimes, inevitably, summarize it incorrectly.