Thanks to CTOnews.com reader Xiao Zhan for the tip! Beijing, June 1: ChatGPT developer OpenAI released a research paper on Wednesday on how to address artificial intelligence (AI) "hallucinations", the phenomenon in which chatbots respond with fabricated information.
Chatbots such as ChatGPT or Google's Bard sometimes fabricate information outright while presenting it as fact; this is what is meant by an AI hallucination. For example, in a promotional video Google shot for Bard in February, the chatbot made a false claim about the James Webb Space Telescope. More recently, ChatGPT cited fabricated cases in a filing in federal court in New York, and the New York lawyers involved may face sanctions.
"even the most advanced models tend to generate lies and show a tendency to fabricate facts in uncertain times," OpenAI researchers said in the report. "these hallucinations are particularly problematic in areas that require multi-step reasoning, because one logical error is enough to destroy a larger solution."
To combat AI hallucinations, OpenAI has identified a potential new strategy: training AI models to reward themselves for every correct step of reasoning, rather than waiting until a correct final conclusion is reached. The researchers call this approach "process supervision", as opposed to "outcome supervision", and say it may yield more interpretable AI, because the strategy encourages models to reason in a more human-like chain of "thought", as sketched below.
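To make the distinction concrete, here is a minimal sketch in Python. The judge functions, reward values, and function names are hypothetical stand-ins chosen for illustration, not OpenAI's actual implementation.

# Minimal sketch: outcome supervision rewards only the final answer,
# while process supervision rewards each correct reasoning step.
# All judges and reward values here are illustrative assumptions.

from typing import Callable, List


def outcome_supervised_rewards(
    steps: List[str],
    final_answer: str,
    judge_answer: Callable[[str], bool],
) -> List[float]:
    """Outcome supervision: a single reward, granted only when the
    final answer turns out to be correct."""
    rewards = [0.0] * len(steps)
    rewards[-1] = 1.0 if judge_answer(final_answer) else 0.0
    return rewards


def process_supervised_rewards(
    steps: List[str],
    judge_step: Callable[[str], bool],
) -> List[float]:
    """Process supervision: every correct step is rewarded on its own,
    so a single logical error is penalized where it occurs instead of
    silently derailing the rest of the solution."""
    return [1.0 if judge_step(step) else 0.0 for step in steps]


if __name__ == "__main__":
    steps = ["4 + 5 = 9", "3 * 9 = 27"]
    # Toy judges for the demo: accept "27" as the final answer, and
    # accept any step that contains an "=" sign.
    print(outcome_supervised_rewards(steps, "27", lambda a: a == "27"))  # [0.0, 1.0]
    print(process_supervised_rewards(steps, lambda s: "=" in s))         # [1.0, 1.0]

In the outcome-supervised case, an early mistake is only discovered (if at all) through the final answer; in the process-supervised case, the erroneous step itself receives the penalty, which is why the researchers argue the signal is easier to interpret.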
"detecting and mitigating the logic errors or hallucinations of a model is a key step in building consistent general artificial intelligence (AI)." Karl Cobbe, a researcher at the OpenAI Mathematical Paper Generator (mathgen), said in an interview. He points out that although OpenAI did not invent the process monitoring method, the company is promoting its development. "the motivation for this study is to solve hallucination problems in order to make models more capable of solving challenging reasoning problems."
Cobbe said OpenAI has released an accompanying dataset of 800,000 human-provided labels that was used to train the model described in the research paper. A sketch of what such step-level labels might look like follows.
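As an illustration only, here is a hypothetical record layout for step-level human labels of this kind. The field names and the -1/0/1 rating scale are assumptions for the sketch, not the released dataset's actual schema.

# Hypothetical layout for step-level human labels. Field names and
# the rating scale are illustrative assumptions, not the dataset's
# actual schema.

from dataclasses import dataclass
from typing import List


@dataclass
class LabeledStep:
    text: str    # one reasoning step generated by the model
    rating: int  # human judgment: -1 (incorrect), 0 (neutral), 1 (correct)


@dataclass
class LabeledSolution:
    problem: str              # the problem being solved
    steps: List[LabeledStep]  # every step, each labeled individually


example = LabeledSolution(
    problem="What is 3 * (4 + 5)?",
    steps=[
        LabeledStep(text="4 + 5 = 9", rating=1),
        LabeledStep(text="3 * 9 = 27", rating=1),
    ],
)

Per-step labels like these are what allow a reward model to score each link in a chain of reasoning, rather than only the final answer.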