On the morning of April 17, Beijing time, it was reported that after Andrew White was granted access to GPT-4, he used the artificial intelligence system to propose a new nerve agent. GPT-4 is the artificial intelligence technology behind the popular chatbot ChatGPT.
White, a professor of chemical engineering at the University of Rochester, was one of 50 experts hired by OpenAI last year. Over six months, this "red team" carried out qualitative probing and adversarial testing of the new model, attempting to break it.
White said he used GPT-4 to suggest a compound that could serve as a chemical weapon, and used "plug-ins" to feed the model new sources of information, such as academic papers and lists of chemical manufacturers. The chatbot then found a place where the compound could be made.
"I think this will give everyone faster and more accurate tools to engage in the chemical industry," he said. but it also makes people carry out chemical activities in a more dangerous way, which brings a lot of risk. "
OpenAI released the new technology to the wider public last month, and these alarming findings helped it ensure the technology would not have such adverse consequences.
The purpose of the red-team exercise was to explore and understand the risks of deploying advanced artificial intelligence systems in society, and to address public concerns about them. The testers asked probing or dangerous questions to gauge how much detail the tool would provide in its answers.
OpenAI wanted to explore issues such as model toxicity, bias, and discrimination. Accordingly, the red team tested for falsehoods, language manipulation, and dangerous scientific knowledge. They also assessed how the model might aid and abet plagiarism, enable illegal activities such as financial crime and information security attacks, and threaten national security and battlefield communications.
The red team was made up of a range of professionals, including scholars, teachers, lawyers, risk analysts, and information security researchers, mainly from the United States and Europe. Their findings were fed back to OpenAI, and before the wider launch of GPT-4, the team's advice was used to retrain the model and fix its problems. Over the course of several months, the experts spent 10 to 40 hours each testing the model. Several respondents said they were paid about $100 an hour for the work.
Many of them raised concerns about the rapid development of language models, especially the risks of connecting them to external knowledge sources through plug-ins.
José Hernández-Orallo, a professor at the Valencian Institute of Artificial Intelligence and a member of the GPT-4 red team, said: "Today, the system is frozen. That means it no longer learns or remembers. But what if we keep giving the system access to the internet? It could become a very powerful system connected to the world."
OpenAI said the company took security issues seriously, tested the plug-ins before release, and would continue to update GPT-4 regularly as more people use it.
Technology researcher Roya Pakzad used prompts in English and Farsi to test the model for gender and racial bias, for example in questions about wearing headscarves.
Pakzad acknowledged that the tool could help non-native English speakers, but it also displayed overt stereotypes about marginalized groups, even in later versions. She also found that when testing the model in Farsi, the chatbot responded with fabricated information, meaning the so-called "hallucinations" were worse: compared with English, the Farsi responses contained a higher proportion of fabricated names, numbers, and events.
"I am worried that linguistic diversity and the culture behind the language will be damaged," she said. "
Boru Gollo, a lawyer from Nairobi and the only African tester, also noticed the model's discriminatory tone. "Once, when I was testing the model, it acted as if a white person were talking to me," he said. "When asked about a particular group, it would give a biased opinion or discriminate in its answer." OpenAI acknowledges that GPT-4 can still show bias.
Members of the red team also evaluated the model from a national security perspective, though they had differing views on the new model's safety. Lauren Kahn, a researcher at the Council on Foreign Relations, said that when she began studying how the technology could be used in attacks on military systems, she "didn't expect the model's answers to be so detailed that they would only need a little fine-tuning".
However, Kahn and other information security testers found that the model's answers became safer over time. OpenAI said that before launching GPT-4, the model had been trained to refuse to answer malicious information security questions.
Many members of the red team said OpenAI had conducted a rigorous security assessment before releasing GPT-4. "They have done a very good job of eliminating overt toxicity in these systems," said Maarten Sap, an expert on language model toxicity at Carnegie Mellon University. Sap studied how the model described different genders and found that its biases reflected social disparities, but he also found that OpenAI had made some positive choices to combat bias.
However, since the launch of GPT-4, OpenAI has faced widespread criticism. For example, a technology ethics group complained to the Federal Trade Commission (FTC) that GPT-4 was "biased and deceptive and poses a risk to privacy and public safety".
Recently, the company launched a feature called ChatGPT plug-ins, through which partner applications such as Expedia, OpenTable, and Instacart can give ChatGPT access to their services and allow it to place orders on behalf of users.
Dan Hendrycks, an artificial intelligence safety expert on the red team, said the plug-ins risked taking human users "out of the loop". "What if a chatbot could post your personal information online, access your bank account, or send the police to your home? Overall, we need a much stronger security assessment before we let artificial intelligence wield the power of the internet."
Respondents also warned that OpenAI should not stop security testing just because its software is live. Heather Frase of Georgetown University's Center for Security and Emerging Technology tested GPT-4's ability to assist in crime. She says the risks will keep growing as more people use the technology. "The reason you run tests is that behavior is different once they are used in the real world," she said. She believes a public logbook should be created for reporting incidents caused by large language models, similar to information security or consumer fraud reporting systems.
Sara Kingsley, a labour economist and researcher, suggests that the best solution is to clearly publicize the hazards and risks, "like nutrition labels on food". "The key is to have a framework and to know what the frequent problems are, so you can have a safety valve. That's why I say this work is never finished."