

Nature | GPT-4 goes viral, and scientists' worries flood in

2025-01-15 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 11/24 Report --

Although GPT-4 was released only recently, scientists' concerns about the technology are already mounting.

The emergence of GPT-4 was both exciting and frustrating.

Despite GPT-4's astounding creativity and reasoning abilities, scientists have expressed concern about the safety of the technology.

Contrary to its founding mission, OpenAI has not open-sourced GPT-4 or published the model's training methods and data, so how it actually works remains unknown.

The scientific community is frustrated.

Sasha Luccioni, a scientist specializing in environmental research at the open-source AI community Hugging Face, said, "OpenAI can build on their research, but for the community as a whole, all these closed-source models are like a dead end scientifically."

Andrew White, a chemical engineer at the University of Rochester and a member of the red team, had privileged access to GPT-4.

OpenAI pays red-team members to probe the platform and try to get it to do something bad. White has had access to GPT-4 for the past six months.

He asked GPT-4 what chemical steps were needed to make a compound, asked it to predict the yield of the reaction, and asked it to choose a catalyst.

"Compared to previous iterations, GPT-4 doesn't seem to be any different, and I don't think so. But then it was really amazing, it looked so realistic, it would conjure an atom here and skip a step there."

But when he continued testing and gave GPT-4 access to scientific papers, things changed dramatically.

"We suddenly realized that these models might not be that good. But when you start connecting them to tools like retrospective synthesis planners or calculators, suddenly new capabilities appear."

As these capabilities emerged, people began to worry. Could GPT-4, for example, be used to make dangerous chemicals?

White notes that with input from red-team members like himself, OpenAI's engineers can feed findings back into their models to discourage GPT-4 from creating dangerous, illegal, or harmful content.

False facts

Outputting false information is another problem.

Luccioni says models like GPT-4 have not solved the problem of hallucination, that is, generating plausible-sounding nonsense.

"You can't rely on these kinds of models because there are too many illusions, and although OpenAI says it has improved security in GPT-4, this is still a problem in the latest version."

In Luccioni's view, OpenAI's safety assurances are insufficient, given that the training data is unavailable.

"You don't know what data is. So you can't improve it. It is impossible to do science with such a model."

The mystery of how GPT-4 was trained has also puzzled psychologist Claudi Bockting: "It's very difficult to hold humans accountable for something you can't supervise."

Luccioni also argued that GPT-4 will inherit bias from its training data, and that without access to the code behind GPT-4, it is impossible to see where the bias originates or to remedy it.

Ethical discussions

Scientists have long had reservations about GPT.

When ChatGPT launched, scientists objected to GPT appearing in the author lists of papers.

Publishers also agree that artificial intelligence such as ChatGPT does not meet the criteria for research authorship, because it cannot take responsibility for the content and integrity of scientific papers. AI contributions to writing a paper can, however, be acknowledged outside the author list.

There are also concerns that these AI systems are increasingly concentrated in the hands of large tech companies, when the technology should be tested and validated by scientists.

There is an urgent need for a set of guidelines to govern the use and development of AI and tools such as GPT-4.

Despite such concerns, White believes GPT-4 and its future iterations will shake up science: "I think it's going to be a huge infrastructure change in science, just like the Internet. We are beginning to realize that we can connect papers, data, programs, libraries, computational work and even robotic experiments. It won't replace scientists, but it can help with some tasks."

For now, however, legislation around AI seems unable to keep pace with the technology's development.

On April 11, the University of Amsterdam will host an invitational summit to discuss these issues with representatives from organizations such as UNESCO's Scientific Ethics Committee, the Organization for Economic Cooperation and Development and the World Economic Forum.

Key topics include: insisting on human verification of LLM outputs; establishing rules of mutual accountability within the scientific community, aimed at transparency, integrity, and fairness; investing in reliable and transparent large language models owned by independent non-profit organizations; embracing the advantages of AI while weighing its benefits against the loss of autonomy; inviting the scientific community to discuss GPT with interested parties, from publishers to ethicists; and more.

References:

https://www.nature.com/articles/d41586-023-00816-5

This article comes from Weixin Official Accounts: Xinzhiyuan (ID: AI_era)


