

OpenAI CEO: Large language model size is approaching its limit — bigger is not always better

2025-01-15 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 11/24 report --

Thanks to CTOnews.com netizen Xiao Zhan for the tip! CTOnews.com, April 16 — Sam Altman, co-founder and CEO of OpenAI, discussed trends and safety issues around large language models (LLMs) in an interview at MIT's Imagination in Action event.

Altman believes we are approaching the limit of LLM size, and that a larger model is not necessarily a better one — chasing scale may just be chasing a number. Model size, he argues, is no longer the key indicator of quality, and there will be more ways to improve capability and usefulness in the future. He compared the race for LLM scale to past chip-speed races, noting that today we care more about whether a chip can get the job done than about its raw clock speed. OpenAI's goal, he says, is to give the world the most capable, useful, and safe models, not to flatter itself with parameter counts.

CTOnews.com noted that Altman also responded to the open letter calling on OpenAI to pause development of more powerful AI for six months. He agreed that safety standards must rise as capabilities grow, but felt the letter lacked technical detail and accuracy. He said OpenAI had spent more than six months studying safety issues before releasing GPT-4 and had invited external auditors and "red teams" to test the model. "I also agree that as capabilities get stronger and stronger, the safety bar has to go up. But I think the letter is missing most of the technical nuance about where we need the pause. An earlier version of the letter claimed we were training GPT-5. We are not, and won't be for some time, so in that sense it was sort of silly. But we are doing other things on top of GPT-4 that I think raise all sorts of safety issues that need to be addressed, and the letter doesn't mention them at all. So I think moving with caution, and with increasing rigor on safety issues, is really important. I don't think the proposal is the ultimate way to address this," he said.

Altman said OpenAI has been working in this field for seven years and has put in an effort that most people are unwilling to make. He said he is willing to discuss safety issues and the model's limitations openly because it is the right thing to do. He admitted that he and other company representatives sometimes say "stupid" things that turn out to be wrong, but he is willing to take that risk because the technology demands dialogue.
