Shulou (Shulou.com) 11/24 Report --
Early this morning, OpenAI released the latest version of its large language model, GPT-4. The company says GPT-4 outperforms the vast majority of humans in many professional tests.
Specifically, GPT-4 makes a leap forward on several fronts: it is smarter, with stronger problem-solving ability; it supports image input, with strong image-recognition capability, though this is currently limited to internal testing; it handles longer context, with the text input limit raised to 25,000 words; its answers are significantly more accurate; and it is safer, producing less harmful content.
For ordinary people, how can they grasp just how strong and smart GPT-4 is?
According to OpenAI, GPT-4 breezed through a battery of standard exams with high marks. For example, it scored around the top 10% of test takers on a simulated bar exam, in the top 7% on the SAT reading test, and in the top 11% on the SAT math test. By contrast, GPT-3.5, which itself stunned the world, scored around the bottom 10% on the same bar exam; GPT-4's strength can be imagined.
One netizen commented, "If it is really as the report says, that is terrifying. I feel its ability has already far surpassed mine." Others joked, "I'm just going to lie flat! I was born into exactly the right era!"
The most common reaction, though, is worry about future job security: "Which industry can't be replaced?" "Drop the illusions; the AI era is sweeping the world. Think about what you can do that GPT-4 can't."
"larger" and perhaps more expensive than previous versions, OpenAI says GPT-4 is "bigger" than previous versions, which means it has been trained on more data and has more weights in model files, which makes it more expensive to run.
At the same time, OpenAI did not disclose the number of parameters in the model.
OpenAI says the model was trained on Microsoft (MSFT.US) Azure; Microsoft has invested billions of dollars in the startup. OpenAI did not release details of the model's size, nor did it disclose the hardware used to train it, citing the "competitive landscape".
Many researchers in the field believe that recent advances in artificial intelligence have come from training ever-larger models on supercomputers built from thousands of specialized chips, a process that can cost tens of millions of dollars.
To keep improving GPT-4's performance, OpenAI has also built dedicated "infrastructure" for it.
Over the past two years, OpenAI rebuilt its entire deep learning stack and, together with Azure, co-designed a supercomputer for its workload from the ground up. A year ago, OpenAI ran this supercomputing system for the first time while training GPT-3.5, then found and fixed a number of bugs and improved its theoretical foundations. Thanks to these improvements, GPT-4's training run was more stable than any before it.
Greg Brockman, co-founder and president of OpenAI, said OpenAI expects the cutting-edge models of the future to come from companies investing billions of dollars in supercomputers, and that the most advanced tools carry real risks. OpenAI wants to keep certain parts of its work secret to give the startup "some breathing space to really focus on safety and do it well."
The limitations are obvious: GPT-4 is still not fully reliable. Powerful as it is, GPT-4 shares the limitations of earlier GPT models, the most important being that it is not completely reliable; in other words, it can still make things up.
OpenAI also warns that GPT-4 is not perfect and that in many scenarios it falls short of human ability. "GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts," the company said.
Broadly speaking, after many rounds of iteration and improvement, GPT-4 hallucinates significantly less than its predecessors. In OpenAI's internal adversarial factuality evaluations, GPT-4 scores 40% higher than the latest GPT-3.5 model.
Meanwhile, GPT-4's training data still only runs up to September 2021, which means it knows little about events after that point and does not learn from experience.
After GPT-4's release, OpenAI co-founder and CEO Sam Altman tweeted: "It is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it."
How can China's ChatGPT efforts catch up? What is most striking is that, according to OpenAI engineers in the demo video, GPT-4 finished training last August; the months since were spent on fine-tuning and, above all, on stripping out the generation of dangerous content.
This means OpenAI's internal technology is further ahead of the outside world than people think. More daunting still, OpenAI has opened up the API and released the related papers in one go. How are China's ChatGPT challengers supposed to catch up?
Wang Sheng, a partner at the Yingnuo Angel Fund, previously told the "State ℃" column: "Even if China's ChatGPT efforts keep chasing, viewed statically they might close the gap in two or three years, and I think that is already a very optimistic estimate."
But with the release of the even more powerful GPT-4, it is clear that OpenAI's technical capability is still improving, and at a pace faster than we can catch up with. As Wang Sheng judged in an earlier interview: "Unless this suddenly hits a bottleneck and the whole direction of technological development reaches a dead end, forcing the other side to stop, or unless we manage to dig out a new technological path, we may not have a chance to catch up."
At the same time, OpenAI has also reported fresh progress in deploying and applying the model.
The new model is available to paying ChatGPT subscribers and through the API, which lets programmers integrate the AI into their own applications; a rough sketch of such a call follows below. OpenAI charges about 3 cents for roughly 750 words of prompt text and about 6 cents for roughly 750 words of generated response.
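To make that concrete, here is a minimal sketch of what an integration might look like, using the openai Python package's ChatCompletion interface as it existed around GPT-4's launch, together with a back-of-the-envelope cost estimate. The prompt, the 750-words-per-1,000-tokens conversion, and the per-token rates derived from the quoted prices are illustrative assumptions, not details confirmed by this article.

```python
# Minimal sketch: call GPT-4 via the openai Python package (launch-era
# ChatCompletion interface) and estimate cost from the quoted pricing.
# Assumption: roughly 750 words ~= 1,000 tokens, so ~$0.03 per 1K prompt
# tokens and ~$0.06 per 1K completion tokens. Figures are illustrative.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize GPT-4's headline improvements."}],
)

usage = response["usage"]
cost = (usage["prompt_tokens"] / 1000) * 0.03 + (usage["completion_tokens"] / 1000) * 0.06

print(response["choices"][0]["message"]["content"])
print(f"Approximate cost of this call: ${cost:.4f}")
```

Under these assumptions, a single short question and answer costs a few cents at most, which is why the per-request pricing mainly matters at high volume.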
OpenAI also said that Morgan Stanley is using GPT-4 to organize data, while the electronic payments company Stripe is testing whether GPT-4 can help fight fraud. Other customers include the language-learning company Duolingo, Khan Academy, and the government of Iceland. Microsoft, OpenAI's partner, said Tuesday that the new version of its Bing search engine runs on GPT-4.
GPT-4's achievements are exciting, but for China's ChatGPT contenders still standing at the starting line there is a great deal left to explore and research, and they must be feeling enormous pressure. Still, the bigger the storm, the pricier the fish: for the companies rushing into this new field, there will also be huge opportunities.
The strength of GPT-4 is a reminder that, going forward, the only limiting factor is your imagination!