2025-02-28 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 11/24 Report --
On Tuesday, March 15, local time, the artificial intelligence research laboratory OpenAI released GPT-4, the latest version of its large language model. The long-awaited tool can not only generate text automatically but also describe and analyze the content of images; it advances the state of the art in the current wave of artificial intelligence while making the ethical boundaries of the technology's development increasingly impossible to ignore.
ChatGPT, the chatbot OpenAI launched earlier, drew enormous attention with the fluent text it generates automatically, and unsettled the public with its ability to fabricate essays and fictional scripts. Notably, ChatGPT still runs on the previous-generation technology, GPT-3, which is more than a year old.
By contrast, the more advanced GPT-4 model can not only generate text but also describe images in response to simple user prompts. For example, when shown a picture of a boxing glove hanging over a wooden seesaw with a ball on one end and asked what would happen if the glove fell, GPT-4 replied that the glove would hit the seesaw and send the ball flying.
Early testers said GPT-4 was remarkably advanced in its ability to reason and to learn new things. Microsoft also revealed on Tuesday that the Bing AI chatbot it released last month had been running on GPT-4 all along.
The technology will further transform how people work and live, the developers said on Tuesday. At the same time, it has the public worried about how to compete with such a formidably capable machine, and whether people can still trust what they see online.
OpenAI executives say GPT-4's "multimodal" abilities, spanning text and images, give it "advanced reasoning capabilities" far beyond ChatGPT's. Fearing the feature could be abused, the company has delayed the release of GPT-4's image-description capability, so subscribers to the GPT-4-powered ChatGPT Plus service can use text input only.
Sandhini Agarwal, a policy researcher at OpenAI, said the company was holding the feature back in order to better understand the potential risks. Niko Felix, an OpenAI spokesman, said the company was planning to "implement safeguards to prevent personal information in images from being identified."
OpenAI also acknowledges that GPT-4 retains habitual flaws: "hallucinating" nonsense, perpetuating social biases, and offering bad advice.
Microsoft has invested billions of dollars in OpenAI, betting that artificial intelligence will become a killer feature of its office software, search engine, and other online products. The company promotes the technology as a super-efficient partner that can handle repetitive tasks and free people to focus on creative work, such as helping software developers do the work of whole teams.
But some AI watchers warn that these may only be surface benefits, and that the technology could enable business models and risks that no one can yet predict.
The rapid development of artificial intelligence, supercharged by the explosive success of ChatGPT, has companies across the industry racing fiercely for dominance in the field and rushing out new software.
The frenzy has drawn plenty of criticism. Many argue that these companies' haste to roll out untested, unregulated, and unpredictable technology could deceive users, undermine artists' work, and cause real-world harm.
Because they are designed to generate convincing language, AI language models frequently give wrong answers. And because the models are trained on text and images from the Internet, they also learn to imitate human biases.
OpenAI researchers wrote in a technical report that as GPT-4 and similar AI systems are widely adopted, they may "reinforce stereotypes."
Irene Solaiman, a former OpenAI researcher who is now policy director at the open-source AI company Hugging Face, argues that the pace of progress in the technology demands a timely societal response to its potential problems.
"As members of society, we have reached broad consensus on some harms that a model should not cause," she said, "but many harms are nuanced and mainly affect minority groups." Harmful biases, she added, "cannot be a secondary consideration" behind AI performance.
Nor is the new GPT-4 entirely stable. When one user congratulated the AI tool on its upgrade to GPT-4, it replied that it was "still the GPT-3 model." Corrected, it apologized: "As GPT-4, I thank you for your congratulations!" When the user then joked that it was in fact still the GPT-3 model, the AI apologized again and said it was "indeed a GPT-3 model, not GPT-4."
OpenAI spokesman Felix said the company's research team is investigating what went wrong.
On Tuesday, AI researchers criticized OpenAI for disclosing too little information. The company released no assessment of GPT-4's biases, and eager engineers were disappointed to find scant detail about the GPT-4 model, its data sets, or its training methods. OpenAI said in its technical report that it would withhold these details given "the competitive landscape and the safety implications" it faces.
GPT-4 enters a fiercely competitive field of multimodal AI. DeepMind, the AI company owned by Google's parent Alphabet, last year released a generalist model called Gato that can describe images and play video games. This month Google unveiled PaLM-E, a multimodal system that folds AI vision and language analysis into a one-armed robot: asked to fetch some chips, for instance, it can understand the request, turn to a drawer, and pick out the right item.
Such systems have inspired boundless optimism about the technology's potential, with some observers seeing in them something approaching human-level intelligence. Yet, as critics and AI researchers argue, these systems merely surface the established patterns and inherent associations in their training data, without any clear understanding of what those patterns mean.
GPT-4 is the fourth in the line of "Generative Pre-trained Transformer" models OpenAI has released since 2018, built on the breakthrough "Transformer" neural-network architecture developed in 2017. These systems, "pre-trained" by analyzing text and images from the Internet, have driven rapid progress in how AI systems parse human language and images.
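The "pre-training" idea here, learning language patterns by statistically analyzing raw text, can be illustrated at toy scale. The sketch below is a deliberately simplified bigram counter of our own devising, an illustrative stand-in for (and in no way a description of) the Transformer architecture GPT models actually use:

```python
from collections import Counter, defaultdict

def pretrain(corpus):
    """'Pre-train' a toy model: count which word follows which in raw text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
]
model = pretrain(corpus)
print(predict_next(model, "the"))  # prints "cat" (seen twice, vs. once each for "mat" and "mouse")
```

Real models replace these raw counts with billions of learned neural-network parameters, but the training signal is the same: predict what comes next.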
Over the years, OpenAI has also fundamentally shifted its stance on the social risks of releasing AI tools to the public. In 2019, the company declined to release GPT-2 publicly, saying that although the model performed very well, it worried about "malicious applications" that might exploit it.
But last November, OpenAI publicly launched ChatGPT, a fine-tuned version of GPT-3. Within days of launch it had passed one million users.
Public experiments with ChatGPT and the Bing chatbot show that, without human intervention, the technology is far from perfect. After a string of bizarre conversations and wrong answers, Microsoft executives conceded that AI chatbots still could not be trusted to give correct answers, but said they were developing "confidence metrics" to address the problem.
GPT-4 is expected to improve on some shortcomings, and artificial intelligence advocates such as technology blogger Robert Scoble believe that "GPT-4 is better than anyone expected."
Sam Altman, OpenAI's chief executive, has tried to temper expectations for GPT-4. He said in January that speculation about GPT-4's capabilities had reached impossible heights, that the "rumors about GPT-4 are absurd," and that people "will be disappointed."
But Altman is also promoting OpenAI's vision. In a blog post last month, he said the company was planning how to ensure that "all of humanity" benefits from artificial general intelligence (AGI), the industry term for the still-speculative idea of a super-capable AI as smart as, or smarter than, humans.