In a recent exclusive interview with The Wall Street Journal (WSJ), OpenAI CEO Sam Altman and CTO Mira Murati discussed AGI, the future development of GPT, and the impact of AI on humanity.
"Why is AGI the ultimate goal of OpenAI? What is AGI? "
"what is the purpose of ChatGPT and other language models? "
"what will happen to the relationship between humans and artificial intelligence in the future? "
At The Wall Street Journal's 2023 technology conference, OpenAI CEO Sam Altman and CTO Mira Murati discussed artificial general intelligence (AGI), the future development of GPT models, and the impact of artificial intelligence on humanity.
Nine years ago, Sam Altman told The Wall Street Journal that job losses caused by artificial intelligence were something people would not need to worry about for a long time.
Less than a decade later, OpenAI, the company Altman co-founded, released an AI chatbot called ChatGPT.
It can write emails, business plans, and even code, capabilities that would have been unthinkable nine years ago.
Discussing the impact artificial intelligence is now having on human society, Altman remains optimistic, but he is more cautious than he was nine years ago.
The ultimate goal of OpenAI: AGI
Since its birth, the concept of AGI has been invested with boundless imagination.
Altman is no exception: he believes AGI will be the most remarkable creation in human history.
With such a tool, humans will be able to solve all kinds of problems facing the world today and create unimaginable new things for themselves, for one another, and for the world.
By then, humans will have far richer means of creative self-expression, and Altman is certain these changes will bring great benefits to humanity.
"Nine years from now, when the Wall Street Journal invites me, you may ask: why did we think that humans didn't want AGI to come? "
So when will AGI appear? How can people tell the arrival of AGI?
Altman defines AGI as something we don't have yet. Ten years ago, people might have thought that GPT-4 or GPT-5 was AGI.
But now, GPT-4 is only seen as a good "little chat robot".
People have higher and higher requirements for the benchmark threshold of AGI, which requires more and more efforts on artificial intelligence.
"humans are now close enough to the threshold of AGI that the ability to improve AI has become less important," Altman said. The problem we are facing now is how to define AGI. "
GPT-5: solving "hallucinations" and data copyright
OpenAI has released several versions of GPT since its founding, each more powerful than the last.
In March of this year, OpenAI released its latest model, GPT-4. But expectations for the company's next model remain high: is GPT-5 already in development?
Asked this question, OpenAI CTO Mira Murati replied, "We're not there yet."
She also said that OpenAI has been working on what comes next, such as reducing model hallucinations: future releases, including GPT-5, will address the hallucinations that plague today's models.
Murati said that although GPT-4 has made great progress on hallucination, it is still a long way from solving the problem completely.
But OpenAI believes it is on the right track: reinforcement learning from human feedback (RLHF), which pushes the model toward genuinely reliable output.
OpenAI also combines several techniques to reduce hallucination, such as giving the model the ability to check and search for information and supplying it with more factual data, so that users receive more factual output.
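As an illustration only, the sketch below shows the general "check and search" idea described above: retrieve supporting passages first, then answer only from that material and refuse when nothing is found. The functions `search_documents` and `generate_answer` are hypothetical stand-ins for a real search index and a real model API, not OpenAI's actual pipeline.

```python
# Minimal sketch of retrieval-grounded answering (illustrative only).
# `search_documents` and `generate_answer` are hypothetical placeholders.

def search_documents(question: str, top_k: int = 3) -> list[str]:
    """Return up to top_k passages whose tags match the question (stubbed corpus)."""
    corpus = {
        "GPT-4 was released by OpenAI in March 2023.": ["gpt-4", "release"],
        "RLHF fine-tunes a model using human preference feedback.": ["rlhf"],
    }
    hits = [text for text, tags in corpus.items()
            if any(tag in question.lower() for tag in tags)]
    return hits[:top_k]

def generate_answer(question: str, context: list[str]) -> str:
    """Stub for a model call: answer only from the retrieved context."""
    if not context:
        return "I don't know."  # refuse rather than hallucinate
    return f"Based on the sources provided: {context[0]}"

if __name__ == "__main__":
    question = "When was GPT-4 released?"
    passages = search_documents(question)
    print(generate_answer(question, passages))
```

The design choice being illustrated is that the model is asked to decline when no supporting passage is found, trading coverage for factual grounding.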
For OpenAI, however, copyright has long been contentious.
Both the data used to train the models and the content the models generate frequently raise questions of copyright protection.
Several publishers and writers have protested what they see as infringement by OpenAI.
Altman approaches data use and data ownership from a different angle.
In the future, OpenAI's models will be infrastructure that everyone can use, which means the way people think about data ownership and economic flows will change.
For now, OpenAI is trying to build partnerships with various data rights holders, but as models become more intelligent and capable, less and less data will be needed to train them.
Current models still need as much human-produced data as possible during training. However, Altman says this is not the long-term path, because what will really matter in the future is valuable data.
As OpenAI's technology advances, the discussion about data and ownership will shift.
The future of humans and AI
First comes the relationship between humans and AI.
On September 25, OpenAI added more personalization features to GPT-4. Now, ChatGPT can see, hear, and speak.
The new voice capability of GPT-4 is remarkably user-friendly, and conversation with it feels very natural.
It is easy to foresee that ubiquitous AI is about to become part of everyday human life.
In the future, interaction with artificial intelligence will be unavoidable, which raises a question: how should humans handle their relationship with it?
OpenAI and other companies that train these models can, to some extent, shape the artificial intelligence that forms relationships with people.
This could be an unsettling future, as these AI systems may well become friends or even partners to humans.
But Altman made it clear that he does not want people to build intimate relationships with artificial intelligence that go beyond human friendship: AI is different from humans, and while these systems may be full of personality, that has nothing to do with human nature.
Communicating with artificial intelligence should therefore be different from communicating with a human.
"The reason we named the model ChatGPT rather than giving it a person's name is to make clear that users are communicating with an artificial intelligence, not a real human," Altman emphasized.
Just as people maintain many different kinds of relationships, they will also form distinctive relationships with artificial intelligence. Eventually, people will recognize that AI is different from humans, but the relationships they have built with it will not be broken.
On the other hand, the rapid development of AI also raises worries about uncontrollable risks: using these systems to commit crimes, and the impact on the job market.
For example, ordering an AI to break into computer systems or to design chemical and biological weapons.
This is not a distant future; since the rise of generative AI, the use of AI for fraud and cyberattacks has become common.
But Altman believes that in the course of technological development, such negative effects are unavoidable.
What needs to be solved is the risk the technology brings, not to abandon development altogether; the latter, he argues, would itself be a moral failure for humanity.
Throughout human history, almost every technological revolution has had a profound impact on the job market, either completely upending it or wiping out half of its jobs.
But in fact, when old jobs disappear, new ones are born. This is what human progress looks like; the real problem is how quickly society can adapt to change.
Within two, or at most three, generations, humans can adapt to almost any degree of change in the job market.
Some people may not want to change jobs or may dislike doing so, but the nature of work itself will change.
To a primitive hunter-gatherer tribe, typing in front of a computer would not look like real work.
"Work is just humans trying to entertain themselves with some stupid status games," said Altman.
The real challenge is managing the transition as the job market is remade.
Society needs to act to ensure that people are not harmed in this transition. Providing a universal basic income is not enough; people need the agency and influence to take part in building the future.
That is why OpenAI is so determined to promote ChatGPT.
Although not everyone can use artificial intelligence technology yet, as more and more people take part, they will have the chance to think about and chart the direction of future development.
That is the most important thing to pay attention to.
Reference:
https://www.youtube.com/watch?v=byYlC2cagLw
This article comes from the WeChat official account: Xin Zhiyuan (ID: AI_era).