
Chief scientist of OpenAI: ChatGPT may already be conscious, and humans will merge with AI in the future

2025-04-04 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

[Xin Zhiyuan Guide] OpenAI's chief scientist made a number of striking remarks in a recent interview. In his view, the neural network behind ChatGPT may have produced consciousness, and in the future humans will merge with artificial intelligence into a new form. The focus of his work now is not creating the artificial general intelligence he believes is bound to emerge, but solving the problem of how to get AI to treat humans well.

Last night, the topic "ChatGPT may already be conscious" went viral on Weibo.

Ilya Sutskever, co-founder and chief scientist of OpenAI, said in an interview that his priority now is not building the next GPT or DALL-E, but studying how to keep super AI from getting out of control.

He believes that ChatGPT may already be conscious, and that super AI will become a potential risk in the future.

Article address: https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai/

He also believes that in the future, humans will be integrated with machines.

The focus of his future work at OpenAI is no longer developing ever more powerful AI systems, but leading a "superalignment" team to protect a world in which humans live alongside AI.

To some extent, this is a return to OpenAI's original mission: ensuring that AI serves humanity.

Sutskever, the hero behind OpenAI, gave the interview in a humble company office building on an unremarkable street in San Francisco's Mission district.

He talked at length about where this world-changing technology is headed, and why building OpenAI's next generative model is no longer the focus of his work.

Sutskever says his focus is no longer on building the next generation of GPT or DALL-E, but on figuring out how to keep superintelligent AI (not today's AI, but a hypothetical future technology that fully surpasses human intelligence) from getting out of control.

In his view, one day many humans will choose to integrate with machines.

Much of what Sutskever says sounds crazy, but it sounds far less crazy than it would have two years ago.

As he sees it, ChatGPT has changed many people's expectations of technology, turning things ordinary people thought "will never happen" into "will happen sooner than you think."

Before making predictions about artificial general intelligence (by which he means machines as smart as humans), he said: "What matters is that this trajectory is now visible," and "Humanity is bound to have AGI at some point. Maybe OpenAI will build it. Maybe another company will."

Sam Altman, OpenAI's chief executive, spent most of the summer traveling the world, enthusiastically meeting politicians and giving speeches to packed auditoriums.

But Sutskever is not a public figure, nor does he give too many interviews.

He spoke calmly and methodically. When considering what he wants to say and how to say it, he pauses for long stretches, turning the question over as if it were a riddle.

He doesn't seem interested in talking about himself. "I lead a very simple life," he said. "I go to work, then I go home. I don't do much else. Some people take part in a lot of social activities, but I don't."

But when the conversation turns to artificial intelligence, and the epochal risks and rewards he sees behind the technology, he opens up: "It will be monumental and earth-shaking. Its arrival will change everything."

How did a young AI researcher change the world?

Sutskever was a student of Turing Award winner Geoffrey Hinton. When he joined Hinton's lab in the early 2000s, most AI researchers thought neural networks were a dead end.

But Hinton thought otherwise. He had already begun training miniature models that could generate short strings of text, one character at a time.

"That was the beginning of generative artificial intelligence," Sutskever said. "It was really cool; it just didn't perform well."

Like Hinton, he sees the potential of neural networks and deep learning.

In 2012, Sutskever, Hinton, and another of Hinton's graduate students, Alex Krizhevsky, built a neural network called AlexNet and trained it to recognize objects in photos far better than any other software at the time. It became the Big Bang moment of deep learning.

Nvidia CEO Jensen Huang said that when the Toronto team was working on AlexNet, Nvidia supplied them with some GPUs.

But they wanted the latest chip, the GTX 580, which was sold out and almost impossible to buy.

Huang said that Sutskever drove from Toronto to New York to buy GTX 580s.

"I don't know how he did it; I'm pretty sure you were only allowed to buy one. We had a very strict sales policy of one GPU per gamer, but he clearly filled a whole trunk with them.

That trunk full of GTX 580s changed the world."

After AlexNet's success, Google came knocking. It acquired Hinton's company, DNNresearch, and hired Sutskever.

At Google, Sutskever demonstrated that deep learning's pattern-recognition abilities could be applied to sequences of data, such as words and sentences, as well as to images.

"Sutskever has always been interested in language," said Jeff Dean, a former colleague and now Google's chief scientist. "We've had many discussions over the years. Sutskever has strong intuitions about where technology is headed."

But Sutskever didn't stay at Google for long. In 2015, he was recruited as a co-founder of OpenAI.

Backed by $1 billion, the new company set its sights on developing AGI from the start, though at the time few people really believed it could be achieved soon.

But with Sutskever on board, the goal seemed more plausible.

Dalton Caldwell, managing director of investments at Y Combinator, says Sutskever has long had a formidable reputation.

"I remember Sam Altman describing Sutskever as one of the most respected researchers in the world," Caldwell said. He believed Sutskever could attract a lot of top AI talent, and even mentioned that Yoshua Bengio thought it unlikely a better candidate than Sutskever could be found to be OpenAI's chief scientist.

OpenAI's first big language model came out in 2016. Then came GPT-2 and GPT-3, then DALL-E.

No one else had built anything so good. With each new release, OpenAI raised people's sense of what AI could do.

Ilya: at first, we thought few people would use ChatGPT

In November 2022, OpenAI released ChatGPT, repackaging existing technology, and promptly upended the entire industry.

But OpenAI didn't realize it at the time.

Sutskever says expectations inside the company could not have been lower. "I'll admit, to my slight embarrassment (I don't know if I should say this, but whatever, it's true), when we made ChatGPT, I didn't think it was any good.

When you asked it a factual question, it gave you a wrong answer. I thought nobody would use it. People would say: why did you make this thing? It's so boring!"

Sutskever says the biggest draw of ChatGPT was its convenience. The large language model behind it had already existed for months.

But this was the first time it had been wrapped in an easy-to-use interface and offered to everyone for free, making billions of people aware, for the first time, of OpenAI and what it was building.

"The first experience was truly fascinating," Sutskever said. "The first time you used it, I think it was almost a spiritual experience. You'd say: oh my God, this computer seems to understand what I'm saying."

OpenAI has amassed 100 million users in less than two months, many of whom are dazzled by the amazing new toy.

Aaron Levie, CEO of the storage company Box, captured the industry mood in a tweet a week after ChatGPT's launch:

"ChatGPT is one of those rare moments in technology where you catch a glimmer of how everything is about to change."

"AGI is no longer a dirty word in machine learning," he said. "That's a big change. People's attitude used to be: AI doesn't work, every step forward is a struggle, and the road will be long and winding.

When people hyped AGI, AI researchers would say there were too many unsolved problems. But with ChatGPT, it started to feel different."

And this shift began only about a year ago. "It all happened because of ChatGPT," he said. "ChatGPT has allowed machine learning researchers to dream."

OpenAI's scientists have been evangelists from the beginning, inspiring those dreams through blog posts and lecture tours.

"Now people are finally talking about how far artificial intelligence will go; people talk about AGI, or superintelligence. And it's not just researchers. Governments are talking about it," Sutskever said. "It's crazy."

Neural networks can generate consciousness

Sutskever insists that all this talk about a technology that does not yet exist (and may never exist) is a good thing, because it gets more people thinking about a future he already takes for granted.

"You can do amazing, incredible things with AGI: automate health care, make it a thousand times cheaper and a thousand times better, cure so many diseases, actually solve global warming," he said. "But there are also many people who worry: my God, can we manage such a powerful technology successfully?"

Framed this way, AGI sounds more like a wish-granting genie than something that will happen in the real world; you can pin any wish you like on it.

When Sutskever talks about AGI, what exactly does he mean? "AGI is not a scientific term," he said. "It's a useful threshold, a reference point."

In his view, "if an AI can do the things a human can do, then it is AGI."

Although many researchers do not believe that there will be AGI, this vision has always inspired Sutskever.

He compares neural networks to the way the brain works. Both take in data, aggregate signals from that data, and then, based on simple processes (mathematics in neural networks, chemistry and bioelectricity in the brain), decide whether to propagate those signals onward.

That metaphor glosses over a lot of detail, but it captures the core idea.
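The analogy can be made concrete with a single artificial neuron: aggregate weighted input signals, then decide whether to "fire." This is a minimal illustrative sketch, not anything from the interview; the weights, inputs, and threshold below are made-up values.

```python
def neuron(inputs, weights, bias, threshold=0.0):
    """Aggregate incoming signals, then decide whether to transmit (fire)."""
    # "Mathematics in neural networks": a weighted sum plays the role
    # of chemical/bioelectrical signal aggregation in a biological neuron.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Simple threshold decision: fire (1) or stay silent (0).
    return 1 if activation > threshold else 0

# Example with two made-up input signals:
print(neuron([1.0, 0.5], [0.6, -0.4], bias=0.1))  # weighted sum 0.5 -> fires: 1
```

A real network stacks millions of such units and learns the weights from data, but the receive-aggregate-decide loop is the shared core of the metaphor.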

"If you believe that, if you allow yourself to believe that, then a lot of interesting implications follow," Sutskever said.

"If you have a very large artificial neural network, it should be able to do a lot of things. If the human brain can do something, then a large artificial neural network can do something similar."

"If you take this seriously enough, everything falls into place," he said. "Most of my work can be explained by that one sentence."

In February 2022, Sutskever tweeted that "it may be that today's large neural networks are slightly conscious."

(Murray Shanahan, principal scientist at Google DeepMind, professor at Imperial College London, and scientific adviser on the film Ex Machina, replied: "In the same sense, it may be that a large field of wheat is slightly lasagna.")

Solving a problem that does not yet exist: artificial superintelligence

While others struggle to figure out how to make machines match human intelligence, Sutskever is preparing for machines that can surpass it.

He calls this artificial superintelligence: "They will see things more deeply. They'll see things we can't see."

What does he mean by an intelligence smarter than humans?

"We saw a shadow of superintelligence in AlphaGo," he said.

In 2016, DeepMind's Go-playing AI beat Lee Sedol, one of the world's best Go players, 4-1.

"It figured out how to play Go differently from how humans do," Sutskever said. "It came up with entirely new ideas."

Sutskever pointed to AlphaGo's famous move 37. In the second game against Lee Sedol, the AI made a move that baffled commentators.

They thought AlphaGo had blundered. In fact, it had played a winning move never before seen in the history of Go.

"imagine what happens when this level of insight occurs in all areas of society," Sutskever said.

This idea led Sutskever to make the biggest shift in his career.

Together with his OpenAI colleague Jan Leike, he formed a team focused on what they call "superalignment."

Alignment means getting an AI model to do what you want it to do, nothing more.

"Superalignment" is a term OpenAI coined itself; it means getting a superintelligence to do what humans want it to do.

OpenAI's goal is to come up with a set of fail-safe procedures for building and controlling this future technology. The company says it will devote a fifth of its computing resources to the problem, which it hopes to solve within four years.

"Existing alignment methods will not work for models smarter than humans, because they fundamentally assume that humans can reliably evaluate what AI systems are doing," Jan Leike said. "As AI systems become more powerful, they will take on harder tasks."

This will make it more difficult for humans to evaluate them.

"In building the superalignment team with Ilya, we are tackling these future alignment challenges now," he said.

Dean, Google's chief scientist, said: "It is important to focus not only on the potential opportunities of large language models, but also on their risks and shortcomings."

For Sutskever, "superalignment" is something that has to be done sooner or later. "It's an unsolved problem," he said.

He believes core machine learning researchers like himself are the ones to solve it. "I'm doing this for my own sake," he said. "It's obviously important that no superintelligence anyone builds goes rogue."

The work of superalignment has only just begun. Sutskever says it will require broad changes across research institutions. But he has an ideal in mind for the safeguards he wants to design: a machine that treats people the way parents treat their children.

"In my opinion, this is the gold standard," he said. "People genuinely care about their children; no one would deny that."

"Once you overcome the challenge of rogue AI, then what? In a world with more intelligent AI, is there still room for human beings?"

"One possibility (crazy by today's standards, but not so crazy by future standards) is that many people will choose to become part artificial intelligence," Sutskever said. "At first, only the boldest and most adventurous people will try it. Maybe others will follow. Or maybe not."

Reference:

https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai/

This article comes from the WeChat official account Xin Zhiyuan (ID: AI_era).
