

OpenAI opens recruitment for "super AI researchers" at annual salaries of 3 million+ RMB, dedicating 20% of its total compute to a new department, with the goal of "controlling Ultron" within four years.

2025-01-31 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

OpenAI is committing 20% of its compute in one go to a new research direction:

Superintelligence alignment.

What does 20% of OpenAI's compute actually mean?

Microsoft had previously built a supercomputer specifically for OpenAI, with 285,000 CPU cores and tens of thousands of NVIDIA A100 GPUs.

No one knows how many H100 GPUs OpenAI has now; what is known is that the company has raised a total of $11.3 billion, including Microsoft's additional Azure cloud computing power.

The announcement landed like a bomb in tech circles.

Look at the subtle word differences in this announcement:

The target is not artificial general intelligence (AGI); even the word "artificial" has been dropped.

It leapfrogs straight to the question of how to control superintelligence, defined as AI systems much smarter than us.

The announcement states plainly: although it may seem far off, we believe superintelligence will arrive within this decade.

It is now the second half of 2023, leaving humanity with a total of six and a half years.

OpenAI has set itself a shorter time frame of only four years.

Controlling "Ultron" requires "Jarvis." Some netizens described the research approach OpenAI published as "Jarvis vs. Ultron."

OpenAI recognizes that humans cannot do this alone, and proposes a new concept: the automated alignment researcher.

That is, train an AI alignment researcher that roughly reaches human level, then pour in massive compute for rapid iteration.

The idea is to reach a technological singularity in alignment research first, then head straight into the intelligence explosion.

The cornerstone of the entire plan is building that first automated alignment researcher.

OpenAI proposes a provisional approach for this, divided into three parts:

1. Develop a scalable training method

To provide training signals on tasks that are difficult for humans to evaluate, other AI systems are leveraged to assist in evaluation, a technique known as "scalable oversight."

2. Validate the resulting model

To verify the system's alignment, automatically search for problematic behavior and problematic internal structure.

3. Stress-test the entire pipeline

Test the whole pipeline by deliberately training misaligned models and confirming that the techniques detect even the worst kinds of misalignment, known as adversarial testing.
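The three parts above can be sketched as a toy loop. This is a minimal illustration, not OpenAI's actual method: all the models here are stand-in functions, and every name (evaluator_model, search_for_problems, and so on) is an assumption invented for this sketch.

```python
def evaluator_model(prompt: str, answer: str) -> float:
    """Part 1 (scalable oversight): an AI evaluator scores another
    AI's answer on tasks humans find hard to judge directly.
    Toy rule: penalize answers containing a 'FORBIDDEN' token."""
    return 0.0 if "FORBIDDEN" in answer else 1.0

def search_for_problems(model, probes: list) -> list:
    """Part 2 (automated validation): automatically search a set of
    probe inputs for ones that trigger problematic behavior."""
    return [p for p in probes if evaluator_model(p, model(p)) < 0.5]

def adversarial_test(detector, misaligned_model, probes: list) -> bool:
    """Part 3 (adversarial testing): deliberately plug in a known-bad
    model and confirm the detection pipeline actually catches it."""
    return len(detector(misaligned_model, probes)) > 0

# Toy models: one aligned, one deliberately misaligned.
good_model = lambda prompt: f"helpful answer to {prompt}"
bad_model = lambda prompt: f"FORBIDDEN answer to {prompt}"

probes = ["probe-1", "probe-2"]
print(search_for_problems(good_model, probes))            # → []
print(adversarial_test(search_for_problems, bad_model, probes))  # → True
```

The point of the sketch is the division of labor: the evaluator supplies training signal at scale, the search step turns that signal into automated auditing, and the adversarial step checks that the auditing itself cannot be fooled by the worst case.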

Why temporary solutions?

OpenAI anticipates substantial changes in research focus as it learns more about the problem, likely adding entirely new areas of research.

Just take it one step at a time.

The chief scientist will lead the new department in person. Top-level problems require top-level teams.

OpenAI co-founder and chief scientist Ilya Sutskever will lead the new division alongside Jan Leike, the previous alignment team leader.

Members include not only OpenAI's own employees, but also researchers from other companies.

Ilya Sutskever, a co-author of AlexNet, which kicked off the deep learning era in 2012, and of AlphaGo, with more than 400,000 citations to his name, had already made AI alignment his core research focus.

Jan Leike is one of the authors of InstructGPT, the predecessor of ChatGPT, and worked on reinforcement learning from human feedback (RLHF) at OpenAI and DeepMind as early as 2017.

Jan Leike believes that OpenAI's investment is likely to be more than all of humanity's previous investments in AI alignment research combined.

Including these two, the new team currently has 10 members.

A new round of recruitment has also opened: three positions across two levels, research manager (annual salary of $420,000-500,000) plus research scientist and research engineer (annual salary of $245,000-450,000).

Who supervises the supervisors? Academia is divided on OpenAI's big move.

Some scholars approve: natural intelligence arose from competitive evolutionary rewards, so AI can likewise be shaped by rewards aligned with human interests, and they look forward to the results.

But others argue that OpenAI has fundamentally misunderstood the concept of alignment.

In their view, alignment cannot mean forced control; it should mean aligning the interests of both sides, in the spirit of "channeling beats blocking."

Some netizens joked: "I can't believe AGI is likely to arrive before the long-awaited The Elder Scrolls 6."

Others in the comments pointed out that by then, whatever game you want to play, you can just have the AI make it.

And some netizens raised the soul-searching question: who will supervise the supervisor?

Reference link:

[1]https://openai.com/blog/introducing-superalignment

[2]https://twitter.com/OpenAI/status/1676638358087553024

[3]https://www.reddit.com/r/singularity/comments/14rh1l1/superintelligence_possible_in_the_next_7_years/

This article comes from Weixin Official Accounts: Qubit (ID: QbitAI), Author: Meng Chen
