
GPT-5 may be put on hold: thousands of experts, including Musk and Turing Award winners, have called for a pause of at least six months on the development of super-powerful AI.

2025-01-20 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

GPT-4 is so powerful that not only is the public alarmed; today, the leading figures of AI worldwide have acted as well. Thousands of people have issued an open letter calling for a pause on training any AI stronger than GPT-4.

Just now, an open letter signed by thousands of prominent figures surfaced online, calling for a halt to all AI systems more powerful than GPT-4.

In the letter, more than 1,000 signatories call for an immediate pause, of at least six months, on the training of any AI system more powerful than GPT-4.

Signatories so far include Turing Award winner Yoshua Bengio, Stability AI CEO Emad Mostaque, Apple co-founder Steve Wozniak, New York University professor Gary Marcus, Elon Musk, and Yuval Noah Harari, author of Sapiens: A Brief History of Humankind.

The list of signatures seems to go on without end, and the density of big names is remarkable.

Collecting signatures from more than a thousand such figures must have taken considerable preparation.

However, Yann LeCun, also a Turing Award winner, did not sign: "I do not agree with this premise."

In addition, a signature claiming to be from the "OpenAI CEO" appeared on the list, but Marcus and a number of netizens suspect it is not genuine. The whole affair remains murky.

As of press time, Marcus had even @-mentioned Sam Altman directly to ask for confirmation.

Pause all AI training beyond GPT-4. The open letter states that, as extensive research shows and top AI labs acknowledge, AI systems with human-competitive intelligence can pose profound risks to society and humanity.

As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

Unfortunately, so far, no one has taken action.

In recent months, AI labs around the world have been locked in an out-of-control race to develop and deploy ever more powerful AI systems that no one, not even their creators, can understand, predict, or control.

Now that AI systems are becoming human-competitive at general tasks, we must ask ourselves:

Should machines be allowed to flood our information channels with propaganda and lies? Should all jobs be automated away, even the fulfilling ones? Should we develop non-human minds that might one day surpass us, obsolete us, and replace us? Should we risk losing control of human civilization?

No unelected technical leader has the right to make such an important decision.

Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable. Moreover, we must have good reason for that confidence, and the greater the potential impact, the surer we need to be.

OpenAI's recent statement on artificial general intelligence noted: "At some point, it may be important to get independent review before starting to train future systems, and there should be limits on the rate of growth of the compute used to create new models."

We agree on this, and that point in time is now.

Therefore, we call on all AI labs to immediately suspend training of AI systems that are more powerful than GPT-4 for at least 6 months.

Such a pause should be public and verifiable, and should include all key actors. If a pause cannot be enacted quickly, governments should step in.

During these six months, AI labs and independent experts should work together to develop shared safety protocols for the design and development of advanced AI. Once complete, these protocols should be rigorously audited and overseen by independent outside experts, and should ensure that the systems adhering to them are safe beyond reasonable doubt.

This does not mean a pause on AI development in general, but rather a step back from the dangerous race toward ever-larger, unpredictable black-box models with emergent capabilities.

All AI research and development should refocus on this goal: making today's most powerful state-of-the-art models more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal to humanity.

At the same time, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems.

These should, at a minimum, include:

New, capable regulatory authorities dedicated to overseeing AI

Provenance and watermarking systems to help distinguish real content from generated content, and to track model leaks

A robust auditing and certification ecosystem

Clear liability for harm caused by AI

Robust public funding for technical AI safety research
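The "provenance" item in the list above can be illustrated with a minimal sketch: a provider attaches a keyed tag (an HMAC) to each piece of generated text, so it can later verify whether a given text came from its system. The function names and key here are hypothetical, and real deployed schemes (e.g., statistical token-level watermarks) are far more sophisticated; this only shows the verification idea.

```python
import hmac
import hashlib

# Hypothetical provider-held secret; in practice this would be securely managed.
SECRET_KEY = b"provider-held-secret-key"

def tag_content(text: str) -> str:
    """Return a provenance tag for generated text (hex-encoded HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_content(text: str, tag: str) -> bool:
    """Check whether `tag` was produced for `text` (constant-time comparison)."""
    return hmac.compare_digest(tag_content(text), tag)

generated = "A model-generated paragraph."
tag = tag_content(generated)
print(verify_content(generated, tag))        # True: untampered content verifies
print(verify_content(generated + "!", tag))  # False: any edit breaks the tag
```

Note that such a tag only travels with the text as metadata and is lost when the text is copied without it, which is one reason the letter's authors also call for watermarks embedded in the content itself.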

Humanity can enjoy the flourishing future that AI brings. Having succeeded in creating powerful AI systems, we can enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give all of humanity a chance to adapt.

Society has hit pause on other technologies with potentially catastrophic effects before. We should do the same for AI.

Let's enjoy a long AI summer instead of entering autumn unprepared.

The open letter from a thousand-plus luminaries quickly set off an uproar in public opinion.

Supporters believe the panic about AI is justified: training is becoming ever more efficient, and capability levels are expanding by the day.

Opponents even dug up the propaganda posters Edison once commissioned to smear alternating current as a killer, arguing that the letter is nothing more than groundless accusation, with forces harboring ulterior motives misleading people who do not know the facts.

Given Sam Altman's intriguing attitude in recent events, the letter's arrival could be said to have come at just the right moment.

Since the end of last November, ChatGPT seems to have fired the starting gun, and AI institutions around the world have been sprinting furiously, competing until their eyes are red.

And OpenAI, the "initiator" of it all, has not slowed its pace; together with its financial backer Microsoft, it delivers another heavy blow every so often.

The panic brought on by ever more advanced AI tools has hit everyone in wave after wave.

Today, the heavyweights finally made their move.

In a public interview yesterday, Sam Altman unexpectedly made some intriguing remarks.

He said that researchers at OpenAI themselves do not understand why reasoning emerges in the GPT series.

All they know is that, through constant testing, people suddenly found that reasoning ability began to appear in the GPT series starting with ChatGPT.

Altman also said something earth-shattering in the interview: "AI could indeed kill humans."

Besides Altman, "godfather of AI" Geoffrey Hinton, Bill Gates, and New York University professor Gary Marcus have all recently issued the same warning: AI wiping out humanity is really not empty talk.

An OpenAI researcher predicts that AI will know it is an AI: Richard Ngo, of OpenAI's governance team, has also forecast how AI will develop over the next two years.

Before that, he was a research engineer on DeepMind's AGI safety team.

According to Richard's prediction, by the end of 2025 neural networks will be able to:

Possess human-level situational awareness, such as knowing that it is a neural network

Surpass humans at writing complex and effective plans

Do better than most peer reviewers

Independently design, code, and distribute complete applications

Outperform anyone at any computer task a white-collar worker can accomplish in 10 minutes

Write award-winning short stories and books of up to 50,000 words

Generate a coherent 20-minute film

However, skilled humans will still do better (albeit much more slowly) in the following areas:

Write a novel

Carry out a plan steadily over a period of days

Make breakthroughs in scientific research, such as proving new theorems (though neural networks have already proved at least one)

Perform typical manual tasks better than robots controlled by neural networks

In plain terms, situational awareness refers to an individual's perception, understanding, and prediction of events and conditions in the surrounding environment. This includes grasping how the environment is changing, assessing the impact of those changes on oneself and others, and anticipating what may happen next.

For the specific definition of situational awareness in AI research, please refer to the following paper:

Paper address: https://arxiv.org/abs/2209.00626

Richard says his forecast is actually closer to 2 years, but 2.75 years seems more robust, since different people use different evaluation criteria.

Also, "prediction" here means that Richard puts the credibility of the claim above 50%, though not necessarily much above 50%.

It is important to note that the prediction is not based on any specific information related to OpenAI.

Netizens eagerly await GPT-5. In contrast to the very cautious executives, netizens, having experienced GPT-4's jaw-dropping performance, clearly cannot wait for GPT-5 to arrive.

Recently, predictions about GPT-5 have been springing up like bamboo shoots after spring rain.

According to the (unofficial) prediction of one anonymous team, GPT-5 will build on GPT-4 with a series of exciting features and performance enhancements, such as across-the-board improvements in reliability, creativity, and adaptability to complex tasks:

Personalized templates: customized to the user's specific needs and input variables for a more personalized experience.

Adjustable defaults: allows users to tune the AI's default settings, including professionalism, humor, tone of voice, etc.

Text-to-media conversion: automatically converts text to other formats, such as still images, short videos, audio, and virtual simulations.

Advanced data management: includes recording, tracking, analyzing, and sharing data to simplify workflows and improve productivity.

Decision-making assistance: helps users make informed decisions by providing relevant information and insights.

Stronger NLP capabilities: enhances the AI's understanding of and response to natural language, bringing it closer to a human's.

Integrated machine learning: allows the AI to learn and improve continuously, adapting over time to user needs and preferences.

The team also predicts that a transitional GPT-4.5 model will be launched in September or October 2023.

GPT-4.5 would build on the strengths of GPT-4, which was released on March 14, 2023, bringing further improvements to its conversational ability and context understanding:

Handle longer text input

GPT-4.5 may be able to process and generate longer text inputs while maintaining context and coherence. This improvement would boost the model's performance on complex tasks and its understanding of user intent.

Enhanced consistency

GPT-4.5 may provide better coherence, ensuring that generated text stays on relevant topics throughout a conversation or piece of content.

More accurate responses

GPT-4.5 may provide more accurate and context-sensitive responses, making it a more effective tool for a variety of applications.

Model fine-tuning

In addition, users may be able to fine-tune GPT-4.5 more easily, customizing the model more effectively for specific tasks or domains such as customer support, content creation, and virtual assistants.
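As a rough illustration of what such fine-tuning might look like: OpenAI's existing fine-tuning API accepts JSONL files of chat transcripts, and a tunable GPT-4.5 would presumably use a similar format. The model name "gpt-4.5" and the file name below are assumptions for illustration, not announced facts.

```python
import json

# One training example in the chat format used by OpenAI's current
# fine-tuning API: a list of system/user/assistant messages.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a customer-support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Account > Reset password."},
    ]},
]

# Write the dataset as JSONL: one JSON object per line.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# The file would then be uploaded and a job created, e.g. (method names per
# the current openai-python client; "gpt-4.5" itself is hypothetical):
#   client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=..., model="gpt-4.5")
```

Whether GPT-4.5 would actually be tunable this way is, like everything else in this section, speculation.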

Judging from the trajectory of GPT-3.5 and GPT-4, GPT-4.5 is also likely to lay a solid foundation for GPT-5's innovations. By addressing GPT-4's limitations and introducing new improvements, GPT-4.5 would play a key role in shaping GPT-5's development.

Reference:

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

https://chatgpt-5.ai/gpt-5-capabilities/

https://twitter.com/RichardMCNgo/status/1640568775018975232

This article comes from the WeChat official account: Xin Zhiyuan (ID: AI_era).
