Shulou (Shulou.com), November 24 report:
A breakthrough made by three artificial intelligence researchers more than a decade ago permanently changed the field.
They built a convolutional neural network system called AlexNet and trained it on 1.2 million web images. The system learned to identify a wide variety of objects, such as container ships and jaguars, far more accurately than previous image recognition systems.
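That kind of image classification is now a few lines of library code. The following is a minimal illustrative sketch, not the original 2012 training pipeline: it assumes a recent torchvision (0.13 or later) with its bundled ImageNet-pretrained AlexNet, and "ship.jpg" is just a placeholder path for a local photo.

```python
# Minimal sketch: classifying one image with the ImageNet-pretrained AlexNet
# that ships with torchvision (0.13+). Illustrative only.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import AlexNet_Weights

weights = AlexNet_Weights.DEFAULT              # ImageNet-trained weights
model = models.alexnet(weights=weights).eval()
preprocess = weights.transforms()              # resize, crop, normalize as the model expects

image = Image.open("ship.jpg").convert("RGB")  # placeholder path for any local photo
batch = preprocess(image).unsqueeze(0)         # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)

probs = logits.softmax(dim=1)[0]
top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx]}: {p:.3f}")
```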
The three developers were Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. With this work they won the 2012 ImageNet image recognition competition, and AlexNet's success made the technology community realize the potential of machine learning, setting off a revolution in artificial intelligence.
To a large extent, that revolution has been a quiet one that most people paid little attention to. Until recently, few were aware of the prospect that artificial intelligence built on machine learning could eventually take over much of human work. Machine learning is an underlying technology in which computers learn from large amounts of data, and it has long been used for tasks that once only humans could do, such as spotting credit card fraud and matching online advertising to content.
Recently, however, another breakthrough in artificial intelligence has rippled through society and made the revolution much less quiet: people have finally realized that their livelihoods are being threatened by artificial intelligence.
ChatGPT ignites a new round of the artificial intelligence revolution
One example of this breakthrough is ChatGPT, a question-answering text generation system released at the end of last November. Systems like it used to appear only in science fiction; now one has entered the public eye and become a sensation.
ChatGPT, built by the American artificial intelligence research organization OpenAI, is the latest and most eye-catching of the much-discussed generative artificial intelligence systems, which produce content in response to human instructions. Ilya Sutskever, one of the creators of AlexNet, is a co-founder of OpenAI.
Type a question into ChatGPT and it generates a concise text containing the answer and relevant background. Asked who won the 2020 U.S. presidential election, for example, it will tell you that Joe Biden won, and it will also tell you when Biden took office.
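For readers who prefer to ask such questions programmatically rather than through the web interface, a minimal sketch with the OpenAI Python SDK (version 1.x) might look like the following; the model name and the environment-variable API key are assumptions for illustration, not details from this article.

```python
# Minimal sketch: asking a ChatGPT-style model a question via the OpenAI Python SDK (v1.x).
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; use whichever chat model you have access to
    messages=[
        {"role": "user", "content": "Who won the 2020 U.S. presidential election?"},
    ],
)
print(response.choices[0].message.content)
```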
ChatGPT is easy to use: it produces an answer the moment a question is typed, and the exchange feels like talking to an ordinary person. It is precisely this quality that now promises to bring artificial intelligence into daily life. Microsoft has invested billions of dollars in OpenAI, another sign of the technology's momentum, and ChatGPT is likely to play a central role in the next phase of the artificial intelligence revolution.
ChatGPT, however, is only one of a string of recent high-profile artificial intelligence achievements. OpenAI's automatic writing system GPT-3 stunned the technology world when it was first demonstrated in 2020, and other companies quickly launched their own large language models. Last year the technology extended into image generation, with representative products including OpenAI's DALL-E 2, Stability AI's open-source Stable Diffusion, and Midjourney.
With these technologies available, people have rushed to find new applications for them. A flood of new applications is expected, perhaps comparable to the Cambrian explosion of life forms.
If computers can learn to write and paint, what else can't they do? Artificial intelligence has recently been applied experimentally to video generation (Google), mathematical question answering (Google), music creation (Stability AI), code generation, and other fields. Pharmaceutical companies plan to use it to help design new drugs, and biotech firms have already used it to design new antibodies, greatly shortening the time needed for preclinical testing.
Artificial intelligence will transform the way humans interact with computers. It will understand human intentions and act on them in ways never seen before, becoming a foundational technology that touches every part of society.
As artificial intelligence is applied ever more widely, people need to consider the harm the technology may bring. ChatGPT, for instance, may be used by teenagers to do their homework for them. Worse, artificial intelligence can be deliberately used to generate large volumes of false information. It can also automate a great deal of human work, which could cause serious unemployment.
Is artificial intelligence reliable?
Proponents of generative artificial intelligence argue that these systems can boost human productivity and creativity. For any work that requires creativity, they can help people break out of mental ruts, suggest new ideas, review drafts, and even produce content in bulk. Yet while generative artificial intelligence is easy to use and has the potential to upend much of the technology landscape, it poses profound challenges for companies and individuals alike.
The biggest challenge artificial intelligence poses is reliability. These systems can produce results that look credible but are not necessarily so. They are trained on massive amounts of data and form their answers by making the most probable guess, yet they have no genuine understanding, in the human sense, of the results they produce.
Such systems cannot remember their conversations with people, do not truly understand them, and have no grounding for words and symbols in the real world. They merely give seemingly persuasive responses to human instructions. They are clever but mindless imitators of humans, and their output is a kind of digital illusion.
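To make the "most probable guess" point concrete, here is a small illustrative sketch, assuming the Hugging Face transformers library and the small public GPT-2 model rather than ChatGPT itself: it simply prints the probabilities the model assigns to candidate next tokens, which is the mechanism described above.

```python
# Illustrative sketch: a language model does not "know" facts; it assigns
# probabilities to candidate next tokens and the likeliest continuation wins.
# Uses the small public GPT-2 model via Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "The 2020 U.S. presidential election was won by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p:.3f}")
```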
There are already signs that the technology can produce results that look credible but cannot be trusted. Late last year, for example, Meta, Facebook's parent company, demonstrated a generative system called Galactica that was trained on a dataset of academic papers. Users soon discovered that, on request, it would generate research findings that looked plausible but were completely false, and Meta withdrew the system a few days later.
ChatGPT's creators likewise admit that the system sometimes gives absurd answers, because the truthfulness of the training data cannot be guaranteed. OpenAI says that so-called supervised learning (training under human supervision rather than self-teaching) does not work well for ChatGPT, because the system is often better at finding the "ideal answer" than its human teachers.
One possible remedy is to check results for plausibility before they are output. The experimental LaMDA system that Google unveiled in 2021 generated about 20 different responses to each prompt and then assessed each one for safety and groundedness. Some experts, however, believe that any scheme relying on humans to validate artificial intelligence output may create further problems: it can teach the model to produce deceptive content that merely looks more credible, fooling humans all over again. Truth is sometimes elusive and humans are not especially good at finding it, so human evaluation of artificial intelligence output is not necessarily reliable.
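The "generate many candidates, then keep the best-scoring one" idea can be sketched generically. In the sketch below, sample_response and the scoring function are hypothetical placeholders for any generator and any safety/groundedness rater; this is not Google's actual LaMDA code.

```python
# Generic best-of-n reranking sketch. Both callables are hypothetical stand-ins
# for (a) any text generator and (b) any safety/groundedness scorer; this is
# not LaMDA's published implementation.
import random
from typing import Callable, List

def best_of_n(
    prompt: str,
    sample_response: Callable[[str], str],   # returns one candidate reply
    score: Callable[[str, str], float],      # rates a reply; higher is better
    n: int = 20,
) -> str:
    candidates: List[str] = [sample_response(prompt) for _ in range(n)]
    return max(candidates, key=lambda reply: score(prompt, reply))

if __name__ == "__main__":
    # Toy demonstration with a dummy generator and a toy scorer.
    replies = ["I am not sure.", "Joe Biden won the 2020 election.", "The moon is made of cheese."]
    best = best_of_n(
        "Who won the 2020 U.S. presidential election?",
        sample_response=lambda _prompt: random.choice(replies),
        score=lambda _prompt, reply: float("Biden" in reply),
        n=20,
    )
    print(best)
```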
Some experts believe there is no need to agonize over philosophical questions about whether the output is well-founded; what matters is the technology's practical value. Internet search results, after all, also mix errors with useful information, yet people sift through them and find what is useful. People will need to learn to use these artificial intelligence tools with the same discrimination in order to benefit from them. Children, of course, should not be allowed to use the systems to cheat at school.
In practice, however, leaving people to vet the output of generative artificial intelligence on their own may not work well. People tend to put too much faith in that output: they unconsciously treat the system as a real person and forget that it does not truly understand what humans mean.
Seen this way, the reliability of artificial intelligence may indeed be a problem. Bad actors could deliberately turn the technology into a factory for false information that floods social media platforms, or use it to imitate a particular person's writing style or voice. In short, producing false content will become easier and cheaper than ever, and effectively unlimited.
Emad Mostaque, the head of Stability AI, has said that reliability is an inherent problem of artificial intelligence: people can use the technology morally and legally, or immorally and illegally. He argues that bad actors will inevitably exploit these advanced tools, and that the only defense is to spread the technology as widely as possible and open it to everyone.
Among professionals, however, it remains controversial whether spreading artificial intelligence widely can actually prevent its harms. Many experts argue instead that access to the underlying technology should be restricted. Microsoft representatives say the company works closely with customers to understand how they use the technology and to ensure that artificial intelligence is used responsibly.
Microsoft also tries to keep its artificial intelligence products from being used for harm. It gives customers tools to scan output for offensive content, as well as for specific content the customer does not want to see. The company is keenly aware that artificial intelligence can misbehave: just one day after launching the chatbot Tay in 2016, Microsoft had to hastily withdraw it because it was directing racist and other extremist remarks at users.
To some extent, the technology itself may help restrain abuse. Google, for example, has developed a speech system that can detect with 99% accuracy whether audio was synthesized by artificial intelligence, and all of the image models it has in development are barred from generating pictures of real people, which helps prevent deepfakes.
Will human jobs be replaced in large numbers?
With the rise of generative artificial intelligence, the debate over its impact on human work has flared up again. Will intelligent machines replace human jobs on a large scale? Or, by taking over the routine parts of work, will artificial intelligence raise productivity and deepen people's sense of accomplishment?
Occupations that involve a great deal of design or writing will feel the impact of artificial intelligence most directly. Late last summer, Stability AI released the text-to-image model Stable Diffusion, which can instantly generate images matching a text description supplied by a human. The tool set off a panic in commercial art and design circles.
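Because Stable Diffusion's weights are publicly released, the text-to-image step itself can be reproduced with a short script. The sketch below is illustrative only and assumes the Hugging Face diffusers library, a CUDA GPU, and a v1.5 checkpoint identifier; none of these details come from the article.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face diffusers.
# Assumes `pip install diffusers transformers accelerate` and a CUDA GPU; the
# checkpoint name is an assumption, so substitute whichever checkpoint you use.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor illustration of a container ship at dawn").images[0]
image.save("container_ship.png")
```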
Some technology companies are already trying to apply generative artificial intelligence to advertising. A representative example is Scale AI, which is training a model on advertising images; the model could produce professional product shots for small retailers and small brands, saving merchants the cost of producing ads.
In short, artificial intelligence may affect the livelihoods of content creators. The technology is transforming the entire media industry, and major content providers around the world agree that they now need a generative media strategy just as they needed a metaverse strategy.
Some creators on the verge of being displaced by artificial intelligence believe the problem goes beyond jobs and livelihoods. One musician, shocked to see ChatGPT produce songs that read like his own style, lamented online that musicians write out of pain, out of complex inner suffering, while artificial intelligence has no human feelings and cannot feel that suffering.
Technology optimists take a different view: artificial intelligence will not replace human creativity but enhance it. In the past, for example, a designer could only produce images one at a time; with an artificial intelligence image generator, the designer can concentrate on an entire video or an entire collection.
The existing copyright system, however, offers creators little protection. Companies building artificial intelligence argue that the United States' "fair use" rules for copyrighted material allow them to train their systems on whatever data is available, free of charge. Many artists and advocacy groups strongly disagree, arguing that using their artwork to train artificial intelligence abuses that allowance. Just last week, Getty Images and three artists filed lawsuits against Stability AI and other artificial intelligence companies, the creative community's first legal action over artificial intelligence training data.
A lawyer representing one of the artificial intelligence companies said the industry has long known that legal action was inevitable and is prepared for it. These lawsuits over the status of training data will establish ground rules with as much impact on the technology industry as the patent wars of the early smartphone era.
Ultimately, courts and legislators will set the basic framework for copyright in the age of artificial intelligence. If they conclude that the technology has undermined the foundations of the existing copyright system, that system will have to change.