With just seven photos, anyone can conjure a "you" out of nothing.

2025-04-05 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

Once AIGC is out of control, it may become a lethal weapon.

Look at this picture: it shows a primary school teacher taking a topless selfie in his school classroom.

If the picture were real, the teacher in it, John, would very likely be fired on the spot.

The picture is from the Ars Technica website, and the copyright belongs to the original author. Fortunately, this John is a fictional character created by Ars Technica for an experiment with AI-generated social images. John is set up as an ordinary primary school teacher who, like most of us, has spent the past 12 years posting his work, family life, vacation photos, and so on to Facebook.

The Ars Technica team selected seven images containing John, fed them to two AIGC tools, the recently popular Stable Diffusion and Google Research's Dreambooth, and then generated different versions of John's social media presence.

In these photos, John changes from an ordinary English teacher who likes to share his daily life into a "dangerous man" who strips and takes selfies in the classroom and in public, then into a "clown" wearing all kinds of bizarre outfits. None of the photos seems to be John, yet every one of them has John's face.

In fact, with the help of various free and open AIGC tools, what happened to John could easily happen to any one of us.

When AIGC meets a real person

Ars Technica said that when they first planned the experiment, they recruited volunteers willing to share their own social media photos for AI training. But because the generated photos turned out so realistic, and the potential reputational harm so great, they ultimately gave up on using real people's photos and used AI to generate a virtual John instead.

The results of the experiment convinced them that, in the current technological environment, every one of us is potentially at risk.

The whole process of the experiment is actually very simple: take seven pictures of a person's face from social media, run them through the free, open-source code of Stable Diffusion and Dreambooth, and then type in a descriptive sentence to generate pictures of that person in different styles and scenarios.
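The article does not say exactly which code the team ran, but publicly available implementations follow the same recipe. As one example, Hugging Face's diffusers library ships a DreamBooth training script whose invocation looks roughly like the sketch below; the paths, model choice, and the rare "sks" identifier token are placeholders, not details from the experiment:

```shell
# Sketch of a DreamBooth fine-tune using the diffusers example script,
# one of several open-source implementations of the workflow described
# above. "./john_photos" would hold the handful of source images.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./john_photos" \
  --instance_prompt="a photo of sks man" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=800 \
  --output_dir="./john_model"

# Afterwards, prompting the fine-tuned model in ./john_model with text
# such as "a photo of sks man as a medieval knight" places the learned
# subject in the described scene.
```

The rare token ("sks") is the standard DreamBooth trick: it gives the model an otherwise-unused name to bind the new subject to.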

For example, netizens have used public photos of Musk from the Internet as a training set and generated pictures of him in various styles.

The picture is from Reddit. Others have trained on public photos of Wikipedia co-founder Jimmy Wales, turning the mild-mannered entrepreneur into a muscular bodybuilder.

The picture is from Wikimedia Commons. Before going further, a quick review of what Stable Diffusion and Dreambooth each do.

Stable Diffusion is a text-to-image generation model. In just a few seconds it can produce results that are higher-resolution, sharper, and more "realistic" or "artistic" than comparable technologies; compared with other AI-generated images of the same type, its output looks noticeably more lifelike.
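As background for why these results look so coherent: diffusion models generate images by learning to reverse a gradual noising process. As a rough, toy-scale illustration of that forward process (not Stable Diffusion's actual latent-space model, which operates on compressed latents and is conditioned on text), here is the closed-form noising step in NumPy:

```python
import numpy as np

def linear_beta_schedule(T=1000, beta_1=1e-4, beta_T=0.02):
    """Noise levels beta_t for each of the T diffusion steps."""
    return np.linspace(beta_1, beta_T, T)

def noisy_sample(x0, t, alpha_bar, rng):
    """Closed-form q(x_t | x_0): blend the clean image with Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

betas = linear_beta_schedule()
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))               # stand-in for a tiny "image"
x_early = noisy_sample(x0, 10, alpha_bar, rng)  # mostly signal
x_late = noisy_sample(x0, 999, alpha_bar, rng)  # essentially pure noise
```

A trained model learns to undo this corruption one step at a time, and the text prompt steers each denoising step, which is how a sentence becomes a picture.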

The other important feature of Stable Diffusion is that it is completely free and open source: all the code is available on GitHub and can be copied and used by anyone. It is precisely these two traits, "realistic" and "open source", that let it carve out a path among closed and semi-closed competitors such as DALL-E and Imagen.

Dreambooth, from Google Research, is a new "personalized" text-to-image diffusion model, meaning it can adapt to a user's specific generation needs. Its distinguishing feature: given only a few photos of a subject (usually 3 to 5) plus the corresponding class name (such as "dog") as input, it can place that subject in whatever scene the user wants to generate.

For example, enter a picture of a car, and a simple instruction changes its color. Enter a picture of a Chow Chow, and you can turn it into a bear, a panda, or a lion while keeping its facial features, or dress it in different outfits and drop it into different scenes.

The picture is from https://dreambooth.github.io/.

Stable Diffusion focuses on generating creative images from text, while Dreambooth focuses on conditionally "remaking" existing images; the two tools have no direct overlap. But netizens' imagination and initiative are remarkable: almost as soon as the two open-source projects appeared, people combined them into a new tool that chains Dreambooth and Stable Diffusion together.

In this combined tool, Dreambooth's few-shot training lets you use any handful of pictures as training images, and Stable Diffusion's powerful text-to-image capability then makes the specified subject appear in any form you can describe.

Beyond entertainment, they also opened Pandora's box

After this new toy appeared, netizens began experimenting on their own photos as if they had discovered a new world.

Some turned themselves into Western cowboys, some walked into medieval oil paintings, some became armored warriors, and so on. At the same time, tutorial videos and articles teaching ordinary people how to use the Stable Diffusion + Dreambooth combination began to appear online.

The picture is from YouTuber James Cunliffe.

However, while everyone was happily posting their own AI portraits and marveling at how fun and cool the technology is, many people began to notice the huge risks behind it.

Compared with the much-discussed "deepfake" technology, AIGC tools let forgery evolve from "face-swapping" to "creating something from nothing": anyone can conjure up a version of you from a single descriptive sentence. The technical threshold has also dropped sharply; after ten minutes with a YouTube tutorial, a novice with no technical background can master the whole process.

According to statistics, more than four billion people worldwide use social media. If you have publicly uploaded photos of yourself to any of these platforms, then anyone with bad intentions can easily turn those pictures to fraud and abuse. The end result might be a violent, indecent, or insulting photo, ready-made for dark scenarios such as framing, school bullying, or rumor-mongering.

For now, looking at what Stable Diffusion currently generates, a careful observer can still tell real people from fake ones. The problem is that, at the pace AIGC technology has advanced in recent years, people may soon be unable to distinguish generated photos from real ones with the naked eye.

Enhanced version of Stable Diffusion images from Twitter user Roope Rainisto

And even a picture that does not stand up to scrutiny can be astonishingly destructive. If there really were a primary school teacher named John, as at the start of this article, then the moment someone saw indecent photos of him in the classroom, true or false, even a passing suspicion or rumor could destroy his reputation and career.

It is like the Danish film The Hunt: even after the little girl's molestation accusation against the male teacher is shown to be made up, the malice stirred up by the rumor follows him for the rest of his life.

Trying to use magic to defeat magic

In fact, developers have long been aware of the potential harms of AIGC technology. For example, when Google announced the launch of Imagen and Dreambooth, it avoided using photos of real people in its documentation, using objects and cute animals as examples instead.

The same is true not only of Google but of similar tools such as DALL-E. MIT Technology Review has sharply questioned this practice of diverting public attention: "We only see all kinds of lovely images; we don't see any depicting hatred, stereotypes, racism, violence, or sexism," they wrote. "But even if no one says it, we know very well that they are there."

To address the problem, platforms are trying various approaches. Some, such as OpenAI and Google, keep their tools caged, opening them only to a small number of trusted users. Stability AI removed most of the objectionable material from the training data of the newly released version 2.0, and states clearly in the software license that creating images of people is not allowed.

Policy restrictions, however, do not address the root of the problem. Recently, platforms including Stable Diffusion have also been trying technical fixes, among them "invisible watermarking". With an invisible watermark embedded in every generated picture, a system can automatically identify whether an image is genuine, and the watermark also offers some protection against editing and re-posting.
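As a toy illustration of the idea, a least-significant-bit watermark can be sketched in a few lines. Real schemes, such as the invisible-watermark step shipped with Stable Diffusion's reference code, embed the mark in the frequency domain so it survives compression and resizing; the pixel values and bit string below are invented for the example:

```python
def embed_watermark(pixels, bits):
    """Hide a bit string in the least-significant bit of each pixel value.
    A toy illustration of invisible watermarking: the image changes by at
    most one intensity level per pixel, imperceptible to the eye."""
    assert len(bits) <= len(pixels)
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the lowest bit only
    return marked

def extract_watermark(pixels, n_bits):
    """Read the hidden bits back out of the low-order bits."""
    return [p & 1 for p in pixels[:n_bits]]

# An 8-bit grayscale "image" as a flat list of pixel values (made up)
image = [200, 13, 255, 0, 97, 64, 128, 31]
mark = [1, 0, 1, 1, 0, 1, 0, 0]  # flag meaning e.g. "this image is generated"
marked = embed_watermark(image, mark)
recovered = extract_watermark(marked, len(mark))
```

A detector that knows where to look recovers the flag exactly, which is what lets a platform label a picture as generated even after it has been re-shared.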

In addition, last month MIT researchers announced PhotoGuard, a technology designed specifically to guard photos against AI editing by preventing AI from extracting useful information from them. A photo processed with PhotoGuard looks unchanged to the naked eye, but AI can no longer pull enough effective information out of it.
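PhotoGuard's core trick is an adversarial perturbation: tiny, eye-invisible pixel changes chosen to maximally disrupt a model's internal encoding of the image. The sketch below shows that principle against a toy linear "feature extractor"; the real method optimizes against a diffusion model's image encoder, and the numbers here are invented for illustration:

```python
def sign(v):
    return 1.0 if v >= 0 else -1.0

def score(x, w):
    """Toy stand-in for a model's feature extractor: a dot product."""
    return sum(xi * wi for xi, wi in zip(x, w))

def perturb(x, w, eps):
    """One FGSM-style step: nudge each pixel by at most eps in the
    direction that most changes the score. The per-pixel budget eps keeps
    the change imperceptible while the extracted feature moves a lot."""
    return [xi + eps * sign(wi) for xi, wi in zip(x, w)]

x = [0.2, -0.1, 0.4, 0.05]   # toy "image"
w = [1.0, -2.0, 0.5, 3.0]    # toy linear feature extractor
x_adv = perturb(x, w, eps=0.3)
```

Each pixel moves by no more than the budget, yet the extracted feature shifts substantially, which is the sense in which a protected photo "looks the same" to a person but becomes useless to the model.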

The pictures are from gradientscience.org.

In the last year or two, AIGC technology has advanced by leaps and bounds. The explosion of image-generation tools and ChatGPT has made people feel that the long-promised age of artificial intelligence really does seem to be arriving this time.

Stable Diffusion's researchers said not long ago that the model will probably run on smartphones within a year. Similar tools are also being adapted to lighter devices, and tools such as ChatGPT are already in ordinary users' hands. We may well see an explosion of AI-driven creative output over the next few years.

However, as AIGC becomes public and accessible, the technical threshold for producing deep-synthesis content keeps dropping. With only a small amount of image, audio, video, or text data, an ordinary person can blur the boundary between true and false information. In the absence of relevant laws and regulations, abuse of the technology will bring great risks and substantial harm to individuals and businesses.

Since AI painting tools took off this year, much of the attention has gone to AI's disruption of artistic creation. But AI has not just changed how we create; it may also challenge the social order. Placing conditional limits on AI's capabilities may be the first problem that must be solved before AIGC can change our lives.

Reference article:

1. https://arstechnica.com/information-technology/2022/12/thanks-to-ai-its-probably-time-to-take-your-photos-off-the-internet/

2. PhotoGuard technology: http://gradientscience.org/photoguard/

This article comes from the WeChat official account Silicon Man (ID: guixingren123); author: Juny, editor: VickyXiao.
