This is how AI writes "NB" right on your face: it's playing a very new kind of art.

2025-01-15 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 11/24 Report --

Thanks to CTOnews.com netizen Sancu for the tip! They say AI painting is coming on strong, but when it comes to creativity, humans are still the ones having all the fun.

If you don't believe it, take a look at this AI-generated picture of a woman. It looks ordinary at first glance, yet it has gone viral across the Internet:

△ Source: Douyin account @McOrange MAJIC

Quickly take a screenshot on your phone and compare the original image with its thumbnail, and you can see the trick:

Yes, that's right!

In this picture generated by AI, two Chinese characters are incorporated into the light and shadow.

Over the past couple of days, similar pictures have been flooding feeds everywhere. Being discussed on various platforms is not only the "🐮 beer" sister above, but also the "diao" brother below.

△ Source: Douyin account @McOrange MAJIC

And there's the little sister in the red sweater who "wears" the AI text:

△ Source: Douyin account @McOrange MAJIC

It's recommended that you manually zoom the image on your phone: the smaller it gets, the clearer the text in the picture becomes.

Some netizens have offered other "reading" tricks, such as taking off your glasses if you're nearsighted:

The two most upvoted kinds of comments are the exclamation "niu wa niu wa" ("awesome, awesome") and tearful pleas of "waiting for a tutorial."

So, how are these "NB" and "diao" pictures made?

ControlNet strikes again: making light and shadow "write" characters onto pictures, and even onto people's clothes, uses the same magical AI-painting combination:

Stable Diffusion + ControlNet.

As one of the two hottest AI painting tools, Stable Diffusion has been popular for a year now, and by this point just about everyone has picked it apart.

So today I'd like to focus on ControlNet, an AI plugin for Stable Diffusion.

ControlNet shot to fame this spring because it could handle the hand details and overall structure that AI painting previously could not control, and netizens jokingly crowned it the "AI painting detail control master."

Generating images from prompts alone in Stable Diffusion is clearly too random; what ControlNet provides is precisely a more exact way of constraining what gets generated.

In essence, the principle is to add an extra conditioning input to the pretrained diffusion model so as to control the details of what it generates.

"extra input" can be of various types, including sketches, edge images, semantic segmentation images, human key point features, Hough transform detection lines, depth maps, human bones, and so on.

In the full Stable Diffusion + ControlNet workflow, the first step is for the preprocessor to produce the conditioning image, the second step is to run that image through the ControlNet model, and the third step is to feed it into Stable Diffusion to generate the final version shown to the user.
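To make that three-step flow concrete, here is a minimal sketch using Hugging Face's diffusers library. The Canny edge preprocessor and the model ids below are illustrative assumptions for demonstration, not the exact setup the Douyin authors used.

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Step 1: the preprocessor turns a reference photo into a conditioning image (here, an edge map).
reference = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(reference, 100, 200)
condition = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Step 2: load a ControlNet trained for this kind of conditioning image.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)

# Step 3: plug the ControlNet into Stable Diffusion and generate the final image.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait, soft lighting", image=condition, num_inference_steps=30).images[0]
image.save("output.png")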

The core idea behind ControlNet is to duplicate the diffusion model's weights to obtain a "trainable copy."

The original diffusion model was pre-trained on billions of images, and its parameters are "locked." The trainable copy, by contrast, only needs to be trained on a small, task-specific dataset to learn conditional control.

Even when the dataset is small (fewer than 50,000 images), the trained copy still delivers good conditional control.
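As a rough illustration of the "locked weights plus trainable copy" idea, here is a simplified PyTorch sketch. It is a conceptual toy, not ControlNet's actual implementation: the real model attaches the copy to the U-Net encoder and uses zero convolutions at both ends.

import copy
import torch
import torch.nn as nn

class ControlledBlock(nn.Module):
    """Toy illustration of ControlNet's trainable-copy idea (not the real code)."""
    def __init__(self, pretrained_block: nn.Module, channels: int):
        super().__init__()
        # The trainable copy is fine-tuned on the small, task-specific dataset.
        self.trainable_copy = copy.deepcopy(pretrained_block)
        # The original pretrained weights stay frozen ("locked").
        self.locked = pretrained_block
        for p in self.locked.parameters():
            p.requires_grad = False
        # A zero-initialized 1x1 convolution, so training starts exactly at the original model's behavior.
        self.zero_conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.zero_conv.weight)
        nn.init.zeros_(self.zero_conv.bias)

    def forward(self, x: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        # Locked path plus the zero-convolved output of the copy that sees the extra condition.
        return self.locked(x) + self.zero_conv(self.trainable_copy(x + condition))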

In the diao-brother and beer-sister pictures, for example, ControlNet's main job is to make sure the text gets "placed" into the image as light and shadow, clothing patterns, and so on.

△ Source: Douyin account @McOrange MAJIC

The original Douyin author said the final step uses the ControlNet tile model, which is mainly responsible for adding detail and for making sure the original composition is not changed even when the denoising strength is turned up.
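As a rough idea of what that last step might look like outside the WebUI, here is a hedged diffusers sketch. The tile-model repo id and the img2img pipeline are assumptions for illustration; the original author worked in the Stable Diffusion WebUI.

import torch
from PIL import Image
from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel

# The tile ControlNet reinforces the existing composition while adding detail.
tile = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=tile, torch_dtype=torch.float16
).to("cuda")

draft = Image.open("draft.png").convert("RGB")  # the image whose composition we want to keep
refined = pipe(
    "a woman in a red sweater, detailed fabric, soft light",
    image=draft,           # img2img starting image
    control_image=draft,   # tile condition: the same image, so the layout is preserved
    strength=0.75,         # fairly high denoising strength, yet the composition stays put
).images[0]
refined.save("refined.png")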

Other AI enthusiasts have "found another way," saying that to get the effect shown in the pictures you can use the ControlNet brightness model (control_v1p_sd15_brightness).

This model's job is to control the light-and-dark information, i.e. the brightness, of what Stable Diffusion generates, letting the user colorize a grayscale image or recolor a generated one.

On the one hand, it helps the picture and the text blend together better.

On the other hand, it brightens the picture, and especially the text, so the characters written in light and shadow stand out more clearly.
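For readers who want to try this route in code, here is a hedged sketch in diffusers. The brightness model's Hugging Face repo id and the text-rendering helper (including the font file) are assumptions used for illustration.

import torch
from PIL import Image, ImageDraw, ImageFont
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Render the characters as a grayscale image: white text on black becomes the
# brightness pattern the model is asked to respect. The font path is a placeholder.
def text_condition(text: str, size: int = 512) -> Image.Image:
    img = Image.new("L", (size, size), 0)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("NotoSansSC-Bold.otf", size // 3)
    draw.text((size // 2, size // 2), text, fill=255, font=font, anchor="mm")
    return img.convert("RGB")

brightness = ControlNetModel.from_pretrained(
    "ioclab/control_v1p_sd15_brightness", torch_dtype=torch.float16  # assumed repo id
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=brightness, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "portrait of a woman, dramatic light and shadow, cinematic",
    image=text_condition("牛啤"),
    controlnet_conditioning_scale=0.6,  # lower weight keeps the hidden text subtle
).images[0]
image.save("hidden_text.png")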

Sharp-eyed readers may have noticed that the overall idea of adding Chinese-character light and shadow to a picture is the same as that behind the illustrated QR codes that went viral a few days earlier.

Not only do they not look like "serious" QR codes, scanning them with a phone actually takes you to a valid web page.

There's not only an anime style, but also 3D, ink-wash, ukiyo-e, watercolor, and PCB styles.

It drew a chorus of "wow" on platforms such as Reddit, too:

The slight difference is that these QR codes require not only Stable Diffusion and ControlNet (including the brightness model), but also the cooperation of a LoRA.
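In diffusers terms, a style LoRA can be layered on top of the same Stable Diffusion + ControlNet setup with one extra call. The LoRA file name below is a placeholder rather than a specific published model, and the brightness repo id is the same assumption as above.

import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "ioclab/control_v1p_sd15_brightness", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
# Placeholder LoRA carrying the art style (ink-wash, ukiyo-e, etc.).
pipe.load_lora_weights("./loras", weight_name="qr_style_lora.safetensors")

qr = Image.open("qr_code.png").convert("RGB")  # a real, scannable QR code
image = pipe(
    "an ink-wash landscape, mountains and mist, intricate detail",
    image=qr,
    controlnet_conditioning_scale=0.9,  # high enough that phones can still scan the result
).images[0]
image.save("art_qr.png")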

Those who are interested can click through to the earlier article, "The new way to play ControlNet is hot," to learn more.

Soon after the light-and-shadow effect went viral, some AI heavyweights on Twitter stepped forward to share hands-on tutorials.

The idea is very simple and can be divided into three important steps:

Step 1: install Stable Diffusion and ControlNet

Step 2: carry out the usual text-to-image steps in Stable Diffusion.

Step 3: enable ControlNet, focusing on adjusting the two parameters Control Weight and Ending Control Step.

Using this method, you can blend light-and-shadow characters not only into portraits but also into city night scenes.
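For anyone working in diffusers rather than the WebUI, the two knobs the tutorial highlights roughly correspond to controlnet_conditioning_scale (Control Weight) and control_guidance_end (Ending Control Step). Here is a hedged sketch, reusing the brightness-model assumption from above.

import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Step 1 equivalent: Stable Diffusion plus a ControlNet (brightness model assumed, as above).
controlnet = ControlNetModel.from_pretrained(
    "ioclab/control_v1p_sd15_brightness", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Step 2 equivalent: an ordinary text-to-image prompt.
prompt = "a city at night, neon reflections on wet streets, wide shot"
condition = Image.open("characters_grayscale.png").convert("RGB")  # the light-and-shadow text

# Step 3 equivalent: tune Control Weight and Ending Control Step.
image = pipe(
    prompt,
    image=condition,
    controlnet_conditioning_scale=0.65,  # "Control Weight": how strongly the text shapes the image
    control_guidance_end=0.7,            # "Ending Control Step": stop the control at 70% of the steps
).images[0]
image.save("night_city_text.png")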

The expert also added a friendly reminder in the tutorial:

When writing prompts, try not to use ones like "close-up portrait," otherwise the words or patterns will end up covering the face, which looks ugly.

(tutorial link: https://mp.weixin.qq.com/s/rvpU4XhToldoec_bABeXJw)

With hands-on teaching available, and the "secret manual" for making AI QR codes on the same idea having long been public, netizens are already having fun:

△ Source: Weibo and Douyin netizens; the works were generated by AI

You tell me: with this effect and this hands-on ability, who could see it and not say "NB"? (doge)

If you make your own "NB" piece too, you're welcome to @qubit on Weibo to show us your finished work and interact with us.
