How to implement screen-space outlines (strokes) in Unity3D

2025-01-19 Update. From: SLTechnology News & Howtos


Shulou (Shulou.com) 06/01 report:

This article explains how to implement screen-space outlines (strokes) in Unity3D. It goes into considerable detail and should be a useful reference; interested readers are encouraged to read on.

Outline (Based on Image Space)

Because compression degrades the images, it is recommended to view the originals on my blog:

Outline based on Image Space (yuyujunjun.github.io)

Introduction

The cross-section effect in the image does not stand out against a light background. When an object is selected in the Unity editor, Unity draws an outline around it, and that effect is exactly what we want.

The effect after the object is selected in Unity

The effect with no outline added

There are many outlining methods, and most of the underlying principles are easy to understand. This article focuses on the Unity APIs involved.

Advantages and applications

It can outline every hollowed-out part of an object, inner boundaries as well as outer ones.

Occlusion can be handled as needed (it only takes one extra texture buffer).

Drawbacks

It cannot detect "boundaries" created by abrupt changes in the normal vector, so if this method is used alone, the result may not look the same as cartoon-style rendering.

Basic principles

From the result's point of view: to outline an object in the image (for example, the ball), we need its boundary position, boundary width, and boundary color.

From the process's point of view: to outline a shape made of pixels on an image, we first need the shape's position on the image; then we mark the pixels on its boundary. This step is very flexible, because in theory you can use all kinds of marking methods to produce all kinds of shapes, as long as they can be implemented in parallel on the GPU. Finally, we overlay the marked pixels onto the original image, and the outline is done.

Steps

Step 1:

This step records the object's position. The illustration uses a regular rendering just for clarity; in practice we use the depth map. The advantage of the depth map is that it implicitly marks which areas contain an object and which do not: the depth value of every empty area is simply 1.
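As a sketch, the replacement shader for this step could simply write each fragment's depth into the color buffer. The shader name and the choice to write depth into all three color channels are assumptions for illustration, not the article's exact code:

```hlsl
Shader "Hidden/DepthOnly"
{
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f { float4 pos : SV_POSITION; };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                // i.pos.z is the fragment's depth. Pixels nothing draws to keep
                // the camera's clear color; clearing to white makes empty areas
                // read as 1, matching the depth map described above.
                return fixed4(i.pos.z, i.pos.z, i.pos.z, 1);
            }
            ENDCG
        }
    }
}
```

The camera rendering this pass should have its clear color set to white so that areas with no object hold the value 1.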

Step 2:

In the second step, we dilate the boundary of the image obtained in step 1 and write a value into the boundary region. As you can see, this image is noticeably coarser than the previous one.

As for how to decide whether a pixel lies on the object's boundary: since the previous image already distinguishes object areas from non-object areas, we only need to sample around each pixel; if any of the samples falls inside the object area, the current pixel is on the boundary.

A small trick: a pixel can be treated as a boundary pixel as long as the values of the surrounding samples do not sum to 0.

The code is as follows:

fixed4 col1 = tex2D(_MainTex, i.screenPos.xy);
fixed4 col2 = tex2D(_MainTex, float2(i.screenPos.x + 4 / _ScreenParams.x, i.screenPos.y));
fixed4 col3 = tex2D(_MainTex, float2(i.screenPos.x - 4 / _ScreenParams.x, i.screenPos.y));
fixed4 col4 = tex2D(_MainTex, i.screenPos.xy);
fixed4 col5 = tex2D(_MainTex, float2(i.screenPos.x, i.screenPos.y + 4 / _ScreenParams.y));
fixed4 col6 = tex2D(_MainTex, float2(i.screenPos.x, i.screenPos.y - 4 / _ScreenParams.y));
// If the summed samples are non-zero, this pixel borders the object:
// write the outline color and carry the fragment depth in the alpha channel.
if ((col1.x + col1.y + col1.z + col2.x + col2.y + col2.z +
     col3.x + col3.y + col3.z + col4.x + col4.y + col4.z +
     col5.x + col5.y + col5.z + col6.x + col6.y + col6.z) > 0.01)
    return fixed4(_OutlineColor.rgb, i.vertex.z);

Interestingly, this step can be made arbitrarily complex; what is shown here is just a simple dilation.

Step 3:

In the third step, we overlay the boundary pixels onto the image, and the result of step 1 plays a different role. In step 2, the result of step 1 identified the object's position so the boundary could be drawn; in step 3, it is subtracted from the result of step 2 to tell us which pixels are not boundary but the object itself.
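This compositing step can be sketched as a fragment shader that draws the outline wherever a pixel is marked in the step-2 image but not covered by the object in the step-1 depth map. The texture names (_BorderTex, _ObjectTex) and the threshold values are assumptions for illustration:

```hlsl
// _MainTex:   the original scene image
// _BorderTex: step 2's dilated boundary image
// _ObjectTex: step 1's depth map (empty areas have depth 1)
fixed4 frag(v2f i) : SV_Target
{
    fixed4 scene  = tex2D(_MainTex,   i.uv);
    fixed4 border = tex2D(_BorderTex, i.uv);
    fixed4 obj    = tex2D(_ObjectTex, i.uv);

    // Object pixels have depth < 1 in the step-1 map.
    bool isObject = obj.r < 0.999;
    bool isBorder = (border.r + border.g + border.b) > 0.01;

    // Marked as boundary in step 2 but not part of the object itself:
    // this is where the outline goes.
    if (isBorder && !isObject)
        return fixed4(_OutlineColor.rgb, 1);
    return scene;
}
```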

Notes

Although the two images above (one marking the object's position, one marking its boundary) are enough to produce an image-space outline, if you need occlusion tests or extra auxiliary effects on the boundary, you can add more buffers; weigh the cost yourself.

Partial code (introduction to API)

Rendering the scene with a replacement shader

Function: the camera renders the scene as usual, but the objects in the scene are rendered with the shader you specify.

This feature is used, for example, to render a map of the scene's normals, or the depth map of the scene mentioned above.

This feature corresponds to two APIs:

Camera.RenderWithShader(Shader shader, string replacementTag)
Camera.SetReplacementShader(Shader shader, string replacementTag)

A word on replacementTag: if the tag is empty, every object the camera can render uses our custom shader. If it has a value, the camera renders only materials carrying that specific tag. For example:

Camera.SetReplacementShader(depthShader, "RenderType")

The camera will then render only objects whose shader carries the same "RenderType" tag value as depthShader.

In a Unity shader, you specify the shader's tags like this:

Tags { "TagName1" = "Value1" "TagName2" = "Value2" }

A replacement shader can contain multiple subshaders. In a normal shader, the subshaders are tried from top to bottom: if the first cannot run for hardware reasons, the next is tried, and so on until one can run.

Here the subshaders play a similar role: if a subshader carries the replacement tag, it is used to render the objects in the scene whose shader has the same tag value.
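Putting this together, a helper camera for the mask pass might be set up as follows. This is a minimal sketch; the component name and the use of a dedicated camera are assumptions, not the article's code:

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class OutlineMaskCamera : MonoBehaviour
{
    // Assign the depth-writing shader in the inspector.
    public Shader depthShader;

    void OnEnable()
    {
        // Only objects whose shader's "RenderType" tag matches a subshader
        // of depthShader are rendered; everything else is skipped.
        GetComponent<Camera>().SetReplacementShader(depthShader, "RenderType");
    }

    void OnDisable()
    {
        // Restore normal rendering for this camera.
        GetComponent<Camera>().ResetReplacementShader();
    }
}
```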

Copy the original image to the new image using a specific shader
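A minimal sketch of how such a copy is usually driven from a camera script follows; the component and material names are assumptions for illustration:

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class OutlineComposite : MonoBehaviour
{
    // Material whose shader samples the boundary cache and overlays the outline.
    public Material outlineMaterial;

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Copies src into dst, running every pixel through outlineMaterial's
        // shader; src is bound to that shader's _MainTex.
        Graphics.Blit(src, dst, outlineMaterial);
    }
}
```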

Graphics.Blit(RenderTexture src, RenderTexture dst, Material mat)

That is all of "How to implement screen-space outlines in Unity3D". Thank you for reading! I hope the article has been helpful; for more related knowledge, follow the industry information channel.
