This article introduces the rendering queues in Unity. I hope interested readers find it a useful reference and take something away from it. Let's take a look.
Several rendering queues in Unity
First, let's look at Unity's built-in rendering queues, sorted by rendering order: the smaller the queue number, the earlier the object renders; the larger the number, the later.
Background (1000): the queue for the earliest-rendered objects.
Geometry (2000): the queue for opaque objects. Most objects should use this queue; it is the default in Unity shaders.
AlphaTest (2450): the queue for objects with a transparency channel that need alpha testing. Rendering them after all the opaque geometry is more efficient than leaving them in Geometry.
Transparent (3000): the queue for semi-transparent objects. Objects that do not write depth, such as alpha-blended objects, generally render in this queue.
Overlay (4000): the queue for the last-rendered objects, usually overlay effects such as lens flares and full-screen patches.
Setting the render queue in Unity is also very easy. There is nothing to create manually and no script to write; we simply add a Tag to the shader. If we add none, the object uses the default Geometry queue. For example, if we want our object to render in the Transparent queue, we can write:
Tags {"Queue" = "Transparent"}
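For context, here is roughly where that tag sits in a full shader. This is a minimal sketch: the shader name, blend mode, and pass contents are illustrative, not taken from this article.

Shader "Custom/TransparentQueueExample" {
    SubShader {
        // Tell Unity to render this object in the Transparent queue (3000)
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Pass {
            ZWrite Off                          // transparent objects usually don't write depth
            Blend SrcAlpha OneMinusSrcAlpha     // standard alpha blending
            // ... CGPROGRAM vertex/fragment code as usual ...
        }
    }
}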
We can see a shader's render queue directly in the shader's Inspector panel:
In addition, shaders often carry a tag called RenderType, although it is less commonly used than Queue. For completeness, its built-in values are:
Opaque: most shaders (normal, self-illuminated, reflective, and terrain shaders).
Transparent: semi-transparent shaders (transparent, particle, and font shaders, and terrain additional-pass shaders).
TransparentCutout: masked-transparency shaders (Transparent Cutout, two-pass vegetation shaders).
Background: skybox shaders.
Overlay: shaders for GUITexture, lens flares, screen flashes, and the like.
TreeOpaque: bark in the terrain engine.
TreeTransparentCutout: leaves in the terrain engine.
TreeBillboard: billboarded trees in the terrain engine.
Grass: grass in the terrain engine.
GrassBillboard: billboarded grass in the terrain engine.
Rendering order of opaque objects in the same rendering queue
Open Unity and create three cubes, all using the default Bumped Diffuse shader (so they share the same render queue), and assign each a different material (objects with low vertex counts sharing the same material would be dynamically batched). Then use Unity 5's Frame Debugger to inspect the draw calls. (Unity 5 makes this much easier; with Unity 4 you would need an external tool such as NSight.)
We can see that Unity renders opaque objects front to back. This way, after the vertex stage, opaque objects go through the Z Test, which determines whether each fragment will actually be visible on screen. If an earlier-rendered object has already written depth and the depth test fails, the later-rendered object skips the fragment stage entirely. (Keep some distance between the three objects, though: in testing I found that when they are very close together the rendering order becomes unpredictable, since it is not clear what criterion Unity uses to decide which object is closer to the camera.)
Rendering order of translucent objects in the same rendering queue
Rendering transparent objects has always been a pain point in graphics. It is neither as fast nor as cheap as rendering opaque objects: transparent objects do not write depth, so there is no way to resolve how they interleave with one another, and translucent objects are therefore generally rendered back to front. Because they write no depth, depth testing cannot cull anything between transparent objects; every transparent object runs through the pixel stage, producing a great deal of overdraw. This is why particle effects are so performance-hungry.
Let's test the order in which Unity renders translucent objects, again with the three cubes above. Change the materials' shader to the most common particle shader, Particles/Additive, and use the Frame Debugger to check the rendering order:
The translucent objects render back to front. Transparency involves enough extra complexity that I won't cover it in this article; I plan to write a separate one.
Custom rendering queues
Unity lets us define custom render queues. For example, if we need certain objects to render only after objects of another kind have rendered, a custom queue does it, and it is extremely convenient: we only modify the queue Tag when writing the shader. Say we want our objects to render after all of the default opaque objects; we can write Tags { "Queue" = "Geometry+1" } to make every object using this shader render in that queue.
Take the three cubes again, but this time give them three different shaders with different render queues. From the experiment above we know that by default opaque objects render in the Geometry queue, so the three cubes would render in the order cube1, cube2, cube3. This time we want to reverse that order, so we give cube1 the largest render queue and cube3 the smallest. Here is one of the three shaders:
Shader "Custom/RenderQueue1" {
    SubShader {
        Tags { "RenderType" = "Opaque" "Queue" = "Geometry+1" }
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                return fixed4(1, 0, 0, 1);   // flat color; the original literal was garbled, any solid color works here
            }
            ENDCG
        }
    }
    //FallBack "Diffuse"
}
The other two shaders are similar; only the render queue and the output color differ.
Through the render queue we are free to control when objects using a given shader are rendered. For example, if an opaque object has an expensive fragment stage, we can push its render queue later, so that pixels it covers which are already claimed by the depth written from other opaque objects get culled.
PS: I seem to have hit a problem here. Normally, when we modify a shader, the change shows up immediately with no further steps. But after changing the render queue, I could sometimes see the new queue on the shader asset while neither the rendered result nor the Frame Debugger reflected any change, and restarting Unity did not help. Only re-assigning the shader to the material made it take effect. (I suspect a bug, since other unlucky people have been bitten by this too. My version is 5.3.2. I almost wondered whether I had been drinking the day before and my whole experiment was wrong.)
ZTest (depth testing) and ZWrite (depth writing)
In the previous example, although the rendering order was reversed, the occlusion between the objects stayed correct. That is the z-buffer's doing: no matter what order we render in, occlusion remains correct. We drive the z-buffer through ZTest and ZWrite.
First, let's look at ZTest, the depth test. The "test" compares, for each screen pixel the object covers (more precisely, each pixel in the frame buffer), the object's own depth value against the value currently in the depth buffer; if the test passes, the object's color is written to the color buffer at that pixel, otherwise it is not. ZTest offers several states: ZTest Less (pass if the depth is less than the current cache), ZTest Greater (pass if greater), ZTest LEqual (pass if less than or equal), ZTest GEqual (pass if greater than or equal), ZTest Equal (pass if equal), ZTest NotEqual (pass if not equal), and ZTest Always (always pass). Note that ZTest Off is equivalent to ZTest Always: turning the depth test off means it always passes.
Next, ZWrite. It is simpler, with only two states: ZWrite On (depth writing enabled) and ZWrite Off (depth writing disabled). With depth writing on, when an object is rendered, its depth at each covered pixel (again, in the frame buffer) is written to the depth buffer; with ZWrite Off, it is not. However, whether depth is actually written depends not only on the ZWrite state but, more importantly, on passing the depth test. If the ZTest fails, depth is not written. For example, with the default states ZWrite On and ZTest LEqual: if the depth test fails, something nearer has already claimed that pixel, so there is no point writing depth. Hence ZTest has two outcomes (pass/fail) and ZWrite has two states (on/off), giving four cases in total:
1. Depth test passed, depth write on: write depth buffer, write color buffer
2. Depth test passed, depth write off: do not write depth buffer, write color buffer
3. Depth test failed, depth write on: do not write depth buffer, do not write color buffer
4. Depth test failed, depth write off: do not write depth buffer, do not write color buffer
The defaults in Unity (what you get when the shader specifies nothing) are ZTest LEqual and ZWrite On: depth writing is enabled, and a depth less than or equal to the value in the buffer passes the test. The depth buffer initially holds the maximum depth value, so objects closer to the camera update the depth buffer and occlude the objects behind them, as in the figure below, where the cube in front covers the objects behind it.
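In other words, a pass that declares nothing behaves as if it contained the following (a sketch of the implicit defaults):

Pass {
    ZTest LEqual   // default: pass if incoming depth <= value in the depth buffer
    ZWrite On      // default: write depth for pixels that pass the test
    // ... CGPROGRAM vertex/fragment code ...
}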
Let's write a few simple examples to see how the ZTest, ZWrite, and Render Queue states control the rendered result.
One way to keep the green object from being hidden by the cube in front of it is to turn off depth writing on the blue cube (see the sketch below):
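Here is a minimal sketch of what the blue cube's shader might look like; the shader name and the flat blue color are my own placeholders, and the ZWrite Off line is the only point of interest:

Shader "Custom/BlueNoZWrite" {
    SubShader {
        Tags { "RenderType" = "Opaque" }
        Pass {
            ZWrite Off   // draw color, but leave the depth buffer untouched
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            float4 vert (float4 v : POSITION) : SV_POSITION {
                return mul(UNITY_MATRIX_MVP, v);
            }
            fixed4 frag () : SV_Target {
                return fixed4(0, 0, 1, 1);   // blue
            }
            ENDCG
        }
    }
}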
From this result we can reason as follows. Rendering front to back, the blue object renders first: its depth test passes and its color is written to the buffer, but with depth writing off, the depth buffer under the blue pixels keeps its default maximum value. The green cube rendered afterwards therefore still passes the depth test and writes both color and depth, so the blue cube no longer occludes it.
Another way is to force the green object to pass the depth test (sketched below):
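Again as a sketch, the green cube's pass is an ordinary one except for its ZTest state:

Pass {
    ZTest Always   // pass the depth test unconditionally
    // ... same vertex/fragment setup, with frag returning fixed4(0, 1, 0, 1)
}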
In this example the other cubes' shaders render normally, while the green one sets ZTest Always, meaning it passes the depth test no matter what, and the green cube's color is written to the buffer. If nothing else overwrites it, the final output is green.
So what happens if the red cube also uses ZTest Always?
Once the red cube uses ZTest Always too, red covers the overlapping green region. If we then change the render queues so that green renders after red, the result flips again:
Here the green cube's render queue was bumped by +1, so it renders after the default Geometry queue, and the overlap turns green once more. Evidently, when the ZTest passes, whatever is written to the color buffer last wins: the final output is the color of the last-rendered object.
Now let's see what Greater does. This time everything else uses the default render state, and the green cube's shader sets ZTest Greater:
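As a sketch, the only change in the green cube's pass is the comparison state:

Pass {
    ZTest Greater   // pass only where something nearer has already written a smaller depth
    // ... frag returns green as before
}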
This result is more interesting. Where the green cube was occluded by the blue cube, green now covers blue, which is what we wanted; but where did the rest of the green cube go? Briefly, rendering runs front to back, so blue renders first: the depth buffer starts at its maximum value, blue's depth satisfies the default LEqual test, and blue writes its depth. Then green renders. In the region overlapping blue, the depth buffer holds blue's values; green's depth is greater than blue's, the Greater test passes, and the overlap is colored green (and green's depth is written). Where that region also overlaps the red cube, red renders last: its depth is greater than the depth green already wrote, its LEqual test fails, and the overlap is not updated to red, so it stays green. And why did the green cube vanish wherever it overlapped nothing? When green renders, only the pixels covered by blue hold a written depth; everywhere else the depth buffer still holds the maximum value, and a Greater comparison against the maximum is bound to fail, so no color is ever written there.
ZTest Greater actually enables a fun effect that appears in many games: when the player is hidden behind other scene objects, the occluded part shows through with an X-ray look. The character is rendered with an extra Pass: the default Pass renders normally, while the extra Pass depth-tests with Greater, so whenever the player is blocked by something, the hidden portion still gets drawn, for instance with a silhouette or outline effect, while the visible portions keep using the original Pass.
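Here is a rough sketch of that idea. It assumes the occluding scenery draws first (hence the bumped queue); the shader name, tint color, queue offset, and the flat "normal" pass are all placeholders, since a real character shader would do full shading:

Shader "Custom/OccludedXRay" {
    SubShader {
        // Render after the default opaque scenery so its depth is already in the buffer
        Tags { "RenderType" = "Opaque" "Queue" = "Geometry+10" }

        // Pass 1: draws only where the character is hidden behind nearer geometry
        Pass {
            ZTest Greater
            ZWrite Off
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            float4 vert (float4 v : POSITION) : SV_POSITION {
                return mul(UNITY_MATRIX_MVP, v);
            }
            fixed4 frag () : SV_Target {
                return fixed4(0.2, 0.8, 1.0, 1.0);   // the "X-ray" tint
            }
            ENDCG
        }

        // Pass 2: the normal look, ordinary depth test (flat white here for brevity)
        Pass {
            ZTest LEqual
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            float4 vert (float4 v : POSITION) : SV_POSITION {
                return mul(UNITY_MATRIX_MVP, v);
            }
            fixed4 frag () : SV_Target {
                return fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}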
Early-Z technology
In the traditional pipeline, ZTest happens in the blending stage, near the end. By the time depth testing runs, every object's pixel shader has already executed, so the test brings no performance gain; it only produces correct occlusion, while a great deal of computation is wasted on pixels where objects overlap. Modern GPUs therefore use Early-Z: a depth test between the vertex and fragment stages (after rasterization, before the fragment shader). If that early test fails, the fragment stage can be skipped entirely, which improves performance dramatically. The final ZTest still runs afterwards to guarantee correct occlusion. The early test mainly performs Z-Cull, culling work; the late one is the Z-Check, verifying the result, as shown in the figure below:
Early-Z is commonly exploited through a Z-pre-pass. Put simply, all opaque objects (transparency gains nothing, since it cannot write depth) are first rendered with a very simple shader that writes only the depth buffer, not the color buffer; a second pass then turns depth writing off, keeps depth testing on, and renders with the normal shader. We can borrow this idea ourselves. When rendering a transparent object with depth writing off, parts of the object that should be hidden behind its own front faces can show through, when all we want is for the object as a whole to look uniformly semi-transparent. The fix is two passes: the first uses ColorMask to block color writes and writes depth only; the second renders the semi-transparency normally, with depth writing off.
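A sketch of that two-pass trick for a translucent object; the shader name, blend mode, and output color are illustrative:

Shader "Custom/TransparentDepthPrepass" {
    SubShader {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

        // Pass 1: depth only — ColorMask 0 blocks all color writes
        Pass {
            ColorMask 0
            ZWrite On
        }

        // Pass 2: normal semi-transparent rendering, depth writing off
        Pass {
            ZWrite Off
            Blend SrcAlpha OneMinusSrcAlpha
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            float4 vert (float4 v : POSITION) : SV_POSITION {
                return mul(UNITY_MATRIX_MVP, v);
            }
            fixed4 frag () : SV_Target {
                return fixed4(1, 1, 1, 0.5);   // half-transparent white
            }
            ENDCG
        }
    }
}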
For more on Early-Z, see ATI's paper Applications of Explicit Early-Z Culling and its accompanying slides, as well as an article from Intel.
Summary of Unity rendering order
If we drew the objects at the back first and the ones in front afterwards, we would incur heavy overdraw. With Early-Z we can draw nearer objects first and farther objects later (opaque objects only): the nearer objects claim their pixels first, later objects fail the depth test there, and the repeated fragment work is skipped, which is exactly the optimization. By default, Unity sorts opaque objects nearest first. The official Unity documentation shows the pipeline:
In the flow given in the documentation, this depth test sits between the vertex and fragment stages, i.e. the Early-Z optimization described above.
To summarize Unity's rendering order briefly: opaque objects render front to back first, then transparent objects render back to front.
Why Alpha Test (discard) is so expensive on mobile platforms
From the moment I first touched rendering, I kept hearing that Alpha Test is expensive on mobile platforms. At the time I wondered: it just discards the fragment outright, so why would that cost more? The question nagged me for a long time; let's get to the bottom of it, again using the Early-Z discussion above. Normally, if we render a quad, its per-pixel depth after rasterization is fully determined at the Early-Z (Z-Cull) stage, regardless of the depth-write and depth-test settings. But once Alpha Test (discard) is enabled, the discard happens in the fragment stage: whether a rasterized pixel is visible at all is only known after the fragment shader runs, and the pixel's final color must instead be settled by the late Z-Check. Think about it: if Alpha Test were combined with Early-Z as-is, regions that should have been clipped away would still write into the depth buffer, other objects would end up occluded by a region containing nothing at all, and the final image would simply be wrong. So when Alpha Test is on, Early-Z is abandoned and the Z Test is postponed until after the fragment stage, meaning the object's vertex and fragment shaders always run in full, producing overdraw. One workaround is to replace Alpha Test with Alpha Blend. It is also costly, but although Alpha Blend does not write depth, its depth test can still run early, because visibility is not decided in the fragment stage: the fragment is always "visible", merely more or less transparent. This is only a stopgap, though; Alpha Blend cannot fully substitute for Alpha Test.
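For reference, here is a sketch of the fragment shader of a typical alpha-tested material. The clip() call is the discard the text describes; _MainTex and _Cutoff are the conventional property names, assumed here:

struct v2f {
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
};
sampler2D _MainTex;
fixed _Cutoff;

fixed4 frag (v2f i) : SV_Target {
    fixed4 col = tex2D(_MainTex, i.uv);
    // Discard this fragment when alpha falls below the cutoff.
    // Visibility is only decided here, after the fragment shader runs,
    // which is what prevents the GPU from relying on Early-Z.
    clip(col.a - _Cutoff);
    return col;
}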