How to realize Gaussian blur in Unity Shader post-processing

2025-02-22 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

In this article I'd like to share how to implement Gaussian blur as a Unity Shader post-processing effect. I hope you get something out of it; let's work through it together.

1. Introduction

Before getting into the principle and implementation of blur, it helps to pin down what sharpness and blurriness mean. To borrow a description from another post: in a sharp image there are obvious transitions between neighbouring pixels, while if the differences between neighbouring pixels are small, the image looks blurred. Given this definition, we can produce a blur in code. In the earlier article "Unity Shader post-processing: mean blur" we implemented a basic mean blur: each pixel is averaged with its surrounding pixels, and the effect is strengthened by iterating. However, because the number of samples is small and every pixel in the neighbourhood carries the same weight, the result is not very good. Multiple iterations do strengthen the blur, but they also greatly increase the performance cost. Iteration is fine for learning, but efficiency becomes an important factor in real use. So this time, let's look at a more refined blur effect: Gaussian blur.

Gaussian blur (also called Gaussian smoothing) is essentially a weighted-average pass over the image. Unlike mean blur, which averages the surrounding pixels uniformly, Gaussian blur performs a weighted average: the colour of each pixel is a weighted average of its own colour and the colours of its neighbours, where pixels closer to the centre get higher weights and pixels farther away get lower ones. These weights follow a familiar mathematical distribution, the normal distribution, also called the Gaussian distribution, which is where the name Gaussian blur comes from.
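As a quick illustrative sketch (Python, not part of the original article; `gaussian_weights` is a name of my own), here is how such weights fall off with distance from the centre pixel while still summing to 1:

```python
import math

def gaussian_weights(radius, sigma):
    """Normalized Gaussian weights for pixel offsets -radius..radius."""
    w = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]  # normalize so the weights sum to 1

# The centre pixel gets the largest weight; weight shrinks with distance,
# and the kernel is symmetric about the centre.
w = gaussian_weights(radius=3, sigma=1.0)
assert abs(sum(w) - 1.0) < 1e-9
assert w[3] == max(w)   # centre weight is the largest
assert w[0] == w[6]     # symmetric about the centre
```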

2. Concepts

2.1 Normal distribution

Let's first review the normal distribution. The last time I heard the term was in a probability and mathematical statistics class in my sophomore year. The normal distribution, also known as the Gaussian distribution, has many very elegant properties, which give it an important influence in many fields. It is also a rather natural distribution: under fairly general conditions, the sum of a large number of statistically independent random variables tends toward a normal distribution — the famous central limit theorem. All of which boils down to one sentence: the Gaussian distribution behaves beautifully, so we use it as the weight reference for our weighted average.

The density of the Gaussian distribution is defined as:

f(x) = 1 / (σ√(2π)) · exp(−(x − μ)² / (2σ²))

Here μ is the mean (expected value) of the random variable — the centre position on the x-axis in the figure below — and the random variable is distributed symmetrically on both sides of μ. The second parameter, σ², is the variance, so the normal distribution is written N(μ, σ²). The closer a value is to μ, the higher its probability density; the farther from μ, the lower. The smaller σ is, the more concentrated the distribution is around μ; the larger σ is, the more spread out it is. When μ = 0 and σ² = 1 it is called the standard normal distribution, written N(0, 1). The curve of the normal distribution is shown in the following figure:
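For reference, the density above can be sketched in a few lines of Python (an illustration, with `normal_pdf` as a hypothetical helper name, not from the article):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2): exp(-(x-mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

# Standard normal N(0, 1): the peak sits at mu, the curve is symmetric,
# and a larger sigma flattens and spreads the curve.
assert abs(normal_pdf(0.0) - 1.0 / math.sqrt(2.0 * math.pi)) < 1e-12
assert normal_pdf(1.0) == normal_pdf(-1.0)
assert normal_pdf(0.0, sigma=2.0) < normal_pdf(0.0, sigma=1.0)
```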

So what does this distribution have to do with our Gaussian blur? Simply put, we want our sampling weights to follow a Gaussian distribution. When we process a pixel, the pixel itself takes the weight at μ, and the sampling range around it can be described by σ. If the variance were 0, the weight at μ would dominate completely and the weighted average would just return the original pixel value — no blur at all. If the variance is very large, the sampling spreads over a wide range and the result is more strongly blurred. With a variance of 1, sampling even a few pixels outward is already enough to produce a visible blur.

Another point about Gaussian blur is the number of samples — the size of the so-called Gaussian kernel. In other words, how many sampling points do we take? μ corresponds to the pixel itself, and μ ± 1σ, μ ± 2σ, and so on represent sampling one step, two steps, ... outward. Because the weights shrink the farther out we go, points beyond a certain distance can safely be ignored. Here we use seven sampling points in total — μ, μ ± 1σ, μ ± 2σ, and μ ± 3σ — as our Gaussian kernel.
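A small Python sketch (illustrative; `kernel_7tap` is my own name) of how such a 7-tap kernel can be derived by sampling the Gaussian at μ, μ ± 1σ, μ ± 2σ, μ ± 3σ and renormalizing:

```python
import math

def kernel_7tap():
    """Weights at offsets mu, mu±1σ, mu±2σ, mu±3σ, renormalized to sum to 1."""
    offsets = [0, 1, -1, 2, -2, 3, -3]
    # Sampling at mu + k*sigma gives exp(-(k*sigma)^2 / (2*sigma^2)) = exp(-k^2/2),
    # so sigma cancels out after normalization.
    w = [math.exp(-(o * o) / 2.0) for o in offsets]
    s = sum(w)
    return [v / s for v in w]

k = kernel_7tap()
assert abs(sum(k) - 1.0) < 1e-9
assert k[0] > k[1] > k[3] > k[5]  # weight drops as we move outward
```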

As for the actual weight values of the Gaussian kernel, I have read several articles and books, and every one of them uses a different set — there is no single standard. As a graphics guru once said: "In graphics, if it looks right, it is right." After all, the result is meant to be looked at, and the effect matters more than anything else. In the code below I set up a set of Gaussian-ish weights; they may look a bit hand-rolled, but they satisfy the requirement that they sum to 1.
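We can quickly verify (a tiny Python check, not in the original) that the hand-picked weights used in the shader below really do sum to 1, which is what keeps the overall brightness unchanged:

```python
# The weights used in the fragment shader below: 0.40 for the centre pixel,
# then 0.15, 0.10, 0.05 for the first, second and third neighbour on each side.
weights = [0.40, 0.15, 0.15, 0.10, 0.10, 0.05, 0.05]
total = sum(weights)
assert abs(total - 1.0) < 1e-9   # must sum to 1 so the image does not darken or brighten
assert weights[0] == max(weights)  # the centre pixel carries the largest weight
```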

2.2 Convolution

Convolution is a somewhat magical concept that keeps coming up in image processing. I never really understood it at school, so this time I was determined to find out how convolution is actually explained. Baidu Baike says that convolution is a mathematical operator that generates a third function from two functions f and g. The informal explanations from folks on Zhihu, however, are far easier to understand. Here is an excerpt of the one I found most incisive:

Suppose your boss tells you to work, but you sneak downstairs to play billiards. The boss finds out, gets furious, and slaps you (note: this is the input signal, an impulse). A bump gradually swells up on your face. Your face is a system, and the swelling bump is your face's response to the slap — and there we have a meaningful connection between a signal and a system. A few assumptions are needed to keep the argument rigorous: assume your face is a linear time-invariant system, i.e. whenever the boss slaps the same spot on your face with the same force (this seems to require your face to be smooth enough — if you have lots of pimples, or your whole face is non-differentiable everywhere, that's too hard and I have nothing to say), a bump of the same height always swells up over the same time interval. Take the size of the bump as the system's output. All right then, on to the core content — convolution!

If you go downstairs to play billiards every day, the boss slaps you every day, but the swelling from each slap goes down within 5 minutes, so over time you even get used to this life. Then one day the boss can't stand it any more and starts slapping you continuously at intervals of 0.5 seconds. Now there's a problem: before the bump from the first slap has gone down, the second slap arrives, and the bump on your face may swell twice as high. The boss keeps slapping, the impulses keep acting on your face, and the effects keep superimposing — they can be summed up, giving the height of the bump on your face as a function of time (pay attention to this). If the boss is even more ruthless and the frequency is so high that you can no longer distinguish the intervals, the sum becomes an integral. Think of it this way: at any moment in the process, what determines the size of the bump on your face? Every slap so far contributes! But each contribution is different: the earlier the slap, the smaller its remaining contribution. So the output at a given moment is the superposition of many earlier inputs, each multiplied by its own decay coefficient. Collect the outputs at different moments into a function, and that is convolution: the convolved function gives the size of the bump on your face over time. A bump that would normally subside in a few minutes won't go down for hours under continuous slapping — isn't that exactly a smoothing process? In the convolution formula, f(a) is the slap at time a, and g(x − a) is how strongly the slap from time a is still acting at time x. Multiply them, sum it all up, and you're done.

To put it simply, convolution is a mathematical operator. In image processing, given an image f(x) and a template (kernel) g(x), we slide the template g(x) over the image f. Each time the template lands on a pixel position, the overlapping elements of f and g are multiplied and summed to give the pixel value of the new image at that position. When every pixel has been processed, we have the convolved image. The Gaussian kernel we defined above is exactly such a convolution kernel.
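The template-sliding description above can be sketched as a naive 2D convolution in Python (purely illustrative; real implementations are far more optimized, and `convolve2d` is my own name):

```python
def convolve2d(image, kernel):
    """Slide kernel over image (zero padding), multiply overlapping elements and sum."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    ry, rx = kh // 2, kw // 2
    out = [[0.0] * iw for _ in range(ih)]
    for y in range(ih):
        for x in range(iw):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    sy, sx = y + ky - ry, x + kx - rx
                    if 0 <= sy < ih and 0 <= sx < iw:  # skip samples outside the image
                        acc += image[sy][sx] * kernel[ky][kx]
            out[y][x] = acc
    return out

# A 3x3 box kernel spreads a single bright pixel over its neighbourhood:
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
box = [[1 / 9] * 3 for _ in range(3)]
blurred = convolve2d(img, box)
assert abs(blurred[1][1] - 1.0) < 1e-9  # every output pixel becomes the local average
```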

We described the Gaussian kernel above. Normally we would take the current pixel as the μ point, take some pixels around it as sampling points, multiply each by the weight corresponding to its distance from μ, and use the sum as the processed value of the pixel. But this has a drawback: every pixel requires many sampling calculations — the pixel itself plus several rings of neighbours — and the whole thing is computed pixel by pixel. What's more frightening is that this is a full-screen post-processing effect! If the screen resolution is M × N and our Gaussian kernel is m × n, one post-processing pass has time complexity O(M*N*m*n).

Is there any good way to optimize it?

Gaussian blur is a convolution, and convolution is a linear operation — in other words, the system is linear. A linear system is one whose output depends linearly on its input: the whole system can be decomposed into many independent contributions, and the total output is their sum. For example, if input x1(t) produces output y1(t), written x1(t) -> y1(t), and another input x2(t) produces y2(t), i.e. x2(t) -> y2(t), then the system is linear if and only if x1(t) + x2(t) -> y1(t) + y2(t).
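The linearity property can be checked numerically with a small 1D convolution sketch in Python (names like `convolve1d` are my own, not from the article; for a symmetric kernel, correlation and convolution coincide, so no kernel flip is needed):

```python
def convolve1d(signal, kernel):
    """Convolve a 1D signal with a (symmetric) kernel, zero-padded at the borders."""
    r = len(kernel) // 2
    n = len(signal)
    return [sum(signal[i + k - r] * kernel[k]
                for k in range(len(kernel))
                if 0 <= i + k - r < n)
            for i in range(n)]

kern = [0.25, 0.5, 0.25]
x1 = [1, 0, 0, 0]
x2 = [0, 0, 1, 0]
y1 = convolve1d(x1, kern)
y2 = convolve1d(x2, kern)
y_sum = convolve1d([a + b for a, b in zip(x1, x2)], kern)
# x1 + x2 -> y1 + y2: convolving the sum equals summing the convolutions.
assert all(abs(s - (a + b)) < 1e-12 for s, a, b in zip(y_sum, y1, y2))
```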

I have drawn a simple diagram of averages, hoping to explain this problem clearly:

The figure above shows that computing the overall average directly is the same as computing the horizontal averages first and then averaging them vertically — the two are equivalent. (Note that this figure is only an analogy; it does not depict the principle of Gaussian blur itself, only that a linear operation can be split into independent steps. If any expert has a better way to prove this, please do point it out.)
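A tiny numeric check of the figure's claim (Python, illustrative only):

```python
# Averaging all elements at once, vs. averaging each row first and then
# averaging the row-averages, gives the same result (for equal-length rows).
grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
full_avg = sum(sum(row) for row in grid) / 9
row_avgs = [sum(row) / 3 for row in grid]
two_step = sum(row_avgs) / 3
assert abs(full_avg - two_step) < 1e-12
assert full_avg == 5.0
```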

In our Gaussian blur, sampling the whole image with the full 2D kernel takes M*N*m*n sampling operations. If we blur horizontally first and then vertically, the horizontal pass takes M*N*m samples and the vertical pass takes M*N*n, for a total time complexity of O(M*N*(m+n)). M and N are typically the screen resolution, e.g. 1024 × 768, so this greatly reduces the work! It does require a little extra memory as a cache for the intermediate result, but that cache is well worth the performance gain.
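A Python sketch (illustrative; `gauss1d` is my own helper) showing both why the split works — a 2D Gaussian kernel is the outer product of two 1D kernels — and how much sampling it saves:

```python
import math

def gauss1d(radius, sigma):
    """Normalized 1D Gaussian kernel for offsets -radius..radius."""
    w = [math.exp(-(x * x) / (2.0 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

# The 2D Gaussian kernel is the outer product of two 1D kernels, which is
# exactly why the blur can be split into a horizontal and a vertical pass.
k1 = gauss1d(3, 1.0)
k2d = [[a * b for b in k1] for a in k1]
assert abs(k2d[3][3] - k1[3] * k1[3]) < 1e-12

# Sample counts for an M x N image with an m x n kernel:
M, N, m, n = 1024, 768, 7, 7
full = M * N * m * n               # one 2D pass: O(M*N*m*n)
separated = M * N * m + M * N * n  # horizontal + vertical pass: O(M*N*(m+n))
assert separated < full            # the separated version needs far fewer samples
```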

3. Implementing Gaussian blur

After dragging out the theory for so long, it's finally time to write some code. Without further ado, here is the annotated code.

Shader section:

```shaderlab
Shader "Custom/GaussianBlur"
{
    Properties
    {
        _MainTex ("Base (RGB)", 2D) = "white" {}
    }

    // CGINCLUDE lets us define the structs and functions used by the Pass
    // below in one place; the Pass then only needs to set its render state
    // and reference them, which keeps the shader easier to read.
    CGINCLUDE
    #include "UnityCG.cginc"

    // Blur struct, passed from the vert function to the frag function
    struct v2f_blur
    {
        float4 pos  : SV_POSITION; // vertex position
        float2 uv   : TEXCOORD0;   // texture coordinate of the pixel itself
        float4 uv01 : TEXCOORD1;   // one float4 stores two texture coordinates
        float4 uv23 : TEXCOORD2;   // one float4 stores two texture coordinates
        float4 uv45 : TEXCOORD3;   // one float4 stores two texture coordinates
    };

    sampler2D _MainTex;
    // _MainTex_TexelSize holds the texel size of the texture:
    // x = 1/width, y = 1/height, z = width, w = height
    float4 _MainTex_TexelSize;
    // Offset set from script; the key parameter that selects horizontal
    // or vertical blur
    float4 _offsets;

    // vertex shader
    v2f_blur vert_blur(appdata_img v)
    {
        v2f_blur o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        // uv coordinates
        o.uv = v.texcoord.xy;
        // _offsets is either (0, 1, 0, 0) or (1, 0, 0, 0), i.e. we sample
        // the pixel's neighbours either vertically or horizontally
        _offsets *= _MainTex_TexelSize.xyxy;
        // Each float4 stores two uv coordinates.
        // _offsets.xyxy * float4(1, 1, -1, -1) is either (0, 1, 0, -1) --
        // the coordinates above and below the pixel -- or (1, 0, -1, 0) --
        // the coordinates to its left and right; the *2.0 and *3.0 lines
        // step one texel further out each time.
        o.uv01 = v.texcoord.xyxy + _offsets.xyxy * float4(1, 1, -1, -1);
        o.uv23 = v.texcoord.xyxy + _offsets.xyxy * float4(1, 1, -1, -1) * 2.0;
        o.uv45 = v.texcoord.xyxy + _offsets.xyxy * float4(1, 1, -1, -1) * 3.0;
        return o;
    }

    // fragment shader: weighted average of the pixel itself and its
    // neighbours (left/right or up/down, depending on the uv coordinates
    // passed in by the vertex shader)
    fixed4 frag_blur(v2f_blur i) : SV_Target
    {
        fixed4 color = fixed4(0, 0, 0, 0);
        color += 0.40 * tex2D(_MainTex, i.uv);
        color += 0.15 * tex2D(_MainTex, i.uv01.xy);
        color += 0.15 * tex2D(_MainTex, i.uv01.zw);
        color += 0.10 * tex2D(_MainTex, i.uv23.xy);
        color += 0.10 * tex2D(_MainTex, i.uv23.zw);
        color += 0.05 * tex2D(_MainTex, i.uv45.xy);
        color += 0.05 * tex2D(_MainTex, i.uv45.zw);
        return color;
    }
    ENDCG

    SubShader
    {
        Pass
        {
            // the usual render states for a post-processing effect
            ZTest Always
            Cull Off
            ZWrite Off
            Fog { Mode Off }

            CGPROGRAM
            #pragma vertex vert_blur
            #pragma fragment frag_blur
            ENDCG
        }
    }
    // Post-processing effects generally have no Fallback: if the shader is
    // unsupported, simply don't show the effect.
}
```

C# section:

```csharp
using UnityEngine;
using System.Collections;

// also runs in edit mode
[ExecuteInEditMode]
// inherits from PostEffectBase
public class GaussianBlur : PostEffectBase
{
    // blur radius
    public float BlurRadius = 1.0f;
    // resolution reduction (right-shift applied to width/height)
    public int downSample = 2;
    // number of iterations
    public int iteration = 1;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (_Material)
        {
            // request temporary RenderTextures whose resolution is reduced
            // according to downSample
            RenderTexture rt1 = RenderTexture.GetTemporary(
                source.width >> downSample, source.height >> downSample, 0, source.format);
            RenderTexture rt2 = RenderTexture.GetTemporary(
                source.width >> downSample, source.height >> downSample, 0, source.format);

            // copy the source image into the reduced-resolution RT
            Graphics.Blit(source, rt1);

            // iterate the Gaussian blur
            for (int i = 0; i < iteration; i++)
            {
                // first Gaussian blur pass: set _offsets for a vertical blur
                _Material.SetVector("_offsets", new Vector4(0, BlurRadius, 0, 0));
                Graphics.Blit(rt1, rt2, _Material);
                // second Gaussian blur pass: set _offsets for a horizontal blur
                _Material.SetVector("_offsets", new Vector4(BlurRadius, 0, 0, 0));
                Graphics.Blit(rt2, rt1, _Material);
            }

            // output the result
            Graphics.Blit(rt1, destination);

            // release the two temporary RenderTextures
            RenderTexture.ReleaseTemporary(rt1);
            RenderTexture.ReleaseTemporary(rt2);
        }
    }
}
```

Note that GaussianBlur here inherits from the PostEffectBase class, which was implemented in full in the earlier article "Unity Shader post-processing: simple brightness, contrast and saturation adjustment", so its code is not repeated here.

In fact, GaussianBlur is almost the same as the SimpleBlurEffect from the previous article. The change is that each iteration now blits twice — once blurring vertically and once horizontally — and the only difference between the two passes is the value of _offsets. You could also use two separate passes in the shader, one per direction, but that duplicates code, so it is done in this more "elegant" way instead.

4. Results

Attach the GaussianBlur script to the MainCamera, assign GaussianBlur.shader to its shader slot, and we can see the blurred result.

First, the original image. If no post-processing is needed, it is best to disable the effect entirely, since operations like Graphics.Blit are themselves quite expensive. For the demonstration, however, we simply set the blur radius to 0, keep the full resolution, and use one iteration, which displays the original image unchanged:

With the blur radius set to 1, the resolution reduced to 1/2, and one iteration, we get a slight blur:

With blur radius 1 and the resolution reduced to 1/4, one iteration gives a stronger blur:

With blur radius 1 and the resolution reduced to 1/4, two iterations produce the frosted-glass effect shown below (incidentally, the newly released mobile game "World" uses a similar effect as the screen background when you open an interface):

From the pictures above we can see that lowering the resolution greatly strengthens the Gaussian blur. In fact, lowering the resolution is itself already a kind of blur — which looks particularly bad when we actually want a sharp image, as in the figure below:

But when we don't want a sharp image, lowering the resolution makes the blur look better. More importantly, after reducing the resolution the number of pixel shader samples drops sharply, which greatly improves efficiency.
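A quick back-of-the-envelope check (Python, illustrative) of how much work down-sampling saves, using the same right-shift that the C# script above applies to the RenderTexture size:

```python
# Rendering the blur at a reduced resolution cuts the fragment-shader work
# quadratically: each halving of width and height leaves a quarter of the pixels.
width, height, down_sample = 1024, 768, 2
low_w, low_h = width >> down_sample, height >> down_sample  # same shift as the script
assert (low_w, low_h) == (256, 192)
assert (width * height) // (low_w * low_h) == 16  # 16x fewer pixels to shade
```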

The Gaussian blur result is smoother, without the "nearsighted" look or visible pixel blocks of a simple mean blur. Here is a comparison between Gaussian blur and the simple mean blur from the previous article:

After reading this article, I hope you have a better understanding of how to implement Gaussian blur as a Unity Shader post-processing effect. Thank you for reading!




© 2024 shulou.com SLNews company. All rights reserved.
