Texture Compression and Semi-transparent Display in Spine

2025-01-19 | SLTechnology News&Howtos


Texture compression

If you export with Spine's default settings, the texture looks like this:

But this output only works in an uncompressed RGBA format. If you change the image format to ETC2 or ASTC, it looks like this:

These artifacts are completely unacceptable.

ETC2 compression is by no means this bad on its own. The cause is that Spine's default output has the "premultiply alpha (premultiplied alpha)" option checked: the RGB channels of the image are multiplied by the alpha channel before the file is saved, and a special shader then displays the premultiplied data correctly. The ETC2 compressor does not account for premultiplied data, so compression quality collapses.

The reason why Spine uses premultiplication (premultiplied alpha) is to solve the sampling problem of transparent images.

As shown in the picture, the pixel on the left is pure transparent (0, 0, 0, 0) and the pixel on the right is opaque white (1, 1, 1, 1). If you sample at the red dot between them, the interpolated value is (0.5, 0.5, 0.5, 0.5).

The correct value would be (1, 1, 1, 0.5): only the transparency should change at the edge, not the brightness of the image itself.

Premultiplication solves this perfectly. The two pixels become (0×0, 0×0, 0×0, 0) and (1×1, 1×1, 1×1, 1), so the sampled color is still (0.5, 0.5, 0.5, 0.5); but on restore the RGB channels are divided by the alpha, giving (0.5/0.5, 0.5/0.5, 0.5/0.5, 0.5) = (1, 1, 1, 0.5), the correct value.
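The arithmetic above can be checked with a small Python sketch (illustrative only; `lerp` stands in for the GPU's bilinear filter):

```python
def lerp(a, b, t):
    """Linear interpolation between two RGBA tuples, like the texture filter."""
    return tuple(x * (1 - t) + y * t for x, y in zip(a, b))

transparent = (0.0, 0.0, 0.0, 0.0)  # pure transparent pixel
white = (1.0, 1.0, 1.0, 1.0)        # opaque white pixel (same either way when premultiplied)

# Sampling halfway between them:
sample = lerp(transparent, white, 0.5)
print(sample)  # (0.5, 0.5, 0.5, 0.5)

# Read as straight alpha, this is 50% grey at 50% opacity: the edge wrongly
# darkens. Read as premultiplied alpha, dividing RGB back by A recovers the
# correct color:
a = sample[3]
unpremultiplied = tuple(c / a for c in sample[:3]) + (a,)
print(unpremultiplied)  # (1.0, 1.0, 1.0, 0.5) -> white at 50% opacity
```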

Besides premultiplication, another approach is to fill the fully transparent areas with color. The image still looks transparent because the alpha there is 0, as shown in the figure:

This operation is called bleeding. Viewing the image without its alpha channel, you can see that bleeding copies the edge pixels into the adjacent transparent area.

Although some special sampling angles still produce errors, most cases are fine, and this method lets the texture be compressed normally.
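A toy sketch of the idea, assuming a simple 1-D nearest-neighbor scan (real bleed tools search in 2-D and are more sophisticated):

```python
def bleed(pixels):
    """Copy the color of the nearest non-transparent pixel into fully
    transparent pixels, leaving their alpha at 0."""
    out = list(pixels)
    for i, (r, g, b, a) in enumerate(pixels):
        if a == 0.0:
            # scan outward for the nearest neighbor with alpha > 0
            for d in range(1, len(pixels)):
                for j in (i - d, i + d):
                    if 0 <= j < len(pixels) and pixels[j][3] > 0.0:
                        nr, ng, nb, _ = pixels[j]
                        out[i] = (nr, ng, nb, 0.0)  # keep alpha 0
                        break
                else:
                    continue
                break
    return out

row = [(0, 0, 0, 0.0), (0, 0, 0, 0.0), (1, 1, 1, 1.0)]
print(bleed(row))
# the transparent pixels now carry the edge color (1, 1, 1) with alpha 0
```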

Bleeding can be enabled in Spine's export dialog, or by checking Alpha Is Transparency on the texture in Unity.

Therefore, we should uncheck Spine's premultiplied alpha option and check the bleed option to generate normal images. (SpineEditorUtilities.cs contains an automatic import script that must be modified as well, or it will reset the texture import settings.)

In Shader, just draw it in the most common way.

Artifacts still exist, but only at the level of an ordinary opaque compressed image. (The exact quality depends on the compression format used; ASTC is distinctly better than ETC2, for example.)

However, Spine cannot simply draw its textures with the ordinary Alpha Blend mode.

Spine's shaders are all built on Blend One OneMinusSrcAlpha, the premultiplied blend mode. It requires the color channels of the fragment output to already be multiplied by the alpha channel, so if the input texture is not premultiplied, we must multiply by alpha after sampling, before output:

col = tex2D(_MainTex, xxxxx);
col.rgb *= col.a;

It's easy, but why not just change the blend mode to the normal Blend SrcAlpha OneMinusSrcAlpha? That is what the next topic is about:

How can one pass draw both Alpha Blend and Additive objects?

Alpha Blend corresponds to Blend SrcAlpha OneMinusSrcAlpha.

This is equivalent to lerp(y.rgb, x.rgb, x.a), that is, result = (x.r, x.g, x.b) * x.a + (y.r, y.g, y.b) * (1 - x.a).

Additive is result = (x.r, x.g, x.b) * x.a + (y.r, y.g, y.b).

Written as result = (x.r, x.g, x.b) * x.a + (y.r, y.g, y.b) * (1 - a'), it has the same shape as Alpha Blend.

But here a' (the alpha the blend unit sees) and x.a (the alpha we premultiply by) are actually two different parameters, and we can give them different values. If a' always equals x.a, we get Alpha Blend. If we still premultiply RGB by x.a but output an alpha of 0, the result is result = (x.r, x.g, x.b) * x.a + (y.r, y.g, y.b) * (1 - 0),

which is exactly Additive.

So: multiply the color by its alpha ourselves in the fragment shader, output with Blend One OneMinusSrcAlpha, and control the output alpha separately.

If instead of resetting the output alpha to 0 you pick an intermediate value, you get a result between Alpha Blend and Additive, worth considering when Alpha Blend is too dark and Additive is too bright. (Usually effects artists layer two materials with different textures instead, so this is not strictly necessary.)
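A numeric sketch of the trick (plain Python standing in for the blend unit; the function name and values are illustrative):

```python
def blend_one_oneminussrcalpha(src_rgb, src_a, out_a, dst_rgb):
    """Blend One OneMinusSrcAlpha where the shader premultiplies RGB by
    src_a itself but is free to choose the output alpha out_a:
    result = src.rgb * src_a + dst.rgb * (1 - out_a)"""
    return tuple(s * src_a + d * (1 - out_a) for s, d in zip(src_rgb, dst_rgb))

src = (1.0, 1.0, 1.0)    # source color
a = 0.5                  # source alpha
dst = (0.25, 0.25, 0.25) # background

alpha_blend = blend_one_oneminussrcalpha(src, a, a, dst)        # out_a = a
additive = blend_one_oneminussrcalpha(src, a, 0.0, dst)         # out_a = 0
halfway = blend_one_oneminussrcalpha(src, a, a * 0.5, dst)      # in between

print(alpha_blend)  # (0.625, 0.625, 0.625) == classic SrcAlpha OneMinusSrcAlpha
print(additive)     # (0.75, 0.75, 0.75)    == classic additive blending
print(halfway)      # (0.6875, 0.6875, 0.6875), between the two
```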

How to display Spine semi-transparently?

GrabPass will not work, because semi-transparent objects overlap each other. The only way is to draw the skeleton to a RenderTexture (RT) first and then display the RT.

At this point you run into the problem of drawing Alpha Blend and Additive objects at the same time. The camera's ClearColor must be set to (0, 0, 0, 0) (not (0, 0, 0, 1)); Alpha Blend objects must not use a ColorMask, while Additive objects need ColorMask RGB (or an output alpha of 0).

When displaying the RT, you must use Blend One OneMinusSrcAlpha; otherwise the Additive parts will have no effect.
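This can be sanity-checked numerically. The sketch below (illustrative Python, not engine code) draws one premultiplied sprite into an RT cleared to (0, 0, 0, 0), composites the RT over a background with One OneMinusSrcAlpha, and compares against drawing directly:

```python
def over_premul(src, dst):
    """Blend One OneMinusSrcAlpha on RGBA tuples (premultiplied source)."""
    inv = 1 - src[3]
    return tuple(s + d * inv for s, d in zip(src, dst))

sprite_premul = (0.5, 0.5, 0.5, 0.5)  # white at 50% opacity, rgb already * a
bg = (0.25, 0.25, 0.25, 1.0)          # opaque background

rt = over_premul(sprite_premul, (0.0, 0.0, 0.0, 0.0))  # RT cleared to (0,0,0,0)
composited = over_premul(rt, bg)                       # RT shown over the background
direct = over_premul(sprite_premul, bg)                # sprite drawn directly

print(composited == direct)  # True: the RT round-trip is lossless
```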

Displayed with Blend SrcAlpha OneMinusSrcAlpha. Displayed with Blend One OneMinusSrcAlpha.

With this setup, fading the result to semi-transparent cannot be done by adjusting the alpha value alone; all four channels must be scaled together.

Modifying only the alpha of a Blend One OneMinusSrcAlpha object's color shifts its look between Alpha Blend and Additive, which is actually quite handy. That in-between range suits the holographic "air projection" screens of sci-fi scenes, since Alpha Blend looks too solid and Additive too much like pure light.
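A numeric sketch of the difference (illustrative Python; `over` models Blend One OneMinusSrcAlpha):

```python
def over(src, dst):
    """Blend One OneMinusSrcAlpha on RGBA tuples (premultiplied source)."""
    return tuple(s + d * (1 - src[3]) for s, d in zip(src, dst))

col = (0.5, 0.5, 0.5, 0.5)  # white at 50% opacity, premultiplied
bg = (0.0, 0.0, 0.0, 1.0)   # black background

true_fade = over(tuple(c * 0.5 for c in col), bg)  # scale all four channels
alpha_only = over(col[:3] + (col[3] * 0.5,), bg)   # scale the alpha only

print(true_fade)   # (0.25, 0.25, 0.25, 1.0): dimmer, still Alpha Blend
print(alpha_only)  # (0.5, 0.5, 0.5, 1.0): rgb unchanged, more Additive-like
```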

(Not that anyone has ever seen a real air projection screen; it has yet to be invented.)

Mipmaps should be enabled for animated character art

Many people misunderstand mipmaps as an optimization tool used when rendering objects at varying distances.

The original purpose of mipmaps is to fight texture aliasing; the optimization is only a by-product.

The most famous example is this, also known as a moiré pattern.

On character art it shows up as a kind of jagged texture.

Why does it appear? In texture filtering, magnifying an image works much like in Photoshop: ordinary bilinear interpolation. Minification is completely different, though. When Photoshop halves an image, each target pixel is the average of the corresponding 4 source pixels; texture filtering instead takes only the nearest 1 pixel.

Obviously, this is for efficiency reasons.

A mipmap stores those 4-pixel averages ahead of time, so the value can be read directly. The cost is about the same, but only the mipmap result is correct.
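A 1-D sketch of why the prebuilt average matters (illustrative Python): minifying a high-frequency stripe pattern by 2x with nearest sampling collapses it, while the mipmap level built by averaging keeps the overall brightness.

```python
tex = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]  # high-frequency stripes

# Nearest-sample minification: take only the closest texel of each pair.
nearest_half = tex[::2]

# Mipmap level 1: each texel is the average of a pair, built in advance.
mip1 = [(a + b) / 2 for a, b in zip(tex[::2], tex[1::2])]

print(nearest_half)  # [1.0, 1.0, 1.0, 1.0]: aliased, collapses to solid white
print(mip1)          # [0.5, 0.5, 0.5, 0.5]: correct uniform 50% grey
```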

Between two mipmap levels the result is still not exact, so trilinear filtering (or anisotropic filtering) interpolates between the two levels. And because a lower-resolution image is being sampled, the result is also somewhat blurred.

As signal theory puts it: without raising the resolution, anti-aliasing can only discard the high-frequency part of the signal that cannot be reconstructed and avoid abrupt changes. In other words, anti-aliasing always costs some blur. The only way to avoid aliasing without blurring is to raise the resolution.

Aliasing does not require 3D. It appears whenever two source pixels are merged into one displayed pixel (plain scaling can do this), especially when the image resolution is higher than the screen resolution.

In most still images the aliasing is not obvious, because the eye corrects for it. But if the object moves, aliasing produces another, very visible effect: flicker. The jagged edges above can be seen as line segments that appear and disappear.

If your image is static, trading a little aliasing for sharpness is worthwhile. For animated character art, please don't.

Of course, if your resolution is so high the eye cannot tell, the aliasing really doesn't matter.

But in that case, the blur wouldn't matter either.

And don't forget that mipmaps really do improve performance in many cases.

That concludes this study of texture compression and semi-transparent display in Spine. Theory and practice work best together, so go and try it!
