This article explains in detail how to optimize graphics performance on iOS. I hope you find it practical and come away with something useful.
Introduction
As a product matures, we start to pay attention to performance optimization, and on the iOS client, graphics performance plays an important role.
Here we will introduce how Core Animation works. First of all, don't be misled by its name: Core Animation is not only used for animation. All iOS view display goes through it, so to optimize graphics performance we must understand Core Animation.
Below, following Apple's WWDC explanation, we will walk through how Core Animation works, analyze the specific causes of stutters, see how to avoid them, and point out which optimizations give the most return for the least effort.
Core Animation working mechanism
As shown in the figure above, inside the app Core Animation submits the layer data to the Render Server, a process outside the application that acts as the server side of Core Animation. The Render Server decodes the data into instructions the GPU can execute and hands them to the GPU.
Notice that the rendering service does not run inside the app process, so we cannot optimize the rendering itself; we can only optimize the first stage, the commit transaction. So what exactly does Core Animation do at that stage? Let's take a look.
Commit Transaction
The commit transaction is divided into four stages: layout, display, prepare, and commit.
Layout stage
When addSubview is called, the layer is added to the layer tree, and layoutSubviews is called to lay out the view. There may also be data lookups: for example, if the app has been localized, a label must find the string for the current language in the localization file before it can display it, which involves I/O. So the work here is mainly CPU work, and the bottleneck will also be the CPU.
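As a rough illustration, here is a minimal sketch (the ProfileCardView class and its label are hypothetical) of the CPU-bound work that typically runs in this stage: frame calculation plus a localized-string lookup in layoutSubviews:
@interface ProfileCardView : UIView
@property (nonatomic, strong) UILabel *titleLabel;
@end

@implementation ProfileCardView
- (void)layoutSubviews {
    [super layoutSubviews];
    // Reading the localized string goes through the strings file (I/O + CPU work).
    self.titleLabel.text = NSLocalizedString(@"profile_title", nil);
    // Frame math for every subview is also computed on the CPU here.
    self.titleLabel.frame = CGRectMake(8, 8, CGRectGetWidth(self.bounds) - 16, 20);
}
@end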
Display stage
If you override the drawRect: method, Core Graphics renders at this stage, drawing a backing image (the layer's contents) for the view. However, what you draw in drawRect: is not displayed immediately; it is cached ahead of time and updated to the screen when needed. Drawing is triggered when you call setNeedsDisplay manually or when sizeThatFits is called; you can also set the contentMode property to UIViewContentModeRedraw so that setNeedsDisplay is called automatically every time the bounds change. This stage consumes mainly CPU and memory. Many people like to draw with Core Graphics in the belief that it improves performance; we will explain the drawbacks of this approach later.
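For example, a minimal sketch (the BadgeView class is hypothetical) of a drawRect: override, with contentMode set so the view is redrawn whenever its bounds change:
@interface BadgeView : UIView
@end

@implementation BadgeView
- (instancetype)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        // Redraw (i.e. call setNeedsDisplay) automatically every time the bounds change.
        self.contentMode = UIViewContentModeRedraw;
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // Core Graphics draws the backing image on the CPU and stores it in memory.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(ctx, rect);
}
@end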
Prepare stage
The main work here is image decoding, because images are usually stored in encoded form and must be decoded before the raw pixel data can be read. And when we use an image format that iOS does not support, that is, one that cannot be hardware-decoded, it must be converted, which also takes time. So the cost here comes from decoding, and software decoding consumes CPU as well.
Commit stage
The last stage packages the layer data and sends it to the rendering service mentioned above. The packaging is a recursive operation, so the more complex the layer tree, the more it costs. For example, CALayer's many implicit animation properties are also committed here as a batch, which saves repeated round trips per property and improves performance.
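As an illustration of how property changes reach the render server together, here is a minimal sketch that batches several layer changes in one CATransaction and disables the implicit animations mentioned above (it assumes QuartzCore is imported and that layer is an existing CALayer):
[CATransaction begin];
[CATransaction setDisableActions:YES]; // suppress the implicit animations for these changes
layer.position = CGPointMake(100, 100);
layer.opacity = 0.8;
layer.cornerRadius = 4.0;
[CATransaction commit]; // everything above is packaged and sent to the render server together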
Optimization
Based on the four stages above, let's look at which factors affect App performance and how to optimize them.
Blending
Usually when we write code we give different CALayers different colors and different opacities, and what we finally see on screen is the result of blending all these CALayers.
So how does blending work in iOS? Each pixel contains R (red), G (green), B (blue), and A (alpha, i.e. transparency) components, and the GPU calculates the blended RGB value of every pixel. How are these blended values computed? Assuming the normal blend mode and two pixel-aligned CALayers, the blending formula is:
R = S + D * (1-Sa)
There is an explanation of each parameter in Apple's documentation:
The blend mode constants introduced in OS X v10.5 represent the Porter-Duff blend modes. The symbols in the equations for these blend modes are:
* R is the premultiplied result
* S is the source color, and includes alpha
* D is the destination color, and includes alpha
* Ra, Sa, and Da are the alpha components of R, S, and D
R is the resulting color; S and D are the source and destination colors, which include transparency (in fact they are premultiplied, i.e. already multiplied by their alpha); Sa is the alpha of the source color. iOS provides a variety of blend modes:
/* Available in Mac OS X 10.5 & later. R, S, and D are, respectively,
   premultiplied result, source, and destination colors with alpha; Ra,
   Sa, and Da are the alpha components of these colors.

   The Porter-Duff "source over" mode is called `kCGBlendModeNormal':
     R = S + D*(1 - Sa)

   Note that the Porter-Duff "XOR" mode is only titularly related to the
   classical bitmap XOR operation (which is unsupported by
   CoreGraphics). */

kCGBlendModeClear,                /* R = 0 */
kCGBlendModeCopy,                 /* R = S */
kCGBlendModeSourceIn,             /* R = S*Da */
kCGBlendModeSourceOut,            /* R = S*(1 - Da) */
kCGBlendModeSourceAtop,           /* R = S*Da + D*(1 - Sa) */
kCGBlendModeDestinationOver,      /* R = S*(1 - Da) + D */
kCGBlendModeDestinationIn,        /* R = D*Sa */
kCGBlendModeDestinationOut,       /* R = D*(1 - Sa) */
kCGBlendModeDestinationAtop,      /* R = S*(1 - Da) + D*Sa */
kCGBlendModeXOR,                  /* R = S*(1 - Da) + D*(1 - Sa) */
kCGBlendModePlusDarker,           /* R = MAX(0, (1 - D) + (1 - S)) */
kCGBlendModePlusLighter           /* R = MIN(1, S + D) */
The calculation itself does not look complicated, but this is just one pixel of one layer covering one pixel of another. In practice an interface has many layers and each layer has millions of pixels, all of which the GPU has to compute, so the burden is heavy.
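As a concrete illustration, here is a minimal sketch (plain C, with arbitrarily chosen values) of the normal source-over formula applied to the red channel of a single pixel, using premultiplied colors:
// R = S + D * (1 - Sa); all values are premultiplied by alpha and lie in 0..1.
float Sa = 0.5f;                // source alpha: a half-transparent red layer
float S  = 1.0f * Sa;           // source red, premultiplied: 0.5
float D  = 0.0f;                // destination red: an opaque black layer
float R  = S + D * (1.0f - Sa); // result: 0.5, i.e. 50% red over black
The GPU performs this per channel for every pixel of every blended layer, which is why reducing transparency pays off.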
Pixel alignment
Pixel alignment means the pixels of a view line up exactly with the physical pixels of the screen. The blending discussion above assumed that the pixels of the layers were perfectly aligned. If they are not, the GPU has to perform additional anti-aliasing calculations, which adds to its load; with pixel alignment it only has to blend the individual pixels of each layer.
So what causes the pixel misalignment? There are two main points:
The image size does not match the @2x/@3x size of the UIImageView. For example, for a 6x6-point UIImageView, the @2x image should be 12x12 pixels and the @3x image 18x18 pixels to remain pixel-aligned.
The edge pixels are misaligned, that is, the view's origin coordinates are not integers; CGRectIntegral() can be used to remove the fractional part, as shown in the sketch below. Both situations cause pixel misalignment, and if you want good graphics performance you should avoid them as much as possible.
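A minimal sketch of snapping a computed frame to whole-point coordinates before assigning it (imageView and the frame values are hypothetical):
CGRect computedFrame = CGRectMake(10.3, 20.7, 59.5, 44.2);
// CGRectIntegral returns the smallest rect with integer origin and size containing the input,
// so the view's edges land on whole points and the GPU can skip anti-aliasing.
imageView.frame = CGRectIntegral(computedFrame);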
Opaque
We discussed the blending formula above:
R = S + D * (1-Sa)
If Sa is 1, the source pixel is opaque, and the formula reduces to R = S: the top layer only needs to be copied, with no blend calculation. Since the layers underneath are completely hidden, the GPU does not need to blend them at all.
How do you let the GPU know an image is opaque? If you work with CALayer directly, set its opaque property to YES (the default is NO). For UIView, the opaque property already defaults to YES. When the GPU knows the content is opaque it only does a simple copy, avoiding the blend calculations and greatly reducing its workload.
If you load an image without an alpha channel, the opaque property is set to YES automatically. However, for an image that has an alpha channel in which every pixel is 100% opaque, Core Animation still assumes there may be pixels with alpha below 100%, even though the image is effectively opaque.
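A minimal sketch of marking a layer as opaque (myView is hypothetical, and this is only safe when the content really has no transparent pixels):
// Tell the render server this layer fully covers whatever is beneath it,
// so the layers below do not need to be blended at all.
myView.layer.opaque = YES;
myView.backgroundColor = [UIColor whiteColor]; // a fully opaque background color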
Decode
As mentioned earlier, images are normally decoded in the prepare stage of Core Animation: the compressed image is decoded into bitmap data, which is very CPU-intensive. The system decodes the image just before it is about to be rendered to the screen, and by default this happens on the main thread. We can instead do the decoding on a background thread; here is a simple example of forcing the decode:
NSString *picPath = [[NSBundle mainBundle] pathForResource:@"tests" ofType:@"png"];
NSData *imageData = [NSData dataWithContentsOfFile:picPath]; // read the undecoded image data
CGImageSourceRef imageSourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSourceRef, 0, (__bridge CFDictionaryRef)@{(id)kCGImageSourceShouldCache: @(NO)});
CFRelease(imageSourceRef);
size_t width = CGImageGetWidth(imageRef); // width of the image in pixels
size_t height = CGImageGetHeight(imageRef); // height of the image in pixels
CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef); // bits per color component
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef); // bits per pixel
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef); // bytes per row of bitmap data
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGDataProviderRef dataProvider = CGImageGetDataProvider(imageRef);
CFDataRef dataRef = CGDataProviderCopyData(dataProvider); // forces the decode and copies out the decoded data
CGDataProviderRef newProvider = CGDataProviderCreateWithCFData(dataRef);
CFRelease(dataRef);
CGImageRef newImageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, newProvider, NULL, false, kCGRenderingIntentDefault);
CFRelease(imageRef); // released only after its color space and data provider are no longer needed
CFRelease(newProvider);
UIImage *image = [UIImage imageWithCGImage:newImageRef scale:2.0 orientation:UIImageOrientationUp];
CFRelease(newImageRef);
In addition, starting with iOS 7 Apple provides the option kCGImageSourceShouldCacheImmediately: if you pass it as kCFBooleanTrue in the options of CGImageSourceCreateImageAtIndex, decompression starts immediately. The default is kCFBooleanFalse.
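A minimal sketch of that option, reusing the imageData variable from the listing above:
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
// Ask ImageIO to decode the bitmap right away (iOS 7+) instead of lazily at draw time.
CGImageRef decodedRef = CGImageSourceCreateImageAtIndex(source, 0,
    (__bridge CFDictionaryRef)@{(id)kCGImageSourceShouldCacheImmediately: @(YES)});
UIImage *decoded = [UIImage imageWithCGImage:decodedRef];
CGImageRelease(decodedRef);
CFRelease(source);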
Of course, decoding can also be forced by drawing the image into a bitmap context with void CGContextDrawImage(CGContextRef __nullable c, CGRect rect, CGImageRef __nullable image), which is the approach taken by libraries such as AFNetworking.
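A minimal sketch of that idea follows: drawing the image into a bitmap context forces the decode, and the returned image is backed by the already-decoded bitmap. This is a simplified version of what such libraries do, not their exact code:
- (UIImage *)decodedImageWithImage:(UIImage *)image {
    CGImageRef imageRef = image.CGImage;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Passing 0 for bytesPerRow lets Core Graphics pick (and align) the row stride itself.
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                                 kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colorSpace);
    if (!context) { return image; }
    // Drawing decodes the compressed image into the context's bitmap on the current (background) thread.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef decodedRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *decoded = [UIImage imageWithCGImage:decodedRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(decodedRef);
    return decoded;
}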
Byte alignment
We touched on alignment when discussing pixel alignment above. So what exactly is byte alignment? Why is it needed? And what does it have to do with optimizing graphics performance?
Byte alignment places restrictions on the addresses of basic data types: the address of an object of a given type should be an integral multiple of its size. For example, if the processor reads 8 bytes from memory, the starting address should be a multiple of 8.
Alignment improves read performance, because the processor does not read memory byte by byte but block by block, in units generally called cache lines. If a misaligned value straddles two blocks, the processor may have to perform two memory accesses, and a lot of misaligned data hurts read performance. Alignment sacrifices some storage space, but on modern computers it is a worthwhile trade for memory performance.
In iOS, if an image's data is not byte aligned, Core Animation automatically copies the data to align it. We can instead do the byte alignment ourselves in advance.
The method CGBitmapContextCreate(void * __nullable data, size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow, CGColorSpaceRef __nullable space, uint32_t bitmapInfo) has a bytesPerRow parameter that specifies the number of bytes of memory to use per row of the bitmap. The cache line of an ARMv7 processor is 32 bytes and that of the A9 processor is 64 bytes, so here we want bytesPerRow to be a multiple of 64.
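A minimal sketch of rounding bytesPerRow up to a multiple of 64 before creating the bitmap context (width and height are assumed to be known):
size_t bytesPerPixel = 4; // 32-bit BGRA
size_t unalignedBytesPerRow = width * bytesPerPixel;
// Round up to the next multiple of 64 so every row starts on a cache-line boundary.
size_t alignedBytesPerRow = ((unalignedBytesPerRow + 63) / 64) * 64;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, alignedBytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
CGColorSpaceRelease(colorSpace);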
For details, see the official Quartz 2D Programming Guide and WWDC 2012 Session 238 "iOS App Performance: Graphics and Animations". In general, byte alignment has little impact on performance, so don't optimize it prematurely unless it is actually needed.
Off-screen rendering
Off-screen rendering (Off-Screen Rendering) means that the GPU opens a new buffer outside the current screen buffer and renders into it. It is expensive: first an off-screen buffer has to be created, and then two context switches are needed, one to switch to the off-screen environment and one to switch back to the screen when off-screen rendering is done. Context switches are very costly. Off-screen rendering happens because certain layers cannot be drawn directly onto the screen and must be composited in advance.
There are probably several situations in which off-screen rendering occurs:
cornerRadius is used together with masksToBounds (clipsToBounds on UIView); either one alone does not trigger off-screen rendering. cornerRadius only rounds the background color, so a layer that has contents must also be clipped.
A mask is set on the layer.
The layer's allowsGroupOpacity property is YES and its opacity is less than 1.0. Group opacity means a sublayer cannot appear more opaque than its parent layer.
A shadow is set.
All of the cases above are GPU off-screen rendering; there is also a special CPU off-screen rendering. Any drawing done through the Core Graphics APIs results in CPU off-screen rendering, because it is not drawn directly to the screen: an off-screen buffer is created first.
How do we deal with these off-screen rendering cases? First, group opacity has little impact on performance, so we won't dwell on it. Rounded corners are often unavoidable. Many examples online draw rounded corners with Core Graphics instead of using the system corner radius, but Core Graphics is software drawing on the CPU, and its performance is much worse.
Of course, that is a fine choice on a screen where CPU utilization is low, but sometimes the CPU is busy with other expensive work, such as network requests; drawing lots of rounded-corner graphics with Core Graphics at that moment can drop frames.
What should we do then? The best option is to have designers deliver images with the corners already rounded. A compromise is to use blending: overlay a rounded-corner mask image on top of the original layer, transparent in the middle where the content should show and matching the surrounding background where it covers the corners.
For shadows, if the layer is a simple geometric or rounded-rectangle shape, setting shadowPath avoids off-screen rendering and can greatly improve performance. For example:
imageView.layer.shadowColor = [UIColor grayColor].CGColor;
imageView.layer.shadowOpacity = 1.0;
imageView.layer.shadowRadius = 2.0;
UIBezierPath *path = [UIBezierPath bezierPathWithRect:imageView.bounds];
imageView.layer.shadowPath = path.CGPath;
We can also force off-screen rendering by setting the shouldRasterize property to YES; this is rasterization.
If off-screen rendering is so bad, why would we force it? When a view is composed of many layers, every time it moves all of those layers have to be recomposited on every frame, which costs a lot.
When rasterization is turned on, a bitmap cache is generated the first time and reused afterwards; but if the layer changes, the bitmap cache has to be regenerated.
So this feature is generally unsuitable for UITableViewCell, where cell reuse invalidates the cache and hurts performance. It works best for content with many layers that is largely static. The bitmap cache is also limited in size, normally 2.5 times the screen size, and a cache that is not used within 100 ms is evicted, so whether to use it depends on the scenario (see the sketch below).
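A minimal sketch of turning rasterization on for a complex but mostly static view (containerView is hypothetical):
// Cache the composited result as a bitmap and reuse it on subsequent frames.
containerView.layer.shouldRasterize = YES;
// Match the screen scale, otherwise the cached bitmap looks blurry on Retina displays.
containerView.layer.rasterizationScale = [UIScreen mainScreen].scale;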
Instruments
Having covered so many performance factors, how do we actually measure performance and find out which factors affect graphics performance? Apple thoughtfully provides a testing tool, Instruments, which you can find under Xcode > Open Developer Tools > Instruments. It contains many instruments, such as Leaks, commonly used to detect memory leaks; here we will discuss the Core Animation instrument.
The Core Animation instrument monitors Core Animation performance. It shows the FPS and provides several debug options for measuring rendering performance; let's go through what each option does:
Color Blended Layers: with this option checked you can see which layers are transparent and force the GPU to blend. Blended areas are shown in red, opaque areas in green.
Color Hits Green and Misses Red: if shouldRasterize is set to YES in our code, red means the off-screen-rendered cache was not reused and green means it was. We obviously want it to be reused.
Color Copied Images: according to the official description, when the color format of an image is one the GPU does not support, that is, not a 32-bit format, Core Animation copies the data so the CPU can convert it. For example, an 8-bit-color image downloaded from the network must be converted by the CPU, and that area is shown in blue. Byte misalignment also triggers Core Animation's copy.
Color Misaligned Images: with this option checked, images that have to be scaled are marked yellow, and images that are not pixel aligned are marked purple. Pixel alignment was introduced above.
Color Offscreen-Rendered Yellow: detects off-screen rendering; yellow areas are rendered off screen. Combine it with Color Hits Green and Misses Red to check whether the cache is reused.
Color OpenGL Fast Path Blue: this option only applies to layers that use OpenGL, such as GLKView or CAEAGLLayer. Areas that are not blue are being rendered by the CPU and drawn off screen; blue indicates the normal fast path.
Flash Updated Regions: regions flash yellow when a layer is redrawn; frequent redraws hurt performance, and caching can help. The official document Improving Drawing Performance (https://developer.apple.com/library/archive/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/DrawingTips/DrawingTips.html) explains this.
On "iOS how to achieve graphics performance optimization" this article is shared here, I hope the above content can be of some help to you, so that you can learn more knowledge, if you think the article is good, please share it out for more people to see.