This article explains the principles of iOS rendering. The content is straightforward and easy to follow, covering how an image is rendered by the CPU and GPU, how it reaches the screen, and how Core Animation and off-screen rendering fit in.
1. The principles of computer rendering
The architecture of CPU and GPU
A modern computer system can, simply speaking, be viewed as a three-tier architecture: hardware, operating system, and process. On mobile, the process is the app, and the CPU and GPU are key components at the hardware level. The CPU and GPU provide computing power, which the app uses through the operating system.
CPU (Central Processing Unit): the computing and control core of the whole modern computer system.
GPU (Graphics Processing Unit): a specialized microprocessor for drawing operations, and the bridge between the computer and the display.
The CPU and GPU have different design goals and target different application scenarios. The CPU is the computing and control core: it needs highly general-purpose computation, compatibility with many data types, and the ability to handle a large number of varied instructions such as jumps and interrupts, so its internal structure is more complex. The GPU, on the other hand, faces uniform, simpler operations and does not need to handle complex instructions, but it takes on a much larger volume of computation.
The architecture of the CPU therefore differs from that of the GPU. Because the CPU faces more complex situations, it has more cache space and more complex control units, while raw computing power is not its main requirement. The CPU is designed for low latency: more cache means faster access to data, and complex control units can handle logical branches quickly, which makes it better suited to serial computation.
The GPU, by contrast, has many more arithmetic logic units (ALUs), stronger raw computing power, and simpler control logic. The GPU is designed for high throughput: each small cache serves a group of stream processors, which makes it better suited to large-scale parallel computation.
Image rendering pipeline
The image rendering process is roughly divided into the following steps:
In the image rendering pipeline above, the GPU is responsible for everything after the initial Application phase. To make the later explanation easier to follow, here is the GPU's rendering flow first:
The figure above shows the pipeline the GPU runs when rendering a single triangle. Even this simple triangle requires a lot of computation; with more and more complex vertex, color, and texture information (including 3D textures), the amount of computation becomes enormous. This is why the GPU is better suited to the rendering process.
Next, I will explain the specific tasks of each part of the rendering pipeline:
Application phase: producing primitives
This stage refers to the processing the image undergoes inside the application, and it is still the CPU's responsibility. Here the application may perform a series of operations or changes on the image and finally passes the new image information to the next stage. This information is called primitives, usually triangles, line segments, and vertices.
Geometry phase: processing primitives
From this stage onward, the work is mainly the GPU's responsibility. The GPU receives the primitive information passed from the previous stage, processes it, and outputs new primitives. This series of stages includes:
Vertex shader (Vertex Shader): this stage transforms the vertex information in the primitives, adds lighting information, adds textures, and so on.
Shape assembly (Shape Assembly): triangles, line segments, and points correspond to three vertices, two vertices, and one vertex respectively. This stage connects the vertices into the corresponding shapes.
Geometry shader (Geometry Shader): adds extra vertices to convert the original primitives into new primitives and build different models. Put simply, it constructs more complex geometry out of triangles, line segments, and points.
Rasterization phase: converting primitives to pixels
The main purpose of rasterization is to convert the geometric primitive information into a set of pixels for later display on the screen. In this stage, the pixels covered by each primitive are computed from the primitive information, and the pixels are assigned to the primitives they belong to.
A simple assignment rule is based on the center point: if the center of a pixel lies inside a primitive, the pixel belongs to that primitive. As shown in the figure above, the dark blue lines form the triangle constructed from the primitive information; by checking which center points are covered, you can find all the pixels that belong to the primitive, that is, the light blue area.
Pixel phase: processing pixels into a bitmap
After the rasterization stage we have the pixels corresponding to the primitives, and these pixels now need to be filled with color and effects. The last stage therefore fills the pixels with the correct content, which is eventually displayed on the screen. The processed set of pixels, which carries a great deal of information, is called a bitmap (bitmap). In other words, the final output of the Pixel phase is a bitmap, and the process consists of:
A bitmap's points can be arranged and colored differently to form an image. When you zoom in on a bitmap, you can see the countless individual squares that make up the whole picture. With enough pixels of different colors, a bitmap can represent a natural scene realistically. Bitmaps distort easily when scaled or rotated, and the files are large.
Fragment shader (Fragment Shader): also known as the pixel shader (Pixel Shader), this stage gives each pixel its correct color. The color comes from the vertex, texture, lighting, and other information obtained earlier. Because it has to handle texture, lighting, and other complex information, this is usually the performance bottleneck of the whole pipeline.
Tests and blending (Tests and Blending): also known as the merging phase, this handles the front-to-back ordering of fragments as well as transparency. It tests the depth (z coordinate) of each shaded fragment to decide which fragments are in front and which should be discarded, and computes the corresponding alpha value so that the fragments can be blended into the final color.
2. Screen imaging and stuttering
At the end of the image rendering pipeline, the resulting pixel information needs to be displayed on the physical screen. After the GPU finishes rendering, the pixel information is stored in the frame buffer (Framebuffer); the video controller (Video Controller) then reads the information from the frame buffer and, after digital-to-analog conversion, passes it to the display (Monitor). The complete process is shown in the following figure:
The set of pixels processed by the GPU, that is, the bitmap, is held in the frame buffer for later display. The display's electron beam scans line by line from the top-left corner of the screen, reads the image information for each point from the bitmap in the frame buffer, and displays it on the screen. The scanning process is shown in the following figure:
As the electron beam scans, the screen shows the corresponding result; each full scan of the screen displays one complete frame. As the screen keeps refreshing and presenting new frames, a continuous image appears. The frequency at which the screen refreshes determines the frame rate (Frames Per Second, FPS). Because of the persistence of vision of the human eye, when the refresh frequency is high enough (usually around 50 to 60 FPS), the picture looks continuous and smooth. For iOS, an app should try to maintain 60 FPS for the best experience.
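For illustration, here is a minimal sketch of observing the achieved frame rate at run time with CADisplayLink, which fires once per screen refresh (the FPSMonitor class is hypothetical; CADisplayLink itself is the real UIKit/QuartzCore API):

import UIKit

// A minimal FPS monitor: CADisplayLink fires on every screen refresh,
// so counting callbacks per second approximates the achieved frame rate.
final class FPSMonitor: NSObject {
    private var link: CADisplayLink?
    private var count = 0
    private var lastTimestamp: CFTimeInterval = 0

    func start() {
        link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link?.add(to: .main, forMode: .common)
    }

    @objc private func tick(_ link: CADisplayLink) {
        if lastTimestamp == 0 { lastTimestamp = link.timestamp; return }
        count += 1
        let elapsed = link.timestamp - lastTimestamp
        if elapsed >= 1 {
            print("FPS: \(Double(count) / elapsed)")
            lastTimestamp = link.timestamp
            count = 0
        }
    }

    func stop() { link?.invalidate() }
}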
Screen tearing (Screen Tearing)
In this single-buffer model, the ideal situation is a smooth pipeline: every time the electron beam starts scanning a new frame from the top, the CPU+GPU rendering of that frame has already finished and the rendered bitmap is already in the frame buffer. This ideal is fragile, however, and easily leads to screen tearing:
The CPU+GPU rendering process is time-consuming. If the bitmap is not ready when the electron beam starts scanning a new frame, but only finishes and is placed into the frame buffer partway through the scan, then the part already scanned shows the previous frame while the remaining part shows the new frame, and the screen appears torn.
Vertical sync (Vsync) + double buffering (Double Buffering)
One strategy for solving screen tearing and improving display efficiency is to combine the vertical synchronization signal (Vsync) with double buffering (Double Buffering). According to Apple's documentation, iOS devices always use the Vsync + double buffering strategy.
The vertical synchronization signal (Vsync) effectively locks the frame buffer: when the electron beam finishes scanning a frame and is about to start again from the top, a Vsync signal is emitted. Only after the video controller receives the Vsync is the bitmap in the frame buffer updated to the next frame. This guarantees that each displayed image is a single complete frame, and thus avoids screen tearing.
With this scheme alone, however, the video controller would swap in the next frame's bitmap only after receiving the Vsync, which would require the entire CPU+GPU rendering to finish in an instant; that is obviously unrealistic. So double buffering adds a standby back buffer: the rendering result is saved to the back buffer in advance, and when the Vsync signal arrives the video controller swaps the contents of the back buffer into the frame buffer. Since the swap is really just an exchange of memory addresses, it completes almost instantly.
Dropped frames (Jank)
With Vsync and double buffering enabled, screen tearing is solved, but a new problem appears: dropped frames. If the CPU and GPU have not finished rendering the new bitmap when the Vsync arrives, the video controller does not swap the bitmap in the frame buffer, and the screen rescans and shows exactly the same picture as the previous frame. Showing the same picture for two refresh cycles is what we call a dropped frame.
As shown in the figure, A and B represent two frame buffers. The Vsync signal arrives while B has not finished rendering, so the screen can only display frame A again; this is the first dropped frame.
Triple buffering (Triple Buffering)
There is still room to optimize the strategy above. Notice that when a frame is dropped, the CPU and GPU sit idle for a while: once A is being scanned onto the screen and B has already been rendered, both are idle. If we add another frame buffer, this idle time can be used for the next round of rendering, with the result stored temporarily in the new frame buffer.
As shown in the figure, because of the extra frame buffer, the gap created by a dropped frame can be put to use and the CPU and GPU are utilized more sensibly, which reduces the number of dropped frames.
The essence of screen stutter
The direct cause of stuttering on a phone is dropped frames. As mentioned earlier, the refresh frequency must be high enough for the picture to look smooth. For iPhones the maximum screen refresh rate is 60 FPS, and generally speaking, keeping above 50 FPS already gives a fairly good experience. But if too many frames are dropped and the effective refresh rate falls, the experience becomes noticeably choppy.
From this point of view, we can roughly summarize:
The root cause of screen stutter: the CPU and GPU rendering pipeline takes too long, causing dropped frames.
The point of Vsync and double buffering: forcing synchronization with the screen refresh solves screen tearing, at the cost of dropped frames.
The point of triple buffering: making better use of CPU and GPU rendering capacity to reduce the number of dropped frames.
3. Rendering framework in iOS
The iOS rendering framework still follows the basic architecture of the rendering pipeline; the specific technology stack is shown in the figure above. On top of the hardware, iOS offers several software frameworks for drawing content, such as Core Graphics, Core Animation, Core Image, and OpenGL, which are higher-level encapsulations over the CPU and GPU.
GPU Driver: the software frameworks above depend on one another, but all of them eventually reach the GPU Driver through OpenGL. The GPU Driver is the code that communicates directly with the GPU.
OpenGL: an API that provides 2D and 3D graphics rendering. It works closely with the GPU and makes efficient use of the GPU's capabilities to achieve hardware-accelerated rendering. Efficient OpenGL implementations (using graphics acceleration hardware) are generally provided by the display device manufacturer and depend heavily on that manufacturer's hardware. OpenGL has been extended in many ways; frameworks such as Core Graphics ultimately rely on it, and in some cases, such as games, programs call the OpenGL interface directly for higher efficiency.
Core Graphics: Core Graphics is a powerful two-dimensional image rendering engine and the core graphics library of iOS; commonly used types such as CGRect are defined in this framework.
Core Animation: on iOS, almost everything is drawn through Core Animation, which offers a higher degree of freedom and a wider range of applications.
Core Image: Core Image is a high-performance framework for image processing and analysis. It ships with a series of ready-made image filters that can process existing images efficiently (a short sketch follows this list).
Metal: Metal is similar to OpenGL ES, but whereas OpenGL ES is a third-party standard, Metal is implemented by Apple itself. Rendering frameworks such as Core Animation, Core Image, SceneKit, and SpriteKit are built on top of Metal.
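As a small illustration of the Core Image item above, here is a minimal sketch that applies the built-in CISepiaTone filter to a UIImage (the sepia(_:intensity:) helper is hypothetical; the filter name and input keys are standard Core Image constants):

import CoreImage
import UIKit

// A sketch: run an image through a ready-made Core Image filter.
func sepia(_ input: UIImage, intensity: Double = 0.8) -> UIImage? {
    guard let ciImage = CIImage(image: input),
          let filter = CIFilter(name: "CISepiaTone") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(intensity, forKey: kCIInputIntensityKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()  // GPU-backed by default
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}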
What is Core Animation?
Render, compose, and animate visual elements. -- Apple
Core Animation can essentially be understood as a compositing engine; its main responsibilities are rendering, compositing, and animating visual elements.
We usually use Core Animation to implement animation efficiently and conveniently, but its predecessor was actually called Layer Kit, and animation is only one part of what it does. For an iOS app, Core Animation is deeply involved in building the app whether or not it is used directly. For an OS X app, much of the same functionality can also be achieved easily with Core Animation.
Core Animation is the underlying support for AppKit and UIKit and is integrated into the workflows of Cocoa and Cocoa Touch. It is the most basic infrastructure for rendering and building an app's interface. Core Animation's job is to composite different pieces of visual content onto the screen as quickly as possible. This content is broken down into individual layers (specifically, CALayer on iOS) and stored in a tree hierarchy. This tree also forms the basis of UIKit and of everything you can see on screen in an iOS application.
Simply put, everything the user sees on screen is managed by CALayer. So how exactly does CALayer manage it? Moreover, in everyday iOS development the most widely used view class is UIView rather than CALayer, so what is the relationship between the two?
CALayer is the basis of display: storing the bitmap
Simply put, CALayer is the foundation of what appears on screen. How does CALayer do this? Let's dig down from the header. In CALayer.h, CALayer has a property called contents:
/* Layer content properties and methods.
 *
 * An object providing the contents of the layer, typically a CGImageRef,
 * but may be something else. (For example, NSImage objects are
 * supported on Mac OS X 10.6 and later.) Default value is nil.
 * Animatable. */

@property(nullable, strong) id contents;
An object providing the contents of the layer, typically a CGImageRef.
contents provides the content of the layer. It is a pointer type; on iOS it points to a CGImageRef (or an NSImage on OS X). Looking further, Apple's definition of CGImageRef is:
A bitmap image or image mask.
Seeing "bitmap", we can connect this back to the rendering pipeline described earlier: the contents property of CALayer holds the bitmap produced by the device's rendering pipeline (often called the backing store), and when the device screen refreshes, this bitmap is read from the CALayer and rendered to the screen.
So, if we set the contents property of CALayer in our code, such as this:
// Note the relationship between CGImage and CGImageRef:
// typedef struct CGImage *CGImageRef;
layer.contents = (__bridge id)image.CGImage;
then at run time the operating system calls the underlying interfaces and renders the image through the CPU+GPU pipeline to get the corresponding bitmap, which is stored in CALayer.contents. When the device screen refreshes, that bitmap is read and rendered onto the screen.
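For reference, here is a minimal Swift sketch of the same idea (the addImageLayer(to:) helper and the "photo" asset name are placeholders):

import UIKit

// A sketch: set a layer's contents directly to a CGImage; the cgImage
// becomes the layer's backing bitmap.
func addImageLayer(to view: UIView) {
    let imageLayer = CALayer()
    imageLayer.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
    imageLayer.contents = UIImage(named: "photo")?.cgImage
    imageLayer.contentsGravity = .resizeAspectFill
    view.layer.addSublayer(imageLayer)
}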
And because the content to be rendered is stored statically, each time a new round of display is needed Core Animation triggers a call to the drawRect: method (if implemented) and otherwise uses the stored bitmap for the new round of display.
The relationship between CALayer and UIView
UIView, the most commonly used view class, is closely tied to CALayer. So what is the relationship between the two, and how do they differ?
There are of course many obvious differences, such as whether they can respond to touch events. But to understand these questions thoroughly, we first need to understand what each is responsible for.
UIView-Apple
Views are the fundamental building blocks of your app's user interface, and the UIView class defines the behaviors that are common to all views. A view object renders content within its bounds rectangle and handles any interactions with that content.
According to Apple's documentation, UIView is a basic building block of an app, defining some unified conventions. It is responsible for rendering content and handling interaction events. Specifically, its responsibilities fall into three categories:
Drawing and animation: drawing and animating content
Layout and subview management: laying out and managing subviews
Event handling: handling touch events
CALayer-Apple
Layers are often used to provide the backing store for views but can also be used without a view to display content. A layer's main job is to manage the visual content that you provide...
If the layer object was created by a view, the view typically assigns itself as the layer's delegate automatically, and you should not change that relationship.
From CALayer's documentation we can see that its main responsibility is to manage the visual content it is given, which matches what we said earlier. When we create a UIView, it automatically creates a CALayer, uses it to store its bitmap (the backing store mentioned above), and sets itself as the layer's delegate.
From here we can roughly summarize two core relationships:
CALayer is a property of UIView, responsible for rendering and animation, and provides the display of visual content.
UIView wraps part of CALayer's functionality and is also responsible for handling interaction events.
With these two key relationships in mind, the concrete similarities and differences that often come up in interviews are easy to explain. A few examples:
Identical hierarchy: we are very familiar with UIView's hierarchy, and because every UIView has a corresponding CALayer responsible for drawing, CALayer has a matching hierarchy.
Some effects are set on the layer: because UIView only wraps part of CALayer's functionality, effects such as rounded corners, shadows, and borders have to be set through the layer property (see the sketch after this list).
Responding to touch events: CALayer is not responsible for interaction, so it does not respond to touch events, whereas UIView does.
Different inheritance: CALayer inherits from NSObject, while UIView, because it handles interaction events, inherits from UIResponder.
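To make the point about effect settings concrete, here is a minimal sketch of configuring layer-level effects through a view's layer property (the styleCard(_:) helper and all values are arbitrary examples):

import UIKit

// A sketch: rounded corners, border, and shadow are not UIView properties;
// they are configured through the view's underlying CALayer.
func styleCard(_ card: UIView) {
    card.backgroundColor = .white
    card.layer.cornerRadius = 8
    card.layer.borderWidth = 1
    card.layer.borderColor = UIColor.lightGray.cgColor
    card.layer.shadowColor = UIColor.black.cgColor
    card.layer.shadowOpacity = 0.2
    card.layer.shadowOffset = CGSize(width: 0, height: 2)
}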
One last question remains: why is CALayer a separate object instead of being managed directly inside UIView? Why not use one unified object for everything?
The main reason for this design is to separate responsibilities, split functionality, and make code reuse easier. By handing the rendering of visual content to the Core Animation framework, Core Animation can be used for rendering on both iOS and OS X, while each system wraps its own unified controls on top according to its own interaction rules: iOS has UIKit and UIView, and OS X has AppKit and NSView.
4. The full Core Animation rendering pipeline (Core Animation Pipeline)
Now that we understand the basics of Core Animation and CALayer, let's look at Core Animation's rendering pipeline.
The whole pipeline has the following steps:
Handle Events: this step processes touch events first, during which the layout and hierarchy of the interface may need to change.
Commit Transaction: here the app uses the CPU to pre-compute the display content, for example layout calculation and image decoding, which is explained in detail below. The computed layers are then packaged and sent to the Render Server.
Decode: after the packaged layers reach the Render Server, they are decoded first. Note that after decoding completes, the next Draw Calls must wait for the next RunLoop.
Draw Calls: after decoding, Core Animation calls the lower-level rendering framework (such as OpenGL or Metal) to draw, which in turn drives the GPU.
Render: this stage is mainly the GPU performing the rendering.
Display: the display stage, which waits for the next RunLoop after rendering completes before the result is shown.
What happens in Commit Transaction?
Generally speaking, the two stages developers can influence are Handle Events and Commit Transaction, and these are also the parts developers touch most. Handle Events deals with touch events, while Commit Transaction carries out four specific operations: Layout, Display, Prepare, and Commit.
Layout: building the views
This phase handles the construction and layout of views; the specific steps include:
Calling the overridden layoutSubviews method
Creating views and adding subviews via the addSubview method
Computing the view layout, that is, all the Layout Constraints
Since this phase runs on the CPU and is usually CPU- or IO-bound, we should work as efficiently and lightly as possible to reduce its cost, for example by avoiding unnecessary view creation, simplifying layout calculations, and reducing the view hierarchy (see the sketch below).
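For example, here is a minimal sketch that keeps the Layout step light by doing only simple frame arithmetic in an overridden layoutSubviews (the AvatarView class is hypothetical):

import UIKit

final class AvatarView: UIView {
    private let imageView = UIImageView()

    override init(frame: CGRect) {
        super.init(frame: frame)
        addSubview(imageView)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Called during the Layout step of Commit Transaction; keep it cheap.
    override func layoutSubviews() {
        super.layoutSubviews()
        imageView.frame = bounds.insetBy(dx: 4, dy: 4)
    }
}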
Display: drawing the views
In this stage Core Graphics draws the views. Note that this is not the actual on-screen display, but the production of the primitives mentioned earlier:
The primitive information is generated from the results of the previous Layout phase.
If the drawRect: method is overridden, the overridden drawRect: method is called, and bitmap data is drawn by hand inside drawRect: to customize the view's drawing.
Note that in the normal case the Display phase only produces the primitive information, and the bitmap itself is drawn on the GPU from those primitives. If drawRect: is overridden, however, this method calls Core Graphics drawing routines directly to obtain bitmap data, and the system allocates an extra block of memory to hold the drawn bitmap temporarily.
Overriding drawRect: thus shifts the drawing work from the GPU to the CPU, which costs some efficiency. It also uses extra CPU time and memory, so the drawing must be efficient, otherwise it can easily cause CPU stutter or excessive memory use (see the sketch below).
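A minimal sketch of such an override (the RingView class is hypothetical); because draw(_:) is implemented, this view's bitmap is produced by Core Graphics on the CPU:

import UIKit

final class RingView: UIView {
    // Overriding draw(_:) moves rasterization of this view to the CPU,
    // so keep the drawing as cheap as possible.
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setStrokeColor(UIColor.systemBlue.cgColor)
        ctx.setLineWidth(6)
        ctx.strokeEllipse(in: rect.insetBy(dx: 8, dy: 8))
    }
}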
Prepare: extra work for Core Animation
This step mainly performs image decoding and conversion (a sketch of pre-decoding follows).
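As an illustration, image decoding can be moved off the main thread ahead of time so this step has less to do; a minimal sketch, assuming iOS 15+ for UIImage's preparingForDisplay() (the predecode helper is hypothetical):

import UIKit

// A sketch: decode an image on a background queue so the Prepare step
// does not have to decode it on the main thread later.
func predecode(_ image: UIImage, completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let decoded: UIImage?
        if #available(iOS 15.0, *) {
            decoded = image.preparingForDisplay()
        } else {
            decoded = image  // earlier systems would need a manual draw into a context
        }
        DispatchQueue.main.async { completion(decoded) }
    }
}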
Commit: package and send
This step mainly packages the layers and sends them to the Render Server.
Note that the commit operation recurses over the layer tree, so if the layer tree is too complex, the commit becomes very expensive. This is another reason to keep the view hierarchy, and therefore the layer tree, simple.
Rendering Pass: the concrete work of the Render Server
The Render Server typically renders through OpenGL or Metal. Taking OpenGL as an example, the figure above shows the operations performed on the GPU, which mainly include:
The GPU receives a Command Buffer containing the primitive information.
The Tiler starts working: the vertex shader (Vertex Shader) processes the vertices and updates the primitive information.
Tiling: the geometry is divided into tiles (tile bucketing), the primitive information is converted into pixels, and the result is written to the Parameter Buffer.
When the Tiler has processed all the primitive information, or the Parameter Buffer is full, the next step begins.
The Renderer works: the pixel information is processed into a bitmap, which is then stored in the Render Buffer.
The rendered bitmap stays in the Render Buffer for the later Display step.
This process can be monitored with the OpenGL ES instrument in Instruments: the OpenGL ES Tiler Utilization and OpenGL ES Renderer Utilization statistics track the work of the Tiler and the Renderer respectively.
5. Off-screen rendering (Offscreen Rendering)
Off-screen rendering comes up frequently in interviews. Let's walk through it from beginning to end.
The process of off-screen rendering
Based on what we covered earlier, the usual rendering process, simplified, looks like this:
Through the cooperation of the CPU and GPU, the app keeps rendering content into the Framebuffer, while the display continuously reads from the Framebuffer and shows whatever it currently contains.
The off-screen rendering process looks like this:
Instead of the GPU putting the rendered content directly into the Framebuffer, an additional off-screen buffer (Offscreen Buffer) is created. Pre-rendered content is placed there first; when the time comes, the contents of the Offscreen Buffer are further composited and rendered, and the result is then switched into the Framebuffer.
The efficiency of off-screen rendering
From the process above, during off-screen rendering the app must render part of the content in advance and save it in the Offscreen Buffer, and it must switch contexts between the Offscreen Buffer and the Framebuffer when needed, so processing takes longer (in fact both context switches are quite expensive).
The Offscreen Buffer also needs extra space of its own, and heavy off-screen rendering can put significant pressure on memory. In addition, the total size of the Offscreen Buffer is capped at 2.5 times the total number of screen pixels.
So the overhead of off-screen rendering is high; once too much content has to be rendered off-screen, dropped frames follow easily. In most cases we should therefore try to avoid off-screen rendering.
Why use off-screen rendering
So why use off-screen rendering at all? Mainly for two reasons:
Some special effects need an extra Offscreen Buffer to hold the intermediate state of rendering, so off-screen rendering is unavoidable.
For efficiency, content can be rendered and saved in the Offscreen Buffer in advance so that it can be reused.
In the first case, where off-screen rendering is unavoidable, it is usually triggered automatically by the system, for example for shadows and rounded corners.
One of the most common situations is the use of masks.
As shown in the figure, because the final content is the result of compositing two layers of rendering output, the intermediate results must be kept in extra memory, so the system triggers off-screen rendering by default.
Another example is the blur effect UIBlurEffectView introduced in iOS 8:
The blur is applied in several passes: Pass 1 renders the content to be blurred, Pass 2 scales it down, Passes 3 and 4 each blur the output of the previous step, and the final pass composites the blurred result to produce the complete effect.
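In code, this effect is applied through the public UIVisualEffectView / UIBlurEffect API; a minimal sketch (the addBlur(over:) helper is hypothetical):

import UIKit

// A sketch: overlay the system blur effect on top of an existing view.
func addBlur(over contentView: UIView) {
    let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .light))
    blurView.frame = contentView.bounds
    blurView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    contentView.addSubview(blurView)
}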
The second case, using off-screen rendering for reuse to improve efficiency, is generally a deliberate choice made through CALayer's shouldRasterize rasterization feature.
shouldRasterize rasterization
When the value of this property is YES, the layer is rendered as a bitmap in its local coordinate space and then composited to the destination with any other content.
When rasterization is enabled, off-screen rendering is triggered and the Render Server forces the CALayer's rendered bitmap to be saved, so that it can be reused directly the next time it needs to be rendered, improving efficiency.
The saved bitmap includes the layer's sublayers, rounded corners, shadows, group opacity, and so on. So if a layer's composition includes these elements, its structure is complex, and it will be reused many times, you can consider enabling rasterization.
Rounded corners, shadows, and group opacity automatically trigger off-screen rendering anyway, so enabling rasterization saves time from the second render onward. A layer with many sublayers does not trigger off-screen rendering automatically, so the first off-screen render costs extra time, but subsequent repeated renders become cheaper (a sketch of enabling it follows).
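A minimal sketch of opting in (the rasterize(_:) helper is hypothetical):

import UIKit

// A sketch: cache a complex, mostly static view's rendered bitmap so it
// can be reused across frames instead of being re-rendered every time.
func rasterize(_ complexView: UIView) {
    complexView.layer.shouldRasterize = true
    // Match the screen scale, otherwise the cached bitmap is stored at 1x and looks blurry.
    complexView.layer.rasterizationScale = UIScreen.main.scale
}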
When using rasterization, however, keep the following in mind:
If the layer cannot be reused, there is no point in enabling rasterization.
If the layer is not static and is modified frequently, for example during animation, enabling off-screen rendering hurts efficiency instead.
The off-screen cache has a time limit: if the cached content is not used within 100 ms, it is discarded and cannot be reused.
The off-screen cache space is limited: exceeding 2.5 times the screen's pixel size causes the cache to fail, and it cannot be reused.
Off-screen rendering and rounded corners
Generally speaking, off-screen rendering is said to be triggered automatically once a layer's rounded corners are set. But under exactly what circumstances does setting rounded corners trigger off-screen rendering?
As shown in the figure above, a layer consists of three parts, and we usually set a rounded corner with a line of code like the following:
view.layer.cornerRadius = 2
According to Apple's documentation for cornerRadius, the code above by default only rounds the layer's backgroundColor and border, not its contents, unless layer.masksToBounds is also set to true (corresponding to UIView's clipsToBounds property):
Setting the radius to a value greater than 0.0 causes the layer to begin drawing rounded corners on its background. By default, the corner radius does not apply to the image in the layer's contents property; it applies only to the background color and border of the layer. However, setting the masksToBounds property to true causes the content to be clipped to the rounded corners.
If only cornerRadius is set and masksToBounds is not, no off-screen rendering is triggered, because no compositing and clipping is needed. Once the clipping property is set, masksToBounds has to clip the layer's contents and all of its sublayers, so off-screen rendering has to be triggered.
view.layer.masksToBounds = true // triggers off-screen rendering
For this reason, Texture also recommends not triggering off-screen rendering, and thereby hurting efficiency, when rounded-corner clipping is not actually necessary.
The specific logic of off-screen rendering
We just said that adding masksToBounds to a rounded corner induces off-screen rendering because masksToBounds clips everything on the layer. What exactly does that process look like? Let's go through it in detail.
Layer compositing roughly follows the "painter's algorithm": layers are drawn one at a time, the more distant scene first, with nearer scenes then painted over the more distant parts.
In normal layer rendering, an upper sublayer covers the one below it, and once a lower sublayer has been drawn it can be discarded to save space and improve efficiency. After all the sublayers have been drawn in turn, the whole drawing pass is finished and later rendering can proceed. Suppose we need to draw three sublayers with no clipping or rounded corners set; the whole drawing process then looks like the following figure:
When we set cornerRadius plus masksToBounds for rounded corners with clipping, the masksToBounds clipping, as mentioned above, applies to all the sublayers. This means every sublayer must have the rounded-corner clipping applied again, so none of the sublayers can be discarded right after the first drawing pass; they must be kept in the Offscreen Buffer for the next round of rounded-corner clipping. This is what induces off-screen rendering, as shown below:
In fact it is not just rounded corners plus clipping: setting opacity together with group opacity (layer.allowsGroupOpacity + layer.opacity), or shadow properties (shadowOffset and so on), has a similar effect, because group opacity and shadows, like clipping, act on a layer and all of its sublayers, which inevitably causes off-screen rendering.
Avoiding off-screen rendering for rounded corners
Besides minimizing the use of rounded-corner clipping, is there any other way to avoid the off-screen rendering it causes?
As we just saw, the essence of rounded-corner-induced off-screen rendering is the extra clipping pass: masksToBounds has to reprocess the layer and all of its sublayers. So as long as we avoid that second pass with masksToBounds and instead pre-process all the sublayers, the drawing can be completed in a single pass of the "painter's algorithm".
Roughly speaking, the following approaches are possible:
[Change the resources] Use an image that already has rounded corners, or replace the background color with a solid rounded-corner background image, so that no clipping is needed. This depends on the specific situation, however, and is not a general solution.
[Mask] Add a mask in the same color as the background over the top layer, covering the four corners to create the rounded shape. This is hard to apply when the background is an image or a gradient.
[UIBezierPath] Draw a closed rounded rectangle with a Bezier path, make only its interior visible in the graphics context, render the un-rounded layer into an image, and add it to the Bezier rectangle. This approach is more efficient, but once the layer's layout changes the Bezier path has to be redrawn by hand, so frame, color, and so on must be observed manually and redrawn (a sketch of this approach follows the list).
[Core Graphics] Override drawRect: and use Core Graphics to draw the rounded corners manually where needed. Core Graphics' efficiency is limited, however, and calling it frequently causes performance problems.
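As a sketch of the UIBezierPath approach above (simplified; real code would also cache the result and re-render when the layout changes):

import UIKit

extension UIImage {
    // Pre-render the image with rounded corners so no masksToBounds clipping
    // (and therefore no off-screen rendering) is needed at display time.
    func rounded(radius: CGFloat, size: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            let rect = CGRect(origin: .zero, size: size)
            UIBezierPath(roundedRect: rect, cornerRadius: radius).addClip()
            draw(in: rect)
        }
    }
}

// Usage (placeholder asset name):
// imageView.image = UIImage(named: "photo")?.rounded(radius: 8, size: imageView.bounds.size)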