
Application of off-screen rendering in vehicle Navigation

2025-04-07 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Introduction

Unlike the mobile navigation app, Amap's in-vehicle version (AMAP AUTO) is delivered directly to major car manufacturers and many device makers. The hardware these B-end customers use varies widely, and their business requirements involve many complex rendering techniques, which places high demands on rendering performance.

At first, the in-vehicle version followed the mobile version's on-screen rendering mode, re-rendering the map elements in real time every frame. In practice, however, we found that CPU load rose sharply in multi-screen and multi-view rendering scenarios. Take the eagle-eye scene as an example: in it the map is in a multi-view rendering state, with one view for the main map and another for the eagle-eye mini map, so the rendering engine renders two map instances at the same time. The eagle-eye view is in the lower-right corner of the following image:

After the eagle-eye map is drawn, the average frame rate drops by about 2 frames, as shown in the following figure:

Given this situation, beyond the usual optimization of rendering details, batching, and textures, we needed an overall technique that could substantially improve the engine's rendering performance. We therefore studied off-screen rendering in depth and, combining it with the navigation business, proposed a method for optimizing the performance of specific map views based on off-screen rendering.

Optimization principle

In OpenGL's rendering pipeline, geometry and textures pass through a series of transformations and tests before finally being rendered as two-dimensional pixels on the screen. The two-dimensional arrays that store the color values and test results are called framebuffers. When we create a window for OpenGL drawing, the window system generates a default framebuffer that it fully manages and that is used only to output the rendered image to the window's display area. We can also create an additional buffer outside the current screen buffer and render into it. The former is on-screen rendering; the latter is off-screen rendering.

Compared with on-screen rendering, off-screen rendering:

In a changing scene, is costly, because it requires creating a new buffer and switching contexts several times.

In a stable scene, lets the result be reused as a texture, so performance improves greatly compared with on-screen rendering.

From this comparison, the advantage of off-screen rendering is greatest in a stable scene. However, because the map state changes constantly, map rendering is usually a dynamic foreground process. Is there a relatively stable scene? Yes: we divide the map's state into immersive and non-immersive. As the names imply, a changing map is in the non-immersive state, and a steady map is in the immersive state.

Entering the immersive state gives us the conditions to use off-screen rendering. According to our statistics, when the map is in the foreground, the time spent in the immersive state is roughly equal to the time spent in the non-immersive state, so rendering the map from a cached texture while it is immersive greatly reduces system overhead. In specific views such as the eagle-eye map and the vector intersection map, the map is basically always in the immersive state, so optimizing these views with off-screen rendering yields large benefits.

Engineering practice

Applying the above optimization principle to the actual navigation application, the process is as follows:

Off-screen rendering is usually implemented with an FBO (Frame Buffer Object), which lets us render not to the screen but to an off-screen buffer. However, an ordinary FBO has no anti-aliasing capability: in an OpenGL application with full-screen anti-aliasing enabled, aliasing appears as soon as content is rendered off-screen into such an FBO. Since the map has full-screen anti-aliasing enabled in the non-immersive state, we must use an off-screen rendering technique that itself supports anti-aliasing.

A Brief Introduction to Anti-aliased Off-screen Rendering

This section takes iOS as an example to briefly describe off-screen rendering with anti-aliasing capability. iOS deeply customizes OpenGL ES, and its anti-aliasing is based on FBOs. As shown in the following figure, iOS achieves full-screen anti-aliasing by operating on an anti-aliased framebuffer (FBO) object:

Next, the steps for creating an anti-aliasing FBO are described in detail:

Create an FBO and bind it:

GLuint sampleFramebuffer;
glGenFramebuffers(1, &sampleFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, sampleFramebuffer);

Create a color renderbuffer, allocate multisampled (anti-aliased) storage for it in video memory, and attach it to the FBO's color attachment. Then create a depth renderbuffer, allocate multisampled storage for it as well, and attach it to the framebuffer:

GLuint sampleColorRenderbuffer, sampleDepthRenderbuffer;
glGenRenderbuffers(1, &sampleColorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, sampleColorRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, sampleColorRenderbuffer);
glGenRenderbuffers(1, &sampleDepthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, sampleDepthRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, sampleDepthRenderbuffer);

Check that the framebuffer was created correctly, to guard against failure caused by insufficient video memory:

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    return false;
}

An off-screen FBO with anti-aliasing capability has now been created. It is used as follows:

First, clear the contents of the anti-aliased framebuffer and set the viewport:

glBindFramebuffer(GL_FRAMEBUFFER, sampleFramebuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, framebufferWidth, framebufferHeight);

Run the usual rendering work: prepare vertex data, texture data, VBOs, IBOs, matrices, and render state, then issue the rendering commands with the selected shaders and uploaded data.

The multisampled FBO is not a framebuffer that can be displayed directly. After the rendering in the previous step, its contents must be resolved, by merging the samples of each pixel, into the framebuffer used for screen rendering. The principle is shown in the following figure:

The code is as follows:

glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolveFrameBuffer);
glResolveMultisampleFramebufferAPPLE();

After the resolve completes, a Discard step tells the driver it may drop the multisampled contents we no longer need, and the color renderbuffer is then presented:

const GLenum discards[] = { GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT };
glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 2, discards);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];

The basic idea on Android is the same; there we use the anti-aliasing capability of the OpenGL ES 3.0 interface (glRenderbufferStorageMultisample to allocate multisampled storage and glBlitFramebuffer to resolve), which we will not expand on here.

Optimization and comparison

The flame graph of eagle-eye rendering time before optimization is as follows:

The flame graph after optimization is as follows:

The before-and-after comparison shows that the time spent rendering the eagle-eye view has almost disappeared.

This is further verified by the system's rendering frame rate. As the following image shows, the frame rate has recovered to the same level as when the eagle-eye map is not displayed:

Note that full-screen anti-aliasing consumes extra memory, and the resolve process itself takes some time. So while reaping the benefit, we must also weigh the cost and analyze each case concretely. Here, as the comparison shows, the benefit of anti-aliased off-screen rendering far outweighs the cost.

Follow Amap (Gaode) technology to find more professional content in the field of mobility technology.
