The Evolution of Desktop Sharing Coding Technology
By Tang Jun
Introduction: How should desktop sharing be divided into functions? How has data coding technology evolved? Tang Jun, a senior engineer, draws on many years of hands-on experience to offer an original perspective.
I have been working in "online education" and "video conferencing" in recent years. In both areas, the function users care about most, besides voice, is desktop sharing, which happens to be my area of expertise.
Functionally, desktop sharing divides into two major parts: screen capture and data coding. Screen capture is mainly about acquiring the data source; with the computing power of today's machines, it is no longer a bottleneck, so let us focus on the technical evolution of data coding.
Desktop sharing was born out of remote desktop technology.
The earliest remote desktops were terminal emulators based on the command line. At that stage there was no screen capture or coding involved: communication between the terminal and the remote machine consisted of shell commands and their execution results, so the demands on machine performance and network bandwidth were very low.
When Windows 95 came to market and ignited the graphical-operating-system market, machine performance and network bandwidth were so limited that for quite a long time there was no remote desktop tool for graphical operating systems. It was not until Windows 2000, in which Microsoft shipped a Remote Desktop component, that a remote desktop based on a graphical interface was implemented for the first time.
Early graphical interfaces had low resolution (800 × 600) and few colors (16-bit color was the norm; 24-bit true color was rare). Even so, a single uncompressed screen of data (800 × 600 × 2 bytes ≈ 937 KB) was a heavy burden on the broadband of the day (ADSL dial-up, with upload bandwidth of 512 Kbps to 1 Mbps). In theory it took 8 to 16 seconds to transmit one uncompressed screen over the networks of that time, which was obviously unacceptable. To deliver desktop graphics to the terminal quickly over the network, compression coding of desktop data emerged.
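As a quick sanity check on those figures, here is a minimal back-of-the-envelope calculation (the 16-bit depth and the two uplink rates are taken from the text above):

```python
# Back-of-the-envelope check of the frame size and transfer times above.
width, height, bits_per_pixel = 800, 600, 16

frame_bytes = width * height * bits_per_pixel // 8   # 960,000 bytes
print(f"frame size: {frame_bytes / 1024:.1f} KB")    # ~937.5 KB

for uplink_bps in (512_000, 1_000_000):              # 512 Kbps and 1 Mbps ADSL uplinks
    seconds = frame_bytes * 8 / uplink_bps
    print(f"at {uplink_bps // 1000} Kbps: {seconds:.1f} s per frame")
# -> about 15 s at 512 Kbps and 7.7 s at 1 Mbps, matching the 8-16 s estimate
```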
At first, desktop data compression mainly had to solve the problem of large per-frame data, so the first technique applied was the lossy image compression format that was hugely popular at the time: JPEG. At quality settings where the loss is not noticeable, the compressed image is only about 10% of the original size. With JPEG compression, viewing static documents remotely became barely workable.
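To make the idea concrete, here is a minimal sketch that JPEG-compresses a frame and reports the compression ratio; the use of Pillow and the quality setting of 75 are illustrative assumptions, not the article's actual implementation:

```python
# Sketch: JPEG-compress a stand-in frame and report the compression ratio.
import io

from PIL import Image  # Pillow, assumed here purely for illustration

frame = Image.new("RGB", (800, 600), "white")  # stand-in for a captured frame
raw_size = 800 * 600 * 3                       # uncompressed 24-bit size, bytes

buf = io.BytesIO()
frame.save(buf, format="JPEG", quality=75)     # quality where loss is subtle
print(f"compressed to {buf.tell() / raw_size:.1%} of the raw size")
```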
To further reduce the transmission interval, once there was no way to shrink each frame's data any further, we asked ourselves: is it necessary to transmit a complete frame every time? Analysis showed that a complete change of the desktop is very unlikely; most changes are local, such as a button gaining focus or a single control's data being updated.
Studying and solving the problem around the "pain point"
To this end, we designed a block coding strategy: first divide the entire desktop into blocks (see figure 1), then, before encoding, compare each block with the corresponding block of the previous frame, and compress a block with the JPEG algorithm only when its data has changed. Each transmission carries only the changed blocks, and the receiver always applies the changes on top of the frame it last displayed. In this way, the delay of subsequent frames is greatly reduced without increasing the delay of the first frame.
(figure 1)
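A minimal sketch of this block comparison, assuming flat row-major pixel arrays and a 64-pixel block size (both illustrative choices not specified in the article):

```python
# Minimal sketch of the block coding strategy: split each frame into blocks,
# compare against the previous frame, and hand over only the changed blocks.
BLOCK = 64  # block edge length in pixels (illustrative choice)

def changed_blocks(prev, curr, width, height):
    """Yield (x, y, pixels) for every block of `curr` that differs from `prev`.

    `prev` and `curr` are flat, row-major lists of pixel values."""
    for by in range(0, height, BLOCK):
        for bx in range(0, width, BLOCK):
            coords = [(x, y)
                      for y in range(by, min(by + BLOCK, height))
                      for x in range(bx, min(bx + BLOCK, width))]
            block = [curr[y * width + x] for x, y in coords]
            if block != [prev[y * width + x] for x, y in coords]:
                yield bx, by, block  # only this block is compressed and sent
```

The sender would then JPEG-compress each yielded block, and the receiver would paste the decoded block at (x, y) onto the frame it last displayed.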
In practical use, we found that for plain-text content (text files, PPT, static web pages, and so on), the background around text compressed with JPEG is not very clean. Zooming in on the image reveals a gradient transition where the edges of the characters blend into the background.
JPEG compression loses exactly these details. For desktop data that shows plain text (text files, PPT, static web pages, etc.), we found that the text mostly uses a small number of colors over large areas of monochrome background. Palette-based lossless compression happens to suit this type of data, so we improved the previous coding strategy again: once a block has been determined to need encoding, we analyze the number of colors it uses and select a coding method according to that count.
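A sketch of this per-block encoder selection; the 256-color threshold and the encoding details are illustrative assumptions:

```python
# Sketch of per-block encoder selection by color count: few colors suggest
# text on a flat background (lossless palette coding); many colors suggest
# photographic content (lossy JPEG-style coding). The 256-color threshold
# is an assumption for illustration.
def pick_encoder(block_pixels):
    return "palette-lossless" if len(set(block_pixels)) <= 256 else "jpeg-lossy"

def palette_encode(block_pixels):
    """Lossless palette coding: a color table plus one index byte per pixel."""
    palette = sorted(set(block_pixels))
    index = {color: i for i, color in enumerate(palette)}
    return palette, bytes(index[p] for p in block_pixels)
```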
As machine performance kept improving, display resolutions climbed ever higher: 1080p full HD became mainstream, and 4K screens are no longer uncommon. When users present with PPT, complex backgrounds, embedded charts and videos, and animated page turns appear more and more often. Network bandwidth has improved too, but nowhere near as fast as machine performance, and the coding scheme above produces a large burst of data whenever the desktop changes dramatically within a short time.
Under that scheme, the display of each frame depends on the frame before it, so a burst of data causes a backlog and the real-time behavior of desktop sharing gets worse and worse. Analyzing the scenario, we found that an animated PPT page turn may generate, say, five frames, each differing substantially from the previous one. Encoding and transmitting them one by one creates a short transmission peak that exceeds what the bandwidth can carry; yet viewers want to see the next PPT page quickly far more than they want the page-turn animation.
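Some illustrative arithmetic for the burst (the per-frame size and window are assumed numbers, not measurements from the article):

```python
# Illustrative arithmetic for the burst (frame size and window are assumed):
frames, kb_per_frame, burst_window_s = 5, 100, 0.5
burst_bps = frames * kb_per_frame * 1024 * 8 / burst_window_s
print(f"burst rate: {burst_bps / 1e6:.1f} Mbps")  # ~8.2 Mbps
# Far above a typical 1-2 Mbps uplink, so frames back up and latency grows.
```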
To address this, we introduced a delayed coding strategy. When two consecutive desktop frames differ greatly, we temporarily hold the frame to be encoded, wait for the next frame, and start a timer. When the next frame arrives, if it still differs greatly from the held frame, it replaces the held frame and we continue waiting; if the difference is small, we discard the held frame, encode the current frame, and send it to the viewers.
If consecutive frames keep differing greatly, then when the timer (500 milliseconds) expires, we encode the frame currently waiting and reset the timer. We designed and implemented this delayed coding strategy in the development of the full-time cloud conference product, and it noticeably reduces the viewing delay when switching complex PPT pages.
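A minimal sketch of this delayed coding state machine; the diff function, its threshold, and the frame-delivery callback are assumptions, while the 500 ms timeout comes from the text above:

```python
# Minimal sketch of the delayed coding state machine. `diff` returns a change
# measure between two frames and `threshold` marks a "large" difference; both
# are assumptions, while the 500 ms timeout comes from the article.
import time

HOLD_TIMEOUT = 0.5  # seconds

class DelayedCoder:
    def __init__(self, diff, threshold, encode_and_send):
        self.diff = diff
        self.threshold = threshold
        self.encode_and_send = encode_and_send
        self.last_sent = None
        self.held = None        # frame waiting to be encoded
        self.deadline = None

    def on_frame(self, frame):
        now = time.monotonic()
        if self.held is None:
            if self.last_sent is None or self.diff(frame, self.last_sent) <= self.threshold:
                self._send(frame)                 # small change: encode at once
            else:
                self.held = frame                 # big change: hold and wait
                self.deadline = now + HOLD_TIMEOUT
        elif self.diff(frame, self.held) > self.threshold:
            self.held = frame                     # still changing: replace, keep waiting
            if now >= self.deadline:
                self._send(self.held)             # timer expired: flush anyway
                self.held, self.deadline = None, None
        else:
            self.held, self.deadline = None, None # scene settled: drop held frame
            self._send(frame)                     # send what viewers actually want

    def _send(self, frame):
        self.encode_and_send(frame)
        self.last_sent = frame
```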
The new era and its new technologies are a double-edged sword for us.
Since 2007, video streaming technology has made great progress: from the early H.261 and H.263 to today's H.264, as well as H.265, VP9, and VP10, which were designed for the increasingly popular ultra-high-definition (4K) video.
Users' expectations for the fluency of desktop sharing are getting ever closer to the fluency of video, which made us consider whether desktop data could be compressed the way video is. We found that streaming desktop data as video has a significant peak-shaving, valley-filling effect on constantly changing desktop shares.
When handling continuous change, a video encoder can cap burst data by briefly (on a millisecond scale) lowering picture quality, then restore the quality as soon as the picture stops changing. We have reason to believe that video streaming coding is the "silver bullet" that will let desktop sharing support high-definition and ultra-high-definition pictures.
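A sketch of that peak-shaving behavior, reduced to a simple per-frame rate-control loop; the encoder interface and all constants are illustrative assumptions:

```python
# Sketch of the peak-shaving behavior as a per-frame rate-control loop.
# `encode(frame, quality)` is a hypothetical encoder returning the compressed
# size in bits; the target bitrate, frame rate, and step sizes are assumptions.
TARGET_BPS = 2_000_000            # target bitrate: 2 Mbps
FPS = 15                          # typical desktop-sharing frame rate
BUDGET = TARGET_BPS / FPS         # per-frame bit budget

def rate_controlled_stream(frames, encode, q_min=10, q_max=95):
    quality = q_max
    for frame in frames:
        bits = encode(frame, quality)
        if bits > BUDGET:
            quality = max(q_min, quality - 10)   # burst: shave the peak
        elif quality < q_max:
            quality = min(q_max, quality + 5)    # picture settled: restore quality
        yield bits, quality
```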
The technical exploration described above actually took a great deal of time and effort, and it is a process of continuous improvement, because the iteration of products and technologies is never a once-and-for-all affair. The author's team of just five people has been cycling through "find a problem, analyze and verify, improve, find the next problem" for more than seven years, and expects to continue that way; we are never finished improving, only on the way to improving, and the feedback from all sides shows the results are genuinely good.
A few years ago, in a test in a customer's environment, the full-time cloud conference performed far better than a big-name foreign videoconferencing product: it connected several minutes earlier, and the meeting experience was excellent, which made all the painstaking effort worthwhile. This is also the advantage of fully independent development; directly adopting foreign technology, under domestic network conditions, is basically unable to satisfy users.