
What are the disadvantages of HTTP/1.1 compared to HTTP/2?


This article explains in detail what shortcomings HTTP/1.1 has compared to HTTP/2. The editor finds it very practical and shares it here for your reference; I hope you get something out of it.

After the server is set up, we create a new page in the public folder. The page's resources consist mainly of images, CSS, and JS, and the code is as follows:
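A minimal sketch of such a page, assuming the server serves static files from the public folder (the file names style.css, app.js and the numbered image paths are placeholders, not the author's actual files):

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>HTTP/1.1 demo</title>
  <!-- one stylesheet and one script -->
  <link rel="stylesheet" href="./style.css">
  <script src="./app.js"></script>
</head>
<body>
  <!-- many img tags, far more than the browser's 6-connections-per-domain limit -->
  <img src="./images/1.jpg">
  <img src="./images/2.jpg">
  <img src="./images/3.jpg">
  <!-- ... and so on, up to ./images/100.jpg ... -->
</body>
</html>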

Looking at the HTML, we see that there are many img tags on the page. Now we visit the page and watch the browser's Network panel; the following pattern emerges:

Observe the pattern carefully: when the browser requests the image resources, the maximum concurrency is 6. To explain: the maximum number of TCP connections a browser opens to a single domain is decided by the browser, and Chrome's default maximum per domain is 6. So all of the page's image resources must complete their request/response cycles over these 6 persistent TCP connections.

Here is a more vivid analogy. If a hundred people need to cross a river, and there are six bridges over it, each of which only one person can cross at a time, then at most six people can cross at the same time, no more. Request/response under the HTTP/1.1 protocol follows this model.

Here we introduce a term: RTT (Round-Trip Time), or round-trip delay. It is an important performance metric in computer networking, representing the total delay from the moment the sender sends data until it receives the acknowledgment from the receiver (the receiver sends the acknowledgment immediately after receiving the data).

An HTTP request/response takes at least one RTT. If the request or response carries a large amount of data, it may require multiple transmissions, that is, multiple RTTs. For a simple calculation, assume here that an HTTP request takes exactly one RTT and that one RTT takes n seconds.

If the page has only one persistent TCP connection, fetching 100 resources takes about 100 × n seconds; if 6 TCP connections are used, the time drops to roughly 100 × n / 6 seconds.
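A rough worked example (the value of n is assumed for illustration, not measured): if n = 0.1 s, then one connection needs about 100 × 0.1 = 10 s, while six connections need about 100 × 0.1 / 6 ≈ 1.7 s. This back-of-the-envelope figure ignores connection setup, slow start, and bandwidth limits.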

Under the HTTP/1.1 protocol, given the browser's per-domain concurrency limit, we can use domain sharding to shorten the time further. What is domain sharding? Point several domain names at the same site, for example having both a.com and b.com resolve to the same site, so that the browser's maximum concurrency for that site grows with the number of domains (see the sketch below). The time shrinks further, to roughly 100 × n / (6 × number of domains) seconds.
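A minimal sketch of what domain sharding looks like in the markup, assuming a.com and b.com both point to the same server and the image paths are placeholders:

<!-- without sharding: all images share the 6 connections to a.com -->
<img src="https://a.com/images/1.jpg">
<img src="https://a.com/images/2.jpg">

<!-- with sharding: the same images are spread across domains that point to
     the same site, so the browser can open up to 6 connections per domain -->
<img src="https://a.com/images/1.jpg">
<img src="https://b.com/images/2.jpg">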

However, as the number of domain names increases, some problems arise:

1. The pressure on the server increases.

2. It takes time for the browser to resolve domain names; the more domains there are, the more DNS lookups and the more time spent.

The technique above exploits browser behavior to get around the per-domain concurrency limit, but it treats the symptoms rather than the root cause.

TCP itself also has problems, such as slow start; and when network conditions change, TCP's transfer rate fluctuates, so speed suffers as well.

TCP connections also compete with one another: multiple simultaneous TCP connections fight over the available bandwidth. Most important of all, persistent TCP connections under HTTP/1.1 suffer from head-of-line blocking. If you are not familiar with head-of-line blocking, see the article "HTTP head-of-line blocking in plain language".

Having covered how the browser handles TCP connections under the HTTP/1.1 protocol, let's consider a question: which files need to be loaded first while the page is being parsed?

CSS files and JS files, of course. But think about when the CSS and JS are actually loaded: after the HTML response completes, the browser quickly scans the page for key resources and then downloads the JS and CSS. There may be idle time here, and it can arise in two ways. Either the HTML has so many tags that parsing is laborious, so even though the CSS has finished downloading it still has to wait for DOM parsing to complete; or DOM parsing finishes while the CSS is still downloading, and parsing has to wait for it. Whatever we do, the two will not be perfectly in sync, so what can we do? What we can do under HTTP/1.1 is minimize the loading cost of the critical resources, whether HTML, CSS, or JS.

Under the HTTP/1.1 protocol, we can do this in the following ways:

1. Minify the code and remove comments.

2. Apply async and defer appropriately to JS files that do not depend on the DOM, so that they do not block DOM parsing (see the sketch after this list).

3. Apply media queries to CSS so that stylesheets needed only in specific scenarios (print, particular viewport widths) do not block rendering (also shown in the sketch after this list).

4. Tune the number and size of files sensibly. You cannot blindly merge all the CSS or JS: if a single CSS or JS file is too large, that also hurts efficiency, so you have to keep adjusting and testing. The essence is to reduce the RTT cost of loading resources while not exceeding the browser's per-domain concurrency limit.

5. Make good use of a CDN.

6. Apply domain sharding.
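A minimal sketch of points 2 and 3 (the file names are placeholders; async, defer, and the media attribute are standard HTML features):

<!-- point 2: scripts that do not depend on the DOM need not block parsing -->
<script src="./analytics.js" async></script>  <!-- executes as soon as it has downloaded -->
<script src="./app.js" defer></script>        <!-- executes after the DOM has been parsed -->

<!-- point 3: stylesheets scoped by a media query are only render-blocking
     when the media condition matches -->
<link rel="stylesheet" href="./print.css" media="print">
<link rel="stylesheet" href="./wide.css" media="(min-width: 1200px)">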

That is the content of today's article. To sum up:

1. Under HTTP/1.1, given the limit browsers place on the number of TCP connections per domain, we can use domain sharding to speed up resource downloads, but this brings its own problems: server load increases, and the browser spends more time resolving domain names, so it does not solve our problem very well.

2. The persistent TCP connections in HTTP/1.1 suffer from head-of-line blocking: on the same TCP connection, each HTTP request must receive its response before the next one can proceed.

3. TCP itself has some undesirable characteristics: slow start, and multiple TCP connections competing with each other for bandwidth.

We have offered workarounds for these problems, but they are not very elegant. With the rapid growth of the Internet, HTTP/1.1 seems less and less able to meet today's users' needs; the time has come for HTTP/2. The next article will show how HTTP/2 addresses these shortcomings of HTTP/1.1.

Finally, a few questions: can we meet the following requirements with HTTP/1.1?

1. If a page loads many resources and some of them are more important, what should we do if we want those to load first?

2. When we request a URL, can the server push the important resources the page needs in advance, instead of waiting for the browser to scan the HTML and then request them?

3. Since multiple TCP connections compete with one another, can we have the browser send all HTTP requests for the same domain over a single TCP connection? That would not only reduce the competition but also save the time spent establishing TCP connections.

That concludes this article on the shortcomings of HTTP/1.1 compared to HTTP/2. I hope the content above is of some help and that you have learned something from it; if you think the article is good, please share it so more people can see it.
