Example Analysis of http queue head blocking

2025-01-18 Update. From: SLTechnology News & Howtos (shulou)


This article shares an example-based analysis of HTTP head-of-line blocking. The material is practical, so it is offered here as a reference; follow along and have a look.

One of the biggest differences between version 1.0 and version 1.1 of the HTTP protocol is that HTTP/1.1 adds persistent connections. So what is an HTTP persistent connection?

Before looking at HTTP persistent connections, consider how an HTTP/1.0 request establishes a connection. First, be clear that every version of HTTP runs on top of TCP, and a TCP connection requires a three-way handshake to open and a four-way exchange to close. In HTTP/1.0, every HTTP request pays this full cost: however many HTTP requests a page makes, that many handshakes and closes must be performed, as shown in the figure:
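As a rough back-of-the-envelope sketch (a toy model, not a real network measurement), the TCP setup/teardown packet overhead grows linearly with the number of requests when each request opens its own connection:

```python
def tcp_overhead_packets(num_requests: int, reuse_connection: bool) -> int:
    """Count TCP setup/teardown packets: 3 for the handshake, 4 for the close."""
    SETUP, TEARDOWN = 3, 4
    connections = 1 if reuse_connection else num_requests
    return connections * (SETUP + TEARDOWN)

# HTTP/1.0 style: every request pays the full connection cost.
print(tcp_overhead_packets(100, reuse_connection=False))  # 700
# HTTP/1.1 persistent connection: one connection shared by all requests.
print(tcp_overhead_packets(100, reuse_connection=True))   # 7
```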

HTTP/1.1 added a persistent connection feature, usually called HTTP keep-alive. Arguably, a more accurate name would be TCP persistent connection: in HTTP/1.1, if a page issues multiple HTTP requests, only one TCP connection needs to be established, and the multiple HTTP request/response exchanges share that TCP channel.

Such a request may carry the request header Connection: keep-alive. Most web servers now support persistent connections by default, and in HTTP/1.1 the connection stays open even when a request does not carry the Connection: keep-alive header. If you do not want a persistent connection, add the Connection: close request header instead.
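To make the headers concrete, here is a minimal sketch that assembles the raw bytes of an HTTP/1.1 GET request (the host name `a.com` is just an illustrative placeholder):

```python
def build_request(host: str, path: str, keep_alive: bool) -> bytes:
    """Assemble a minimal HTTP/1.1 GET request as raw bytes."""
    connection = "keep-alive" if keep_alive else "close"
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        f"Connection: {connection}",
        "",  # the blank line terminates the header section
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

print(build_request("a.com", "/", keep_alive=False).decode())
```

Note that in HTTP/1.1 persistent connections are the default, so `Connection: keep-alive` is usually redundant; `Connection: close` is what a client sends to opt out.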

The procedure for an HTTP/1.1 request is as follows:

Comparing the two flow charts, we find that keeping the TCP connection alive greatly improves transmission efficiency. But a problem remains: HTTP head-of-line blocking.

In general, HTTP follows a "request-response" pattern: each time the client sends a request to the server, the server returns a response. This model is very simple, but it has a fatal flaw: when a page makes multiple requests, each request can only be sent after the previous request's response has been returned. The process is shown in the following figure:

Take a closer look at the figure above: on the TCP connection, each HTTP request must wait for the previous response before it can be sent. So if, for some reason, one response on the TCP channel is not returned in time, every subsequent request and response is blocked behind it. This is called head-of-line blocking.
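The cost of this serial behavior can be sketched with a toy timing model (times in milliseconds, purely illustrative): because request i cannot be sent until response i-1 arrives, each request's completion time is the running sum of all response times before it.

```python
from itertools import accumulate

def completion_times(response_times):
    """On one connection without pipelining, request i is only sent after
    response i-1 arrives, so completion times are a running sum."""
    return list(accumulate(response_times))

# Five requests; the second one is slow (e.g. a stalled backend), in ms.
times = [100, 3000, 100, 100, 100]
print(completion_times(times))  # [100, 3100, 3200, 3300, 3400]
```

One slow response (3000 ms) pushes every later request's completion past the 3-second mark, even though each of them only needs 100 ms of actual work.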

To improve speed and efficiency, HTTP/1.1 further supports pipelining over persistent connections. Pipelining allows the client to send the next request before the previous request has received its response, reducing waiting time and increasing throughput. Sending multiple requests back-to-back on the same TCP connection also improves network utilization. The process is shown below:

Taking a closer look at the figure above, multiple HTTP requests can be sent concurrently on the same connection. But the responses must still queue up: the request that was sent first must be answered first. Compared with HTTP requests without pipelining this is indeed more efficient, but there is still a limitation: if one response is delayed for a few seconds for some reason, all subsequent responses are blocked behind it, as shown in the figure:

Observe the response marked by the red line above: because that response is blocked, every response behind it is blocked as well. This is head-of-line blocking.
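The FIFO delivery rule can be simulated with another toy model: even if later responses are computed quickly on the server, none can be delivered to the client before every earlier response has gone out (times in milliseconds, illustrative only).

```python
def pipelined_delivery_times(ready_times):
    """With pipelining, all requests are sent up front, but responses must be
    delivered in FIFO order: a slow response delays everything behind it."""
    delivered = []
    latest = 0
    for t in ready_times:
        latest = max(latest, t)  # head-of-line blocking: wait for earlier responses
        delivered.append(latest)
    return delivered

# Response 2 takes 3000 ms; responses 3-5 are ready at 150-250 ms
# but cannot be delivered until it completes.
print(pipelined_delivery_times([100, 3000, 150, 200, 250]))
# [100, 3000, 3000, 3000, 3000]
```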

HTTP pipelining also has some limitations of its own:

1. Pipelining requires the server to return responses in the order the requests were sent (FIFO). The reason is simple: HTTP requests and responses carry no sequence numbers, so out-of-order responses could not be matched back to their requests.

2. A client that uses pipelining must keep track of requests that have not yet received a response, and must resend them if the connection is unexpectedly interrupted. If a request only fetches data from the server, resending it has no effect on the resource; but if it submits information, such as a POST request, resending may apply the submission multiple times and change the resource, which is not acceptable. Requests that can be safely repeated with the same effect are called idempotent requests, and a client should only pipeline idempotent requests.
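The difference between idempotent and non-idempotent requests under retry can be shown with a small in-memory sketch (the `store` dictionary stands in for server-side state; the function names are hypothetical):

```python
store = {"counter": 0, "name": "old"}

def put_name(value):
    """Idempotent (PUT-like): repeating it leaves the resource in the same state."""
    store["name"] = value

def post_increment():
    """Not idempotent (POST-like): every retry changes the resource again."""
    store["counter"] += 1

# Simulate a client retrying after a dropped connection: the original
# attempt plus two retries.
for _ in range(3):
    put_name("new")
    post_increment()

print(store["name"])     # "new" -- same result no matter how many retries
print(store["counter"])  # 3    -- the POST-like request was applied three times
```

This is exactly why pipelining restricts itself to idempotent requests: the client cannot always know whether a request that got no response was processed before the connection dropped.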

Here are the diagrams without pipelining and with pipelining side by side; let's compare them:

Because HTTP pipelining itself can still cause head-of-line blocking, and because of the limitations above, modern browsers disable pipelining by default, and most servers do not enable it either.

So how can head-of-line blocking be mitigated?

The HTTP protocol recommends that clients use concurrent persistent connections. Note that this concurrency refers to concurrent TCP connections. RFC 2616 stated an explicit limit: a client should maintain at most two persistent connections per server. The limit applies per domain name: for example, when visiting a.com, the client establishes at most two persistent connections to the a.com server.

In practice, however, browsers raise the concurrency limit to 6 to 8 connections per domain; Chrome, for example, uses 6. That is, if a page makes many HTTP requests to the same domain, Chrome establishes 6 persistent TCP connections to that domain and distributes the HTTP requests across them. This already puts real pressure on the server, and some web optimization techniques deliberately break through the 6-to-8 limit: domain sharding. Because the connection limit is per domain, a developer who distributes resources across several domains can raise the total number of persistent connections.
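The arithmetic behind domain sharding can be sketched as follows (the per-domain limit of 6 matches Chrome's commonly cited default; treat the numbers as an illustration, not a benchmark):

```python
import math

PER_DOMAIN_LIMIT = 6  # typical browser limit per domain (Chrome uses 6)

def sharding_stats(total_requests: int, shards: int):
    """With `shards` domains, the browser can open up to 6 connections to each.
    Returns (parallel connections, sequential rounds of requests needed)."""
    parallel = PER_DOMAIN_LIMIT * shards
    rounds = math.ceil(total_requests / parallel)
    return parallel, rounds

print(sharding_stats(100, 1))  # (6, 17)  one domain: 6 connections, 17 rounds
print(sharding_stats(100, 3))  # (18, 6)  three shards: 18 connections, 6 rounds
```

More shards mean fewer sequential rounds for the client, but every extra connection is another socket the server must hold open, which is the trade-off discussed next.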

Assuming a page with 100 pictures, let's use diagrams to show the transition from HTTP/1.0 to HTTP/1.1:

HTTP/1.0 era: 100 HTTP requests establish 100 TCP connections.

HTTP/1.1 era: TCP supports persistent connections, and each TCP connection can handle multiple HTTP requests.

As mentioned earlier, domain sharding can increase the number of concurrent TCP connections, but doing so also increases the number of connections the server must hold. When the server faces a large number of requests, this becomes a problem of its own. So what then? That is where the HTTP/2 protocol comes in; we'll cover it next time.

To summarize this article:

1. First, we clarified that an HTTP persistent connection actually refers to a persistent TCP connection.

2. Head-of-line blocking is a consequence of HTTP's request-response model: on the same TCP connection, if the first HTTP request has not been answered, the HTTP requests behind it cannot be answered either.

3. The first mitigation for head-of-line blocking is to open concurrent persistent connections. Browsers default to 6 to 8 connections per domain, and domain sharding can be used to get past that limit.

4. Although concurrent persistent connections relieve HTTP head-of-line blocking to some extent, they place higher demands on server capacity.

Thank you for reading! This concludes the example analysis of HTTP head-of-line blocking. I hope the content above is helpful; if you think the article is good, share it so more people can see it!
