

Comparison of HTTP Protocol Versions (1.0 / 1.1 / 2.0)

2025-04-05 Update. From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 report

HTTP 1.0

Short connection

Each request establishes a new TCP connection, which is torn down as soon as the response completes. This causes two problems: connections cannot be reused, and head-of-line blocking.

Because connections cannot be reused, every request pays for a fresh three-way handshake and TCP slow start. The handshake cost is most visible on high-latency links, while slow start hurts large file transfers the most. Head-of-line blocking leaves bandwidth underutilized and stalls subsequent, otherwise-ready requests behind a slow one.
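The one-connection-per-request behavior can be observed directly. Below is a minimal sketch using Python's standard library: a throwaway local HTTP/1.0 server (the handler and addresses are illustrative, not from the article) answers one request and then closes the socket, which the client sees as EOF.

```python
import http.server
import socket
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.0"  # answer in HTTP/1.0 style: close after each response

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One TCP connection per request: the server closes the socket after replying.
s = socket.create_connection(("127.0.0.1", port))
s.sendall(b"GET / HTTP/1.0\r\nHost: 127.0.0.1\r\n\r\n")
reply = b""
while chunk := s.recv(4096):  # recv() returns b"" once the server closes
    reply += chunk
s.close()
server.shutdown()

print(b"hello" in reply)  # the body arrived, then the connection hit EOF
```

A second request would need a brand-new `create_connection` call, repeating the handshake and slow start described above.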

HTTP 1.1

Long connection

Introduced to fix the pain points of HTTP 1.0. Multiple HTTP requests can reuse one TCP connection; with HTTP pipelining, requests can even be sent back to back, with the server answering them in FIFO order. HTTP 1.1 adds the Connection header, which describes how the client and server manage the underlying TCP connection: Connection: close requests a short connection, while Connection: keep-alive requests a persistent one. Alongside it come request and response headers for related mechanisms such as authentication, state management, and caching, as well as the Host header, which lets a single IP address serve multiple domains.
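Connection reuse is easy to demonstrate with the standard library. In this sketch (server and paths are illustrative), a local HTTP/1.1 server sets Content-Length so the connection stays open, and one `http.client.HTTPConnection` carries two requests over the same TCP connection.

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables persistent connections

    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # required for keep-alive
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
bodies = []
for path in ("/a", "/b"):  # two requests reuse the SAME TCP connection
    conn.request("GET", path)
    resp = conn.getresponse()
    bodies.append(resp.read().decode())
conn.close()
server.shutdown()
print(bodies)  # ['/a', '/b']
```

Only one handshake and one slow start are paid for both requests; with Connection: close, each request would open a fresh socket instead.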

HTTP 2.0

Multiplexing

Multiplexing allows multiple request-response exchanges to be in flight simultaneously over a single HTTP/2 connection. Under HTTP/1.1, the browser caps the number of concurrent requests per domain, and requests beyond the limit are blocked; this is one reason some sites shard static resources across multiple CDN domain names. Twitter, for example, uses http://twimg.com to work around the browser's per-domain request limit. With multiplexing, HTTP/2 achieves multi-stream parallelism without relying on multiple TCP connections. HTTP/2 reduces the basic unit of communication to a frame; frames belong to messages in logical streams, and those messages are exchanged in parallel, in both directions, on the same TCP connection.

Binary framing

HTTP/2 adds a binary framing layer between the application layer (HTTP/2 semantics) and the transport layer (TCP). Without changing the semantics, methods, status codes, URIs, or header fields of HTTP/1.x, it removes HTTP/1.1's performance limitations, improving transfer performance and achieving low latency and high throughput. In the binary framing layer, HTTP/2 splits all transmitted information into smaller messages and frames and encodes them in binary: HTTP/1.x-style header information is carried in HEADERS frames, and the request or response body in DATA frames.
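The frame layout itself is small: per RFC 7540 §4.1, every frame starts with a fixed 9-byte header (24-bit payload length, 8-bit type, 8-bit flags, and a reserved bit plus a 31-bit stream identifier). A minimal pack/unpack sketch of just that header, using only the standard library:

```python
import struct

# HTTP/2 frame header (RFC 7540 §4.1): 24-bit length, 8-bit type,
# 8-bit flags, 1 reserved bit + 31-bit stream identifier = 9 bytes.
DATA, HEADERS = 0x0, 0x1          # two of the defined frame types
END_STREAM = 0x1                  # one of the defined flags

def pack_frame_header(length, ftype, flags, stream_id):
    # 24-bit length = the last 3 bytes of a big-endian 32-bit integer
    return struct.pack(">I", length)[1:] + struct.pack(
        ">BBI", ftype, flags, stream_id & 0x7FFFFFFF)

def unpack_frame_header(data):
    length = int.from_bytes(data[:3], "big")
    ftype, flags, stream = struct.unpack(">BBI", data[3:9])
    return length, ftype, flags, stream & 0x7FFFFFFF  # mask the reserved bit

hdr = pack_frame_header(5, DATA, END_STREAM, 1)
print(len(hdr))                  # 9
print(unpack_frame_header(hdr))  # (5, 0, 1, 1)
```

Because every frame is self-describing (length, type, stream), a receiver can pull frames for different streams off one socket and route each to the right logical stream.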

HTTP/2 communication happens on a single connection that can carry any number of bidirectional streams. Historically, the key to HTTP performance has not been higher bandwidth but lower latency. A TCP connection tunes itself over time: it initially limits its transmission rate and speeds up as data is delivered successfully, a process called TCP slow start. This makes bursty, short-lived HTTP connections very inefficient. By letting all streams share the same connection, HTTP/2 uses TCP far more effectively, so that high bandwidth can actually translate into better HTTP performance.

This single-connection, multi-resource approach reduces connection pressure on the server, uses less memory, and raises connection throughput. Moreover, with fewer TCP connections, network congestion improves and slow-start time shrinks, so recovery from congestion and packet loss is faster.
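The multiplexing described above — frames from independent streams arriving interleaved on one connection and being reassembled by stream id — can be sketched as a toy demultiplexer (the tuples and payloads below are illustrative; real HTTP/2 frames carry the binary header shown earlier, not Python tuples):

```python
# Frames tagged with a stream id arrive interleaved on one connection;
# the receiver reassembles each logical stream independently.
interleaved = [
    (1, b"<html>"),   # stream 1, first DATA chunk
    (3, b'{"id":'),   # stream 3 starts before stream 1 finishes
    (1, b"</html>"),  # stream 1 resumes; neither stream blocks the other
    (3, b"42}"),
]

streams = {}
for stream_id, chunk in interleaved:
    streams[stream_id] = streams.get(stream_id, b"") + chunk

print(streams[1])  # b'<html></html>'
print(streams[3])  # b'{"id":42}'
```

Under HTTP/1.1, the JSON response would have had to wait for the HTML to finish (or use a second TCP connection); here both make progress on one socket.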

Header compression

HTTP/1.1 does not support HTTP header compression, which is why SPDY and HTTP/2 introduced it. SPDY used the general-purpose DEFLATE algorithm, while HTTP/2 uses HPACK, an algorithm designed specifically for header compression.
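A core idea in HPACK (RFC 7541) is that common headers are replaced by a one-byte index into a shared static table. The sketch below shows only that "indexed header field" case with three real static-table entries; full HPACK adds a dynamic table, literal encodings, and Huffman coding, none of which are modeled here.

```python
# Three entries from HPACK's static table (RFC 7541, Appendix A).
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":path", "/"): 4,
    (":status", "200"): 8,
}
REVERSE = {i: h for h, i in STATIC_TABLE.items()}

def encode(headers):
    # Indexed Header Field: 0b1xxxxxxx, the 7 low bits hold the table index.
    return bytes(0x80 | STATIC_TABLE[h] for h in headers)

def decode(data):
    return [REVERSE[b & 0x7F] for b in data]

wire = encode([(":method", "GET"), (":path", "/")])
print(len(wire))     # 2 -- two full headers in two bytes
print(decode(wire))  # [(':method', 'GET'), (':path', '/')]
```

Compare that with HTTP/1.1, where the same two headers cost dozens of bytes of ASCII on every single request.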

Server push

Server push is a mechanism for sending data before the client requests it: in HTTP/2, the server can send multiple responses for a single client request. Server push makes the HTTP/1.x-era optimization of embedding (inlining) resources unnecessary. If the client requests your home page, the server will likely respond with the page content plus the logo and stylesheets, because it knows the client will need them. This is equivalent to bundling all resources into one HTML document, but with a major advantage: pushed resources can be cached, and, subject to the same-origin policy, those cached resources can be shared across different pages.
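The decision flow can be modeled as a toy: for each page, the server consults a (hypothetical) resource map, announces each sub-resource before the client asks — the role PUSH_PROMISE frames play in HTTP/2 — and then sends responses for all of them. This sketch models only the bookkeeping, not the actual framing.

```python
# Hypothetical map from a requested page to resources worth pushing with it.
PUSH_MAP = {
    "/index.html": ["/style.css", "/logo.png"],
}

def handle_request(path):
    promised = PUSH_MAP.get(path, [])  # one PUSH_PROMISE per sub-resource
    responses = [path] + promised      # then a response for each of them
    return promised, responses

promised, responses = handle_request("/index.html")
print(promised)   # ['/style.css', '/logo.png']
print(responses)  # ['/index.html', '/style.css', '/logo.png']
```

Unlike inlined resources, each pushed response is a normal, individually cacheable response, which is exactly the advantage described above.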




