
What are the knowledge points of HTTP2.0?


This article mainly explains the knowledge points of HTTP2.0. The approach introduced here is simple, fast, and practical; friends who are interested may wish to have a look. Now let the editor take you through "what are the knowledge points of HTTP2.0"?

1. Binary framing

The fundamental improvement of HTTP2.0 is the new binary framing layer. Unlike HTTP1.X, which uses newline characters to delimit plain text, the binary framing layer is simpler and more efficient. The "layer" here refers to a new mechanism between the socket interface and the high-level HTTP API visible to the application: HTTP semantics, including the various verbs, methods, and headers, are unaffected; only the way they are encoded for transmission changes. HTTP1.X uses newline-delimited plain text, while HTTP2.0 splits all transmitted information into smaller messages and frames and encodes them in binary format. Examples are as follows:

As can be seen from the figure above, in HTTP1.1 the header and the data are sent together, and every transmission carries a header. In HTTP2.0, the header information is split into an independent frame that is sent first, followed by the data frames; and once header compression is applied, only the changed parts are sent on each transmission, which greatly improves the efficiency of data transfer.

1.1. Frame header

After an HTTP2.0 connection is established, the client and server communicate by exchanging frames, the smallest unit of the HTTP2.0 protocol. All frames share an 8-byte header in the following format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+---------------------------+---------------+---------------+
| R |        Length (14)        |   Type (8)    |   Flags (8)   |
+-+-+---------------------------+---------------+---------------+
|R|                  Stream Identifier (31)                     |
+-+-------------------------------------------------------------+
|                     Frame Payload (0...)                    ...
+---------------------------------------------------------------+

R: a reserved 2-bit field. The semantics of these bits are undefined; they must remain unset (0) when sent and must be ignored when received.

Length: a 14-bit unsigned integer giving the length of the frame payload. The 8-byte frame header is not counted here; the maximum payload length is 2^14 - 1 (16383) bytes, and the maximum length of a whole frame (including the header) is 16391 bytes.

Type: the 8-bit type of the frame. The frame type defines how the rest of the frame header and the frame payload are interpreted. An implementation must treat receipt of an unknown frame type (any type not defined in the specification) as a connection error of type PROTOCOL_ERROR.

Flags: an 8-bit field reserved for frame-type-specific boolean flags. Flags are assigned semantics specific to the indicated frame type; flags that have no defined semantics for a given frame type must be ignored and must be left unset (0) when sending.

R: a 1-bit reserved field. Its semantics are undefined; it must remain unset (0) when sent and must be ignored when received.

Stream Identifier: a 31-bit stream identifier that uniquely identifies an HTTP2.0 stream. The value 0 is reserved to indicate that the frame applies to the connection as a whole rather than to an individual stream.

Because every frame shares this 8-byte header, a parser can quickly determine the type, flags, and length of a frame; and since each frame declares its length up front, the parser can jump straight to the beginning of the next frame and parse it. This is a big improvement over HTTP1.X.
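To make the layout concrete, here is a minimal Python sketch that packs and parses the 8-byte header exactly as described above. It is illustrative only, following the early-draft layout this article uses, and is not a production HTTP2.0 codec:

import struct

def parse_frame_header(buf: bytes):
    # Header layout described above: R(2) + Length(14), Type(8), Flags(8),
    # R(1) + Stream Identifier(31) -- 8 bytes in total, network byte order.
    if len(buf) < 8:
        raise ValueError("need at least 8 bytes")
    first16, frame_type, flags, stream_word = struct.unpack("!HBBI", buf[:8])
    length = first16 & 0x3FFF             # low 14 bits; top 2 bits are reserved
    stream_id = stream_word & 0x7FFFFFFF  # low 31 bits; top bit is reserved
    return length, frame_type, flags, stream_id

# Example: a header for a 16-byte frame of type 0x1 with flags 0x4 on stream 1.
hdr = struct.pack("!HBBI", 16, 0x1, 0x4, 0x1)
print(parse_frame_header(hdr))  # (16, 1, 4, 1)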

1.2. Frame types

Whether in connection management or in a separate stream, each frame serves a specific purpose. In HTTP2.0, the following 10 frame types are defined:

DATA: used to transport the HTTP message body

HEADERS: used to transmit header fields for a stream

PRIORITY: used to specify or reassign the priority of a referenced resource

RST_STREAM: used to signal abnormal termination of a stream

SETTINGS: used to communicate configuration parameters for the connection between the two endpoints

PUSH_PROMISE: used to signal a promise to create a stream and push the referenced server resource

PING: used to measure round-trip time and perform liveness checks

GOAWAY: used to notify the remote peer to stop creating new streams on this connection

WINDOW_UPDATE: used to implement flow control for an individual stream or for the whole connection

CONTINUATION: used to continue a sequence of header block fragments.

Note: the server can use a GOAWAY frame to tell the client the ID of the last stream it processed, which eliminates a class of request races and lets the browser intelligently retry or cancel "suspended" requests accordingly. This is an important and necessary function for ensuring the safety of a multiplexed connection.
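As a quick reference, a small sketch mapping these ten frame types to the numeric codes the HTTP/2 specification registers for them (the codes below come from the spec, not from this article):

from enum import IntEnum

class FrameType(IntEnum):
    # Numeric type codes as registered by the HTTP/2 specification.
    DATA = 0x0
    HEADERS = 0x1
    PRIORITY = 0x2
    RST_STREAM = 0x3
    SETTINGS = 0x4
    PUSH_PROMISE = 0x5
    PING = 0x6
    GOAWAY = 0x7
    WINDOW_UPDATE = 0x8
    CONTINUATION = 0x9

print(FrameType(0x7).name)  # GOAWAY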

The core of HTTP2.0's performance enhancement lies in the new binary framing layer, which defines how HTTP messages are encapsulated and transmitted between client and server. Because HTTP2.0 uses this new framing, client and server must both use the new binary encoding in order to understand each other: an HTTP1.x client cannot understand a server that only supports HTTP2.0, and vice versa. This does not affect our use of HTTP2.0, however; existing applications need not concern themselves with these changes, because the client and server perform the necessary framing work on their behalf.

1.3. Streams, messages, and frames

The new binary framing mechanism changes the way the client and server exchange data. To better understand this core change in HTTP2.0, the following introduces three concepts, stream, message, and frame, and the differences among them:

Stream: a two-way byte stream on an established connection, a virtual channel in an HTTP/2 connection.

Message: a complete series of data frames corresponding to a logical message.

Frame: the smallest unit of HTTP2.0 communication. Each frame contains a frame header followed by a variable-length payload structured according to the frame type.

All HTTP2.0 communication takes place over a single TCP connection that can carry any number of bidirectional streams. Each stream carries messages, and each message consists of one or more frames; the frames can be sent out of order and are reassembled according to the stream identifier in each frame header. The frame is the smallest communication unit in HTTP2.0, and different types of frames play specific roles.

2. Multiplexing

2.1. Serial transmission and multiplexing

In HTTP1.x, a client that wants to issue multiple parallel requests to improve performance must open multiple TCP connections, and each connection still suffers from head-of-line blocking, which seriously hurts data transmission efficiency. In HTTP2.0, the binary framing mechanism breaks through these limitations and enables full request and response multiplexing: the client and server can decompose an HTTP message into independent frames, send them interleaved, and reassemble them at the receiving end, thereby completing the message transfer. Examples are as follows:

As can be seen from the above figure, in the same connection, three streams are transmitted at the same time, two are sent to the client from the server, and one is sent from the client to the server, which greatly improves the utilization of the connection.

This new mechanism is revolutionary and will trigger a series of chain reactions throughout the WEB technology stack, resulting in significant performance improvements because:

Requests can be sent in parallel and interleaved without affecting each other.

Responses can be sent in parallel and interleaved without interfering with each other.

Multiple requests and responses can be sent in parallel over a single connection.

Unnecessary delays are eliminated, reducing page load time.

No extra work is needed to work around HTTP1.X limitations.

The binary framing mechanism of HTTP2.0 not only solves the head-of-line blocking problem of HTTP1.X but also removes the dependence on multiple connections for processing and sending requests and responses in parallel. This makes applications faster, development easier, and deployment cheaper, while reducing the CPU and memory footprint of both client and server.
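As an illustration, here is a minimal sketch using the third-party Python httpx library (an assumption: it must be installed with HTTP/2 support, e.g. pip install 'httpx[http2]'; the URLs are placeholders). With http2=True, all requests to the same origin share one multiplexed connection:

import httpx

# One client owns one connection pool; with http2=True, requests to the
# same origin become separate streams over a single TCP connection.
with httpx.Client(http2=True) as client:
    urls = [
        "https://example.com/a.css",
        "https://example.com/b.js",
        "https://example.com/c.png",
    ]
    for url in urls:
        r = client.get(url)
        print(r.http_version, r.status_code, url)

Issued sequentially as above, the requests merely reuse the connection; running them concurrently (for example with httpx.AsyncClient) is what lets the streams actually interleave.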

2.2. One connection per origin

With connection multiplexing, HTTP2.0 no longer relies on multiple TCP connections to achieve stream parallelism. Multiple streams are split into frames that can be interleaved and prioritized, so all HTTP2.0 connections are persistent and only one connection per origin is needed between client and server. One connection per origin significantly reduces the associated resource footprint: fewer sockets to manage along the connection path, less memory, and higher connection throughput. As a result, there are improvements on many levels:

All streams are prioritized within a single, consistent ordering.

A single compression context makes compression better.

Network congestion is reduced because there are fewer TCP connections.

Less time is spent in slow-start, and recovery from congestion and packet loss is faster.

Most HTTP connections are short and bursty, whereas TCP is most efficient on long-lived connections transferring large chunks of data. By letting all streams share the same connection, HTTP2.0 makes more efficient use of TCP. HTTP2.0 not only reduces network latency but also helps increase throughput and reduce operating costs.

Note: everything has two sides, and one connection per origin certainly brings some problems:

Although head-of-line blocking at the HTTP level is eliminated, head-of-line blocking still exists at the TCP level.

If TCP window scaling is disabled, the bandwidth-delay product may limit the throughput of the connection.

When a packet is lost, the TCP congestion window shrinks.

Although the above problems affect HTTP2.0 performance, experiments show that a single TCP connection is still the best strategy at present: the performance gains from compression and prioritization outweigh the negative effects of head-of-line blocking. Like all other performance optimizations, removing one bottleneck exposes the next; for HTTP2.0, TCP is probably that next bottleneck, which is one reason server-side TCP configuration is critical to HTTP2.0.

3. Request priority

Once an HTTP message is decomposed into many individual frames, the frames can be interleaved and ordered so that the most critical ones are sent first, ensuring fast delivery of critical resources. Defining the order in which frames are sent is a challenge, so for this purpose the HTTP/2 standard allows each stream to have an associated weight and dependency:

Each stream can be assigned an integer weight between 1 and 256, and streams sharing the same parent should be allocated resources in proportion to their weights.

Each stream can declare an explicit dependency on another stream, expressing a preference to allocate resources to that stream (the parent) before the stream that depends on it.

Through stream dependencies and weights, the client can build a "priority tree" (as shown in the figure below) and send it to the server to express how it would like to receive responses. On receiving this priority tree, the server can use it to steer server resources such as CPU, storage, and network toward high-priority streams, and once response data is available, allocate more bandwidth to ensure that high-priority responses are delivered quickly.

A stream in HTTP2.0 can set only one stream dependency, and the stream it depends on is its parent. If the dependency declaration is omitted, the stream depends on the "root stream" by default. Declaring a dependency indicates that, where possible, the parent stream should be allocated resources and responded to before the child stream: for example, deliver response D before processing and responding to C.

Sibling streams with the same parent should be allocated server resources in proportion to their weights. For example, if stream A has a priority weight of 12 and its sibling stream B has a weight of 4, the proportion of resources each should receive is determined as follows:

Calculate the total weight of all streams: 4 + 12 = 16

Calculate each stream's proportion of the total: A = 12/16 = 3/4; B = 4/16 = 1/4.
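The proportional allocation is plain arithmetic; here is a tiny sketch for clarity (the stream names follow the example above):

def sibling_shares(weights):
    # Resource share of each sibling stream, proportional to its weight.
    total = sum(weights.values())
    return {stream: weight / total for stream, weight in weights.items()}

print(sibling_shares({"A": 12, "B": 4}))  # {'A': 0.75, 'B': 0.25}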

So stream A should get 3/4 of the available resources and stream B should get 1/4. To better understand the priority mechanism, the priority trees in the figure above are described from left to right:

Neither stream A nor stream B specifies a parent dependency, so both depend on the implicit "root stream". A has a priority weight of 12 and B a weight of 4, so based on proportional weights, stream B should receive one-third of the resources allocated to stream A.

D depends on the root stream and C depends on D. D should therefore be served before C; the weights are irrelevant to resource allocation here, because C has no siblings at its level.

D should receive full allocation of resources before C; C should receive full allocation before A and B; stream B should receive one-third of the resources allocated to stream A.

D should receive full allocation before E and C; E and C should receive equal allocation before A and B; A and B should receive proportional allocation based on their weights.

As the example shows, the combination of stream dependencies and weights provides an expressive way to state resource priorities, letting clients define priorities for different resources and improve performance. The HTTP/2 protocol also allows the client to update priority information at any time, using HEADERS or PRIORITY frames, just as when the stream was created. With these priority hints, client and server can adopt different strategies for different streams and send streams, messages, and frames in an optimal order. The purpose of prioritization is to allow an endpoint to express how it would like its peer to allocate resources when managing concurrent streams; more importantly, when transmission capacity is limited, priority can be used to select which stream's frames to transmit. Providing priority information is optional, and default values are used when it is not specified.

Stream dependencies and weights express a transport preference rather than a requirement, so no particular processing or transmission order is guaranteed. That is, the client cannot force the server to process streams in a particular order using stream priorities. There are several issues to consider when selecting an HTTP2.0 server:

What if the server turns a blind eye to all priorities?

Must high priority streams be given priority?

Are there situations where streams with different priorities should be interleaved?

If the server ignores all priority values, it may make the application respond more slowly: the browser is waiting for critical CSS and JavaScript while the server sends images, blocking rendering. However, strictly following the prescribed priority can also lead to sub-optimal results, because it may reintroduce head-of-line blocking: a slow high-priority request unnecessarily blocking the delivery of other resources.

Therefore, the server can and should interleave frames of different priority levels: high-priority streams should be transmitted first whenever possible, but mixing priorities is also necessary to make more efficient use of the underlying connection. The expressed priority is therefore only a suggestion.

4. Flow control

Flow control is defined to protect endpoints operating under resource constraints. The problem flow control solves is allowing the receiver to keep processing other streams on the same connection while it is busy processing data on one stream. Transmitting multiple streams over the same TCP connection means sharing bandwidth. Prioritizing streams helps with delivery order, but priority alone is not sufficient to govern how resources are allocated among multiple streams or connections. To solve this, HTTP2.0 provides a simple mechanism for flow control of streams and connections:

Flow control is hop-by-hop, not end-to-end.

Flow control is based on window update frames: the receiver advertises how many bytes it is prepared to receive on a given stream and for the entire connection.

The flow control window size is updated through WINDOW_UPDATE frames, each of which specifies the stream ID and a window size increment.

The initial flow control window for each new stream and for the connection as a whole is 65535 bytes.

Flow control is directional: the receiver may set any window size it likes for each stream and for the entire connection, and the receiver may also disable flow control, both for individual streams and for the whole connection.
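A minimal sketch of the receive-window bookkeeping just described (illustrative only; the 65535-byte default follows the text above):

class FlowControlWindow:
    # Tracks how many more DATA bytes may be sent on one stream (or on
    # the connection as a whole) before a WINDOW_UPDATE must arrive.
    DEFAULT_INITIAL = 65535

    def __init__(self, size: int = DEFAULT_INITIAL):
        self.available = size

    def consume(self, data_len: int) -> None:
        # Called when sending a DATA frame; only DATA frames are flow-controlled.
        if data_len > self.available:
            raise RuntimeError("would exceed the advertised flow control window")
        self.available -= data_len

    def window_update(self, increment: int) -> None:
        # Called when the peer's WINDOW_UPDATE frame arrives.
        self.available += increment

# One window per stream plus one for the whole connection:
connection, stream1 = FlowControlWindow(), FlowControlWindow()
for window in (connection, stream1):
    window.consume(16000)
print(connection.available, stream1.available)  # 49535 49535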

The frame type determines whether flow control applies. Of the frames defined in the specification, only DATA frames are subject to flow control; all other frames do not consume the advertised flow control window. This ensures that important control frames are never blocked by flow control.

HTTP/2 standardizes only the WINDOW_UPDATE frame format. The HTTP2.0 standard does not specify any specific algorithm, value, or when to send WINDOW_UPDATE frames, so implementations can choose their own algorithms to match their own application scenarios to get the best performance.

These flow control rules ensure that streams on the same connection do not destructively interfere with each other. Flow control applies both to individual streams and to the connection as a whole, and HTTP/2 implements it with the WINDOW_UPDATE frame type. The mechanism is the same as TCP flow control, but TCP flow control alone cannot apply different policies to the multiple streams inside one HTTP2.0 connection, hence the dedicated HTTP2.0 flow control mechanism.

5. Server push

5.1. Server-side push

A powerful feature added in HTTP2.0 is that the server can send multiple responses to a single client request. In other words, in addition to the response to the original request, the server can push extra resources to the client without an explicit request from the client. Examples are as follows:

What problem does this mechanism solve? A web application usually contains dozens of resources, and the client has to discover them one by one by parsing the document the server provides. So why not have the server push these resources in advance and cut the extra latency? The server already knows which resources the client will request next, so it can push them proactively. In fact, CSS and JS inlined in web pages, or other resources embedded through URIs, can already be seen as a form of server push: inlining a resource into the document delivers it to the client without an explicit request. The difference in HTTP2.0 is that this process moves out of the application and into the HTTP protocol itself, which brings the following benefits:

The client can cache pushed resources.

The client can decline pushed resources.

Pushed resources can be shared across different pages.

Pushed resources can be multiplexed alongside other resources.

The server can push resources according to priority.

With server push, the HTTP1.X-era practice of inlining or embedding resources can be retired. The only case where embedding a resource is still worthwhile is when it is used by a single page and carries little encoding overhead; every other scenario should use server push.

Note: every server push stream is initiated with a PUSH_PROMISE frame, which signals the server's intent to push the described resources to the client in addition to the response to the original request. A PUSH_PROMISE frame contains only the HTTP headers of the promised resource.

After receiving the PUSH_PROMISE frame, the client can choose to receive or reject the stream according to its own needs. There are also some restrictions on server push:

First, the server must follow the request-response cycle and may push resources only in response to a request; the server cannot initiate pushes at will.

Second, the PUSH_PROMISE frame must be sent before the corresponding response data, to avoid a race on the client: otherwise the client might request the very resource the server is about to push.

Because pushed responses are effectively hop-by-hop, an intermediary that receives a pushed response from the server can choose not to forward it to the client; that is, how a pushed response is used is up to the intermediary. Likewise, an intermediary may push additional responses to the client without the server doing anything. The server may push only cacheable responses; the promised request must be safe and must never contain a request body.

5.2. How to implement server-side push

Server push offers many possibilities for optimizing resource delivery in applications, but how should the server determine which resources can or should be pushed? HTTP2.0 does not provide detailed rules, so a variety of strategies is possible, each of which may take the application or server usage scenario into account:

The application can explicitly initiate a server push in its own code

The application can send a signal to the server through an additional HTTP header, listing the resources it wants pushed

The server can learn the associated resources on its own, without relying on the application, and infer which resources to push through analysis.

These are just a few possible strategies; there are of course many other implementations, from manually invoking a low-level API to fully automated schemes. In short, there will be plenty of interesting innovation around server push. Pushed resources go directly into the client cache, just as if the client had requested them. There is no client-side API or JavaScript callback to signal when a pushed resource arrives; to a web application running in the browser, the whole process is invisible.

Although how to decide which resources can or should be pushed is left open, the mechanics of initiating a push are well defined: the server first sends the push request (a PUSH_PROMISE frame), then the push response. The details are as follows:

Push Requests

A server push is semantically equivalent to the server responding to a request; in this case, however, the request is also sent by the server, as a PUSH_PROMISE frame. The PUSH_PROMISE frame contains a header block with a complete set of request header fields that the server attributes to the request. The pushed response is always associated with an explicit request from the client, and the server sends the PUSH_PROMISE frame on the stream of that explicit request. A PUSH_PROMISE frame carries the promised stream identifier, chosen from the server's available stream identifiers. The server should send the PUSH_PROMISE frame before sending any promised response; this avoids a race in which the client requests a resource before receiving the PUSH_PROMISE frame. A PUSH_PROMISE may be sent by the server on any stream opened by the client.

Push Responses

After sending the PUSH_PROMISE frame, the server can begin delivering the pushed response as a server-initiated response on the stream identified by the promised stream ID. Once the client receives a PUSH_PROMISE frame and chooses to accept the pushed response, it should not issue its own request for the promised resource until the promised stream is closed. If for any reason the client decides it does not want the pushed response, or if the server takes too long to begin sending it, the client can send a RST_STREAM frame with the CANCEL or REFUSED_STREAM code, referencing the pushed stream identifier.
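The client-side decision can be sketched schematically as follows (the PushPromise record and the send_frame callback are hypothetical stand-ins for illustration, not a real library API; the error codes are the ones the HTTP/2 specification assigns):

from dataclasses import dataclass

REFUSED_STREAM = 0x7  # error codes from the HTTP/2 specification
CANCEL = 0x8

@dataclass
class PushPromise:
    parent_stream_id: int    # the client-opened stream it arrived on
    promised_stream_id: int  # server-chosen identifier for the pushed stream
    headers: dict            # request headers of the promised resource

def handle_push_promise(promise: PushPromise, wanted: bool, send_frame):
    if wanted:
        # Accept: do not issue our own request for this resource; the
        # response will arrive on the promised stream.
        return promise.promised_stream_id
    # Decline: reset the promised stream with CANCEL (or REFUSED_STREAM).
    send_frame("RST_STREAM", stream_id=promise.promised_stream_id,
               error_code=CANCEL)
    return None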

6. Header compression

Every HTTP exchange carries a set of headers describing the transferred resource and its attributes. In HTTP1.X this metadata is sent as plain text, typically adding 500-800 bytes per request, and thousands of bytes once cookies are included. To reduce this overhead and improve performance, HTTP2.0 compresses header metadata:

HTTP2.0 uses "header tables" on the client and server to track and store previously sent header key-value pairs; the same data is no longer resent with every request and response.

The header table always exists for the duration of the HTTP2.0 connection and is updated progressively by both the client and the server.

Each new header key-value pair is either appended to the end of the current table or replaces the previous value in the table.

As a result, both ends of an HTTP2.0 connection know which headers have been sent and what their values are, so only the differences from the previously sent data need to be encoded. Key-value pairs that rarely change during the communication need to be sent only once, which greatly reduces the transferred payload. Examples are as follows:

The HTTP/1 status line information (Method, Path, Status, etc.) is split into key-value pairs in HTTP/2 and carried as pseudo-header fields (names beginning with a colon), which also benefit from dictionary and Huffman compression. In addition, all header names in HTTP/2 must be lowercase.

Header compression requires the client and server to do the following:

Maintain the same static dictionary (Static Table) containing common header names and particularly common combinations of header names and values

Maintain the same dynamic dictionary (Dynamic Table), which can add content dynamically

Support Huffman coding (Huffman Coding) based on a static Huffman code table.

Static dictionaries serve two purposes:

1) for exactly matched header key-value pairs, such as method: GET, a single index byte can represent the whole pair;

2) for a key-value pair whose header name matches, such as cookie: xxxxxxx, the name can be represented by a single index byte.

At the same time, the browser can tell the server to add cookie: xxxxxxx to the dynamic dictionary so that the entire key-value pair can subsequently be represented by a single index byte; the server can update the client's dynamic dictionary in the same way. Note that dynamic dictionaries are tied to a specific connection context, so a separate dictionary must be maintained for every HTTP2.0 connection. Dictionaries greatly improve the compression ratio, and the static dictionary can already be used in the very first request.

Huffman coding further shrinks content that appears in neither the static nor the dynamic dictionary. HTTP2.0 uses a static Huffman code table, which must also be built into both client and server. The core idea of Huffman coding is to represent the most information with the fewest bits: the HTTP2.0 Huffman table was generated from statistics over a large sample of HTTP headers, so frequently occurring characters get shorter bit sequences and rarer characters get longer ones. This ensures header information takes up less space and compresses it further.
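As a concrete illustration, here is a small sketch using the third-party Python hpack library (an assumption: pip install hpack). Note how re-sending the same headers costs far fewer bytes once they sit in the dynamic table:

from hpack import Encoder, Decoder

encoder, decoder = Encoder(), Decoder()
headers = [(":method", "GET"), (":path", "/index.html"), ("cookie", "xxxxxxx")]

first = encoder.encode(headers)
second = encoder.encode(headers)  # now served from the dynamic table
print(len(first), len(second))    # the second encoding is much smaller

print(decoder.decode(first))      # round-trips to the original key-value pairs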

After receiving compressed header information, the server first decodes it and then, combining its static and dynamic dictionaries, reconstructs the complete headers before processing the request and responding; it updates the dynamic dictionary whenever an update is called for.

HTTP2.0 compression algorithm: earlier versions of SPDY used zlib with a custom dictionary to compress all HTTP headers, cutting header overhead by 85% to 88% and significantly reducing page load time. However, in the summer of 2012 the CRIME attack against TLS and SPDY compression appeared, so zlib compression was withdrawn and replaced by the indexed-table algorithm described above, which has no comparable security problem yet achieves almost the same performance gain.

7. HTTP2.0 upgrade and discovery

The migration to HTTP2.0 cannot be completed in an instant, and both the server side and the client side need to make the necessary updates and upgrades before they can be used. The good news is that most modern browsers have built-in efficient background upgrade mechanisms, and for most existing users, these browsers can support HTTP2.0 quickly without causing much trouble. However, the upgrade and update of server-side and intermediate devices is not so easy, it is a long-term process, and it is laborious and costly.

HTTP1.X will be with us for at least another decade, and during that time most servers and clients must support both the 1.x and 2.0 standards. Therefore, a client that supports HTTP2.0 must be able to discover whether the server and any intermediaries support HTTP2.0 before initiating a new request. There are three situations:

New HTTPS connections initiated through TLS and ALPN

New HTTP connections initiated based on previous information

Initiate a new HTTP connection without previous information

The HTTPS negotiation process uses ALPN to discover and negotiate HTTP2.0 support. With ALPN, the TLS handshake carries the list of protocols the client supports and the server directly selects HTTP2.0; the entire negotiation completes within the handshake, with no additional messages.

Note: Application-Layer Protocol Negotiation (ALPN) is a TLS extension that supports protocol negotiation during the TLS handshake, eliminating the additional round-trip latency of HTTP's Upgrade mechanism. The process is as follows:

The client adds a ProtocolNameList field to the ClientHello message, containing the list of application protocols it supports.

The server examines the ProtocolNameList field and returns a ProtocolName field in the ServerHello message indicating the protocol selected by the server.

The server responds with exactly one protocol, and if it supports none of the protocols the client requested, it may abort the connection. As a result, once the TLS handshake completes and the secure tunnel is established, client and server have already negotiated the application protocol and can begin communicating immediately.
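A minimal sketch of ALPN negotiation using Python's standard ssl module (example.com is a placeholder host):

import socket
import ssl

ctx = ssl.create_default_context()
# Our ProtocolNameList, in preference order.
ctx.set_alpn_protocols(["h2", "http/1.1"])

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        # The single protocol the server selected during the handshake, or None.
        print(tls.selected_alpn_protocol())  # e.g. "h2" if HTTP/2 is supported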

Establishing an HTTP2.0 connection over an unencrypted channel takes more work: both HTTP1.x and HTTP2.0 run on port 80, and with no other information about whether the server supports HTTP2.0, the client can only use the HTTP Upgrade mechanism to negotiate the appropriate protocol:

GET /default.htm HTTP/1.1
Host: server.example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url encoding of HTTP2.0 SETTINGS payload>

A server that does not support HTTP/2 returns a response without an Upgrade header field:

HTTP/1.1 200 OK
Content-Length: 243
Content-Type: text/html
...

A server that supports HTTP/2 can return a 101 (Switching Protocols) response to accept the upgrade request:

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c
...

After the empty-bodied 101 response, the server can begin sending HTTP/2 frames, which must include a response to the request that initiated the upgrade. The first HTTP/2 frame the server sends is a SETTINGS frame. Upon receiving the 101 response, the client sends a connection preface, which includes a SETTINGS frame.

With this Upgrade flow, if the server does not support HTTP2.0 it immediately returns an HTTP1.1 response; otherwise it returns a 101 Switching Protocols response in HTTP1.1 format and then immediately switches to HTTP2.0, returning responses with the new binary framing protocol. In neither case is an extra round trip required.
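A rough sketch of probing for cleartext HTTP2.0 support this way, using a raw socket (server.example.com follows the example above; the HTTP2-Settings value is left as a placeholder, as in the text):

import socket

request = (
    "GET /default.htm HTTP/1.1\r\n"
    "Host: server.example.com\r\n"
    "Connection: Upgrade, HTTP2-Settings\r\n"
    "Upgrade: h2c\r\n"
    "HTTP2-Settings: \r\n"  # base64url SETTINGS payload would go here
    "\r\n"
)

with socket.create_connection(("server.example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    status_line = sock.recv(4096).split(b"\r\n", 1)[0]
    if status_line.startswith(b"HTTP/1.1 101"):
        print("upgrade accepted; switch to HTTP/2 framing on this connection")
    else:
        print("no HTTP/2 upgrade; server answered:", status_line.decode())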

Finally, if the client already has evidence of HTTP2.0 support, because it remembered it from an earlier exchange or learned it by other means (such as DNS records or manual configuration), it can send HTTP2.0 frames directly without relying on the Upgrade mechanism.

8. HTTP2.0 data transfer example

Initiating a new stream

Before sending application data, a new stream must be created and the corresponding metadata sent, such as priority and HTTP headers. The HTTP2.0 protocol allows both the client and the server to initiate streams, so there are two possibilities:

The client initiates a stream by sending a HEADERS frame containing the common frame header with a new stream ID, an optional 31-bit priority value, and a set of HTTP header key-value pairs.

The server initiates a push stream by sending a PUSH_PROMISE frame, which is essentially a HEADERS frame except that it carries the promised stream ID and has no priority value.

Examples of HEADERS frames are as follows:

Both frame types are used only to communicate the metadata of the new stream; the payload is sent separately in DATA frames.

Send application data

After the new stream is created and the HTTP headers are sent, the next step is to send application data using DATA frames. The data may be split across multiple DATA frames, with the last frame setting the END_STREAM flag in its frame header. The data payload is not separately encoded or compressed: the encoding depends on the application or server, and may be plain text, gzip-compressed data, an image, or any other compression format.
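A sketch of that splitting, reusing the 8-byte header layout from section 1.1 (the END_STREAM flag value 0x1 is the one the HTTP/2 specification defines for DATA frames):

import struct

DATA, END_STREAM = 0x0, 0x1
MAX_PAYLOAD = 16383  # 2^14 - 1, the per-frame payload limit described earlier

def data_frames(stream_id: int, payload: bytes):
    # Yield DATA frames carrying the payload; the last one sets END_STREAM.
    chunks = [payload[i:i + MAX_PAYLOAD]
              for i in range(0, len(payload), MAX_PAYLOAD)] or [b""]
    for i, chunk in enumerate(chunks):
        flags = END_STREAM if i == len(chunks) - 1 else 0
        header = struct.pack("!HBBI", len(chunk), DATA, flags, stream_id)
        yield header + chunk

frames = list(data_frames(1, b"x" * 20000))
print(len(frames))  # 2 frames: 16383 payload bytes, then 3617 with END_STREAM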

At this point, I believe you have a deeper understanding of "what are the knowledge points of HTTP2.0"; you might as well try them out in practice. Follow us and keep learning!
