
Summarize the brief history of development from HTTP to HTTP/3

2025-04-07 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

This article summarizes the brief history of HTTP's development, from its origins through HTTP/3.

Although the HTTP/3 specification is still at the draft stage, the latest versions of the Chrome browser already support it by default. Chrome holds about 70% of the browser market, so it is fair to say that HTTP/3 has entered the mainstream.

The latest revision of this foundational protocol aims to make the Web more efficient and secure and to reduce content delivery latency. In some ways it is a refinement of HTTP/2: it pursues similar goals, but replaces the underlying TCP protocol with a new transport protocol, QUIC.

The best way to understand the advantages of QUIC is to look at the shortcomings of TCP as a transport for HTTP requests.

To that end, let us start from the very beginning.

1. HTTP: Origin

When Sir Tim Berners-Lee designed a simple single-line hypertext exchange protocol in 1991, TCP was already an old and reliable protocol. The original definition of that protocol (retroactively named HTTP/0.9) specifically mentions TCP as the preferred, though not the only, transport protocol:

Note: HTTP currently runs on TCP, but it can also run on any connection-oriented service.

Of course, this proof-of-concept version of HTTP bears little resemblance to the HTTP we know today. There are no headers and no status codes. A typical request is just a single line, such as GET /path. The response contains only HTML, and the closing of the TCP connection marks the end of the response.
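As a sketch of how minimal the wire format was, this Python snippet (the path is a hypothetical example, not from the original document) builds and parses an HTTP/0.9 request line:

```python
# An HTTP/0.9 request is a single line: the word GET, a space, and a
# path, terminated by CRLF. No version, no headers, no status codes.
request = b"GET /index.html\r\n"

def parse_http09_request(raw: bytes) -> str:
    """Return the requested path from an HTTP/0.9 request line."""
    line = raw.decode("ascii").rstrip("\r\n")
    method, path = line.split(" ", 1)
    if method != "GET":
        raise ValueError("GET was the only method in HTTP/0.9")
    return path

print(parse_http09_request(request))  # prints /index.html
```

The server's reply would simply be raw HTML bytes with no framing at all; closing the TCP connection signaled the end of the response.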

Since browsers were not yet widespread, users had to read the HTML directly. It could link to other resources, but none of the tags in that early version of HTML requested other resources asynchronously. A single HTTP request delivered a complete, self-sufficient page.

2. HTTP/1.0 appears

In the following years, the Internet ushered in explosive development. Although the transmission of HTML is still the main feature of HTTP, it has gradually developed into a scalable and flexible general protocol. Three important updates to HTTP laid the foundation for this evolution:

The introduction of methods enabled clients to specify the type of operation they wanted to perform. For example, POST was introduced to allow clients to send data to the server for processing and storage.

Status codes gave the client a way to confirm that the server had processed the request successfully and, if processing failed, to learn what kind of error occurred.

Headers added the ability to attach structured text metadata to requests and responses, metadata that can modify client or server behavior. For example, encoding and content-type headers enable HTTP to transmit not only HTML but any type of payload, while compression-related headers let clients and servers negotiate supported compression formats, reducing the amount of data transferred over the connection.
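To make these three additions concrete, here is a minimal sketch (with invented message contents) of an HTTP/1.0 request and response showing a method, headers, and a status code:

```python
# A hypothetical HTTP/1.0 exchange illustrating the three additions:
# a method (POST), headers, and a status code (200).
request = (
    "POST /submit HTTP/1.0\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Content-Length: 9\r\n"
    "\r\n"
    "name=test"
)

response = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>ok</html>"
)

def status_code(raw: str) -> int:
    """Extract the numeric status code from a response's status line."""
    status_line = raw.split("\r\n", 1)[0]  # "HTTP/1.0 200 OK"
    return int(status_line.split(" ")[1])

print(status_code(response))  # prints 200
```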

At the same time, HTML has evolved to support images, styles, and other linked resources.

Now browsers needed to perform multiple requests to display a single web page, a workload the original connection-per-request architecture handled poorly. Establishing and terminating a TCP connection involves exchanging several packets back and forth, so it is relatively expensive in terms of latency. Since web pages no longer consisted of a single text file, latency grew with the number of requests per page.

The following figure shows how much overhead is involved in establishing a new TCP connection.

A TCP connection takes three packets to establish and four packets to close completely.

The Connection header was created to solve this problem. A client sends a request with the Connection: keep-alive header to signal its intention to keep the TCP connection open for subsequent requests. If the server understands this header and agrees to honor it, its response also includes Connection: keep-alive.

In this way, both parties keep the TCP channel open and reuse it for subsequent communication until either side decides to close it. This became even more important with the spread of SSL/TLS encryption, because negotiating encryption algorithms and exchanging keys requires additional request/response cycles on each new connection.
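A minimal, self-contained sketch of connection reuse using only Python's standard library (a throwaway local server, purely for illustration): both requests below travel over the same TCP connection.

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # persistent connections by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo output quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection object keeps its TCP socket open between requests.
conn = HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for path in ("/a", "/b"):
    conn.request("GET", path)
    resp = conn.getresponse()
    statuses.append((resp.status, resp.read()))
conn.close()
server.shutdown()
print(statuses)  # prints [(200, b'ok'), (200, b'ok')]
```

With HTTP/1.0, the same two requests would each have paid the full TCP (and, later, TLS) setup cost.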

A single TCP connection can be reused for multiple requests via the Connection: keep-alive header.

At that time, many HTTP improvements arose spontaneously. When a popular browser or server application needed new HTTP functionality, it implemented it on its own and hoped others would follow suit. Ironically, the decentralized Web needed a centralized governing body to avoid the incompatibility problems caused by fragmentation.

The protocol's original creator, Tim Berners-Lee, recognized the danger and in 1994 founded the World Wide Web Consortium (W3C), which, together with the Internet Engineering Task Force (IETF), worked to standardize the Internet's technology stack. As a first step toward bringing order to the existing environment, they documented the most commonly used HTTP features of the day and named the result HTTP/1.0.

However, because this "specification" described a variety of techniques that were used inconsistently in practice, it never attained standard status. Instead, work began on a new version of the HTTP protocol.

3. Standardization of HTTP/1.1

HTTP/1.1 fixed HTTP/1.0's inconsistencies and adjusted the protocol to perform better in the new Web ecosystem. The two most important changes it introduced were persistent TCP connections (keep-alive) by default and HTTP pipelining.

HTTP pipelining means that the client does not have to wait for the server's response before sending a subsequent HTTP request. This feature uses bandwidth more efficiently and reduces latency, but it leaves plenty of room for improvement: pipelining still requires the server to respond in the order the requests were received, so if a single pipelined request is slow to execute, all subsequent responses to that client are delayed accordingly. This problem is known as head-of-line blocking.
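A toy model (with invented timings) of why in-order responses hurt: style.css finishes processing almost immediately, but it cannot be delivered until the slow image requested ahead of it is done.

```python
# Pipelined HTTP/1.1 requires responses in request order, so each
# response is delivered only after everything queued before it.
processing_seconds = {"large-picture.jpg": 5.0, "style.css": 0.1, "app.js": 0.2}
request_order = ["large-picture.jpg", "style.css", "app.js"]

def delivery_times(order, times):
    """Map each resource to the moment its response finishes sending."""
    clock, delivered = 0.0, {}
    for name in order:
        clock += times[name]
        delivered[name] = clock
    return delivered

done = delivery_times(request_order, processing_seconds)
# style.css needs only 0.1s of work but is delivered after ~5.1s,
# because it sits in the queue behind large-picture.jpg.
print(round(done["style.css"], 1))  # prints 5.1
```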

The delivery of style.css was blocked because large-picture.jpg was requested first.

By this time the Web was gaining more and more interactive features. Web 2.0 was just around the corner, and some web pages contained dozens or even hundreds of external resources. To work around head-of-line blocking and slow page loads, clients established multiple TCP connections to each host. Of course, the connection overhead had not gone anywhere; the situation actually got worse as more and more applications adopted SSL/TLS to encrypt HTTP traffic. As a result, most browsers imposed a cap on the number of simultaneous connections per host to strike a delicate balance.

Many of the larger Web services realized that the existing limits were too strict for their heavily interactive applications, so they gamed the system by spreading their applications across multiple domain names. It worked, but the solution was by no means elegant.

Despite some shortcomings, the simplicity of HTTP/1.0 and HTTP/1.1 has made them widely successful, and no one has seriously tried to change them for more than a decade.

4. SPDY and HTTP/2

Google launched the Chrome browser in 2008, and it rapidly became popular thanks to its speed and innovation, giving Google a strong say in Internet technology. In the early 2010s, Google added support for its own Web protocol, SPDY, to Chrome.

The HTTP/2 standard is based on SPDY, with a number of improvements. HTTP/2 solves head-of-line blocking at the HTTP level by multiplexing requests over a single open TCP connection. This allows the server to respond to requests in any order; the client reassembles the responses as they arrive, speeding up the whole exchange over a single connection.

Because HTTP/2 multiplexes requests, style.css could be returned before large-picture.jpg.

In fact, an HTTP/2 server can deliver resources to the client before they are even requested! For example, if the server knows that the client will probably need a stylesheet to display an HTML page, it can "push" the CSS to the client without waiting for the corresponding request. While beneficial in theory, this feature is rare in practice, because it requires the server to understand the structure of the HTML it serves, which is seldom the case.

In addition to request and response bodies, HTTP/2 also allows headers to be compressed, further reducing the amount of data transmitted over the network.

HTTP/2 solved many of the Web's problems, but not all of them. A similar kind of head-of-line blocking still exists at the level of the TCP protocol, which remains the underlying building block of the Web. When a TCP packet is lost in transit, the receiver cannot process the packets arriving after it until the sender retransmits the lost one. Because TCP knows nothing about the higher-level protocols, such as HTTP, running on top of it, a single lost packet blocks the flow of all in-flight HTTP requests until the lost data is retransmitted. The problem is especially acute on unreliable connections, which are not uncommon in the era of ubiquitous mobile devices.

5. HTTP/3 Revolution

Because HTTP/2's remaining problems cannot be solved at the application layer alone, the next iteration of the protocol had to update the transport layer. However, creating a new transport-layer protocol is not easy: transport protocols need support from hardware vendors and deployment by most network operators to spread, and operators are reluctant to update because of the cost and effort involved. Take IPv6, for example: it was introduced 24 years ago, yet it is still far from universally supported.

Fortunately, there is another option. The UDP protocol is as widely supported as TCP, but it is simple enough to serve as a foundation for custom protocols built on top of it. UDP packets are fire-and-forget: there are no handshakes, persistent connections, or error correction. The main idea behind HTTP/3 is to abandon TCP in favor of the UDP-based QUIC protocol. QUIC adds the necessary functionality (including features previously provided by TCP, and more) in a way that makes sense for the Web environment.

Unlike HTTP/2, which technically still allows unencrypted communication, QUIC strictly requires encryption before a connection can be established. Moreover, encryption applies not just to the HTTP payload but to all data flowing through the connection, which avoids a whole class of security problems. Establishing the persistent connection, negotiating the encryption parameters, and even sending the first data are merged into a single request/response cycle in QUIC, dramatically reducing connection setup time. If the client has cached cryptographic parameters locally, the connection to a known host can be re-established with a simplified handshake (0-RTT).
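A back-of-the-envelope comparison of time to first byte (the round-trip counts are illustrative; real handshakes vary by TLS version and implementation):

```python
# Setup round trips needed before the first HTTP request can be sent:
#   TCP handshake + TLS 1.2 handshake : ~3 RTTs
#   QUIC (transport + crypto merged)  : 1 RTT
#   QUIC 0-RTT resumption             : 0 RTTs
RTT_MS = 50  # assumed network round-trip time, for illustration

def time_to_first_byte(setup_rtts: int, rtt_ms: int = RTT_MS) -> int:
    """Setup round trips plus one round trip for the request itself."""
    return (setup_rtts + 1) * rtt_ms

print(time_to_first_byte(3))  # prints 200  (TCP + TLS 1.2)
print(time_to_first_byte(1))  # prints 100  (fresh QUIC connection)
print(time_to_first_byte(0))  # prints 50   (QUIC 0-RTT resumption)
```

On a 50 ms link, merging transport and crypto setup alone halves the wait before the first byte arrives.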

To solve head-of-line blocking at the transport level, data transmitted over a QUIC connection is divided into streams. A stream is a short-lived, independent "sub-connection" within a persistent QUIC connection. Each stream handles its own error correction and delivery guarantees but shares the connection-wide compression and encryption context. Each HTTP request a client initiates runs on a separate stream, so packet loss does not affect the data transfer of other streams/requests.

HTTP/3 divides a connection into separate streams.

UDP is a stateless protocol (a persistent connection is just an abstraction on top of it), which enables QUIC to support features that largely ignore the intricacies of packet delivery. For example, in theory, a client should not lose its connection when its IP address changes mid-connection, such as when a smartphone jumps from a mobile network to home Wi-Fi, because the protocol allows migration between IP addresses without reconnecting.

All existing implementations of the QUIC protocol currently run in user space rather than in the OS kernel. Because clients (such as browsers) and servers are updated far more frequently than operating system kernels, the hope is that new features will be adopted more quickly.

6. Problems in HTTP/3

I think the HTTP/3 standard is a big step toward a faster and more secure Internet, but it is not perfect. Some of its problems stem from its novelty, while others seem inherent to the protocol.

The TCP protocol has been around for a long time and is easy for routers to reason about. It has clear unencrypted markers for establishing and closing connections, which can be used to track and manage sessions. Until network hardware learns to understand the new protocol, it will treat QUIC traffic as just a stream of ordinary UDP packets, which makes network configuration trickier.

The ability to "resume" connections from the client cache makes the protocol vulnerable to replay attacks: in certain situations, a malicious attacker can resend previously captured packets from a victim, and the server will interpret them as valid. Many Web servers, such as those serving only static content, are unharmed by such attacks; but for applications where replay is dangerous, the 0-RTT feature must be disabled.

That is the story of HTTP up to the present day. I think HTTP/3 is a big step forward, and I certainly hope it will be widely adopted in the near future.
