Many newcomers are unclear about how to optimize HTTPS. To help solve this problem, this article explains the topic in detail; anyone who needs it is welcome to learn from it, and hopefully you will gain something.
HTTP 2.0
HTTP/2.0, or Hypertext Transfer Protocol 2.0, is the next-generation HTTP protocol. It was developed by the Hypertext Transfer Protocol Bis (httpbis) working group of the Internet Engineering Task Force (IETF) and is the first update since HTTP/1.1 was released in 1999. The HTTP/2 protocol evolved from SPDY; SPDY has completed its mission and is retiring from the stage (for example, Chrome ended its support for SPDY in early 2016, nginx fully supports HTTP/2 as of version 1.9.5, and so does Apache 2.4.16+).
Akamai's HTTP/2 demo loads 300 images to compare HTTP/1.1 with HTTP/2 side by side. First get a visual feel for HTTP/2's speed, then look at the reasons behind that feeling, namely the new features of HTTP/2:
Binary framing
Header compression
Flow control
Multiplexing
Request priority
Server push
Binary framing
The binary framing layer is the core of HTTP/2.0's performance enhancements.
HTTP/1.x communicates in plain text at the application layer. HTTP/2.0 adds a binary framing layer between the application layer (HTTP) and the transport layer (TCP) without changing the semantics, methods, status codes, URIs, or header fields of HTTP/1.x. HTTP/2.0 splits all transmitted information into smaller messages and frames and encodes them in binary format.
This introduces a new communication unit: the frame.
A frame is the smallest unit of HTTP/2.0 communication; it includes a frame header, a stream identifier, a priority value, and the frame payload.
Frame types include:
DATA: used to transport the HTTP message body
HEADERS: used to transfer header fields
SETTINGS: used to agree on the configuration data of the client and the server. For example, set the size of the two-way flow control window for the first time.
WINDOW_UPDATE: used to adjust the flow of individual streams or individual connections
PRIORITY: used to specify or reassign the priority of referenced resources
RST_STREAM: used for abnormal termination of notification flow
PUSH_PROMISE: signals the server's intent to push a resource
PING: used to measure round-trip time and perform liveness checks
GOAWAY: used to notify the peer to stop creating a stream in the current connection
Each frame also carries flag bits, which define frame-type-specific message flags; for example, a DATA frame can set End Stream: true to indicate that the message is complete. The stream identifier marks the stream the frame belongs to; the priority value, used in HEADERS frames, indicates request priority; and R is a reserved bit.
Here is a HEADERS frame from a packet capture:
Two other concepts need discussion: messages and streams.
A message is a logical HTTP message (a request or response). A series of frames forms a complete message, for example a HEADERS frame followed by a series of DATA frames.
A stream is a virtual channel within a connection that can carry bidirectional messages, and each stream has a unique integer identifier. To prevent ID conflicts between the two ends, streams initiated by the client have odd IDs and streams initiated by the server have even IDs.
All HTTP/2.0 communication takes place on a single TCP connection that can carry any number of bidirectional streams. Each stream carries a message consisting of one or more frames; the frames can be sent out of order and then reassembled according to the stream identifier in each frame's header.
Binary framing mainly provides the foundation for HTTP/2.0's other features, packaging data into smaller, easier-to-handle units. On the one hand, with a single connection carrying multiple resources, the server's connection pressure drops, memory consumption is lower, and connection throughput is higher; on the other hand, fewer TCP connections improve network congestion and shorten slow start, so recovery from congestion and packet loss is faster.
Header compression
Every HTTP/1.x message (request or response) carries header information describing the resource. HTTP/2.0 instead uses a header table on both the client and the server to track and store previously sent key-value pairs. The header table persists for the lifetime of the connection, and new key-value pairs are appended to the end of the table, so headers do not need to be carried in every exchange. The definitions of the request and response headers themselves are essentially unchanged in HTTP/2.0.
In addition, HTTP/2.0 compresses headers using the HPACK algorithm, making them more compact and faster to transmit, which benefits mobile network environments. Note that HTTP/2.0 header compression does not conflict with the body compression we commonly use, such as gzip.
Flow control
The goal of flow control for HTTP/2.0 streams is to allow a variety of flow-control algorithms without changing the protocol:
Flow control is specific to a connection. Each kind of flow control operates between the two endpoints of a single hop, not over the entire end-to-end path. (A hop here is a hop of the HTTP connection, not of the IP route.)
Flow control is based on WINDOW_UPDATE frames. The receiver announces how many bytes it intends to receive on each stream and the entire connection. This is a credit-based scheme.
Flow control is directional and fully controlled by the receiver. The receiver can set any window size for each stream and for the entire connection, and the sender must respect the flow-control limits the receiver sets. Clients, servers, and intermediary proxies each advertise their own flow-control window independently when acting as receivers, and each obeys the other side's flow-control settings when acting as senders.
Whether it is a new stream or an entire connection, the initial value of the flow control window is 65535 bytes.
The type of frame determines whether flow control is applicable to the frame. Currently, only DATA frames are subject to flow control, and all other types of frames do not consume space in the flow control window. This ensures that important control frames are not blocked by flow control.
Flow control cannot be disabled.
HTTP/2 defines only the format and semantics of the WINDOW_UPDATE frame; it does not specify when the receiver sends the frame or what value it sends, nor how the sender chooses to send packets. Implementations may choose any algorithm that meets the requirements.
Multiplexing
In HTTP/1.1, browsers limit the number of simultaneous requests to the same domain, and requests beyond the limit are blocked. Multiplexing in HTTP/2.0 removes this bottleneck.
Building on the binary framing layer, HTTP/2.0 can send requests and responses over a shared TCP connection. HTTP messages are decomposed into independent frames without breaking the semantics of the message itself, interleaved, and finally reassembled at the other end according to stream ID and header. (The comparison here between HTTP/1.x and HTTP/2.0 ignores HTTP/1.x's pipelining mechanism.)
HTTP/2.0 thus solves HTTP/1.x's head-of-line blocking problem (blocking at the TCP layer still cannot be solved) and needs neither pipelining nor multiple TCP connections to achieve parallel requests and responses. Reducing the number of TCP connections greatly improves server performance and eliminates unnecessary delays, thereby reducing page load time.
Request priority
Once HTTP messages are divided into many independent frames, performance can be further optimized by tuning the interleaving and transmission order of those frames.
Each stream can carry a 31-bit priority value: 0 is the highest priority, and 2^31 - 1 is the lowest.
The client specifies the priority explicitly, and the server can use it as a basis for scheduling data. For example, if the client's priorities are .css > .js > .jpg, returning results in that order makes more efficient use of the underlying connection and improves the user experience. When using request priorities, however, check whether the server supports them and whether they can introduce head-of-line blocking, for example a high-priority but slow request blocking the delivery of other resources.
Server push
HTTP/2.0 adds a server-push capability: in response to a client's request, the server can return multiple responses, pushing additional resources to the client in advance.
For example, when the client requests stream 1 (/page.html), the server pushes stream 2 (/script.js) and stream 4 (/style.css) alongside the response on stream 1.
A PUSH_PROMISE frame signals the server's intent to push a resource to the client.
The PUSH_PROMISE frame contains only the headers of the resource to be pushed. If the client accepts the PUSH_PROMISE frame, the server then sends the response DATA frames; if the client has already cached the resource and does not need the push, it can reject the PUSH_PROMISE frame.
PUSH_PROMISE must follow the request-response principle: resources can only be pushed in response to a request.
PUSH_PROMISE frames must be sent before the corresponding response data, to avoid race conditions on the client (a race condition is when the outcome depends on the order in which concurrent operations happen to execute).
When the HTTP/2.0 connection is established, the client and server exchange SETTINGS frames, which among other things limit the maximum number of concurrent streams in each direction. The client can therefore limit the number of pushed streams, or disable server push entirely by setting the value to 0.
All pushed resources must obey the same-origin policy; in other words, the server cannot casually push third-party resources to the client.
HTTP/2 is now supported by most browsers, but it requires an OpenSSL version newer than 1.0.1e. You can check the OpenSSL version nginx was built against with nginx -V; if the version is too old, recompile nginx.
So how do you configure HTTP/2 support in nginx? Quite simply, add http2 to the listen directive of the server block.
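A minimal sketch of such a server block (the domain and certificate paths are hypothetical):

server {
    listen 443 ssl http2;                              # http2 on the listen socket enables HTTP/2
    server_name example.com;                           # hypothetical domain

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;  # hypothetical paths
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;
}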
There are many ways to test whether http2 is enabled. Here are three methods:
1. Browser developer tools
2. The Chrome extension "HTTP/2 and SPDY indicator"
3. Command line client nghttp
In addition, HTTP/2 server push needs to be configured in nginx before it can be used effectively.
Configuring push with the http2_push directive, as in the sketch below.
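A minimal sketch, using the resource names from the example that follows (the document root is hypothetical):

server {
    listen 443 ssl http2;
    root /var/www/html;                # hypothetical document root

    location = /demo.html {
        # push the page's dependencies before the client asks for them
        http2_push /style.css;
        http2_push /image1.jpg;
        http2_push /image2.jpg;
    }
}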
With this configuration, the resources style.css, image1.jpg, and image2.jpg needed by demo.html are pushed to the client. This approach works when there are only a few resources, but it is unrealistic when there are many.
Automatically push resources to the client
Nginx can also intercept Link preload headers. To push the resources identified in such a header, enable preloading by adding http2_push_preload on to the configuration.
One problem remains. Static resources are usually given a cache validity period, and force-pushing them while the client's cached copy is still valid only wastes server bandwidth. We therefore want to push only to clients that need the resources and are unlikely to have them cached. One approach: push on the client's first visit and set a cookie; subsequent requests include the cookie, and the server uses it to decide whether to push, so resources are pushed selectively. The configuration method is as follows:
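A sketch of cookie-gated push, assuming a hypothetical "session" cookie and the /demo.html page from above (nginx's add_header skips a header whose value is an empty string, so nothing is advertised on repeat visits):

map $http_cookie $resources {
    "~*session=1" "";                               # cookie present: advertise nothing
    default "</style.css>; as=style; rel=preload";  # first visit: advertise the preload
}

server {
    listen 443 ssl http2;
    http2_push_preload on;             # push whatever the Link preload header names

    location = /demo.html {
        add_header Set-Cookie "session=1";
        add_header Link $resources;    # empty $resources adds no Link header
    }
}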
Testing this, the resources are pushed on the first visit; once the cookie is set, subsequent visits no longer trigger a push.
TLS 1.3
The main purpose of TLS (Transport Layer Security) is to provide privacy and data integrity between communicating applications. The protocol consists of two layers: the TLS record protocol (TLS Record) and the TLS handshake protocol (TLS Handshake).
The TLS protocol has been updated many times, and lower versions such as SSL 3.0 and TLS 1.0 have serious known vulnerabilities. The mainstream supported versions at present are TLS 1.1 and 1.2, but both lag behind the needs of the times. In August 2018, the IETF finally announced the official release of the TLS 1.3 specification, which is defined in RFC 8446.
Compared with previous versions of TLS, the optimizations are as follows:
A new key-exchange mechanism, PSK (pre-shared key), is introduced.
Support for 0-RTT data transfer saves a round trip when establishing a connection.
3DES, RC4, AES-CBC and other ciphers are discarded; SHA-1, MD5 and other hash algorithms are abandoned.
All handshake messages after ServerHello are encrypted, greatly reducing the amount of plaintext visible on the wire.
Compression of encrypted messages is no longer allowed, and renegotiation is no longer allowed.
DSA certificates are no longer allowed in TLS 1.3.
In HTTPS, the TLS handshake of every connection consumes considerable resources and time, so TLS 1.3's optimizations, which cut connection establishment by one RTT compared with earlier versions, save substantial time and improve response speed.
TLS 1.3 requires OpenSSL 1.1.1, and on the nginx side, nginx 1.13 or later.
When compiling nginx, add the compile parameter --with-openssl-opt=enable-tls1_3 to enable TLS 1.3 support, and add TLSv1.3 to the ssl_protocols configuration. TLS 1.3 also introduces new cipher suites, so ssl_ciphers needs the new algorithms added as well.
By default, nginx does not enable TLS 1.3's 0-RTT for security reasons; you can turn it on with the ssl_early_data on directive.
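Putting it together, a minimal sketch (the domain, certificate paths, and exact cipher names are assumptions; the TLS13-* names match the enable-tls1_3 OpenSSL builds this setup targets):

server {
    listen 443 ssl http2;
    server_name example.com;                           # hypothetical domain

    ssl_certificate     /etc/nginx/ssl/fullchain.pem;  # hypothetical paths
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;                     # keep 1.2 as a fallback
    ssl_ciphers TLS13-AES-256-GCM-SHA384:TLS13-CHACHA20-POLY1305-SHA256:ECDHE-RSA-AES128-GCM-SHA256;

    ssl_early_data on;                                 # optional 0-RTT; replayable, enable with care
}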
ECC
ECC (Elliptic Curve Cryptography) is a public-key algorithm based on the mathematics of elliptic curves.
Certificates with built-in ECDSA public keys are generally called ECC certificates, and certificates with built-in RSA public keys are generally called RSA certificates.
The mathematical theory behind the ECC algorithm is deep and complex, and it is harder to implement in engineering practice, but its security strength per bit is relatively high: breaking it is essentially exponential in difficulty, so the usual brute-force methods are impractical for attackers. The RSA algorithm, by contrast, rests on relatively simple mathematics and is easy to implement in engineering, but its security strength per bit is lower. ECC can therefore provide higher security strength than RSA with less computing power, which effectively sidesteps the engineering problem of having to keep increasing key length in order to raise security strength.
Compared with RSA algorithm, ECC algorithm has the following advantages:
Better suited to the mobile Internet: the ECC algorithm uses short keys (256 bits), which means less storage space, lower CPU overhead, and less bandwidth. As more and more users complete online activities on mobile devices, ECC offers a better experience for mobile Internet security.
Better security: the ECC algorithm provides stronger protection against current attack methods than other encryption algorithms, making your website and infrastructure more secure than traditional encryption would.
Better performance: the ECC algorithm needs shorter key lengths to provide the same security; for example, a 256-bit ECC key is equivalent in strength to a 3072-bit RSA key (the commonly used RSA key length today is 2048 bits), so you get higher security at a lower computational cost. In tests by foreign authorities, Web servers using the ECC algorithm on Apache and IIS responded more than ten times faster than with RSA.
Greater return on IT investment: ECC helps protect your infrastructure investment by providing greater security and handling the explosive growth of secure connections from mobile devices. ECC key lengths also grow much more slowly than those of other methods (typically by 128 bits at a time, whereas RSA doubles, e.g. 1024 to 2048 to 4096), which extends the life of your existing hardware and increases the return on your investment.
However, there are two issues to be aware of when using ECC certificates:
1. Not all certificate types support ECC; it is generally available only in the enhanced versions of commercial certificates.
2. Some older devices and browsers do not support ECC, so you may need an ECC+RSA dual-certificate setup, as sketched below.
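Since nginx 1.11.0, a server block may carry multiple certificates of different types, and nginx serves whichever matches the client's capabilities. A minimal sketch with hypothetical paths:

server {
    listen 443 ssl http2;
    server_name example.com;                               # hypothetical domain

    # ECC certificate for modern clients
    ssl_certificate     /etc/nginx/ssl/ecc/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/ecc/privkey.pem;

    # RSA certificate as a fallback for older clients
    ssl_certificate     /etc/nginx/ssl/rsa/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/rsa/privkey.pem;
}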
Brotli
Brotli is a lossless compression algorithm introduced by Google in September 2015. It compresses data with a variant of the LZ77 algorithm, Huffman coding, and second-order context modeling. Compared with other compression algorithms, it has higher compression efficiency.
According to a report released by Google, Brotli has the following characteristics:
For common Web resource content, Brotli's compression ratio is 17-25% better than Gzip's.
At Brotli compression level 1, the compression ratio is higher than at Gzip level 9 (its highest).
Brotli still provides a very high compression ratio when dealing with different HTML documents.
Browser support for Brotli requires HTTPS, and nginx support for Brotli requires compiling in the brotli module.
The source code of the brotli module lives at https://github.com/eustas/ngx_brotli.git. After downloading it, recompile nginx with the compile parameter --add-module=/path/to/ngx_brotli, then enable brotli by adding the configuration below to the configuration file.
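A minimal sketch using ngx_brotli's directives (the compression level and MIME-type list are illustrative choices):

brotli on;             # compress responses on the fly
brotli_comp_level 6;   # 0-11; higher levels trade CPU for smaller output
brotli_types text/plain text/css application/javascript application/json image/svg+xml;
brotli_static on;      # serve pre-compressed .br files when present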
Then view the response headers in the browser developer tools: a Content-Encoding: br header confirms that Brotli is in effect.