This article focuses on how to improve the performance of your NGINX by 10 times. The methods introduced here are simple, fast, and practical, so interested readers may wish to take a look.
Recommendation 1: use a reverse proxy server to make applications faster and more secure
If your Web application runs on a single machine, improving its performance is straightforward: get a faster machine, more processors, more memory, or a high-speed disk array. After the upgrade, the WordPress server, Node.js or Java application running on that machine will speed up. (If the application also accesses a database server, the fix is just as simple: get two faster machines and connect them with a faster network.)
The trouble is that machine speed is often not the problem. In many cases, a Web application is slow because it has to switch between tasks: handling user requests on thousands of connections, reading and writing files to disk, running the application code, and doing other work. As a result, the application server may run out of memory, swap to disk, or make many requests wait on tasks such as disk I/O.
In addition to upgrading the hardware, you can also choose a completely different approach: add a reverse proxy server to share some of the above tasks. The reverse proxy server is located in front of the machine running the application and is responsible for processing requests from the external network. The reverse proxy server connects directly to the Internet and communicates with the application server using a fast internal network.
The reverse proxy server lets the application server focus on building pages and hand them to the reverse proxy for delivery to the outside world, without worrying about the interaction between users and the application. Because it no longer has to wait on client responses, the application server can run at close to its optimal speed.
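For illustration, a minimal NGINX reverse proxy configuration might look like the sketch below; the domain name and backend address are hypothetical placeholders:

    # Minimal NGINX reverse proxy: NGINX faces the Internet,
    # the application server listens only on the internal network.
    server {
        listen 80;
        server_name example.com;               # hypothetical domain

        location / {
            proxy_pass http://127.0.0.1:8080;  # hypothetical app server
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }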
Adding a reverse proxy server can also add flexibility to the Web server. For example, if the server performing a task is overloaded, you can add another similar server at any time, and if this server is down, it is easy to replace it.
Given this flexibility, reverse proxy servers are often a prerequisite for other means of performance optimization, such as:
Load balancing (see recommendation 2). A load balancer runs on the reverse proxy server and distributes traffic evenly across several application servers. With load balancing in place, you can add application servers without modifying the application at all.
Caching static files (see recommendation 3). Files that can be requested directly, such as images or code files, can be stored on the reverse proxy server and sent straight to the client. This answers requests faster and lightens the load on the application server, speeding it up.
Securing the site. The reverse proxy server can be configured for a high security level and monitored to quickly recognize and respond to attacks, shielding the application servers behind it.
NGINX was designed specifically for use as a reverse proxy server, so it naturally supports the optimizations above. Thanks to its event-driven processing model, NGINX is also more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application health checks, specialized request routing, advanced caching, and commercial support.
[Figure: comparison between a traditional server and NGINX worker processes]
Recommendation 2: add load balancing servers
Adding a load balancing server is relatively simple, but it can significantly improve site performance and security. By distributing traffic across multiple servers, you avoid having to upgrade the Web server. Even if the application itself is poorly written or hard to scale, load balancing improves the user experience without requiring any other changes.
A load balancing server is first of all a reverse proxy server (see recommendation 1): it forwards requests from the Internet to other servers. The key is that it can serve two or more application servers, using a selection algorithm to distribute requests among them. The simplest load balancing algorithm is round robin, which hands each new request to the next available server in turn. Other algorithms include sending the request to the server with the fewest active connections. NGINX Plus can also keep a given user's session on the same server, a feature called session persistence.
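As a sketch, assuming two hypothetical backend servers, a least-connections load balancing setup in NGINX might look like this:

    # Round-robin is NGINX's default; least_conn instead sends each
    # request to the server with the fewest active connections.
    upstream app_servers {            # hypothetical upstream name
        least_conn;
        server 10.0.0.1:8080;         # hypothetical app servers
        server 10.0.0.2:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }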
Load balancing servers can prevent one server from being overloaded while other servers are idle, thus greatly improving performance. At the same time, it also makes it easier to expand the Web server, because you can choose a cheaper server and make sure you make the best use of it.
Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, FastCGI, SCGI, uwsgi, and memcached, as well as TCP-based applications and other Layer 4 protocols. Start by analyzing your Web application to see where performance falls short, then decide which to use.
The same server or cluster that performs load balancing can also take on other tasks, such as SSL termination, serving HTTP/1.x or HTTP/2 depending on the client, and caching static files.
NGINX is often used for load balancing. For more information, please refer to our previous introductory articles, configuration articles, e-books and related online videos, as well as documentation. Our commercial version of NGINX Plus supports more load balancing features, such as load routing based on server response time and load balancing that supports Microsoft NTLM protocol.
Recommendation 3: cache static and dynamic content
Caching can improve the performance of Web applications because content can be delivered to clients more quickly. Caching strategies include preprocessing content, storing content on faster devices, keeping content close to the client, and using these strategies at the same time.
There are two types of caching:
Static content caching. Infrequently changing files, such as images (JPEG, PNG) and code (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
Dynamic content caching. Many Web applications generate fresh HTML for every page request. Caching each generated page for even a short period can significantly reduce the total number of pages that must be generated, while keeping the delivered content fresh enough.
If a page is viewed 10 times per second and you cache it for 1 second, 90% of the requests for that page will be served from the cache. If you separately cache static content, even freshly generated versions of the page will consist largely of cached content.
There are three main technologies for caching content generated by Web applications.
Put the content close to the user. Close to the user, less transmission time.
Put the content on a faster machine. The machine is fast and the retrieval speed is fast.
Move content off overworked machines. A machine distracted by too many tasks often runs much slower than one focused on a specific job. Moving content to other machines helps not only the cached content but also the non-cached content, because the load on the host serving it is reduced.
The caching of Web applications can be implemented inside or outside the Web application server. First, consider caching dynamic content to lighten the load on the application server. Second, caching is used for static content (including temporary copies of dynamically generated content) to further reduce the burden on the application server. Then, consider moving the cache to another machine that is faster or closer to the user, lightening the load on the application server and shortening the transmission time.
Making good use of the cache can significantly accelerate the response speed of the application. For many web pages, static data such as large pictures often account for more than half of the content. It may take several seconds to query and transfer such data without caching, while caching may take only a fraction of a second.
As an example, NGINX and NGINX Plus set up caching with two directives: proxy_cache_path and proxy_cache specify the cache's location and size, the maximum time files are cached, and other parameters. A third (and deservedly popular) directive, proxy_cache_use_stale, can even tell the cache to serve stale files when the server that should provide fresh content is busy or down. For the client, some content beats no content at all. From the user's point of view, this also builds an image of your site or application as very stable.
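A minimal sketch of these directives in use (the cache path, zone name, sizes, and times are illustrative assumptions, not recommendations); the 1-second validity matches the earlier 10-views-per-second example:

    # Define the cache: location on disk, shared memory zone, max size.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                     max_size=1g inactive=60m;

    server {
        location / {
            proxy_cache my_cache;              # use the zone defined above
            proxy_cache_valid 200 1s;          # cache good responses briefly
            # Serve stale content if the upstream errors out or times out.
            proxy_cache_use_stale error timeout updating;
            proxy_pass http://127.0.0.1:8080;  # hypothetical app server
        }
    }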
NGINX Plus supports advanced caching features, including cache purging and a dashboard that visualizes cache status for real-time monitoring.
To learn more about caching in NGINX, take a look at the reference documentation and NGINX Content Caching in NGINX Plus Admin Guide.
Note: caching spans development, decision-making, and operations. A sound caching strategy, like those mentioned in this article, shows its value from a DevOps perspective; that is, developers, architects, and operations engineers work together to meet a site's functionality, response time, security, and business goals.
Recommendation 4: compress data
Compression can also greatly improve performance. There are mature, highly efficient compression standards for images, video, and music (JPEG and PNG, MPEG-4, MP3), any of which can shrink a file by an order of magnitude or more.
Text data, including HTML (plain text plus HTML tags), CSS, and JavaScript code, is often transferred uncompressed. Compressing it can have an outsized effect on perceived Web application performance, especially for mobile users on slow, unstable networks.
That is because text data plays an essential supporting role in page interaction, whereas multimedia data is more like icing on the cake. Smart content compression can shrink HTML, JavaScript, CSS, and other text content by more than 30%, with a corresponding reduction in load time.
If you use SSL, compression reduces the amount of data that must be encrypted, which offsets the CPU time spent compressing it.
There are many ways to compress data. For example, the section on HTTP/2 in recommendation 6 describes a clever compression idea specifically suited to header data. Another example concerns text compression: you can enable GZIP compression in NGINX, and after pre-compressing text data you can send the .gz files directly using the gzip_static directive.
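As a sketch, enabling GZIP compression in NGINX might look like the following; the MIME-type list and minimum length are illustrative choices:

    # Compress text responses on the fly; skip already-compressed media.
    gzip on;
    gzip_types text/plain text/css application/javascript application/json;
    gzip_min_length 1000;   # don't bother with very small responses

    # Serve pre-compressed .gz files directly when they exist on disk
    # (requires the ngx_http_gzip_static_module to be compiled in).
    gzip_static on;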
Recommendation 5: optimize SSL/TLS
More and more websites are using the Secure Sockets Layer (SSL) protocol and its successor, Transport Layer Security (TLS). SSL/TLS improves site security by encrypting the data sent from the origin server to users. Google now boosts the search rankings of sites that use SSL/TLS, which gives this trend a strong push.
Despite rising adoption, many sites suffer from the performance penalty of SSL/TLS. It slows sites down for two reasons:
1. The initial handshake on every new connection must create encryption keys, and the way browsers open multiple HTTP/1.x connections to each server makes this worse.
2. Encrypting data on the server and decrypting it on the client is an ongoing overhead.
To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (see recommendation 6) designed those protocols so that browsers establish only one connection per session. That removes one of the two main sources of SSL-related slowdown. Even so, there is still plenty to gain from optimizing SSL/TLS performance.
How to optimize SSL/TLS varies by Web server. NGINX, for example, uses OpenSSL and runs on ordinary hardware, yet delivers performance close to that of dedicated appliances. NGINX SSL performance details how to minimize the overhead of SSL/TLS encryption and decryption.
In addition, there is an article that introduces many ways to improve the performance of SSL/TLS. To sum up briefly, the main technologies involved are as follows.
Session caching. Use the ssl_session_cache directive to enable caching of the parameters used for each SSL/TLS connection.
Session tickets or IDs. These store the details of a particular SSL/TLS session as a ticket or ID so the connection can be reused without a new handshake.
OCSP stapling. This shortens the handshake by caching SSL/TLS certificate information.
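Putting these three techniques together, a hedged NGINX sketch might look like this (the certificate paths, cache size, and timeout are assumptions; ssl_stapling_verify may additionally require a resolver and a trusted certificate chain):

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/cert.pem;  # hypothetical paths
        ssl_certificate_key /etc/nginx/cert.key;

        # Session cache: reuse TLS session parameters across connections.
        ssl_session_cache shared:SSL:10m;   # roughly 40,000 sessions per 10 MB
        ssl_session_timeout 10m;
        ssl_session_tickets on;             # tickets avoid a full re-handshake

        # OCSP stapling: cache certificate revocation info on the server.
        ssl_stapling on;
        ssl_stapling_verify on;
    }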
Both NGINX and NGINX Plus can terminate SSL/TLS, that is, handle encryption and decryption for client traffic while communicating with other servers in plaintext. To set up SSL/TLS termination, see the steps for NGINX or NGINX Plus. For NGINX Plus on servers that accept TCP connections, you can refer to the setup steps here.
Recommendation 6: implement HTTP/2 or SPDY
Sites that already use SSL/TLS are likely to see a performance boost from adopting HTTP/2 or SPDY, because a single connection needs only one handshake. Sites that do not yet use SSL/TLS, HTTP/2, or SPDY may find that switching to SSL/TLS (which normally hurts performance) is a step backward in response time.
Google launched the SPDY project in 2012, aiming to achieve faster speeds on top of HTTP/1.x. HTTP/2 is the SPDY-based standard recently approved by the IETF. SPDY is widely supported, but will soon be superseded by HTTP/2.
The key to SPDY and HTTP/2 is to use only one connection, not multiple connections. This connection is multiplexed, so it can carry multiple requests and responses at the same time.
By maintaining only one connection, you can save the setup and administrative costs required for multiple connections. And a connection is particularly important for SSL because it minimizes the handshake time required for SSL/TLS to establish a secure connection.
The SPDY protocol requires SSL/TLS; HTTP/2 does not formally require it, but all browsers that currently support HTTP/2 will use it only when SSL/TLS is enabled. In other words, a browser supporting HTTP/2 uses it only if the website uses SSL and its server accepts HTTP/2 traffic; otherwise the browser communicates over HTTP/1.x.
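For a site that already terminates SSL/TLS in NGINX (1.9.5 or later), enabling HTTP/2 is a one-parameter change on the listen directive; the certificate paths below are placeholders:

    server {
        # The http2 parameter enables HTTP/2 alongside SSL/TLS;
        # older clients fall back to HTTP/1.x automatically.
        listen 443 ssl http2;
        ssl_certificate     /etc/nginx/cert.pem;  # hypothetical paths
        ssl_certificate_key /etc/nginx/cert.key;
    }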
Once SPDY or HTTP/2 is in place, the old HTTP performance tricks, such as domain sharding, resource merging, and image sprites, are no longer needed, so code and deployment can be simplified. You can refer to our white paper on the changes HTTP/2 will bring.
NGINX has supported SPDY for a long time, and most sites that use SPDY today run on NGINX. NGINX also took the lead on HTTP/2: in September 2015, NGINX open source and NGINX Plus added HTTP/2 support.
Over time, NGINX expects most sites to enable SSL and migrate to HTTP/2. This not only makes sites more secure, but also lets them reach higher performance with simpler code as new optimization techniques continue to emerge.
Recommendation 7: upgrade the software
A simple way to improve application performance is to choose software based on reliability and performance. In addition, developers of high-quality components are more likely to continue to improve performance and fix problems, so it's cost-effective to use the latest stable version. The newly released version will attract more attention from developers and users, and will also take advantage of new compiler optimization techniques, including tuning for new hardware.
The newly released stable version has significantly higher performance than the old version. Sticking to upgrades can also keep you abreast of the times in terms of tuning, problem fixes, and security alerts.
Not upgrading the software can also hinder the use of new capabilities. For example, HTTP/2 currently requires OpenSSL 1.0.1. Starting in the second half of 2016, HTTP/2 will require OpenSSL 1.0.2, which was released in January 2015.
NGINX users can start with the latest version of NGINX open source or NGINX Plus, which support socket sharding, thread pools (see below), and ongoing performance optimizations. So review your software stack and upgrade components to the latest versions where you can.
Recommendation 8: tune Linux
Linux is the underlying operating system for most Web servers today, and as the foundation of the whole infrastructure, it is critical to performance. By default, many Linux systems are tuned conservatively, assuming desktop workloads and modest resources. For Web applications, retuning is essential to reach peak performance.
Linux optimization varies by Web server. Taking NGINX as an example, here are a few aspects to consider.
Backlog queue. If some connections appear to go unhandled, increase net.core.somaxconn, the maximum number of connections that can queue waiting for NGINX. If the limit is too small you will see error messages; gradually increase the value until the errors stop.
File descriptors. NGINX uses up to two file descriptors per connection. If the system serves many connections, you may need to raise the system-wide descriptor limit fs.file-max and the per-user limit nofile to support the increased load.
Ephemeral ports. When acting as a proxy, NGINX creates a temporary port for each connection to an upstream server. You can widen net.ipv4.ip_local_port_range to make more ports available, and lower net.ipv4.tcp_fin_timeout, which controls how long an inactive port waits before being released for reuse, to speed up turnover.
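For illustration, the corresponding kernel settings might be raised in /etc/sysctl.conf as below; the values are assumptions to be tested against your own workload, not recommendations (the per-user nofile limit is set separately, for example in /etc/security/limits.conf):

    # /etc/sysctl.conf -- example values only; apply with `sysctl -p`
    net.core.somaxconn = 4096                  # backlog queue size
    fs.file-max = 100000                       # system-wide file descriptor limit
    net.ipv4.ip_local_port_range = 1024 65000  # wider ephemeral port range
    net.ipv4.tcp_fin_timeout = 15              # release inactive ports sooner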
For NGINX, please refer to the NGINX performance tuning guide to learn how to easily optimize your Linux system to support greater throughput.
Recommendation 9: tune the Web server
No matter what Web server you use, you need to tune it for your application. The following recommendations apply to any Web server, but the specific settings are given for NGINX only; a consolidated configuration sketch follows the list.
Access logging. Instead of writing a log entry for every request to disk immediately, you can buffer entries in memory and write them in batches. For NGINX, add the buffer=size parameter to the access_log directive to write the log to disk only when the memory buffer fills; adding the flush=time parameter also writes the buffer's contents out after the specified time.
Buffering. Buffering holds part of a response in memory until the buffer is full, enabling more efficient responses to clients. Responses that do not fit in memory are written to disk, which hurts performance. With NGINX buffering enabled, use the proxy_buffer_size and proxy_buffers directives to manage it.
Client keepalives. Keepalive connections cut overhead, especially with SSL/TLS in use. For NGINX, you can raise keepalive_requests from its default of 100, and raise keepalive_timeout so keepalive connections last longer, making subsequent requests faster.
Upstream keepalives. Upstream connections, that is, connections to application and database servers, benefit from keepalive just as much. For upstream connections you can increase the number of idle keepalive connections available to each worker process, raising connection reuse and cutting down on opening new connections. For more on keepalive connections, see this blog.
Limits. Limiting the resources that clients may use improves both performance and security. For NGINX, the limit_conn and limit_conn_zone directives cap the number of connections from a given source, and limit_rate caps bandwidth. These settings stop legitimate users from monopolizing resources and help fend off attacks. The limit_req and limit_req_zone directives throttle client requests. For connections to upstream servers, use the max_conns parameter on the server directive in an upstream configuration block; it caps connections to an upstream server to prevent overload. The associated queue directive creates a queue that holds a specified number of requests for a specified time once the max_conns limit is reached.
Worker processes. Worker processes handle requests. NGINX uses an event-based model and OS-dependent mechanisms to distribute requests efficiently among workers. The recommendation is to set worker_processes to one worker per CPU. If needed, most systems can safely raise worker_connections (the default is 512); experiment to find the value that suits your system best.
Socket sharding. Typically a single socket listener hands new connections out to all worker processes. Socket sharding instead creates a socket listener per worker process, and the kernel assigns connections to listeners as they become available. This can reduce lock contention and improve performance on multicore systems. To enable socket sharding, include the reuseport parameter on the listen directive.
Thread pools. Any computing process can be blocked by a single slow operation. For Web server software, disk access can hold up many faster operations, such as in-memory computation and copying. With thread pools, slow operations are handed off to a separate set of tasks while the main processing loop keeps running the faster ones. When the disk operation completes, its result goes back to the main loop. In NGINX, the read() system call and sendfile() are offloaded to thread pools.
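Here is the consolidated configuration sketch promised above, gathering these directives in one place; every size, count, and timeout is an illustrative assumption, and the upstream name and address are hypothetical (aio threads also requires NGINX to be built with thread support):

    worker_processes auto;          # one worker per CPU

    events {
        worker_connections 1024;    # raise from the default of 512 if needed
    }

    http {
        # Buffered access logging: flush to disk when full or every 5 s.
        access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

        # Client keepalives.
        keepalive_requests 1000;
        keepalive_timeout  75s;

        # Shared zones for per-client connection and request limits.
        limit_conn_zone $binary_remote_addr zone=addr:10m;
        limit_req_zone  $binary_remote_addr zone=req:10m rate=10r/s;

        upstream app_servers {           # hypothetical upstream
            server 10.0.0.1:8080;
            keepalive 32;                # idle keepalive connections per worker
        }

        server {
            listen 80 reuseport;         # socket sharding
            aio threads;                 # offload blocking disk I/O to a thread pool
            limit_conn addr 100;
            limit_req  zone=req burst=20;

            location / {
                proxy_pass http://app_servers;
                proxy_http_version 1.1;          # needed for upstream keepalive
                proxy_set_header Connection "";
                proxy_buffer_size 16k;           # response buffering
                proxy_buffers 8 16k;
            }
        }
    }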
When changing settings for any operating system or supporting service, change one setting at a time, then test performance. If the change causes problems, or fails to improve performance, change it back.
Recommendation 10: monitor real-time dynamics to identify problems and bottlenecks
The key to keeping an application fast is to monitor its performance in real time, watching the activity of the application on specific devices and across the corresponding Web infrastructure.
Monitoring site activity is mostly passive; it tells you what is happening, and it is then up to you to find and fix the problems.
Monitoring can catch the following kinds of problems:
1. A server is down.
2. A server is unstable and dropping connections.
3. A server is suffering widespread cache misses.
4. A server is sending incorrect content.
Global application performance monitoring tools such as New Relic or Dynatrace help you track remote page load times, while NGINX helps you monitor the application delivery side. Application performance data tells you when an optimization makes a real difference to users, and when you need to expand capacity to handle growing traffic.
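In open source NGINX, a minimal starting point for self-monitoring is the stub_status module (compiled in with --with-http_stub_status_module); the location path and allowed address below are assumptions:

    server {
        location /nginx_status {
            stub_status;          # basic counters: connections, requests
            allow 127.0.0.1;      # restrict access to localhost
            deny all;
        }
    }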
To surface problems quickly, NGINX Plus adds application health checks, which report recurring problems. NGINX Plus also offers session draining, which stops new connections to a server while existing tasks complete, and slow start, which lets a recovered server ramp back up to speed within its load-balanced cluster. Used properly, health checks let you locate problems before they noticeably hurt the user experience, while session draining and slow start let you replace servers without hurting perceived performance or uptime. The figure shows NGINX Plus's built-in dashboard for real-time activity monitoring, covering servers, TCP connections, and caching.
Conclusion: seeing a 10x performance improvement
Performance improvements vary greatly depending on Web applications. The actual improvement depends on the budget, time, and the gap between the existing implementation and the desired performance. So how do you get a 10-fold improvement in the performance of your application?
To help you weigh the potential of each optimization, here are some notes on the degree of improvement each of the recommendations above can achieve. I hope you find what you need.
Reverse proxy server and load balancing. No load balancing, or poorly pooled load balancing, can cause episodes of very low performance. Adding a reverse proxy server such as NGINX cuts the back-and-forth that Web applications make between memory and disk. Load balancing moves work from overloaded servers to idle ones and makes scaling easy. These changes can improve performance enormously; a 10x improvement over the worst moments of the original deployment comes easily, and even a smaller gain is a qualitative leap overall.
Cache dynamic and static content. If your Web server acts as an application server at the same time, you can achieve 10 times the peak performance improvement by caching dynamic content. Caching static content can also improve performance several times.
Compress the data. Using media compression formats such as JPEG, PNG, MPEG-4, and MP3 can significantly improve performance. Once all of those are in place, compressing text data (code and HTML) can improve initial page load times threefold.
Optimize SSL/TLS. Secure handshakes have a significant impact on performance, so optimizing them can make initial responses twice as fast, especially for sites with more text content. The performance improvement of optimizing media files under SSL/TLS is very small.
Implement HTTP/2 and SPDY. In the case of SSL/TLS, these two protocols have the potential to improve the overall performance of the site.
Tune Linux and Web servers. Optimized buffering strategies, keepalive connections, and offloading time-consuming work to separate thread pools can all boost performance significantly; thread pools, for example, can speed up disk-intensive tasks by at least an order of magnitude.
At this point, I believe you have a deeper understanding of how to improve the performance of your NGINX by 10 times. Now you might as well put it into practice.