Server-Side Web Performance Optimization Tips
Today, I would like to share the key techniques for server-side Web performance optimization. The content is detailed and logically organized. I believe most people still don't know much about this area, so this article is shared for your reference; I hope you gain something from reading it.
Tip #1: Improve performance and security through a reverse proxy
If your web application runs on a single machine, the solution to performance problems might seem obvious: just get a faster machine, with a better processor, more memory, a faster disk array, and so on. Then the new machine can run your WordPress server, Node.js application, Java application, and so on, faster than before. (If your application accesses a database server, the solution might still seem simple: get two faster machines and a faster link between them.)
The problem is that machine speed might not be the problem. Web applications often run slowly because the computer is constantly switching among different kinds of tasks: interacting with users over thousands of connections, accessing files from disk, running application code, and so on. The application server may end up thrashing, for example running out of memory, swapping memory pages to disk, and making many requests wait on a single task such as disk I/O.
Instead of upgrading your hardware, you can take an entirely different approach: add a reverse proxy server to offload some of these tasks. The reverse proxy server sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy server is connected directly to the Internet; communication with the application server happens over a fast internal network.
Using a reverse proxy server frees the application server from having to wait for users to interact with the web application. Instead, it can concentrate on building pages for the reverse proxy server to send across the Internet. The application server, which no longer has to wait for client responses, can run at close to its optimal speed.
Adding a reverse proxy server also adds flexibility to your web server setup. For example, if a server of a given type becomes overloaded, another server of the same type can easily be added; if a machine goes down, it can easily be replaced.
Because of this flexibility, a reverse proxy server is also a prerequisite for many other performance-boosting capabilities, such as:
Load balancing (see Tip #2). A load balancer runs on a reverse proxy server and distributes traffic across a number of application servers. With load balancing in place, you can add application servers without changing the application at all.
Caching static files (see Tip #3). Files that are requested directly, such as images or client-side code, can be stored on the reverse proxy server and sent straight to the client. This serves assets more quickly, offloads the application server, and allows the application to run faster.
Website security. The reverse proxy server can be configured for high security and monitored for quick recognition of and response to attacks, shielding the application servers.
NGINX was designed specifically for use as a reverse proxy server and includes the capabilities described above. NGINX uses an event-driven approach to processing requests, which is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application health checks, specialized request routing, advanced caching, and related support.
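As a concrete illustration, here is a minimal sketch of an NGINX reverse proxy in the spirit of this tip; the internal application address and the headers shown are illustrative assumptions, not settings from the article.

    # Minimal reverse proxy sketch; the upstream address is an assumption
    server {
        listen 80;

        location / {
            # Forward requests over the fast internal network
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }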
Tip #2: Add load balancing
Adding a load balancing server is a fairly simple way to improve both performance and website security. Instead of making a core web server bigger and more powerful, use load balancing to distribute traffic across a number of servers. Even if an application is poorly written or difficult to scale, a load balancer can improve the user experience without any other changes.
A load balancing server is, first of all, a reverse proxy server (see Tip #1): it receives traffic from the Internet and forwards requests to other servers. Specifically, a load balancer sits in front of two or more application servers and uses a distribution algorithm to forward requests to different servers. The simplest load-balancing method is round robin, in which each new request is sent to the next server on the list. Other methods include sending requests to the server with the fewest active connections. NGINX Plus can also keep a given user's session on the same server, a capability known as session persistence.
Load balancing improves performance because it prevents one server from being overloaded while others sit idle with no traffic to handle. It also makes it easy to expand capacity, because you can add relatively inexpensive servers and be sure they will be put to full use.
Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, FastCGI, SCGI, uwsgi, and memcached, as well as several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web application to determine which of these you use and where performance is lagging.
The same server or server farm used for load balancing can also handle other tasks, such as SSL termination, support for both HTTP/1.x and HTTP/2 on the client side, and caching of static files.
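A minimal sketch of round-robin load balancing in NGINX, as described above; the upstream hostnames are illustrative assumptions. Adding the least_conn directive inside the upstream block would switch to the least-active-connections method mentioned earlier.

    upstream app_servers {
        # Round robin is the default distribution algorithm
        server app1.example.com;
        server app2.example.com;
        server app3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app_servers;
        }
    }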
Tip #3: Cache static and dynamic content
Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing data on faster devices, storing data closer to the client, or a combination of these.
There are two different types of caching to consider:
Static content caching. Files that change infrequently, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
Dynamic content caching. Many web applications generate fresh HTML for each page request. By briefly caching one copy of the generated HTML, you can dramatically reduce the number of pages that must be generated while still delivering content fresh enough to meet your requirements.
For example, if a page gets 10 views per second and you cache it for 1 second, 90% of requests for the page come straight from the cache. If you cache static content separately, even the freshly generated versions of the page can be built largely from cached content.
There are three main techniques that web applications use for caching:
1. Shortening the network distance between data and user. Placing a copy of the content closer to the user reduces transmission time.
2. Increasing the speed of the content server. Content can be stored on a faster server, reducing the time needed to retrieve it.
3. Moving data off overloaded machines. Machines often perform many tasks at once, so a given task may run slower than benchmarks would suggest. Caching data on a different machine improves performance for cached and uncached resources alike, because the host machine is no longer overloaded.
Caching for web applications can be implemented from the web application server outward. First, dynamic content is cached to reduce the time the application server spends regenerating it. Second, static content (including temporary copies of what would otherwise be dynamic content) is cached to further offload the application server. Caching is then moved off the application server onto machines that are faster and/or closer to the user, which unburdens the application server and reduces retrieval and transmission times.
Improved caching can speed up applications tremendously. For most web pages, static data such as large image files makes up more than half of the content. Without caching, retrieving and transmitting such data might take several seconds; with caching, it can be done in well under a second.
As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to set up basic caching: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to serve older content when the server that supplies fresh content is busy or down, so clients get something rather than nothing. From the user's perspective, this can strongly improve your site's or application's availability.
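A sketch of those directives working together, using the 1-second microcaching interval from the earlier example; the path, zone name, sizes, and upstream name are illustrative assumptions.

    # Define the cache store; location, zone name, and sizes are assumptions
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m;

    server {
        location / {
            proxy_cache       app_cache;
            proxy_cache_valid 200 1s;   # 1-second microcache for dynamic pages
            # Serve stale content if the upstream is busy or erroring
            proxy_cache_use_stale error timeout http_500 http_502 http_503;
            proxy_pass        http://app_servers;
        }
    }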
NGINX Plus has advanced caching features, including support for cache purging and a dashboard that displays cache status information.
Note: responsibility for caching spans application developers, investment decision makers, and day-to-day system operators. Some of the sophisticated caching practices mentioned in this article are valuable from a DevOps perspective, in which engineers who combine the roles of application developer, architect, and operator balance requirements for site functionality, response time, security, and business results (such as the number of completed transactions).
Tip #4: Compress data
Compression is a huge potential performance accelerator. Carefully engineered, highly effective compression standards already exist for many file types, such as photos (JPEG and PNG), video (MPEG-4), and music (MP3). Each of these standards reduces file size substantially.
Text data, including HTML (plain text plus HTML tags), CSS, and code such as JavaScript, is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived application performance, especially for clients on slow or constrained mobile networks.
That's because text data is often what users actually interact with on a page, while multimedia data tends to play a supporting or decorative role. Smart content compression can reduce the bandwidth requirements of HTML, JavaScript, CSS, and other text-based content, typically by 30% or more, with corresponding reductions in page load time.
If you use SSL, compression also reduces the amount of data that has to be SSL-encoded, which recovers some of the CPU time spent compressing the data.
There are several ways to compress text data. For example, HTTP/2 includes a novel text-compression scheme adapted specifically for header data. Another example: you can turn on gzip compression in NGINX. If you pre-compress your text data, you can serve the compressed .gz versions directly using the gzip_static directive.
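A minimal sketch of turning on gzip in NGINX as just described; the compression level, size threshold, and MIME types are illustrative assumptions.

    gzip            on;
    gzip_comp_level 5;       # trade a little CPU for a better ratio
    gzip_min_length 256;     # skip responses too small to benefit
    gzip_types      text/css application/javascript application/json text/plain;

    # Serve pre-compressed .gz files when they exist
    # (requires the ngx_http_gzip_static_module)
    gzip_static     on;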
Tip #5: Optimize SSL/TLS
The Secure Sockets Layer (SSL) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites. SSL/TLS encrypts the data transported from origin servers to users, improving site security. Part of what is driving this trend is that Google now treats the presence of SSL/TLS as a positive factor in search engine rankings.
Despite its rising popularity, the performance hit involved in SSL/TLS has deterred many websites. SSL/TLS slows websites for two reasons:
The initial handshake, which establishes encryption keys, is required whenever a new connection is opened. Browsers using HTTP/1.x repeat this handshake for each of the multiple connections they establish per server.
In the process of transmission, data needs to be constantly encrypted on the server side and decrypted on the client side.
To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the next tip) designed those protocols so that a browser needs only one connection per browser session. That greatly reduces the time lost to the first source of delay above. However, even more can be done today to improve the performance of applications delivered over SSL/TLS.
Web servers have mechanisms for optimizing SSL/TLS delivery. For example, NGINX uses OpenSSL, running on standard hardware, to provide performance approaching that of dedicated hardware. NGINX's SSL performance is well documented, and the time and CPU cost of encrypting and decrypting SSL/TLS data can be minimized.
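One widely used optimization is caching and reusing TLS session parameters so that returning clients avoid full handshakes. A sketch in NGINX follows; the certificate paths and cache sizes are illustrative assumptions.

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/example.crt;  # assumed path
        ssl_certificate_key /etc/nginx/certs/example.key;  # assumed path

        # Reuse session parameters across connections to skip full handshakes
        ssl_session_cache   shared:SSL:10m;
        ssl_session_timeout 10m;
    }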
Tip #6: Use HTTP/2 or SPDY
For sites that already use SSL/TLS, HTTP/2 and SPDY are likely to improve performance, because each connection requires just one handshake. For sites that don't yet use SSL/TLS, HTTP/2 and SPDY make the move to SSL/TLS (which normally hurts performance) roughly a wash from a responsiveness standpoint.
Google began promoting SPDY in 2012 as a faster protocol than HTTP/1.x. HTTP/2 is the IETF standard based on SPDY. SPDY is broadly supported, but it will soon be deprecated in favor of HTTP/2.
The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections. The single connection is multiplexed, so it can carry pieces of many requests and responses at the same time.
By getting the most out of one connection, these protocols avoid the overhead of setting up and managing the multiple connections that browsers use when implementing HTTP/1.x. A single connection is especially helpful with SSL, because it minimizes the handshake time SSL/TLS needs to set up a secure connection.
The SPDY protocol requires SSL/TLS; the official HTTP/2 standard does not, but all browsers that currently support HTTP/2 use it only when SSL/TLS is enabled. In other words, a browser that supports HTTP/2 uses it only if the website is using SSL and the server accepts HTTP/2 traffic. Otherwise, the browser falls back to HTTP/1.x.
As an example of support for these protocols, NGINX has supported SPDY from early on, and most sites that use SPDY run NGINX. NGINX was also early to support HTTP/2, with support in both open source NGINX and NGINX Plus as of September 2015.
Over time, we at NGINX expect most sites to fully enable SSL and move to HTTP/2. This will lead to better security and, as new optimizations are found and implemented, simpler code that performs better.
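A minimal sketch of enabling HTTP/2 in NGINX (version 1.9.5 or later); since browsers require TLS for HTTP/2, it is paired with ssl here, and the certificate paths are illustrative assumptions.

    server {
        # The http2 parameter on listen enables HTTP/2 (NGINX 1.9.5+)
        listen 443 ssl http2;
        ssl_certificate     /etc/nginx/certs/example.crt;  # assumed path
        ssl_certificate_key /etc/nginx/certs/example.key;  # assumed path
    }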
Tip #7: Upgrade software versions
One simple way to boost application performance is to select the components of your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to keep pursuing performance improvements and fixing bugs, it pays to use the latest stable version of the software. New releases receive more attention from developers and the user community, and newer builds take advantage of new compiler optimizations, including tuning for new hardware.
A stable new release is typically more compatible and higher-performing than an older one. Staying on top of software updates is also the easiest way to keep up with tuning improvements, bug fixes, and security patches.
Staying with old software can also prevent you from taking advantage of new capabilities. For example, HTTP/2, described above, currently requires OpenSSL 1.0.1. Starting in mid-2016 it will require OpenSSL 1.0.2, which was released in January 2015.
NGINX users can start by moving to the latest version of open source NGINX or NGINX Plus; these include new capabilities such as socket sharding and thread pools (see Tip #9), and both are continually tuned for performance. Then take a close look at the rest of your software stack and upgrade to the most recent versions you can.
Tip #8: Tune Linux for performance
Linux is the operating system under most web servers today, and as the foundation of your infrastructure, it offers significant opportunity to improve performance. By default, many Linux systems are conservatively tuned, using few resources and matching a typical desktop workload. This means web applications need at least some fine-tuning for maximum performance.
Linux optimizations are web-server specific. Using NGINX as an example, here are a few highlights of changes to consider when speeding up Linux (a combined sysctl sketch follows the list):
Backlog queue. If you have connections that appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting attention. If this limit is too small, you will see error messages; gradually increase the parameter until the errors stop.
File descriptors. NGINX uses up to two file descriptors per connection. If your system handles a lot of connections, you may need to increase fs.file-max, the system-wide limit on the number of file descriptors, to support the increased load.
Ephemeral ports. When NGINX acts as a proxy, it creates temporary ("ephemeral") ports for each upstream server. You can increase the range of port values available by setting net.ipv4.ip_local_port_range. You can also reduce the timeout before an inactive port is reused with net.ipv4.tcp_fin_timeout, allowing faster turnover under heavy traffic.
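A sketch of these settings as /etc/sysctl.conf entries; the values are illustrative assumptions, not recommendations, and should be tested against your own workload.

    # Illustrative values only; tune and test one change at a time
    net.core.somaxconn = 4096                    # longer queue for pending connections
    fs.file-max = 2097152                        # system-wide file descriptor limit
    net.ipv4.ip_local_port_range = 1024 65000    # wider ephemeral port range
    net.ipv4.tcp_fin_timeout = 30                # reclaim closed ports sooner

    # Apply with: sudo sysctl -p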
Tip #9: Tune your web server for performance
Whatever web server you use, you need to tune it for web application performance. The following recommendations apply to any web server, but specific settings are given for NGINX. Key optimizations include the following (a combined configuration sketch follows the list):
Access logging. Instead of writing a log entry to disk for every request, you can buffer entries in memory and write them to disk as a group. With NGINX, adding the buffer=size parameter to the access_log directive makes the system write log entries to disk only when the buffer is full. Adding the flush=time parameter writes buffered entries to disk at regular intervals as well.
Buffering. Buffering holds part of a response in memory until the buffer is full, which can make communication with the client more efficient. Responses that don't fit in memory are written to disk, which can slow performance. When buffering is enabled in NGINX, you use the proxy_buffer_size and proxy_buffers directives to manage it.
Client keepalives. Keepalive connections reduce overhead, especially when SSL/TLS is in use. With NGINX, you can increase keepalive_requests from its default so that a client can make more requests over a given connection, and you can increase keepalive_timeout to let keepalive connections stay open longer, making subsequent requests faster.
Upstream keepalives. Upstream connections, that is, connections to application servers, database servers, and so on, also benefit from keepalive. For upstream connections, you can increase keepalive, the number of idle keepalive connections that remain open for each worker process. This allows connections to be reused more, cutting down on the need to open brand-new connections.
Limits. Limiting the resources that clients use can improve performance and security. With NGINX, the limit_conn and limit_conn_zone directives restrict the number of connections from a given source, while limit_rate constrains bandwidth. These settings can stop a legitimate user from hogging resources and also help prevent attacks. The limit_req and limit_req_zone directives restrict client requests. For connections to upstream servers, the max_conns parameter on the server directive in an upstream configuration block limits connections to an upstream server, preventing overloading. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time when the max_conns limit is reached.
Worker processes. Worker processes are responsible for processing requests. NGINX employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The recommendation is to set worker_processes to one per CPU. The maximum number of worker_connections (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system.
Socket sharding. Typically, a single socket listener distributes new connections to all worker processes. Socket sharding creates a socket listener for each worker process, with the kernel assigning connections to the listeners as they become available. This can reduce lock contention and improve performance on multicore systems. To enable socket sharding, include the reuseport parameter on the listen directive.
Thread pools. Any computer process can be held up by a single slow operation. For web server software, disk access can hold up many faster operations, such as in-memory computation or copying. When a thread pool is used, the slow operation is handed off to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the result goes back to the main processing loop. In NGINX, two operations, the read() system call and sendfile(), are offloaded to thread pools.
Tuning tip: when you change the settings of any operating system or supporting service, change one setting at a time and then test performance. If the change causes problems, or doesn't make your system faster, change it back.
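A combined sketch of the NGINX directives from this list; every value shown is an illustrative assumption to be tested one change at a time, as the tuning tip advises, and the upstream hostnames are hypothetical.

    worker_processes auto;                # one worker per CPU

    events {
        worker_connections 4096;          # raised from the 512 default
    }

    http {
        # Buffer access-log writes; flush to disk every 5 seconds
        access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

        # Client keepalives
        keepalive_requests 1000;
        keepalive_timeout  75s;

        # Limit connections per client IP
        limit_conn_zone $binary_remote_addr zone=per_ip:10m;

        upstream app_servers {
            server app1.example.com max_conns=100;   # hypothetical hosts
            server app2.example.com max_conns=100;
            keepalive 32;                 # idle upstream keepalive connections
        }

        server {
            listen 80 reuseport;          # socket sharding
            aio threads;                  # offload blocking disk reads to the thread pool

            location / {
                limit_conn per_ip 20;
                # Upstream keepalive needs HTTP/1.1 and a cleared Connection header
                proxy_http_version 1.1;
                proxy_set_header Connection "";
                proxy_pass http://app_servers;
            }
        }
    }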
Tip #10: Monitor system activity to resolve problems and bottlenecks
The key to a high-performance approach to application development and delivery is watching your application's real-world performance closely and in real time. You must be able to monitor activity within specific devices and across your web infrastructure.
Monitoring site activity is mostly passive: it tells you what's happening and leaves it to you to spot problems and eventually fix them.
Monitoring can catch several different kinds of issues, including:
The server is down.
A server is faltering and dropping connections.
A large number of cache misses occurred on the server.
The server did not send the correct content.
Global application performance monitoring tools, such as New Relic or Dynatrace, help you monitor end-to-end page load time remotely, while NGINX helps you monitor the application delivery side. Application performance data tells you when your optimizations are making a real difference to users, and when you need to consider adding capacity to your infrastructure to keep up with traffic.
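On the open source side, NGINX's stub_status module exposes basic activity counters (active connections, accepted requests, and so on) that monitoring agents can scrape; a sketch follows, with the port, path, and allow-list as illustrative assumptions. NGINX Plus provides the richer dashboard described next.

    server {
        listen 8080;                  # assumed internal monitoring port

        location /nginx_status {
            stub_status;              # counters from ngx_http_stub_status_module
            allow 127.0.0.1;          # restrict to local monitoring agents
            deny  all;
        }
    }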
To help users find and resolve issues quickly, NGINX Plus adds application-aware health checks, synthetic transactions that run regularly and alert you when problems arise. NGINX Plus also offers session draining, which stops new connections to a server while existing tasks complete, and a slow-start capability, which lets a server recovering from a failure come back up to speed within its load-balanced group. Used well, health checks let you identify issues before they become serious enough to affect the user experience, while session draining and slow start let you replace servers without hurting performance or perceived uptime. The dashboard of NGINX Plus's built-in real-time activity monitoring module displays web infrastructure information such as server groups, TCP connections, and caching.