2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article explains in detail 10 techniques for improving web application performance by as much as 10 times. The editor shares it here as a reference; I hope you come away with a solid understanding of the topic after reading it.
Improving the performance of web applications is more important than ever. The share of economic activity that happens online keeps growing; more than 5% of the economy in developed countries is now on the Internet (see the Internet statistics in Resources). And our always-on, hyper-connected modern world means that user expectations are higher than ever. If your site does not respond instantly, or if your application does not work without delay, users quickly move on to your competitors.
For example, a study by Amazon nearly a decade ago showed that, even then, every 100-millisecond decrease in page-load time increased revenue by 1%. Another recent study highlighted the fact that more than half of the site owners surveyed said they had lost revenue or customers because of poor application performance.
How fast does a website need to be? For each additional second a page takes to load, roughly 4% of users abandon it. Top e-commerce sites offer a time to first interaction of one to three seconds, the range that yields the highest conversion rates. Clearly, the stakes for web application performance are high and likely to grow.
Seeing the need for better performance is easy; actually achieving it is hard. To help you on your journey, this post offers 10 tips that can help you improve the performance of your site by as much as 10 times. It is the first in a series detailing how to boost application performance with the help of well-tested optimization techniques and a little support from NGINX. The series also outlines security improvements you can gain along the way.
Tip 1-use a reverse proxy server to accelerate and protect the application
If your web application runs on a single machine, the solution to a performance problem may seem obvious: get a faster machine with more processors, more RAM, a faster disk array, and so on. The new machine can then run your WordPress server, Node.js application, Java application, or whatever, faster than before. (If your application accesses a database server, the solution may still seem simple: get two faster machines and a faster connection between them.)
The problem is that machine speed may not be the real issue. Web applications often run slowly because the computer is constantly switching between different kinds of tasks: interacting with users on thousands of connections, accessing files from disk, running application code, and so on. The application server may end up thrashing: running out of memory, swapping blocks of memory to disk, and leaving many requests waiting on a single task such as disk I/O.
Instead of upgrading the hardware, you can take an entirely different approach: add a reverse proxy server to offload some of these tasks. The reverse proxy server sits in front of the machine running the application and handles Internet traffic. Only the reverse proxy connects directly to the Internet; communication with the application servers takes place over a fast internal network.
Using a reverse proxy server frees the application server from waiting for users to interact with the web application and lets it concentrate on building pages for the reverse proxy to send across the Internet. The application server, which no longer needs to wait for client responses, can run at something close to its benchmark speed.
Adding a reverse proxy server also adds flexibility to your web server setup. For example, if a server of a given type becomes overloaded, you can easily add another server of the same type; if a server goes down, you can easily replace it.
Because of the flexibility it provides, reverse proxy servers are also a prerequisite for many other performance improvement features, such as:
Load balancing (see Tip 2): a load balancer runs on a reverse proxy server to share traffic evenly across a number of application servers. With a load balancer in place, you can add application servers without changing your application at all.
Caching static files (see Tip 3): files that are requested directly, such as image files or code files, can be stored on the reverse proxy server and sent straight to the client, which serves assets more quickly and offloads the application server, allowing the application to run faster.
Protecting your site: the reverse proxy server can be configured for high security and monitored to quickly recognize and respond to attacks, shielding the application servers.
NGINX software is specifically designed for use as a reverse proxy server, with the additional capabilities described above. NGINX uses an event-driven processing approach that is more efficient than traditional servers. NGINX Plus adds more advanced reverse proxy features, such as application health checks, specialized request routing, advanced caching, and support.
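As a concrete illustration, here is a minimal sketch of what such a reverse proxy setup might look like in NGINX. The server name, ports, and the backend address 10.0.0.2:8080 are placeholders, not values from the article.

```nginx
# Hypothetical minimal reverse proxy: NGINX faces the Internet and
# relays requests to an app server on the internal network.
server {
    listen 80;
    server_name example.com;          # placeholder domain

    location / {
        proxy_pass http://10.0.0.2:8080;   # placeholder app server address
        # Pass the original host and client address along to the app server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With a configuration like this, only NGINX is exposed to the Internet; the application server behind it talks exclusively to the proxy over the internal network.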
Tip 2-add a load balancer
Adding a load balancer is a relatively easy change that can dramatically improve the performance and security of a site. Instead of making a core web server bigger and more powerful, you use a load balancer to distribute traffic across a number of servers. Even if an application is poorly written or has scaling problems, a load balancer can improve the user experience without any other changes.
A load balancer is, first of all, a reverse proxy server (see Tip 1): it receives Internet traffic and forwards requests to another server. The trick is that the load balancer supports two or more application servers, using a choice of algorithms to split requests between them. The simplest load-balancing method is round robin, which sends each new request to the next server on the list. Other methods include sending requests to the server with the fewest active connections. NGINX Plus can also keep a given user session on the same server, a capability called session persistence.
Load balancers can greatly improve performance because they keep one server from being overloaded while others sit waiting for traffic. They also make it easy to expand web server capacity, because you can add relatively low-cost servers and be sure they will be put to full use.
Protocols that can be load balanced include HTTP, HTTPS, SPDY, HTTP/2, WebSocket, FastCGI, SCGI, uwsgi, and memcached, as well as several other application types, including TCP-based applications and other Layer 4 protocols. Analyze your web application to determine which protocols you use and where performance is lagging.
The same server or servers used for load balancing can also handle several other tasks, such as SSL termination, support for HTTP/1.x and HTTP/2 on the client side, and caching of static files.
NGINX is commonly used for load balancing. To learn more, download our ebook, Five Reasons to Choose a Software Load Balancer. You can get basic configuration instructions in Load Balancing with NGINX and NGINX Plus, Part 1, and full documentation in the NGINX Plus Admin Guide. NGINX Plus is our commercial product and supports more specialized load-balancing features, such as load routing based on server response time and load balancing on Microsoft's NTLM protocol.
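The algorithms described above can be sketched in an NGINX upstream block. This is an illustrative configuration, not a prescription; the group name app_servers and the backend addresses are made up.

```nginx
# Hypothetical load-balanced group of two application servers.
upstream app_servers {
    least_conn;                # send requests to the server with fewest active connections
    # (omit least_conn to get the default round-robin behavior;
    #  ip_hash is one open source option for session persistence)
    server 10.0.0.2:8080;      # placeholder addresses
    server 10.0.0.3:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;   # distribute traffic across the group
    }
}
```

Adding another application server is then just one more `server` line in the upstream block, with no change to the application itself.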
Tip 3-caching static and dynamic content
Caching improves web application performance by delivering content to clients faster. Caching can involve several strategies: preprocessing content for fast delivery when needed, storing content on faster devices, storing content closer to the client, or a combination of these.
There are two different types of caching to consider:
Static content caching: files that change infrequently, such as image files (JPEG, PNG) and code files (CSS, JavaScript), can be stored on an edge server for fast retrieval from memory or disk.
Dynamic content caching: many web applications generate fresh HTML for each page request. By briefly caching one copy of the generated HTML, you can dramatically reduce the total number of pages that have to be generated while still delivering content that is fresh enough to meet your requirements.
For example, if a page gets 10 views per second and you cache it for 1 second, 90% of requests for that page will come from the cache. If you cache static content separately, even the freshly generated version of the page may consist largely of cached content.
There are three main techniques for caching content generated by web applications:
Moving content closer to users: keeping a copy of content nearer to the user reduces its transmission time.
Moving content to faster machines: content can be kept on a faster machine for quicker retrieval.
Moving content off overused machines: machines sometimes perform a given task much more slowly than their benchmarks suggest because they are busy with other tasks. Caching on a different machine improves performance for the cached resources and also for the non-cached ones, because the host machine is less overloaded.
Caching for web applications can be implemented from the inside (the web application server) out. First, caching is used for dynamic content, to reduce the load on application servers. Then caching is used for static content (including temporary copies of otherwise dynamic content), further offloading application servers. Finally, caching is moved off application servers onto machines that are faster and/or closer to the user, unburdening the application servers and reducing retrieval and transmission times.
Improved caching can speed up applications tremendously. For many web pages, static data such as large image files makes up more than half the content. It might take several seconds to retrieve and transmit such data without caching, but only a fraction of a second if the data is cached locally.
As an example of how caching is used in practice, NGINX and NGINX Plus use two directives to set up basic caching: proxy_cache_path and proxy_cache. You specify the cache location and size, the maximum time files are kept in the cache, and other parameters. Using a third (and quite popular) directive, proxy_cache_use_stale, you can even direct the cache to serve stale content when the server that supplies fresh content is busy or down, giving the client something rather than nothing. From the user's perspective, this can strongly improve your site's or application's perceived uptime.
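Putting those three directives together might look like the sketch below. The cache path, zone name app_cache, sizes, and backend address are illustrative assumptions.

```nginx
# Illustrative NGINX content cache; tune paths and sizes for your system.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 1s;      # cache successful responses briefly,
                                       # as in the 10-views-per-second example
        # Serve stale content rather than nothing if the backend is busy or down
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://10.0.0.2:8080;    # placeholder backend
    }
}
```

Even a 1-second `proxy_cache_valid` window can absorb most of the load on a busy page, while `proxy_cache_use_stale` covers backend outages.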
NGINX Plus has advanced caching capabilities, including support for cache purging and visualization of cache status on a dashboard for live activity monitoring.
For more information about NGINX caching, see the reference documentation and the NGINX Plus administration guide.
Note: caching crosses organizational lines between the people who develop applications, the people who make capital investment decisions, and the people who run networks in real time. Sophisticated caching strategies like those mentioned here are a good example of the value of a DevOps perspective, in which the application development, architecture, and operations viewpoints are merged to help meet goals for site functionality, response time, security, and business results such as completed transactions or sales.
Tip 4-compress data
Compression is a huge potential performance accelerator. There are carefully engineered and highly effective compression standards for photos (JPEG and PNG), videos (MPEG-4), and music (MP3), among others. Each of these standards reduces file size by an order of magnitude or more.
Text data, including HTML (plain text plus HTML tags), CSS, and code such as JavaScript, is often transmitted uncompressed. Compressing this data can have a disproportionate impact on perceived web application performance, especially for clients with slow or constrained mobile connections.
That's because text data is often sufficient for a user to interact with a page, while multimedia data may be more supportive or decorative. Smart content compression can reduce the bandwidth requirements of HTML, JavaScript, CSS, and other text-based content, typically by 30% or more, with corresponding reductions in load time.
If you use SSL, compression reduces the amount of data that must be SSL-encoded, which offsets some of the CPU time spent compressing the data.
There are various methods for compressing text data. For example, see Tip 6 for the novel text compression scheme in SPDY and HTTP/2, adapted specifically for header data. As another example of text compression, you can turn on GZIP compression in NGINX. If you precompress text data on your server, you can serve the compressed .gz files directly using the gzip_static directive.
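A sketch of what enabling GZIP for text-based content might look like in NGINX follows; the MIME type list and numeric values are illustrative starting points, not recommendations from the article.

```nginx
# Illustrative GZIP settings for text-based content.
gzip on;
gzip_types text/plain text/css application/javascript application/json;
gzip_min_length 1000;   # skip very small responses, where compression gains little
gzip_comp_level 5;      # trade CPU cost against compression ratio

# If .gz files have been precompressed alongside the originals on disk,
# serve them directly instead of compressing on the fly:
gzip_static on;         # requires the ngx_http_gzip_static_module
```

Note that `gzip_types` does not need to list HTML; NGINX compresses `text/html` whenever `gzip` is on.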
Tip 5-optimize SSL/TLS
The Secure Sockets Layer (SSL) protocol and its successor, the Transport Layer Security (TLS) protocol, are being used on more and more websites. SSL/TLS encrypts the data transported from origin servers to users, helping to improve site security. Part of what drives this trend is that Google now treats the use of SSL/TLS as a positive influence on search engine rankings.
Despite their growing popularity, the performance issues involved in SSL/TLS are a sticking point for many sites. SSL/TLS slows website performance for two reasons:
The initial handshake required to establish encryption keys whenever a new connection is opened, compounded by the way browsers using HTTP/1.x establish multiple connections per server.
The ongoing overhead of encrypting data on the server and decrypting it on the client.
To encourage the use of SSL/TLS, the authors of HTTP/2 and SPDY (described in the next tip) designed those protocols so that browsers need just one connection per browser session. That greatly reduces one of the two major sources of SSL overhead. However, even more can be done today to improve the performance of applications delivered over SSL/TLS.
The mechanisms for optimizing SSL/TLS vary by web server. As an example, NGINX, using OpenSSL and running on standard commodity hardware, delivers performance similar to dedicated hardware solutions. NGINX SSL performance is well documented, and the time and CPU cost of performing SSL/TLS encryption and decryption can be kept quite low.
In addition, see this article for more information about how to improve SSL/TLS performance. In short, these technologies are:
Session caching: using the ssl_session_cache directive to cache the parameters used when securing each new connection with SSL/TLS.
Session tickets or IDs: these store information about a particular SSL/TLS session in a ticket or ID so the connection can be reused smoothly, without new handshaking.
OCSP stapling: cutting handshake time by caching SSL/TLS certificate information.
NGINX and NGINX Plus can be used for SSL/TLS termination: handling the encryption and decryption of client traffic while communicating with other servers in clear text. To set up NGINX or NGINX Plus to handle SSL/TLS termination, see the instructions for HTTPS connections and encrypted TCP connections.
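The three optimizations listed above can be sketched in one NGINX server block. The certificate paths and cache sizes below are placeholders; in practice OCSP stapling also needs a resolver and the CA chain configured.

```nginx
# Sketch of the SSL/TLS optimizations described above; paths are placeholders.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder path

    # Session caching: reuse negotiated parameters across new connections
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    # Session tickets: let clients resume sessions without a full handshake
    ssl_session_tickets on;

    # OCSP stapling: cache certificate status to cut handshake time
    ssl_stapling        on;
    ssl_stapling_verify on;
}
```

A shared 10 MB session cache can hold tens of thousands of sessions, so repeat visitors skip most of the handshake cost.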
Tip 6-implement HTTP/2 or SPDY
For sites that already use SSL/TLS, HTTP/2 and SPDY are very likely to improve performance, because the single connection requires only one handshake. For sites that do not yet use SSL/TLS, moving to HTTP/2 or SPDY means also moving to SSL/TLS (which usually hurts performance), making the switch roughly a wash from a responsiveness point of view.
Google introduced SPDY in 2012 as a way to achieve faster performance on top of HTTP/1.x. HTTP/2 is the recently approved IETF standard based on SPDY. SPDY is broadly supported, but it is slated to be deprecated and replaced by HTTP/2.
The key feature of SPDY and HTTP/2 is the use of a single connection rather than multiple connections. The single connection is multiplexed, so it can carry pieces of multiple requests and responses at the same time.
By getting the most out of one connection, these protocols avoid the overhead of setting up and managing multiple connections, which the way browsers implement HTTP/1.x requires. Using a single connection is especially helpful with SSL, because it minimizes the time-consuming handshaking that SSL/TLS needs to set up a secure connection.
The SPDY protocol requires the use of SSL/TLS; HTTP/2 does not officially require it, but so far all browsers that support HTTP/2 use it only when SSL/TLS is enabled. That is, a browser that supports HTTP/2 uses it only if the website uses SSL and its server accepts HTTP/2 traffic. Otherwise, the browser communicates over HTTP/1.x.
When you implement SPDY or HTTP/2, you no longer need typical HTTP performance optimizations such as domain sharding, resource merging, and image spriting. These changes make your code and deployments simpler and easier to manage. To learn more about the changes HTTP/2 brings, read our white paper, HTTP/2 for Web Application Developers.
As an example of support for these protocols, NGINX has supported SPDY from early on, and most sites that use SPDY today run on NGINX. NGINX is also pioneering HTTP/2 support; as of September 2015, both open source NGINX and NGINX Plus support HTTP/2.
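In NGINX versions with HTTP/2 support, enabling the protocol is typically a small change to the listen directive on an SSL-enabled server, roughly as sketched here (certificate paths are placeholders):

```nginx
# Enabling HTTP/2 on an SSL-enabled server block (NGINX 1.9.5 or later).
server {
    listen 443 ssl http2;      # the http2 parameter switches the protocol on
    server_name example.com;   # placeholder domain
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder path
}
```

Browsers that support HTTP/2 will then negotiate it during the TLS handshake, while older clients fall back to HTTP/1.x over the same port.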
Over time, we at NGINX expect most sites to fully enable SSL and move to HTTP/2. This will lead to better security and, as new optimizations are found and implemented, to simpler code that performs better.
Tip 7-update the software version
One simple way to boost application performance is to select components for your software stack based on their reputation for stability and performance. In addition, because developers of high-quality components are likely to pursue performance enhancements and fix bugs over time, it pays to use the latest stable version of the software. New releases receive more attention from developers and the user community, and newer builds take advantage of new compiler optimizations, including tuning for new hardware.
Stable new releases are generally more compatible and higher-performing than older ones. It is also easier to stay on top of tuning optimizations, bug fixes, and security alerts when you keep up with software updates.
Staying with old software can also prevent you from taking advantage of new capabilities. For example, HTTP/2, described above, currently requires OpenSSL 1.0.1. Starting in mid-2016, HTTP/2 will require OpenSSL 1.0.2, which was released in January 2015.
NGINX users can start by moving to the latest version of NGINX or NGINX Plus; these include new capabilities such as socket sharding and thread pools (see Tip 9), and performance tuning is ongoing. Then look at the software deeper in your stack and move to the most recent version wherever you can.
Tip 8-tuning the performance of Linux
Linux is the underlying operating system for most web server implementations today, and as the foundation of your infrastructure it represents a significant opportunity for performance improvement. By default, many Linux systems are conservatively tuned to use few resources and to match a typical desktop workload. That means web application use cases require at least some tuning for best performance.
Linux optimizations are web server-specific. Using NGINX as an example, here are a few changes you can consider to speed up Linux:
Backlog queue: if you have connections that appear to be stalling, consider increasing net.core.somaxconn, the maximum number of connections that can be queued awaiting attention from NGINX. You will see error messages if the existing connection limit is too small; gradually increase this parameter until the error messages stop.
File descriptors: NGINX uses up to two file descriptors for each connection. If your system is serving a lot of connections, you might need to increase sys.fs.file_max, the system-wide limit on file descriptors, and nofile, the per-user file descriptor limit, to support the increased load.
Ephemeral ports: when used as a proxy, NGINX creates temporary ("ephemeral") ports for each upstream server. You can increase the range of port values available by raising net.ipv4.ip_local_port_range. You can also reduce the timeout before an inactive port is reused with the net.ipv4.tcp_fin_timeout setting, allowing for faster turnover.
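The three kernel settings above are usually applied through sysctl. This is a hypothetical /etc/sysctl.conf fragment; the numeric values are illustrative starting points, not tuned recommendations.

```conf
# Illustrative /etc/sysctl.conf additions for a busy NGINX host.
net.core.somaxconn = 4096                  # more queued connections awaiting NGINX
fs.file-max = 2000000                      # system-wide file descriptor limit
net.ipv4.ip_local_port_range = 1024 65000  # widen the ephemeral port range
net.ipv4.tcp_fin_timeout = 15              # recycle inactive ports sooner
```

After editing the file, `sysctl -p` loads the new values; the per-user nofile limit is set separately (for example in /etc/security/limits.conf).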
For NGINX, check out the NGINX performance tuning guide to learn how to optimize your Linux system so that it can easily handle large volumes of network traffic.
Tip 9-tuning the performance of the Web server
Whatever web server you use, you need to tune it for your web application's performance. The following recommendations apply generally to any web server, with specific settings given for NGINX. Key areas to optimize include:
Access logging: rather than writing a log entry to disk for every request, you can buffer entries in memory and write them to disk as a group. For NGINX, add the buffer=size parameter to the access_log directive to write log entries when the memory buffer fills up. If you add the flush=time parameter, the buffer contents are also written to disk after the specified amount of time.
Buffering: buffering holds part of a response in memory until the buffer fills, which can make communication with the client more efficient. Responses that don't fit in memory are written to disk, which can slow performance. When NGINX buffering is on, you manage it with the proxy_buffer_size and proxy_buffers directives.
Client keepalives: keepalive connections reduce overhead, especially when SSL/TLS is in use. For NGINX, you can increase keepalive_requests, the maximum number of requests a client can make over a given connection (the default is 100), and you can increase keepalive_timeout to allow a keepalive connection to stay open longer, speeding up subsequent requests.
Upstream keepalives: upstream connections, to application servers, database servers, and so on, benefit from keepalive connections as well. For upstream connections, you can increase keepalive, the number of idle keepalive connections that remain open for each worker process. This allows for increased connection reuse, cutting the need to open brand-new connections. For more information, see our blog post, HTTP Keepalive Connections and Web Performance.
Limits: limiting the resources that clients use can improve performance and security. For NGINX, the limit_conn and limit_conn_zone directives restrict the number of connections from a given source, while limit_rate constrains bandwidth. These settings can stop a legitimate user from "hogging" resources and also help prevent attacks. The limit_req and limit_req_zone directives limit client requests. For connections to upstream servers, use the max_conns parameter on the server directive in an upstream configuration block; this limits connections to an upstream server, preventing overload. The associated queue directive creates a queue that holds a specified number of requests for a specified length of time after the max_conns limit is reached.
Worker processes: worker processes are responsible for processing requests. NGINX uses an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The recommendation is to set the value of worker_processes to one per CPU. The maximum number of worker_connections (512 by default) can safely be raised on most systems if needed; experiment to find the value that works best for your system.
Socket sharding: typically, a single socket listener distributes new connections to all worker processes. Socket sharding creates a socket listener for each worker process, with the kernel assigning connections to the listeners as they become available. This can reduce lock contention and improve performance on multicore systems. To enable socket sharding, include the reuseport parameter on the listen directive.
Thread pools: any computer process can be held up by a single slow operation. For web server software, disk access can hold up many faster operations, such as calculating or copying information in memory. When a thread pool is used, the slow operation is assigned to a separate set of tasks, while the main processing loop keeps running faster operations. When the disk operation completes, the results go back to the main processing loop. In NGINX, two operations, the read() system call and sendfile(), are offloaded to thread pools.
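Several of the tuning directives above can be combined in one configuration sketch. Everything here is illustrative: the backend address, zone names, and numeric values are assumptions, and the max_conns parameter requires a recent NGINX (or NGINX Plus).

```nginx
worker_processes auto;                  # one worker per CPU core

events {
    worker_connections 4096;            # raised from the default of 512
}

http {
    # Buffered access logging: flush when the buffer fills or once a minute
    access_log /var/log/nginx/access.log combined buffer=32k flush=1m;

    # Client keepalives
    keepalive_requests 1000;
    keepalive_timeout  75s;

    # Limit concurrent connections per client IP (zone name "addr" is made up)
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    upstream app_servers {
        server 10.0.0.2:8080 max_conns=200;  # placeholder address; cap upstream load
        keepalive 32;                   # idle upstream keepalives per worker
    }

    server {
        listen 80 reuseport;            # socket sharding: one listen socket per worker
        limit_conn addr 100;

        location / {
            proxy_pass http://app_servers;
            proxy_http_version 1.1;     # needed for upstream keepalives
            proxy_set_header Connection "";
            proxy_buffers 8 16k;        # response buffering
        }
    }
}
```

As the article's note below advises, change one of these settings at a time and measure before moving on to the next.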
A tip: when changing the settings for any operating system or supporting service, change one setting at a time, then test performance. If the change causes a problem, or doesn't make your site run faster, change it back.
Tip 10-monitor activities to resolve problems and bottlenecks
The key to a high-performance approach to application development and delivery is to observe the actual performance of the application closely and in real time. You must be able to monitor activity within a specific device and across the web infrastructure.
Monitoring site activity is primarily passive-it tells you what happened, and then allows you to find problems and fix them.
Monitoring can capture several different types of problems. They include:
The server is down.
The server is dropping connections.
The server has a high cache miss rate.
The server did not send the correct content.
Global application performance monitoring tools such as New Relic or Dynatrace help you monitor page load times from remote locations, while NGINX helps you monitor the application-delivery side. Application performance data tells you when your optimizations are making a real difference to users, and when you need to consider adding capacity to your infrastructure to sustain the traffic.
To help identify and resolve issues quickly, NGINX Plus adds application-aware health checks: synthetic transactions that are repeated regularly and used to alert you to problems. NGINX Plus also has session draining, which stops new connections while existing tasks complete, and a slow-start capability, allowing a recovered server to come up to speed within a load-balanced group. Used effectively, health checks let you identify issues before they significantly affect the user experience, while session draining and slow start let you replace servers and ensure the process does not negatively affect perceived performance or uptime. The figure shows the built-in NGINX Plus live activity monitoring dashboard for a web infrastructure with servers, TCP connections, and caching.
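Even with open source NGINX, basic activity monitoring is available through the stub_status module, which exposes connection and request counters that external tools can scrape. A minimal sketch (the port and path are arbitrary choices):

```nginx
# Minimal activity-monitoring endpoint using the stub_status module.
server {
    listen 8080;                # internal monitoring port, not public
    location /nginx_status {
        stub_status;            # reports active connections, accepts, requests
        allow 127.0.0.1;        # restrict access to localhost
        deny all;
    }
}
```

Requesting http://127.0.0.1:8080/nginx_status then returns a short plain-text report that monitoring agents can poll.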
Conclusion-performance is improved by 10 times
The performance improvements available for any one web application vary tremendously, and the actual gains depend on your budget, the time you can invest, and the gaps in your existing implementation. So how do you achieve a 10x performance improvement for your own applications?
That concludes the 10 tips for improving web application performance by 10 times. I hope the above is of some help. If you found the article worthwhile, feel free to share it for more people to see.