Nginx proxy cache configuration tips

2025-01-22 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains Nginx proxy cache configuration tips in detail. I think they are very practical, so I am sharing them for your reference; I hope you get something out of this article.

Overview

A content cache sits between the client and the origin server (upstream) and keeps copies of everything it sees. If a client requests content that the cache has stored, the cache returns the content directly without contacting the origin server. This improves performance because the content cache is closer to the client, and it uses the application servers more efficiently because they do not have to generate pages from scratch every time.

There may be multiple caches between the Web browser and the application server: the client's browser cache, intermediate caches, content delivery networks (CDNs), and the load balancer or reverse proxy in front of the application server. Even at the reverse proxy / load balancer level alone, caching can greatly improve performance.

Here is an example. My site uses Next.js server-side rendering, and the server's performance is fairly poor. Of course, it is a $5 server, so you cannot expect much from it; the fact that it is usable at all is already great.

Each page takes about 7 seconds to open, and that includes network latency. But even when I make the request directly on the server (127.0.0.1), the time is still close to 5 seconds. After ruling out the time spent fetching data from the database, server-side rendering alone takes 4.5 seconds, which is far too slow. The fastest fix I could think of was caching, and looking at each step of the request path, adding the cache in Nginx is the quickest way to solve the problem.

Nginx is typically deployed as a reverse proxy or load balancer in the application stack and has a full set of caching capabilities. Next we will discuss how to configure basic caching using Nginx.

How to set and configure basic caching

Only two directives are needed to enable basic caching: proxy_cache_path and proxy_cache.

The proxy_cache_path directive sets the path and configuration of the cache, and the proxy_cache directive activates it.

```nginx
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m
                 max_size=10g inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}
```

The parameters of the proxy_cache_path directive define the following settings:

The local disk directory for the cache is /path/to/cache/.

levels sets up a two-level directory hierarchy under /path/to/cache/. Having a large number of files in a single directory slows down file access, so we recommend a two-level directory hierarchy for most deployments. If the levels parameter is not included, Nginx places all files in the same directory.

keys_zone sets up a shared memory zone for storing the cache keys and metadata, such as usage timers. With a copy of the keys in memory, Nginx can quickly determine whether a request is a HIT or a MISS without going to disk, greatly speeding up the check. A 1 MB zone can store data for about 8,000 keys, so the 10 MB zone configured in the example can store data for about 80,000 keys.

max_size sets the upper limit of the cache size (10 gigabytes in this example). It is optional; not specifying a value allows the cache to grow to use all available disk space. When the cache size reaches the limit, a process called the cache manager removes the least recently used files to bring the size back below the limit.

inactive specifies how long an item can remain in the cache without being accessed. In this example, the cache manager process automatically removes files that have not been requested for 60 minutes, regardless of whether they have expired. The default value is 10 minutes (10m). Inactive content is different from expired content: Nginx does not automatically delete content just because its cache headers define it as expired (for example, Cache-Control: max-age=120). Expired (stale) content is deleted only when it has not been accessed for the time specified by inactive. When stale content is accessed, Nginx refreshes it from the origin server and resets the inactive timer.

Nginx first writes files destined for the cache to a temporary storage area, and use_temp_path=off instructs Nginx to write them directly to the directories where they will be cached. We recommend setting this parameter to off to avoid unnecessary copying of data between file systems. use_temp_path was introduced in Nginx 1.7.10.

Finally, the proxy_cache directive activates caching of everything that matches the URL of the parent location block (/ in the example). You can also include a proxy_cache directive in a server block; it then applies to all location blocks of that server that do not have their own proxy_cache directive.

Provide cached content when the upstream server is down

A powerful feature of Nginx content caching is that Nginx can be configured to deliver stale content from its cache when fresh content cannot be obtained from the origin server. This can happen if all the origin servers for a cached resource are down or temporarily busy.

Instead of passing the error to the client, Nginx delivers the stale version of the file from its cache. This provides additional fault tolerance for the Nginx proxy server and ensures uptime in the event of server failures or traffic spikes. To enable this feature, include the proxy_cache_use_stale directive:

```nginx
location / {
    # ...
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
}
```

With this sample configuration, if Nginx receives an error, a timeout, or any of the specified 5xx errors from the origin server and it has a stale version of the requested file in its cache, it delivers the stale file instead of forwarding the error to the client.

How to improve cache performance

Nginx has a wealth of optional settings for fine-tuning cache performance. Here is an example that activates some of them:

```nginx
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m
                 max_size=10g inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 3;
        proxy_cache_use_stale error timeout updating http_500 http_502
                              http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;
        proxy_pass http://my_upstream;
    }
}
```

These directives configure the following behaviors:

proxy_cache_revalidate instructs Nginx to use conditional GET requests when refreshing content from the origin server. If a client requests an item that is cached but expired as defined by the cache control headers, Nginx includes the If-Modified-Since field in the header of the GET request it sends to the origin server. This saves bandwidth, because the origin server sends the full item only if it has been modified since the time recorded in the Last-Modified header attached to the file when Nginx originally cached it.

proxy_cache_min_uses sets the number of times an item must be requested by clients before Nginx caches it. This is useful if the cache is constantly filling up, because it ensures that only the most frequently accessed items are added to the cache. By default proxy_cache_min_uses is set to 1.

The updating parameter to proxy_cache_use_stale, combined with enabling the proxy_cache_background_update directive, instructs Nginx to deliver stale content when clients request an item that has expired or is currently being updated from the origin server. All updates are done in the background. The stale file is returned for all requests until the updated file is fully downloaded.

With proxy_cache_lock enabled, if multiple clients request a file that is not current in the cache (a MISS), only the first of those requests is passed through to the origin server. The remaining requests wait for that request to be satisfied and then pull the file from the cache. Without proxy_cache_lock, all requests that result in cache misses go straight to the origin server.

Split cache across multiple hard drives

If you have multiple hard drives, you can use Nginx to split the cache between them. The following example distributes requests evenly across two hard drives based on the request URI:

```nginx
proxy_cache_path /path/to/hdd1 levels=1:2 keys_zone=my_cache_hdd1:10m
                 max_size=10g inactive=60m use_temp_path=off;
proxy_cache_path /path/to/hdd2 levels=1:2 keys_zone=my_cache_hdd2:10m
                 max_size=10g inactive=60m use_temp_path=off;

split_clients $request_uri $my_cache {
    50% "my_cache_hdd1";
    50% "my_cache_hdd2";
}

server {
    # ...
    location / {
        proxy_cache $my_cache;
        proxy_pass http://my_upstream;
    }
}
```

The two proxy_cache_path directives define two caches (my_cache_hdd1 and my_cache_hdd2) on two different hard drives.

The split_clients configuration block specifies that half of the requests (50%) are cached in my_cache_hdd1 and the other half in my_cache_hdd2. A hash of the $request_uri variable determines which cache is used for each request, so requests for a given URI are always cached in the same cache.

Note that this method is not a substitute for a RAID setup. If a drive fails, the system may behave unpredictably, including returning 500 response codes for requests directed to the failed drive. A proper RAID configuration is needed to handle drive failures.

How to check the Nginx cache status

You can add the $upstream_cache_status variable to a response header to check it:

```nginx
add_header X-Cache-Status $upstream_cache_status;
```

This example adds an X-Cache-Status HTTP header to responses sent to clients. The following are the possible values of $upstream_cache_status:

MISS - the response was not found in the cache, so it was fetched from the origin server. The response was then cached.

BYPASS - the response was fetched from the origin server instead of the cache because the request matched a proxy_cache_bypass directive.

EXPIRED - the entry in the cache has expired. The response contains fresh content from the origin server.

STALE - stale content was delivered from the cache because the origin server is not responding correctly and proxy_cache_use_stale is configured.

UPDATING - stale content was delivered because the entry is currently being updated in response to a previous request and the updating parameter of proxy_cache_use_stale is configured.

REVALIDATED - the proxy_cache_revalidate directive is enabled and Nginx verified that the currently cached content is still valid (via If-Modified-Since or If-None-Match).

HIT - the response came directly from a valid cache.

How does Nginx determine whether to cache the response

By default, Nginx respects the Cache-Control headers from the origin server. It does not cache responses whose Cache-Control is set to Private, No-Cache, or No-Store, or that have Set-Cookie in the response header. Nginx caches only GET and HEAD client requests. You can override these defaults as described in the answers below.

If proxy_buffering is set to off, Nginx does not cache responses. It is on by default.
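As a sketch (my_cache and my_upstream are the illustrative names used throughout this article), caching only works with buffering left at its default:

```nginx
location / {
    proxy_buffering on;   # the default; setting it to off disables caching
    proxy_cache my_cache;
    proxy_pass http://my_upstream;
}
```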

Can Nginx ignore Cache-Control?

Cache-Control can be ignored using the proxy_ignore_headers directive:

```nginx
location /images/ {
    proxy_cache my_cache;
    proxy_ignore_headers Cache-Control;
    proxy_cache_valid any 30m;
    # ...
}
```

With this configuration, Nginx ignores the Cache-Control header for all content under /images/. The proxy_cache_valid directive enforces an expiration time for the cached data and is required when ignoring Cache-Control headers, because Nginx does not cache files that have no expiration.

Can Nginx ignore Set-Cookie?

Yes, using the same proxy_ignore_headers directive.
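A minimal sketch, assuming the same my_cache zone as the earlier examples. Note that a response cached this way is served to all clients, so only ignore Set-Cookie for content that is genuinely shared:

```nginx
location /images/ {
    proxy_cache my_cache;
    proxy_ignore_headers Set-Cookie;
    proxy_cache_valid any 30m;
    # ...
}
```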

How Nginx caches POST requests

Use the proxy_cache_methods directive:

```nginx
proxy_cache_methods GET HEAD POST;
```

This example enables caching of POST requests.

How Nginx caches dynamic content

Nginx caches dynamic content as long as the Cache-Control header allows it. Caching dynamic content even for a short period can reduce the load on the origin server and database, and it reduces the time to first byte because the page does not have to be regenerated for each request.
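A minimal sketch, assuming the my_cache zone from earlier; the one-second TTL is illustrative, but even such a short lifetime means a busy page is regenerated at most once per second:

```nginx
location / {
    proxy_cache my_cache;
    proxy_cache_valid 200 1s;   # cache successful responses very briefly
    proxy_pass http://my_upstream;
}
```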

How to bypass the Nginx cache

Use the proxy_cache_bypass directive:

```nginx
location / {
    proxy_cache_bypass $cookie_nocache $arg_nocache;
    # ...
}
```

This directive defines the request types for which Nginx requests content from the origin server immediately rather than trying to find it in the cache first. This is sometimes called "punching a hole" through the cache.

What cache key does Nginx use

The default key that Nginx generates is similar to an MD5 hash of the Nginx variables $scheme$proxy_host$request_uri; the actual algorithm used is slightly more complex.

```nginx
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m
                 max_size=10g inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}
```

For this sample configuration, the cache key for http://www.example.org/my_image.jpg is calculated as md5("http://my_upstream:80/my_image.jpg").

Note that the $proxy_host variable is used in the hashed value rather than the actual hostname (www.example.org). $proxy_host is defined as the name and port of the proxied server as specified in the proxy_pass directive.

To change the variables used as the basis for the key, use the proxy_cache_key directive.

Using a cookie as part of the cache key

The cache key can be configured to any value, for example:

```nginx
proxy_cache_key $proxy_host$request_uri$cookie_jessionid;
```

This example merges the value of JSESSIONID cookie into the cache key. Items with the same URI but different JSESSIONID values are cached separately as unique items.

Does Nginx support the ETag header?

Yes. In Nginx 1.7.3 and later, the ETag header is fully supported along with If-None-Match.

How does Nginx handle byte range requests

If the file in the cache is up to date, Nginx honors a byte-range request and delivers only the specified bytes of the item to the client. If the file is not cached, or if it is stale, Nginx downloads the entire file from the origin server.

If the request is for a single byte range, Nginx sends that range to the client as soon as it is encountered in the download stream. If the request specifies multiple byte ranges within the same file, Nginx delivers the entire file to the client when the download completes.

After the download is complete, Nginx moves the entire resource to the cache so that all future byte range requests, whether single or multiple, are immediately satisfied from the cache.

Note that the upstream server must support byte-range requests for Nginx to honor byte-range requests to that upstream server.
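When very large files make the download-the-whole-file behavior a problem, the slice module (ngx_http_slice_module, available since Nginx 1.9.8) can fetch and cache a file in segments instead. A hedged sketch, reusing the my_cache zone from earlier; the 1 MB slice size is illustrative:

```nginx
location / {
    slice 1m;                                       # fetch and cache 1 MB segments
    proxy_cache my_cache;
    proxy_cache_key $uri$is_args$args$slice_range;  # key must include the slice range
    proxy_set_header Range $slice_range;            # request each segment upstream
    proxy_cache_valid 200 206 1h;                   # cache full and partial responses
    proxy_pass http://my_upstream;
}
```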

How does Nginx handle Pragma headers

The Pragma: no-cache header is added by clients to bypass all intermediate caches and go straight to the origin server for the requested content. By default, Nginx does not honor the Pragma header, but you can configure this behavior with the proxy_cache_bypass directive:

```nginx
location /images/ {
    proxy_cache my_cache;
    proxy_cache_bypass $http_pragma;
    # ...
}
```

Does Nginx support the stale-while-revalidate and stale-if-error extensions to Cache-Control?

Yes, in Nginx 1.11.10 and later. What these extensions do:

The stale-while-revalidate extension to the Cache-Control HTTP header permits using a stale cached response while it is being updated. The stale-if-error extension permits using a stale cached response when an error occurs. These headers have lower priority than the proxy_cache_use_stale directive described above.

Does Nginx support Vary headers?

Vary headers are supported in Nginx 1.7.7 and later.

This is the end of this article on Nginx proxy cache configuration tips. I hope the content above is helpful and that you learned something from it. If you think the article is good, please share it so more people can see it.
