This article introduces how to configure nginx page caching. The content is detailed but easy to follow, and the steps are quick to reproduce, so it should serve as a useful reference. Let's take a look.
Page caching of nginx
1. Instruction description
proxy_cache_path
Syntax: proxy_cache_path path [levels=number] keys_zone=zone_name:zone_size [inactive=time] [max_size=size]
Default value: none
Use fields: http
This directive specifies the path to the cache and some related parameters. Cached data is stored in files, and the hash of the proxied URL is used as both the key and the file name. The levels parameter specifies the number of subdirectory levels in the cache, for example:
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m;
The file name is similar to:
/data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c
levels specifies the directory structure; each level may be one or two characters wide, written as values such as "2", "2:2", or "1:1:2", and at most three levels of directories are allowed.
All active keys and metadata are stored in a shared memory zone, which is specified with the keys_zone parameter. In the example above, one is the name of the shared zone and 10m is its size.
Note that each defined memory zone must use a distinct path, for example:
proxy_cache_path /data/nginx/cache/one   levels=1     keys_zone=one:10m;
proxy_cache_path /data/nginx/cache/two   levels=2:2   keys_zone=two:100m;
proxy_cache_path /data/nginx/cache/three levels=1:1:2 keys_zone=three:1000m;
If cached data is not requested within the time specified by the inactive parameter, it is deleted; inactive defaults to 10 minutes. A process called the cache manager controls the size of the cache on disk: it deletes inactive entries and keeps the total size within the limit defined by the max_size parameter. When the cache grows beyond the value given by max_size, the least recently used data (LRU replacement) is deleted. The size of the memory zone should be set in proportion to the number of cached pages; the metadata size of one page (file) depends on the operating system, for example 64 bytes on FreeBSD/i386 and 128 bytes on FreeBSD/amd64.
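As a concrete illustration of these parameters used together, a zone declaration might look like the following sketch; the path, zone name, and size values are placeholders rather than recommendations:

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m inactive=10m max_size=1g;

With this line, entries that are not requested for 10 minutes are removed, and the cache manager keeps the on-disk cache under 1 GB.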
proxy_cache
Syntax: proxy_cache zone_name
Default value: none
Use fields: http, server, location
Sets the name of a cache zone; the same zone can be used in several places.
Since version 0.7.48, the cache honors the backend's "expires", "cache-control: no-cache", and "cache-control: max-age=xxx" header fields; since version 0.7.66, the "cache-control: private" and "no-store" headers are honored as well. Nginx does not process "vary" headers during caching. To make sure private data is not served to all users, the backend must set a "no-cache" or "max-age=0" header, or proxy_cache_key must contain user-specific data such as $cookie_xxx. Using a cookie value as part of proxy_cache_key can prevent private data from being cached, so you can set different proxy_cache_key values in different location blocks to separate private data from public data.
The cache directive depends on the proxy buffers; if proxy buffering is switched off (proxy_buffering off), the cache will not take effect.
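As a minimal sketch of the idea above, the two location blocks below split public and per-user content by cache key; the cookie name user and the /account/ path are hypothetical and only serve to illustrate the technique:

# public pages: key based only on the request (illustrative)
location / {
    proxy_pass http://webservers;
    proxy_cache webserver;
    proxy_cache_key "$scheme$proxy_host$request_uri";
}
# per-user pages: include the cookie so one user's reply is never served to another (illustrative)
location /account/ {
    proxy_pass http://webservers;
    proxy_cache webserver;
    proxy_cache_key "$scheme$proxy_host$request_uri$cookie_user";
}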
proxy_cache_valid
Syntax: proxy_cache_valid reply_code [reply_code ...] time
Default value: none
Use fields: http, server, location
Set different cache times for different replies, for example:
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
This sets the caching time to 10 minutes for reply codes 200 and 302 and to 1 minute for the 404 code.
If only time is defined:
proxy_cache_valid 5m;
Then only responses with codes 200, 301 and 302 are cached.
You can also use the any parameter for any reply.
proxy_cache_valid 200 302 10m;
proxy_cache_valid 301 1h;
proxy_cache_valid any 1m;
2. Define a simple nginx cache server
[root@nginx ~]# vim /etc/nginx/nginx.conf
proxy_cache_path /data/nginx/cache/webserver levels=1:2 keys_zone=webserver:20m max_size=1g;
server {
    listen       80;
    server_name  localhost;
    #charset koi8-r;
    #access_log  logs/host.access.log  main;
    location / {
        proxy_pass http://webservers;
        proxy_set_header x-real-ip $remote_addr;
        proxy_cache webserver;
        proxy_cache_valid 200 10m;
    }
}
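Note that proxy_pass points to an upstream group named webservers, which is not shown in this article. A minimal sketch of such a group, with two hypothetical backend addresses, would look like this (it belongs in the http context alongside the server block):

# hypothetical backend pool referenced by proxy_pass http://webservers;
upstream webservers {
    server 192.168.18.201:80;
    server 192.168.18.202:80;
}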
3. Create a new cache directory
[root@nginx ~]# mkdir -pv /data/nginx/cache/webserver
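The worker processes must be able to write to this directory. Assuming they run as the nginx user (check the user directive in nginx.conf), ownership can be adjusted as follows:

# assumption: worker processes run as user "nginx"
[root@nginx ~]# chown -R nginx:nginx /data/nginx/cache/webserver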
4. Reload the configuration file
[root@nginx webserver]# service nginx reload
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reloading nginx:                                           [  OK  ]
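Equivalently, the configuration can be tested and reloaded with the nginx binary itself; this is just an alternative to the service command used above:

# check the syntax, then signal the master process to reload
[root@nginx ~]# nginx -t && nginx -s reload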
5. Let's test it (Chrome browser)
Note: when testing with Chrome, press F12 to open the developer tools and select the Network tab. There, under Response Headers, we could see whether the request was served from the cache, but at the moment nothing in the response shows it. Let's adjust the configuration and test again.
6. Cache variable description
$server_addr
The address of the server. Determining it normally requires a system call; to avoid that system call, the address must be specified explicitly in the listen directive together with the bind parameter.
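For example, assuming the server address 192.168.18.208 used later in this test, that would look like the following line inside the server block:

# explicit address plus the bind parameter avoids the extra system call
listen 192.168.18.208:80 bind;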
$upstream_cache_status
Available since version 0.8.3; its value may be one of the following:
MISS - missed; the response was not found in the cache.
EXPIRED - expired; the request was passed on to the backend.
UPDATING - expired; the entry is currently being updated, and because of proxy/fastcgi_cache_use_stale updating the old (stale) reply is used.
STALE - expired; because of proxy/fastcgi_cache_use_stale, a stale reply was served from the cache.
HIT - hit; the response was served from the cache.
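Besides exposing this variable in a response header, as the configuration below does, it is also common to record it in the access log. A small sketch, where the format name cache is arbitrary:

# illustrative log format that records the cache status of each request
log_format cache '$remote_addr - [$time_local] "$request" $status $upstream_cache_status';
access_log logs/cache.log cache;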
[root@nginx ~]# vim /etc/nginx/nginx.conf
proxy_cache_path /data/nginx/cache/webserver levels=1:2 keys_zone=webserver:20m max_size=1g;
server {
    listen       80;
    server_name  localhost;
    #charset koi8-r;
    #access_log  logs/host.access.log  main;
    # add two headers
    add_header x-via   $server_addr;
    add_header x-cache $upstream_cache_status;
    location / {
        proxy_pass http://webservers;
        proxy_set_header x-real-ip $remote_addr;
        proxy_cache webserver;
        proxy_cache_valid 200 10m;
    }
}
7. Reload the configuration file
[root@nginx ~]# service nginx reload
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reloading nginx:                                           [  OK  ]
8. Test it
Note: as shown in the screenshot, the server we visited was 192.168.18.208 and the request was a cache hit (x-cache: HIT), which is very easy to read at a glance.
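If you prefer the command line to the browser, the same check can be done with curl; this is only a sketch, assuming the server address 192.168.18.208 from the test above, and on a repeated request the x-cache header should change from MISS (or EXPIRED) to HIT:

# print the response headers of a GET request and discard the body
[root@nginx ~]# curl -s -D - -o /dev/null http://192.168.18.208/
# run the same command a second time and compare the x-cache header

Next, let's take a look at the cache directory.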
9. Check the cache directory
[root@nginx ~]# cd /data/nginx/cache/webserver/f/63/
[root@nginx 63]# ls
681ad4c77694b65d61c9985553a2763f
Note that there are cache files in the cache directory.
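The file name is the MD5 hash of the cache key (by default $scheme$proxy_host$request_uri). With levels=1:2, the last character of the hash names the first-level directory and the next two characters name the second-level directory, which is why this file lands under f/63/. As a sketch, assuming the cached page was the site root requested through the proxy, the hash could be reproduced like this (illustrative only):

# illustrative: the md5 of the cache key gives the cache file name
[root@nginx ~]# echo -n "http://webservers/" | md5sum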
This is the end of the article on "how to configure nginx page cache". Thank you for reading! You should now have a working understanding of how to configure nginx page caching.