This article introduces several configuration schemes for the Nginx cache and explains how to solve a related memory occupation problem. The notes below collect simple, practical methods; I hope they help clear up the common questions about Nginx caching and the memory usage it can cause.
Five schemes for nginx caching
1. Traditional cache, scheme one (404)
The way to do this is to direct nginx's 404 error to the backend, and then use proxy_store to save the page returned by the backend.
Configuration:
location / {
    root /home/html/;                       # home directory
    expires 1d;                             # expiration time of the web pages
    error_page 404 = /fetch$request_uri;    # 404 is directed to the /fetch location
}
location /fetch/ {                          # 404 requests land here
    internal;                               # this location cannot be accessed directly from outside
    expires 1d;                             # expiration time of the web pages
    alias /home/html/;                      # file system address matching location /; proxy_store saves files into this directory
    proxy_pass http://www.jb51.net/;        # backend upstream address; /fetch acts as a proxy as well
    proxy_set_header accept-encoding '';    # tell the backend not to return compressed (gzip or deflate) content; saving compressed content would cause chaos
    proxy_store on;                         # tells nginx to save the files returned by the proxy
    proxy_temp_path /home/tmp;              # temporary directory; should be on the same hard disk partition as /home/html
}
When using this, note that nginx must have permission to write files under /home/tmp and /home/html. Under Linux, nginx is usually configured to run as the nobody user, so these two directories should be chown'd to nobody. Of course chmod 777 would also work, but any experienced system administrator will advise against using 777 casually.
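As a minimal sketch of the permission setup just described, assuming nginx runs as the nobody user and the paths match the configuration above (the group name may differ by distribution):

# give the nginx worker user write access to the store and temp directories
chown -R nobody:nobody /home/html /home/tmp
# the more permissive alternative the article warns against using casually:
# chmod -R 777 /home/html /home/tmp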
2. Traditional cache, scheme two (!-e)
The principle is basically the same as the 404 jump, but more concise:
location / {
    root /home/html/;
    proxy_store on;
    proxy_set_header accept-encoding '';
    proxy_temp_path /home/tmp;
    if (!-f $request_filename) {
        proxy_pass http://www.jb51.net/;
    }
}
You can see that this configuration saves a lot of code compared with the 404 scheme. It uses !-f to check whether the requested file exists on the file system; if it does not, the request is proxy_passed to the backend, and the response is again saved with proxy_store.
Both traditional caches have basically the same advantages and disadvantages:
Disadvantage 1: dynamic links with parameters, such as read.php?id=1, are not supported, because nginx only saves the file name. This link would be saved on the file system simply as read.php, so a user visiting read.php?id=2 would get the wrong result. Likewise the home page and second-level directories, in forms such as //www.jb51.net/ and //www.jb51.net/download/, are not supported, because nginx very honestly writes the request to the file system exactly as the link reads, and such a link is obviously a directory, so the save fails. These cases need a rewrite in order to be saved correctly.
Disadvantage 2: nginx has no mechanism for cache expiration or cleanup. The cached files are stored on the machine permanently, and if there is a lot to cache they will eventually fill the whole disk. For this you can use a shell script to clean up periodically (see the sketch after this list), or write a dynamic program such as PHP to do real-time updates.
Disadvantage 3: only the 200 status code can be cached, so non-200 status codes returned by the backend are not cached. If a pseudo-static link with a large number of visits is deleted, requests for it will keep penetrating through to the backend and put considerable pressure on it.
Disadvantage 4: nginx does not automatically choose between memory and disk as the storage medium; everything is decided by the configuration. Of course, current operating systems have OS-level file caching, so there is no need to worry too much about the I/O cost of heavily concurrent reads from the disk.
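For the periodic cleanup mentioned in disadvantage 2, a minimal cron-able shell sketch might look like the following; the path and the one-day expiry are assumptions chosen to match the expires 1d used in the examples above:

#!/bin/sh
# remove cached files that have not been modified for more than one day
find /home/html -type f -mtime +1 -delete
# remove directories left empty once their files are gone
find /home/html -mindepth 1 -type d -empty -delete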
The weaknesses of nginx's traditional cache are also what distinguish it from caching software such as squid, so they can equally be seen as strengths. In production it is often paired with squid: squid usually cannot block links carrying a ?, whereas nginx can. For example, http://jb51.net/? and http://jb51.net/ are treated as two different links by squid, causing two penetrations, while nginx only saves the page once; whether the link becomes http://jb51.net/?1 or http://jb51.net/?123, it cannot pass through the nginx cache, which effectively protects the backend host.
nginx saves links to the file system very faithfully, so for any link you can easily check its cache state and content on the cache machine, and the cache can easily be handled with other tools such as rsync; it is a plain file system structure through and through.
Both of these traditional caches can save files to /dev/shm under Linux, and I usually do just that, so system memory is used for the cache and cleaning up expired content becomes much faster. When using /dev/shm/, besides pointing the tmp directory at this partition, if there are large numbers of small files and directories you also need to adjust the number of inodes and the maximum capacity of this memory partition:
mount -o size=2500m -o nr_inodes=480000 -o noatime,nodiratime -o remount /dev/shm
The command above is for a machine with 3 GB of memory. By default the maximum size of /dev/shm is half of system memory, i.e. 1500 MB here, and this command raises it to 2500 MB. The default number of shm inodes may well not be enough, but conveniently it can be adjusted at will; 480,000 is a somewhat conservative choice here, yet basically sufficient.
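The remount above only lasts until the next reboot. As a hedged sketch, the same options could be recorded in /etc/fstab so the enlarged /dev/shm survives a reboot (the exact tmpfs line may differ by distribution):

# append a tmpfs entry for /dev/shm, then re-read the options
echo 'tmpfs /dev/shm tmpfs size=2500m,nr_inodes=480000,noatime,nodiratime 0 0' >> /etc/fstab
mount -o remount /dev/shm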
3. Memcached-based caching
nginx has built-in support for memcached; the functionality is not particularly rich, but the performance is still very good.
location /mem/ {
    if ($uri ~ "^/mem/([0-9a-zA-Z]*)$") {
        set $memcached_key "$1";
        memcached_pass 192.168.1.2:11211;
    }
    expires 70;
}
This configuration will point the http://jb51.net/mem/abc to the abc key of memcached to fetch the data.
nginx currently has no mechanism for writing to memcached, so writing data into memcached has to be done by a dynamic language in the backend; cache misses can be directed to the backend so that it writes the data.
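Purely as a hedged illustration of what that backend write amounts to, the abc key used in the example above could be populated by hand with the memcached text protocol; the value, the 70-second expiry and the host/port are assumptions taken from the snippet, and the -q flag depends on the nc variant installed:

# "set <key> <flags> <exptime> <bytes>" followed by the value, per the memcached text protocol
printf 'set abc 0 70 11\r\nhello world\r\n' | nc -q 1 192.168.1.2 11211
# a subsequent request to http://jb51.net/mem/abc served by nginx would now return "hello world"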
4. Caching based on the third-party plug-in ncache
ncache is a good project developed by people at Sina. It uses nginx and memcached to implement some caching functions similar to squid's. I have no experience with this plug-in myself; you can refer to:
http://code.google.com/p/ncache/
5. nginx's newly developed proxy_cache function
Starting from version nginx-0.7.44, nginx supports a more formal cache function similar to squid's. It is still at the development stage and the support is fairly limited. This cache stores each link under the md5 hash of its URL, so it can support any link, and it also supports non-200 statuses such as 404, 301 and 302.
Configuration:
First configure a cache space:
proxy_cache_path /path/to/cache levels=1:2 keys_zone=name:10m inactive=5m max_size=2m clean_time=1m;
Note that this configuration goes outside the server block. levels specifies that the cache space has two levels of hash directories: the first-level directory name is 1 character and the second-level name is 2 characters, so a saved file name will look like /path/to/cache/c/29/b7f54b2df7773722d382f4809d65029c. keys_zone gives this space a name, and 10m is the size of that zone. inactive=5m means the default cache time is 5 minutes, max_size=2m means a single file larger than 2 MB is not cached, and clean_time specifies that the cache is cleaned once a minute.
location / {
    proxy_pass http://www.jb51.net/;
    proxy_cache name;                  # use the keys_zone named "name"
    proxy_cache_valid 200 302 1h;      # 200 and 302 responses are kept for 1 hour
    proxy_cache_valid 301 1d;          # 301 responses are kept for 1 day
    proxy_cache_valid any 1m;          # everything else is kept for 1 minute
}
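A convenient property of this scheme is that the on-disk location follows from the md5 of the cache key. Below is a hedged sketch for locating a cached object, assuming the cache key resolves to the full backend URL (the default key differs between versions; this only illustrates the md5-to-path mapping, and the URL is hypothetical):

key='http://www.jb51.net/index.html'               # hypothetical cache key
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
# with levels=1:2 the file lives under <last char>/<previous two chars> of the hash
echo "/path/to/cache/${hash: -1}/${hash: -3:2}/${hash}"
# cache files also carry a plain-text "KEY: ..." header, so grep can find them as well
grep -ral "KEY: $key" /path/to/cache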
PS: versions 0.7.44 through 0.7.51, which support this cache, have stability problems and produce errors when accessing some links, so it is best not to use these versions in a production environment. The known stable version in the nginx-0.7 line is 0.7.39. The stable 0.6.36 is also a recent release, and 0.6.36 can be used as well if the configuration does not rely on the new directives and features of 0.7.
A general solution to the nginx cache memory occupation problem
1. A certain service got hammered a few days ago, reaching millions of requests per minute. nginx cache was used to absorb it at the time, but because this service could not be cached for long, the cache time was set to 5 s. The result was a huge number of small files being created and quickly deleted.
2. Running free -m, you will find that "used" is 27 GB; but looking at the processes with top does not account for that much memory.
So where's the memory?
3. Consulting the data (cat /proc/meminfo), you will find:
Slab: 22464312 kB
SReclaimable: 16474128 kB (these are inode and dentry caches kept by the kernel, which can be released)
SUnreclaim: 5990184 kB
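To see which slab caches are consuming the memory and to confirm that the reclaimable part really can be freed, the following standard commands can be used (dropping caches is safe, but the kernel will have to re-read the data later):

slabtop -o | head -15                  # one-shot list of the largest slab caches (dentry, inode caches)
sync                                   # flush dirty data first
echo 2 > /proc/sys/vm/drop_caches      # ask the kernel to reclaim dentries and inodes now
free -m                                # the "used" figure should drop accordingly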
4. Why isn't this memory cleaned automatically?
System version of the machines in one data center: Linux 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux (behaves normally; memory never gets close to 100%)
System version of the machines in another data center: Linux 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 GNU/Linux (does not release the memory)
5. Set a memory threshold with the following parameters:
sysctl -w vm.extra_free_kbytes=6436787
sysctl -w vm.vfs_cache_pressure=10000
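sysctl -w only changes the running kernel. As a hedged follow-up, the same thresholds can be written to /etc/sysctl.conf so they survive a reboot; note that vm.extra_free_kbytes is only available on some kernels, such as the CentOS 6 kernels mentioned above:

# persist the two thresholds, then reload them
cat >> /etc/sysctl.conf <<'EOF'
vm.extra_free_kbytes = 6436787
vm.vfs_cache_pressure = 10000
EOF
sysctl -p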