2025-04-04 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains how nginx implements its shared memory mechanism. The content is meant to be easy to understand and clearly organized, and hopefully resolves your doubts about how nginx manages shared memory.
Shared memory is one of the main reasons for nginx's high performance, and it is used chiefly for caching files. This article first shows how shared memory is used, and then explains how nginx implements its management.
1. Usage example
Shared memory is declared in nginx with the proxy_cache_path directive:
proxy_cache_path /Users/Mike/nginx-cache levels=1:2 keys_zone=one:10m max_size=10g inactive=60m use_temp_path=off;
This declares a cache zone named one whose on-disk cache may grow to at most 10g. The meanings of the parameters are as follows:
/Users/Mike/nginx-cache: the path parameter, which specifies where the cached files are stored on disk. The files exist because nginx can write the response received from an upstream service to a file; when the same request arrives later, nginx can answer it from this file, or from the cache in shared memory, without contacting the upstream again.
levels: on a Linux system, if all cached files are placed in a single folder, file lookup becomes slow once the number of files is very large; spreading them over multiple folders avoids this. The levels parameter specifies how those folders are generated. Suppose nginx names the file for some upstream response e0bd86606797639426a92306b1b98ad9; with levels=1:2 it takes characters from the end of the file name, 1 character (that is, 9) as the first-level subdirectory name and the next 2 characters (that is, ad) as the second-level subdirectory name.
keys_zone: this parameter names the current shared memory zone, here one; the 10m that follows is the size of the shared memory used to store cache keys.
max_size: this parameter specifies the maximum size the on-disk cache may grow to (it is not a memory limit).
inactive: this parameter specifies how long a cached entry may survive without being accessed; if no request touches it during this period, it is evicted by the LRU algorithm.
use_temp_path: this parameter specifies whether generated files are first written to a temporary folder and only later moved to the target folder.
2. Working principle
The management of shared memory is divided into several parts as shown in the following figure:
As can be seen, the work falls into initialization, management of shared memory, loading of shared memory, and use of shared memory. During initialization, the proxy_cache_path directive is parsed first, and then the cache manager and cache loader processes are started. The cache manager process is responsible for managing shared memory: it evicts expired data with the LRU algorithm and, when resources are tight, forcibly deletes unreferenced entries. The cache loader process runs once after nginx starts and loads the files already present in the cache directory into shared memory. The use of shared memory, namely caching response data while handling requests, will be covered in later articles; this article explains the working principle of the first three parts.
According to the above division, the management of shared memory can be divided into three parts (the use of shared memory will be explained later). The following is a schematic diagram of the processing flow of these three parts:
As the flow chart shows, the main process parses the proxy_cache_path directive and then starts the cache manager and cache loader processes. The cache manager process does two things: 1. it checks whether the element at the tail of the queue has expired, and if it has expired and its reference count is 0, deletes it and its file; 2. it checks whether shared memory is tight, and if so deletes all elements whose reference count is 0, together with their files, whether they have expired or not. The cache loader process recursively walks the cache directory and its subdirectories and loads the files it finds into shared memory. Note that the cache manager process starts the next loop each time it finishes traversing all shared memory blocks, while the cache loader process runs once, 60 seconds after nginx starts, and then exits.
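The cache manager behavior just described, evicting expired unreferenced entries from the tail of an LRU queue while keeping still-referenced ones alive, can be modeled with a toy queue. Everything below is a hypothetical sketch with made-up names, not nginx source:

```c
#include <string.h>

#define MAX_ENTRIES 8

typedef struct {
    int expire;   /* absolute expiry time, in seconds */
    int count;    /* reference count */
} entry_t;

typedef struct {
    entry_t e[MAX_ENTRIES];
    int     n;    /* e[0] is the head (most recent), e[n-1] the tail */
} queue_t;

/* One expire pass at time `now`; `inactive` is the refreshed lifetime.
 * Returns how many entries were evicted. */
static int expire_pass(queue_t *q, int now, int inactive)
{
    int evicted = 0;

    while (q->n > 0) {
        entry_t tail = q->e[q->n - 1];

        if (tail.expire > now) {
            break;                      /* tail not expired: nothing to do */
        }

        if (tail.count == 0) {
            q->n--;                     /* expired and unreferenced: evict */
            evicted++;
            continue;
        }

        /* expired but still referenced: refresh its expiry time and
         * move it to the head of the queue */
        q->n--;
        memmove(&q->e[1], &q->e[0], q->n * sizeof(entry_t));
        tail.expire = now + inactive;
        q->e[0] = tail;
        q->n++;
    }

    return evicted;
}
```

A refreshed entry gets expire = now + inactive, so the loop always terminates: every tail it revisits is either fresh or strictly older than the entries already moved to the head.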
3. Source code interpretation
3.1 proxy_cache_path instruction parsing
For every directive it parses, nginx defines a ngx_command_t structure in the corresponding module; its set field specifies the method used to parse that directive. The following is the ngx_command_t definition for proxy_cache_path:
static ngx_command_t  ngx_http_proxy_commands[] = {

    { ngx_string("proxy_cache_path"),   /* the name of the directive */
      /* the directive may only appear in the main http block and must
       * have at least two parameters */
      NGX_HTTP_MAIN_CONF|NGX_CONF_2MORE,
      ngx_http_file_cache_set_slot,     /* the set() parsing method */
      NGX_HTTP_MAIN_CONF_OFFSET,
      offsetof(ngx_http_proxy_main_conf_t, caches),
      &ngx_http_proxy_module },

    ...
};
As you can see, the parsing method for this directive is ngx_http_file_cache_set_slot(). Let's read its source code directly:
static char *
ngx_http_file_cache_set_slot(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    char  *confp = conf;

    off_t                   max_size;
    u_char                 *last, *p;
    time_t                  inactive;
    ssize_t                 size;
    ngx_str_t               s, name, *value;
    ngx_int_t               loader_files, manager_files;
    ngx_msec_t              loader_sleep, manager_sleep, loader_threshold,
                            manager_threshold;
    ngx_uint_t              i, n, use_temp_path;
    ngx_array_t            *caches;
    ngx_http_file_cache_t  *cache, **ce;

    cache = ngx_pcalloc(cf->pool, sizeof(ngx_http_file_cache_t));
    if (cache == NULL) {
        return NGX_CONF_ERROR;
    }

    cache->path = ngx_pcalloc(cf->pool, sizeof(ngx_path_t));
    if (cache->path == NULL) {
        return NGX_CONF_ERROR;
    }

    /* initialize the default value of every attribute */
    use_temp_path = 1;

    inactive = 600;

    loader_files = 100;
    loader_sleep = 50;
    loader_threshold = 200;

    manager_files = 100;
    manager_sleep = 50;
    manager_threshold = 200;

    name.len = 0;
    size = 0;
    max_size = NGX_MAX_OFF_T_VALUE;

    /* sample configuration:
     * proxy_cache_path /Users/Mike/nginx-cache levels=1:2 keys_zone=one:10m
     *                  max_size=10g inactive=60m use_temp_path=off;
     * the tokens of the directive are stored in cf->args->elts; a token
     * is a character fragment separated by spaces */
    value = cf->args->elts;

    /* value[1] is the first parameter of the configuration, i.e. the root
     * path where the cache files will be saved */
    cache->path->name = value[1];

    if (cache->path->name.data[cache->path->name.len - 1] == '/') {
        cache->path->name.len--;
    }

    if (ngx_conf_full_name(cf->cycle, &cache->path->name, 0) != NGX_OK) {
        return NGX_CONF_ERROR;
    }

    /* parse the remaining parameters, starting from the third token */
    for (i = 2; i < cf->args->nelts; i++) {

        /* if the parameter begins with "levels=", parse the levels value */
        if (ngx_strncmp(value[i].data, "levels=", 7) == 0) {

            p = value[i].data + 7;                /* position after "levels=" */
            last = value[i].data + value[i].len;  /* one past the last char */

            /* parse a value such as 1:2 */
            for (n = 0; n < NGX_MAX_PATH_LEVEL && p < last; n++) {

                if (*p > '0' && *p < '3') {

                    /* read the current level value, e.g. the 1 and the 2 */
                    cache->path->level[n] = *p++ - '0';
                    cache->path->len += cache->path->level[n] + 1;

                    if (p == last) {
                        break;
                    }

                    /* if the current character is a colon, continue with the
                     * next level; NGX_MAX_PATH_LEVEL is 3, i.e. levels may
                     * define at most three levels of subdirectories */
                    if (*p++ == ':' && n < NGX_MAX_PATH_LEVEL - 1 && p < last)
                    {
                        continue;
                    }

                    goto invalid_levels;
                }

                goto invalid_levels;
            }

            if (cache->path->len < 10 + NGX_MAX_PATH_LEVEL) {
                continue;
            }

        invalid_levels:

            ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                               "invalid \"levels\" \"%V\"", &value[i]);
            return NGX_CONF_ERROR;
        }

        /* if the parameter begins with "use_temp_path=", parse it; its value
         * is "on" or "off" and controls whether cache files are first written
         * to a temporary folder and then moved to the target folder; if it is
         * off, files are written directly to the target folder */
        if (ngx_strncmp(value[i].data, "use_temp_path=", 14) == 0) {

            /* if it is "on", mark use_temp_path as 1 */
            if (ngx_strcmp(&value[i].data[14], "on") == 0) {
                use_temp_path = 1;

            /* if it is "off", mark use_temp_path as 0 */
            } else if (ngx_strcmp(&value[i].data[14], "off") == 0) {
                use_temp_path = 0;

            /* anything else is a parsing error */
            } else {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                                   "invalid use_temp_path value \"%V\", "
                                   "it must be \"on\" or \"off\"",
                                   &value[i]);
                return NGX_CONF_ERROR;
            }

            continue;
        }

        /* if the parameter begins with "keys_zone=", parse it; its form is
         * keys_zone=one:10m, where "one" is a name used by later location
         * configuration and 10m is the size of the memory used to store
         * cache keys */
        if (ngx_strncmp(value[i].data, "keys_zone=", 10) == 0) {

            name.data = value[i].data + 10;

            p = (u_char *) ngx_strchr(name.data, ':');

            if (p) {
                /* name records the zone name, i.e. "one" here */
                name.len = p - name.data;

                p++;

                /* parse the specified size */
                s.len = value[i].data + value[i].len - p;
                s.data = p;

                /* the size is converted to bytes and must be larger
                 * than 8191 */
                size = ngx_parse_size(&s);
                if (size > 8191) {
                    continue;
                }
            }

            ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                               "invalid keys zone size \"%V\"", &value[i]);
            return NGX_CONF_ERROR;
        }

        /* if the parameter begins with "inactive=", parse it; its form is
         * inactive=60m and defines how long a cached file may go unaccessed
         * before it expires */
        if (ngx_strncmp(value[i].data, "inactive=", 9) == 0) {

            s.len = value[i].len - 9;
            s.data = value[i].data + 9;

            /* the time value is converted to a length in seconds */
            inactive = ngx_parse_time(&s, 1);
            if (inactive == (time_t) NGX_ERROR) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                                   "invalid inactive value \"%V\"", &value[i]);
                return NGX_CONF_ERROR;
            }

            continue;
        }

        /* if the parameter begins with "max_size=", parse it; its form is
         * max_size=10g and defines the maximum space the cache may use */
        if (ngx_strncmp(value[i].data, "max_size=", 9) == 0) {

            s.len = value[i].len - 9;
            s.data = value[i].data + 9;

            /* the parsed value is converted to bytes */
            max_size = ngx_parse_offset(&s);
            if (max_size < 0) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                                   "invalid max_size value \"%V\"", &value[i]);
                return NGX_CONF_ERROR;
            }

            continue;
        }

        /* if the parameter begins with "loader_files=", parse it; its form is
         * loader_files=100 and defines how many files from the cache
         * directory are loaded into the cache per iteration at startup */
        if (ngx_strncmp(value[i].data, "loader_files=", 13) == 0) {

            /* parse the loader_files value */
            loader_files = ngx_atoi(value[i].data + 13, value[i].len - 13);
            if (loader_files == NGX_ERROR) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                               "invalid loader_files value \"%V\"", &value[i]);
                return NGX_CONF_ERROR;
            }

            continue;
        }

        /* if the parameter begins with "loader_sleep=", parse it; its form is
         * loader_sleep=10s and defines how long to sleep after loading one
         * file before loading the next */
        if (ngx_strncmp(value[i].data, "loader_sleep=", 13) == 0) {

            s.len = value[i].len - 13;
            s.data = value[i].data + 13;

            /* the loader_sleep value is converted to milliseconds */
            loader_sleep = ngx_parse_time(&s, 0);
            if (loader_sleep == (ngx_msec_t) NGX_ERROR) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                               "invalid loader_sleep value \"%V\"", &value[i]);
                return NGX_CONF_ERROR;
            }

            continue;
        }

        /* if the parameter begins with "loader_threshold=", parse it; its
         * form is loader_threshold=10s and defines the maximum time one
         * loading iteration may take */
        if (ngx_strncmp(value[i].data, "loader_threshold=", 17) == 0) {

            s.len = value[i].len - 17;
            s.data = value[i].data + 17;

            /* the loader_threshold value is converted to milliseconds */
            loader_threshold = ngx_parse_time(&s, 0);
            if (loader_threshold == (ngx_msec_t) NGX_ERROR) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                           "invalid loader_threshold value \"%V\"", &value[i]);
                return NGX_CONF_ERROR;
            }

            continue;
        }

        /* if the parameter begins with "manager_files=", parse it; its form
         * is manager_files=100: when cache space is exhausted, files are
         * deleted by the LRU algorithm, but each iteration deletes at most
         * manager_files files */
        if (ngx_strncmp(value[i].data, "manager_files=", 14) == 0) {

            /* parse the manager_files value */
            manager_files = ngx_atoi(value[i].data + 14, value[i].len - 14);
            if (manager_files == NGX_ERROR) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                              "invalid manager_files value \"%V\"", &value[i]);
                return NGX_CONF_ERROR;
            }

            continue;
        }

        /* if the parameter begins with "manager_sleep=", parse it; its form
         * is manager_sleep=1s and defines how long to sleep after each
         * iteration */
        if (ngx_strncmp(value[i].data, "manager_sleep=", 14) == 0) {

            s.len = value[i].len - 14;
            s.data = value[i].data + 14;

            /* parse the manager_sleep value */
            manager_sleep = ngx_parse_time(&s, 0);
            if (manager_sleep == (ngx_msec_t) NGX_ERROR) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                              "invalid manager_sleep value \"%V\"", &value[i]);
                return NGX_CONF_ERROR;
            }

            continue;
        }

        /* if the parameter begins with "manager_threshold=", parse it; its
         * form is manager_threshold=2s and caps how long one file-deletion
         * iteration may take */
        if (ngx_strncmp(value[i].data, "manager_threshold=", 18) == 0) {

            s.len = value[i].len - 18;
            s.data = value[i].data + 18;

            /* the manager_threshold value is converted to milliseconds */
            manager_threshold = ngx_parse_time(&s, 0);
            if (manager_threshold == (ngx_msec_t) NGX_ERROR) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                          "invalid manager_threshold value \"%V\"", &value[i]);
                return NGX_CONF_ERROR;
            }

            continue;
        }

        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                           "invalid parameter \"%V\"", &value[i]);
        return NGX_CONF_ERROR;
    }

    if (name.len == 0 || size == 0) {
        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                           "\"%V\" must have \"keys_zone\" parameter",
                           &cmd->name);
        return NGX_CONF_ERROR;
    }

    /* cache->path->manager and cache->path->loader are function pointers.
     * After nginx starts, two separate processes are spawned: the cache
     * manager, which in a loop keeps invoking the method specified by
     * cache->path->manager for every shared memory zone to clean up the
     * cache, and the cache loader, which runs only once, 60 seconds after
     * nginx starts, invoking the method specified by cache->path->loader;
     * that method loads existing file data into the shared memory */
    cache->path->manager = ngx_http_file_cache_manager;
    cache->path->loader = ngx_http_file_cache_loader;
    cache->path->data = cache;
    cache->path->conf_file = cf->conf_file->file.name.data;
    cache->path->line = cf->conf_file->line;
    cache->loader_files = loader_files;
    cache->loader_sleep = loader_sleep;
    cache->loader_threshold = loader_threshold;
    cache->manager_files = manager_files;
    cache->manager_sleep = manager_sleep;
    cache->manager_threshold = manager_threshold;

    /* add the current path to the cycle; these paths are checked later and
     * created if they do not exist */
    if (ngx_add_path(cf, &cache->path) != NGX_OK) {
        return NGX_CONF_ERROR;
    }

    /* add the current shared memory to the list in cf->cycle->shared_memory */
    cache->shm_zone = ngx_shared_memory_add(cf, &name, size, cmd->post);
    if (cache->shm_zone == NULL) {
        return NGX_CONF_ERROR;
    }

    if (cache->shm_zone->data) {
        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                           "duplicate zone \"%V\"", &name);
        return NGX_CONF_ERROR;
    }

    /* the initialization method of the shared memory zone, executed when the
     * master process starts */
    cache->shm_zone->init = ngx_http_file_cache_init;
    cache->shm_zone->data = cache;

    cache->use_temp_path = use_temp_path;

    cache->inactive = inactive;
    cache->max_size = max_size;

    caches = (ngx_array_t *) (confp + cmd->offset);

    ce = ngx_array_push(caches);
    if (ce == NULL) {
        return NGX_CONF_ERROR;
    }

    *ce = cache;

    return NGX_CONF_OK;
}
As the code above shows, the proxy_cache_path parsing method mainly initializes a ngx_http_file_cache_t structure, whose attributes are filled in by parsing the parameters of proxy_cache_path.
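As seen in the parsing code, sizes such as keys_zone=one:10m and max_size=10g are converted to byte counts by ngx_parse_size()/ngx_parse_offset(). A simplified, hypothetical sketch of such a conversion (the real nginx parsers also handle overflow, and ngx_parse_size() accepts only k and m) might look like:

```c
#include <stdlib.h>

/* Hypothetical simplification of ngx_parse_offset(): convert a value such as
 * "10m" or "10g" into a byte count. Returns -1 on malformed input. */
static long long parse_size(const char *s)
{
    char *end;
    long long n = strtoll(s, &end, 10);

    if (end == s || n < 0) {
        return -1;                      /* no digits, or negative */
    }

    switch (*end) {
    case '\0':
        return n;                       /* plain byte count */
    case 'k': case 'K':
        n <<= 10;                       /* kilobytes */
        break;
    case 'm': case 'M':
        n <<= 20;                       /* megabytes */
        break;
    case 'g': case 'G':
        n <<= 30;                       /* gigabytes */
        break;
    default:
        return -1;
    }

    return end[1] == '\0' ? n : -1;     /* reject trailing garbage */
}
```

Under this scheme "10m" becomes 10485760 bytes, which also explains the 8191 check above: a keys_zone smaller than 8 KB would not hold even the zone's bookkeeping structures.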
3.2 Starting the cache manager and cache loader processes
The entry point of the nginx program is the main() method in nginx.c. If the master-worker process mode is enabled, execution eventually reaches the ngx_master_process_cycle() method, which first starts the worker processes that receive client requests, then starts the cache manager and cache loader processes, and finally enters an infinite loop to handle the signals the user sends to nginx from the command line. The following is the source code that starts the cache manager and cache loader processes:
void
ngx_master_process_cycle(ngx_cycle_t *cycle)
{
    ...

    /* get the configuration of the core module */
    ccf = (ngx_core_conf_t *) ngx_get_conf(cycle->conf_ctx, ngx_core_module);

    /* start the worker processes */
    ngx_start_worker_processes(cycle, ccf->worker_processes,
                               NGX_PROCESS_RESPAWN);

    /* start the cache processes */
    ngx_start_cache_manager_processes(cycle, 0);

    ...
}
As you can see, the cache manager and cache loader processes are started in the ngx_start_cache_manager_processes() method, whose source code is as follows:
static void
ngx_start_cache_manager_processes(ngx_cycle_t *cycle, ngx_uint_t respawn)
{
    ngx_uint_t     i, manager, loader;
    ngx_path_t   **path;
    ngx_channel_t  ch;

    manager = 0;
    loader = 0;

    path = ngx_cycle->paths.elts;
    for (i = 0; i < ngx_cycle->paths.nelts; i++) {

        /* check whether any path has manager set to 1 */
        if (path[i]->manager) {
            manager = 1;
        }

        /* check whether any path has loader set to 1 */
        if (path[i]->loader) {
            loader = 1;
        }
    }

    /* if no path enables the manager, return directly */
    if (manager == 0) {
        return;
    }

    /* create a process that runs the loop in
     * ngx_cache_manager_process_cycle(); note that the second argument
     * passed to that method here is ngx_cache_manager_ctx */
    ngx_spawn_process(cycle, ngx_cache_manager_process_cycle,
                      &ngx_cache_manager_ctx, "cache manager process",
                      respawn ? NGX_PROCESS_JUST_RESPAWN : NGX_PROCESS_RESPAWN);

    ngx_memzero(&ch, sizeof(ngx_channel_t));

    /* build a ch structure to broadcast the creation of the new process */
    ch.command = NGX_CMD_OPEN_CHANNEL;
    ch.pid = ngx_processes[ngx_process_slot].pid;
    ch.slot = ngx_process_slot;
    ch.fd = ngx_processes[ngx_process_slot].channel[0];

    /* broadcast the creation of the cache manager process */
    ngx_pass_open_channel(cycle, &ch);

    if (loader == 0) {
        return;
    }

    /* create a process that runs ngx_cache_manager_process_cycle(); note
     * that this time the second argument is ngx_cache_loader_ctx */
    ngx_spawn_process(cycle, ngx_cache_manager_process_cycle,
                      &ngx_cache_loader_ctx, "cache loader process",
                      respawn ? NGX_PROCESS_JUST_SPAWN : NGX_PROCESS_NORESPAWN);

    /* build a ch structure to broadcast the creation of the new process */
    ch.command = NGX_CMD_OPEN_CHANNEL;
    ch.pid = ngx_processes[ngx_process_slot].pid;
    ch.slot = ngx_process_slot;
    ch.fd = ngx_processes[ngx_process_slot].channel[0];

    /* broadcast the creation of the cache loader process */
    ngx_pass_open_channel(cycle, &ch);
}
The above code is fairly simple: it first checks whether any path requires a cache manager or a cache loader, and starts the corresponding process if so; otherwise the cache manager and cache loader processes are not created. The calls that start the two processes are:
// start the cache manager process
ngx_spawn_process(cycle, ngx_cache_manager_process_cycle,
                  &ngx_cache_manager_ctx, "cache manager process",
                  respawn ? NGX_PROCESS_JUST_RESPAWN : NGX_PROCESS_RESPAWN);

// start the cache loader process
ngx_spawn_process(cycle, ngx_cache_manager_process_cycle,
                  &ngx_cache_loader_ctx, "cache loader process",
                  respawn ? NGX_PROCESS_JUST_SPAWN : NGX_PROCESS_NORESPAWN);
The main purpose of the ngx_spawn_process () method here is to create a new process, which executes the method specified in the second parameter, and the parameter passed in when executing the method is the structure object specified in the third parameter here. Looking at the way the above two processes are started, the method executed after the new process is created is ngx_cache_manager_process_cycle (), except that the method is called with different parameters, one is ngx_cache_manager_ctx and the other is ngx_cache_loader_ctx. Let's first take a look at the definitions of these two structures:
/* ngx_cache_manager_process_handler specifies the method the cache manager
 * process will execute; "cache manager process" is the process name; the
 * final 0 is how long after startup the handler runs: here it runs
 * immediately */
static ngx_cache_manager_ctx_t  ngx_cache_manager_ctx = {
    ngx_cache_manager_process_handler, "cache manager process", 0
};

/* ngx_cache_loader_process_handler specifies the method the cache loader
 * process will execute; it does not run until 60 seconds (60000 ms) after
 * the cache loader process starts */
static ngx_cache_manager_ctx_t  ngx_cache_loader_ctx = {
    ngx_cache_loader_process_handler, "cache loader process", 60000
};
As you can see, these two structures mainly define the different behaviors of the cache manager and cache loader processes, respectively. Let's take a look at how the ngx_cache_manager_process_cycle () method calls these two methods:
static void
ngx_cache_manager_process_cycle(ngx_cycle_t *cycle, void *data)
{
    ngx_cache_manager_ctx_t *ctx = data;

    void         *ident[4];
    ngx_event_t   ev;

    ngx_process = NGX_PROCESS_HELPER;

    /* this process only handles cache manager and cache loader work and does
     * not serve client requests, so it closes the listening sockets */
    ngx_close_listening_sockets(cycle);

    /* Set a moderate number of connections for a helper process. */
    cycle->connection_n = 512;

    /* initialize the current process: set some parameter attributes and
     * register an event listener on the channel[1] handle to receive
     * messages from the master process */
    ngx_worker_process_init(cycle, -1);

    ngx_memzero(&ev, sizeof(ngx_event_t));

    /* for the cache manager, handler points to the
     * ngx_cache_manager_process_handler() method; for the cache loader, it
     * points to the ngx_cache_loader_process_handler() method */
    ev.handler = ctx->handler;
    ev.data = ident;
    ev.log = cycle->log;
    ident[3] = (void *) -1;

    /* the cache module does not need the shared accept mutex */
    ngx_use_accept_mutex = 0;

    ngx_setproctitle(ctx->name);

    /* add the event to the timer queue with a delay of ctx->delay: 0 for the
     * cache manager, 60 s for the cache loader. Note that
     * ngx_cache_manager_process_handler() re-adds the event to the queue
     * after each run, producing periodic execution, whereas
     * ngx_cache_loader_process_handler() does not, so it runs only once and
     * the cache loader process then exits */
    ngx_add_timer(&ev, ctx->delay);

    for ( ;; ) {

        /* if the master marked this process as terminating or quitting,
         * exit the process */
        if (ngx_terminate || ngx_quit) {
            ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "exiting");
            exit(0);
        }

        /* if the master process sent a reopen message, reopen all files */
        if (ngx_reopen) {
            ngx_reopen = 0;
            ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "reopening logs");
            ngx_reopen_files(cycle, -1);
        }

        /* process the events in the event queue */
        ngx_process_events_and_timers(cycle);
    }
}
In the above code, an event object is created first; ev.handler = ctx->handler; specifies the logic that handles the event, that is, the method given as the first field of the two structures above. The event is then added to the timer queue with ngx_add_timer(&ev, ctx->delay); note that the second argument is the third field of those structures, so the delay of the event controls when the handler() method runs. Finally, an infinite for loop keeps checking and processing the events in the event queue through the ngx_process_events_and_timers() method.
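The difference between the two handlers, a periodic manager versus a one-shot loader, comes down to whether the handler re-arms its own timer. A toy event loop, purely illustrative and unrelated to nginx's real ngx_event_t machinery, shows the pattern:

```c
/* Hypothetical sketch: a task fires when its countdown reaches zero; a
 * periodic task re-arms itself inside its handler, a one-shot task does not. */

typedef struct task {
    void (*handler)(struct task *);
    int due;     /* ticks until the handler fires; -1 = disarmed */
    int fired;   /* how many times the handler has run */
} task_t;

/* like the cache manager: re-arm the timer after each run */
static void manager_handler(task_t *t) { t->fired++; t->due = 3; }

/* like the cache loader: run once, then stay disarmed forever */
static void loader_handler(task_t *t)  { t->fired++; t->due = -1; }

static void run(task_t *tasks, int ntasks, int ticks)
{
    for (int tick = 0; tick < ticks; tick++) {
        for (int i = 0; i < ntasks; i++) {
            if (tasks[i].due == 0) {
                tasks[i].handler(&tasks[i]);   /* timer expired: fire */
            } else if (tasks[i].due > 0) {
                tasks[i].due--;                /* still counting down */
            }
        }
    }
}
```

Running both tasks through the loop, the manager keeps firing at its interval while the loader fires exactly once after its initial delay, mirroring the delay values 0 and 60000 in the two ctx structures.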
3.3 Cache manager processing logic
As explained above, the cache manager's work is done in the ngx_cache_manager_process_handler() method named in its ngx_cache_manager_ctx structure. The following is the source code of this method:
static void
ngx_cache_manager_process_handler(ngx_event_t *ev)
{
    ngx_uint_t    i;
    ngx_msec_t    next, n;
    ngx_path_t  **path;

    next = 60 * 60 * 1000;

    path = ngx_cycle->paths.elts;
    for (i = 0; i < ngx_cycle->paths.nelts; i++) {

        /* the manager here points to the ngx_http_file_cache_manager()
         * method */
        if (path[i]->manager) {
            n = path[i]->manager(path[i]->data);

            next = (n <= next) ? n : next;

            ngx_time_update();
        }
    }

    if (next == 0) {
        next = 1;
    }

    /* re-add the event to the timer queue to drive the next
     * management cycle */
    ngx_add_timer(ev, next);
}

Because the directive parsing earlier set path->manager = ngx_http_file_cache_manager;, that method is the main entry point of cache management. After it is called, the current event is added back to the event queue to schedule the next management loop. The source code of the ngx_http_file_cache_manager() method is as follows:
static ngx_msec_t
ngx_http_file_cache_manager(void *data)
{
    /* the ngx_http_file_cache_t structure obtained by parsing the
     * proxy_cache_path configuration item */
    ngx_http_file_cache_t  *cache = data;

    off_t        size;
    time_t       wait;
    ngx_msec_t   elapsed, next;
    ngx_uint_t   count, watermark;

    cache->last = ngx_current_msec;
    cache->files = 0;

    /* the ngx_http_file_cache_expire() method keeps checking, in a loop,
     * whether entries at the tail of the cache queue have expired, and if so
     * deletes them together with their files */
    next = (ngx_msec_t) ngx_http_file_cache_expire(cache) * 1000;

    /* next is the return value of ngx_http_file_cache_expire(); it is 0 only
     * in two cases:
     * 1. the number of deleted files exceeded manager_files;
     * 2. the total time spent deleting exceeded manager_threshold.
     * If next is 0, a batch of cleanup work has finished and the process
     * should sleep for manager_sleep before the next batch; in other words,
     * next is the waiting time before the next cache cleanup */
    if (next == 0) {
        next = cache->manager_sleep;
        goto done;
    }

    for ( ;; ) {
        ngx_shmtx_lock(&cache->shpool->mutex);

        /* size is the total size currently used by the cache;
         * count is the current number of cached files;
         * watermark is the water mark, 7/8 of the maximum number of files */
        size = cache->sh->size;
        count = cache->sh->count;
        watermark = cache->sh->watermark;

        ngx_shmtx_unlock(&cache->shpool->mutex);

        ngx_log_debug3(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                       "http file cache size: %O c:%ui w:%i",
                       size, count, (ngx_int_t) watermark);

        /* if the memory used is below max_size and the number of cached
         * files is below the watermark, more files can still be cached:
         * break out of the loop */
        if (size < cache->max_size && count < watermark) {
            break;
        }

        /* reaching this point means shared memory resources are scarce:
         * forcibly delete the unreferenced files in the queue, whether they
         * have expired or not */
        wait = ngx_http_file_cache_forced_expire(cache);

        /* compute the next execution time */
        if (wait > 0) {
            next = (ngx_msec_t) wait * 1000;
            break;
        }

        /* if nginx is exiting or terminating, break out of the loop */
        if (ngx_quit || ngx_terminate) {
            break;
        }

        /* if the number of deleted files exceeds manager_files, break out of
         * the loop and sleep for manager_sleep before the next cleanup */
        if (++cache->files >= cache->manager_files) {
            next = cache->manager_sleep;
            break;
        }

        ngx_time_update();

        elapsed = ngx_abs((ngx_msec_int_t) (ngx_current_msec - cache->last));

        /* if this deletion pass took longer than manager_threshold, break out
         * of the loop and sleep for manager_sleep before the next cleanup */
        if (elapsed >= cache->manager_threshold) {
            next = cache->manager_sleep;
            break;
        }
    }

done:

    elapsed = ngx_abs((ngx_msec_int_t) (ngx_current_msec - cache->last));

    ngx_log_debug3(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                   "http file cache manager: %ui e:%M n:%M",
                   cache->files, elapsed, next);

    return next;
}
In the ngx_http_file_cache_manager() method, the ngx_http_file_cache_expire() method is entered first; it checks whether the element at the tail of the shared memory queue has expired and, if so, decides based on its reference count and whether it is already being deleted if the element and its disk file should be removed. After this check comes an infinite for loop whose main purpose is to check whether shared memory is tight: whether the memory used exceeds the maximum defined by max_size, or whether the number of cached files exceeds the watermark (7/8 of the maximum number of files). If either condition holds, a forced purge is attempted: every element in shared memory whose reference count is 0 is deleted, together with its disk file. Let's first read the ngx_http_file_cache_expire() method:
static time_t
ngx_http_file_cache_expire(ngx_http_file_cache_t *cache)
{
    u_char                      *name, *p;
    size_t                       len;
    time_t                       now, wait;
    ngx_path_t                  *path;
    ngx_msec_t                   elapsed;
    ngx_queue_t                 *q;
    ngx_http_file_cache_node_t  *fcn;
    u_char                       key[2 * NGX_HTTP_CACHE_KEY_LEN];

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                   "http file cache expire");

    path = cache->path;
    len = path->name.len + 1 + path->len + 2 * NGX_HTTP_CACHE_KEY_LEN;

    name = ngx_alloc(len + 1, ngx_cycle->log);
    if (name == NULL) {
        return 10;
    }

    ngx_memcpy(name, path->name.data, path->name.len);

    now = ngx_time();

    ngx_shmtx_lock(&cache->shpool->mutex);

    for ( ;; ) {

        /* if nginx is quitting or terminating, break out of the loop */
        if (ngx_quit || ngx_terminate) {
            wait = 1;
            break;
        }

        /* if the shared memory queue is empty, break out of the loop */
        if (ngx_queue_empty(&cache->sh->queue)) {
            wait = 10;
            break;
        }

        /* get the last element of the queue */
        q = ngx_queue_last(&cache->sh->queue);

        /* get the node that owns this queue entry */
        fcn = ngx_queue_data(q, ngx_http_file_cache_node_t, queue);

        /* how long until this node expires */
        wait = fcn->expire - now;

        /* if the node has not expired yet, exit the loop */
        if (wait > 0) {
            wait = wait > 10 ? 10 : wait;
            break;
        }

        ngx_log_debug6(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                       "http file cache expire: #%d %d %02xd%02xd%02xd%02xd",
                       fcn->count, fcn->exists,
                       fcn->key[0], fcn->key[1], fcn->key[2], fcn->key[3]);

        /* count is the node's reference count; if no request references
           the node, remove it from the queue and delete its file */
        if (fcn->count == 0) {
            ngx_http_file_cache_delete(cache, q, name);
            goto next;
        }

        /* if another process is already deleting the node,
           this process leaves it alone */
        if (fcn->deleting) {
            wait = 1;
            break;
        }

        /* the node has expired, but it is still referenced and no process
           is deleting it; compute the hex-encoded file name of the node */
        p = ngx_hex_dump(key, (u_char *) &fcn->node.key,
                         sizeof(ngx_rbtree_key_t));
        len = NGX_HTTP_CACHE_KEY_LEN - sizeof(ngx_rbtree_key_t);
        (void) ngx_hex_dump(p, fcn->key, len);

        /* the node is still active, so it should be kept: remove it from
           the tail of the queue, give it a new expire time, and reinsert
           it at the head */
        ngx_queue_remove(q);
        fcn->expire = ngx_time() + cache->inactive;
        ngx_queue_insert_head(&cache->sh->queue, &fcn->queue);

        ngx_log_error(NGX_LOG_ALERT, ngx_cycle->log, 0,
                      "ignore long locked inactive cache entry %*s, count:%d",
                      (size_t) 2 * NGX_HTTP_CACHE_KEY_LEN, key, fcn->count);

    next:

        /* cache->files counts the nodes processed in this pass;
           manager_files caps how many files one pass may clear via the
           LRU algorithm (the default is 100) */
        if (++cache->files >= cache->manager_files) {
            wait = 0;
            break;
        }

        /* update the cached time */
        ngx_time_update();

        /* elapsed is the total time this cleanup pass has taken so far */
        elapsed = ngx_abs((ngx_msec_int_t) (ngx_current_msec - cache->last));

        /* if it exceeds manager_threshold, break out of the loop */
        if (elapsed >= cache->manager_threshold) {
            wait = 0;
            break;
        }
    }

    /* release the shared memory lock */
    ngx_shmtx_unlock(&cache->shpool->mutex);

    ngx_free(name);

    return wait;
}
As you can see, the method only examines the element at the tail of the queue: under the LRU policy the tail element is the one most likely to have expired, so it is the only one that needs checking. If it has not expired, the method returns. If it has expired and its reference count is 0, the element and its corresponding disk file are deleted directly. If the reference count is not 0, the method checks whether the element is already being deleted. Note that a process that deletes an element first sets its reference count to 1, precisely to prevent other processes from deleting it at the same time. If another process is deleting the element, the current process leaves it alone. Otherwise the element, although expired, is still referenced and no process is deleting it, which means it is still active; the method therefore gives it a new expire time and moves it from the tail of the queue back to the head.
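The queue bookkeeping described above can be reduced to a small, self-contained sketch. The following is illustrative only, not nginx source: an intrusive doubly linked queue whose tail holds the least recently used node, where an expired node is released only when its reference count is 0, and an expired-but-referenced node is refreshed and moved back to the head. All type and function names here are hypothetical.

```c
#include <time.h>

/* hypothetical node: intrusive links, expire time, reference count */
typedef struct lru_node {
    struct lru_node  *prev, *next;
    time_t            expire;
    unsigned          count;
} lru_node_t;

/* hypothetical queue: a sentinel head plus the inactivity lifetime */
typedef struct {
    lru_node_t  head;
    time_t      inactive;
} lru_queue_t;

static void
lru_init(lru_queue_t *q, time_t inactive)
{
    q->head.prev = q->head.next = &q->head;
    q->inactive = inactive;
}

static void
lru_insert_head(lru_queue_t *q, lru_node_t *n)
{
    n->next = q->head.next;
    n->prev = &q->head;
    n->next->prev = n;
    q->head.next = n;
}

static void
lru_remove(lru_node_t *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}

/* Examine the tail node at time `now`. Returns the node if it has
   expired with count == 0 (the caller would delete its file), or NULL
   when nothing can be evicted. An expired node that is still referenced
   gets a fresh expire time and is moved back to the head, mirroring the
   "ignore long locked inactive cache entry" path above. */
static lru_node_t *
lru_expire_one(lru_queue_t *q, time_t now)
{
    lru_node_t  *tail = q->head.prev;

    if (tail == &q->head || tail->expire > now) {
        return NULL;                  /* queue empty, or tail not expired */
    }

    if (tail->count == 0) {
        lru_remove(tail);             /* expired and unreferenced: evict */
        return tail;
    }

    /* expired but still referenced: refresh and move to the head */
    lru_remove(tail);
    tail->expire = now + q->inactive;
    lru_insert_head(q, tail);
    return NULL;
}
```

The sketch keeps the two invariants the text relies on: only the tail ever needs inspection, and a referenced node can never be evicted, only rotated back to the head.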
Next, let's look at how the cache manager forcibly removes elements when resources are tight. Here is the source code of the ngx_http_file_cache_forced_expire() method:
static time_t
ngx_http_file_cache_forced_expire(ngx_http_file_cache_t *cache)
{
    u_char                      *name;
    size_t                       len;
    time_t                       wait;
    ngx_uint_t                   tries;
    ngx_path_t                  *path;
    ngx_queue_t                 *q;
    ngx_http_file_cache_node_t  *fcn;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                   "http file cache forced expire");

    path = cache->path;
    len = path->name.len + 1 + path->len + 2 * NGX_HTTP_CACHE_KEY_LEN;

    name = ngx_alloc(len + 1, ngx_cycle->log);
    if (name == NULL) {
        return 10;
    }

    ngx_memcpy(name, path->name.data, path->name.len);

    wait = 10;
    tries = 20;

    ngx_shmtx_lock(&cache->shpool->mutex);

    /* walk the queue from the tail towards the head */
    for (q = ngx_queue_last(&cache->sh->queue);
         q != ngx_queue_sentinel(&cache->sh->queue);
         q = ngx_queue_prev(q))
    {
        /* get the node that owns this queue entry */
        fcn = ngx_queue_data(q, ngx_http_file_cache_node_t, queue);

        ngx_log_debug6(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                       "http file cache forced expire: #%d %d %02xd%02xd%02xd%02xd",
                       fcn->count, fcn->exists,
                       fcn->key[0], fcn->key[1], fcn->key[2], fcn->key[3]);

        /* if the node's reference count is 0, delete it and its file */
        if (fcn->count == 0) {
            ngx_http_file_cache_delete(cache, q, name);
            wait = 0;

        } else {
            /* otherwise try the previous node; if 20 nodes in a row have
               a reference count greater than 0, give up */
            if (--tries) {
                continue;
            }

            wait = 1;
        }

        break;
    }

    ngx_shmtx_unlock(&cache->shpool->mutex);

    ngx_free(name);

    return wait;
}
As you can see, the processing logic here is comparatively simple: starting from the tail of the queue, the method checks the reference count of each element. The first element found with a reference count of 0 is deleted, along with its disk file, and the scan stops; elements that are still referenced are skipped. Note that if 20 referenced elements are examined in a row, the method gives up and exits the loop.
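The two decisions involved here can be sketched in isolation. The following is illustrative only (not nginx source, and every name is hypothetical): the first function mirrors the manager's pressure check from earlier in this section, where eviction is forced when the bytes in use reach max_size or the file count reaches a watermark of 7/8 of the maximum file count; the second mirrors the scan itself, walking reference counts from the tail of the queue (the end of the array here), evicting the first unreferenced entry, and giving up after examining 20 referenced entries in a row.

```c
#include <stddef.h>

/* hypothetical pressure check: true when a forced purge should run */
static int
cache_needs_forced_expire(size_t used, size_t max_size,
                          size_t count, size_t max_files)
{
    size_t  watermark = max_files - max_files / 8;   /* 7/8 of max_files */

    return used >= max_size || count >= watermark;
}

/* hypothetical tail-first scan over reference counts; returns the index
   of the entry to evict, or -1 when nothing can be evicted */
static int
forced_expire_scan(const unsigned *refcount, int n)
{
    int  i, tries = 20;

    for (i = n - 1; i >= 0; i--) {

        if (refcount[i] == 0) {
            return i;        /* unreferenced: this entry can be evicted */
        }

        if (--tries == 0) {
            return -1;       /* 20 referenced entries in a row: give up */
        }
    }

    return -1;               /* every entry is still referenced */
}
```

The `tries` counter is the same give-up heuristic as in the real method: a busy cache where everything is referenced should not hold the shared-memory lock while scanning the whole queue.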
3.4 Processing logic of the cache loader process
As mentioned earlier, the main processing flow of the cache loader lives in the ngx_cache_loader_process_handler() method. Its main logic is as follows:
static void
ngx_cache_loader_process_handler(ngx_event_t *ev)
{
    ngx_uint_t     i;
    ngx_path_t   **path;
    ngx_cycle_t   *cycle;

    cycle = (ngx_cycle_t *) ngx_cycle;

    path = cycle->paths.elts;
    for (i = 0; i < cycle->paths.nelts; i++) {

        if (ngx_terminate || ngx_quit) {
            break;
        }

        /* the loader method here points to ngx_http_file_cache_loader() */
        if (path[i]->loader) {
            path[i]->loader(path[i]->data);
            ngx_time_update();
        }
    }

    /* exit the current process once loading is complete */
    exit(0);
}
The main processing flow of the cache loader is very similar to that of the cache manager: it calls the loader() method of each path to load data. The concrete implementation of the loader() method is likewise assigned while the proxy_cache_path configuration item is parsed (see the last part of Section 3.1):
cache->path->loader = ngx_http_file_cache_loader;
Here we continue to read the source code of the ngx_http_file_cache_loader () method:
static void
ngx_http_file_cache_loader(void *data)
{
    ngx_http_file_cache_t  *cache = data;
    ngx_tree_ctx_t          tree;

    /* return if loading has already finished or is in progress */
    if (!cache->sh->cold || cache->sh->loading) {
        return;
    }

    /* try to take the loading lock */
    if (!ngx_atomic_cmp_set(&cache->sh->loading, 0, ngx_pid)) {
        return;
    }

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                   "http file cache loader");

    /* the tree context drives the loading process, which is recursive */
    tree.init_handler = NULL;
    /* loads a single file */
    tree.file_handler = ngx_http_file_cache_manage_file;
    /* called before entering a directory; mainly checks whether the
       current directory can be accessed */
    tree.pre_tree_handler = ngx_http_file_cache_manage_directory;
    /* called after a directory has been processed; a no-op here */
    tree.post_tree_handler = ngx_http_file_cache_noop;
    /* handles special files, i.e. entries that are neither regular files
       nor directories; such files are simply deleted */
    tree.spec_handler = ngx_http_file_cache_delete_file;

    tree.data = cache;
    tree.alloc = 0;
    tree.log = ngx_cycle->log;

    cache->last = ngx_current_msec;
    cache->files = 0;

    /* recursively traverse all files under the cache directory and
       process them with the handlers defined above, i.e. load them */
    if (ngx_walk_tree(&tree, &cache->path->name) == NGX_ABORT) {
        cache->sh->loading = 0;
        return;
    }

    /* mark loading as finished */
    cache->sh->cold = 0;
    cache->sh->loading = 0;

    ngx_log_error(NGX_LOG_NOTICE, ngx_cycle->log, 0,
                  "http file cache: %V %.3fM, bsize: %uz",
                  &cache->path->name,
                  ((double) cache->sh->size * cache->bsize) / (1024 * 1024),
                  cache->bsize);
}
During loading, the target directory is first wrapped in an ngx_tree_ctx_t structure, which specifies the methods used to load files. The actual loading logic is carried out in the ngx_walk_tree() method, and the whole process is implemented recursively. Here is the implementation of the ngx_walk_tree() method:
ngx_int_t
ngx_walk_tree(ngx_tree_ctx_t *ctx, ngx_str_t *tree)
{
    void       *data, *prev;
    u_char     *p, *name;
    size_t      len;
    ngx_int_t   rc;
    ngx_err_t   err;
    ngx_str_t   file, buf;
    ngx_dir_t   dir;

    ngx_str_null(&buf);

    ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                   "walk tree \"%V\"", tree);

    /* open the target directory */
    if (ngx_open_dir(tree, &dir) == NGX_ERROR) {
        ngx_log_error(NGX_LOG_CRIT, ctx->log, ngx_errno,
                      ngx_open_dir_n " \"%s\" failed", tree->data);
        return NGX_ERROR;
    }

    prev = ctx->data;

    /* alloc is 0 for the cache loader, so this branch is not taken */
    if (ctx->alloc) {
        data = ngx_alloc(ctx->alloc, ctx->log);
        if (data == NULL) {
            goto failed;
        }

        if (ctx->init_handler(data, prev) == NGX_ABORT) {
            goto failed;
        }

        ctx->data = data;

    } else {
        data = NULL;
    }

    for ( ;; ) {

        ngx_set_errno(0);

        /* read the next entry of the current directory */
        if (ngx_read_dir(&dir) == NGX_ERROR) {
            err = ngx_errno;

            if (err == NGX_ENOMOREFILES) {
                rc = NGX_OK;

            } else {
                ngx_log_error(NGX_LOG_CRIT, ctx->log, err,
                              ngx_read_dir_n " \"%s\" failed", tree->data);
                rc = NGX_ERROR;
            }

            goto done;
        }

        len = ngx_de_namelen(&dir);
        name = ngx_de_name(&dir);

        ngx_log_debug2(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                       "tree name %uz:\"%s\"", len, name);

        /* "." refers to the current directory itself; skip it */
        if (len == 1 && name[0] == '.') {
            continue;
        }

        /* ".." refers to the parent directory; skip it as well */
        if (len == 2 && name[0] == '.' && name[1] == '.') {
            continue;
        }

        file.len = tree->len + 1 + len;

        /* grow the path buffer if the current name does not fit */
        if (file.len + NGX_DIR_MASK_LEN > buf.len) {

            if (buf.len) {
                ngx_free(buf.data);
            }

            buf.len = tree->len + 1 + len + NGX_DIR_MASK_LEN;

            buf.data = ngx_alloc(buf.len + 1, ctx->log);
            if (buf.data == NULL) {
                goto failed;
            }
        }

        p = ngx_cpymem(buf.data, tree->data, tree->len);
        *p++ = '/';
        ngx_memcpy(p, name, len + 1);

        file.data = buf.data;

        ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                       "tree path \"%s\"", file.data);

        if (!dir.valid_info) {
            if (ngx_de_info(file.data, &dir) == NGX_FILE_ERROR) {
                ngx_log_error(NGX_LOG_CRIT, ctx->log, ngx_errno,
                              ngx_de_info_n " \"%s\" failed", file.data);
                continue;
            }
        }

        if (ngx_de_is_file(&dir)) {

            /* a regular file: record its attributes, then call
               ctx->file_handler() to load its contents */
            ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                           "tree file \"%s\"", file.data);

            ctx->size = ngx_de_size(&dir);
            ctx->fs_size = ngx_de_fs_size(&dir);
            ctx->access = ngx_de_access(&dir);
            ctx->mtime = ngx_de_mtime(&dir);

            if (ctx->file_handler(ctx, &file) == NGX_ABORT) {
                goto failed;
            }

        } else if (ngx_de_is_dir(&dir)) {

            /* a directory: call pre_tree_handler(), recurse into the
               subdirectory with ngx_walk_tree(), then call
               post_tree_handler() */
            ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                           "tree enter dir \"%s\"", file.data);

            ctx->access = ngx_de_access(&dir);
            ctx->mtime = ngx_de_mtime(&dir);

            rc = ctx->pre_tree_handler(ctx, &file);

            if (rc == NGX_ABORT) {
                goto failed;
            }

            if (rc == NGX_DECLINED) {
                ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                               "tree skip dir \"%s\"", file.data);
                continue;
            }

            /* recursively walk the subdirectory */
            if (ngx_walk_tree(ctx, &file) == NGX_ABORT) {
                goto failed;
            }

            ctx->access = ngx_de_access(&dir);
            ctx->mtime = ngx_de_mtime(&dir);

            /* apply the post-directory logic */
            if (ctx->post_tree_handler(ctx, &file) == NGX_ABORT) {
                goto failed;
            }

        } else {

            /* a special file: call spec_handler(), which deletes it */
            ngx_log_debug1(NGX_LOG_DEBUG_CORE, ctx->log, 0,
                           "tree special \"%s\"", file.data);

            if (ctx->spec_handler(ctx, &file) == NGX_ABORT) {
                goto failed;
            }
        }
    }

failed:

    rc = NGX_ABORT;

done:

    if (buf.len) {
        ngx_free(buf.data);
    }

    if (data) {
        ngx_free(data);
        ctx->data = prev;
    }

    if (ngx_close_dir(&dir) == NGX_ERROR) {
        ngx_log_error(NGX_LOG_CRIT, ctx->log, ngx_errno,
                      ngx_close_dir_n " \"%s\" failed", tree->data);
    }

    return rc;
}
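The callback-driven recursive walk can be demonstrated with a minimal, self-contained sketch built on POSIX opendir()/readdir() instead of nginx's wrappers. This is illustrative only, not nginx source, and all names (walk_ctx_t, walk_tree, count_files) are hypothetical; it keeps the same shape as above: skip "." and "..", call a per-file handler for regular files, and recurse into subdirectories.

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* hypothetical context: a per-file handler plus opaque user data,
   echoing the role of ngx_tree_ctx_t */
typedef struct walk_ctx {
    int   (*file_handler)(struct walk_ctx *ctx, const char *path);
    void   *data;
} walk_ctx_t;

static int
walk_tree(walk_ctx_t *ctx, const char *dir_path)
{
    DIR            *dir;
    struct dirent  *de;
    struct stat     st;
    char            path[1024];

    dir = opendir(dir_path);
    if (dir == NULL) {
        return -1;
    }

    while ((de = readdir(dir)) != NULL) {

        /* skip "." and ".." exactly as the nginx walker does */
        if (strcmp(de->d_name, ".") == 0 || strcmp(de->d_name, "..") == 0) {
            continue;
        }

        snprintf(path, sizeof(path), "%s/%s", dir_path, de->d_name);

        if (stat(path, &st) != 0) {
            continue;
        }

        if (S_ISREG(st.st_mode)) {
            /* regular file: hand it to the callback */
            if (ctx->file_handler(ctx, path) != 0) {
                closedir(dir);
                return -1;
            }

        } else if (S_ISDIR(st.st_mode)) {
            /* directory: recurse, mirroring ngx_walk_tree() */
            if (walk_tree(ctx, path) != 0) {
                closedir(dir);
                return -1;
            }
        }
    }

    closedir(dir);
    return 0;
}

/* example handler: counts the regular files it is given */
static int
count_files(walk_ctx_t *ctx, const char *path)
{
    (void) path;
    (*(int *) ctx->data)++;
    return 0;
}
```

The nginx version differs mainly in its extra hooks (pre/post directory handlers, a special-file handler) and in reusing a growable path buffer instead of a fixed stack array, but the recursion structure is the same.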
As the processing flow above shows, the real logic for loading a file lives in the ngx_http_file_cache_manage_file() method. Its source code is as follows:
static ngx_int_t
ngx_http_file_cache_manage_file(ngx_tree_ctx_t *ctx, ngx_str_t *path)
{
    ngx_msec_t              elapsed;
    ngx_http_file_cache_t  *cache;

    cache = ctx->data;

    /* add the file to shared memory; if that fails, delete the file */
    if (ngx_http_file_cache_add_file(ctx, path) != NGX_OK) {
        (void) ngx_http_file_cache_delete_file(ctx, path);
    }

    /* if the number of files loaded exceeds the number specified by
       loader_files, sleep for a while */
    if (++cache->files >= cache->loader_files) {
        ngx_http_file_cache_loader_sleep(cache);

    } else {
        /* update the cached time */
        ngx_time_update();

        /* elapsed is the total time spent loading so far */
        elapsed = ngx_abs((ngx_msec_int_t) (ngx_current_msec - cache->last));

        ngx_log_debug1(NGX_LOG_DEBUG_HTTP, ngx_cycle->log, 0,
                       "http file cache loader time elapsed: %M", elapsed);

        /* if loading has already taken longer than loader_threshold,
           sleep as well */
        if (elapsed >= cache->loader_threshold) {
            ngx_http_file_cache_loader_sleep(cache);
        }
    }

    return (ngx_quit || ngx_terminate) ? NGX_ABORT : NGX_OK;
}
The loading logic here is relatively simple: files are loaded into shared memory one by one. If the number of files loaded in one batch exceeds the loader_files limit, the loader sleeps for a configured period; likewise, if the total time spent loading exceeds loader_threshold, it also sleeps before continuing. This throttling keeps the loader from monopolizing disk I/O and the shared-memory lock while the cache warms up.
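The throttling rule just described boils down to one predicate. The sketch below is illustrative only (the function and parameter names are hypothetical, not nginx API): sleep after loading loader_files files in one batch, and also sleep when a batch has already taken loader_threshold milliseconds or more.

```c
/* hypothetical throttle check for a cache-warming loop: returns nonzero
   when the loader should pause before processing more files */
static int
loader_should_sleep(unsigned files_loaded, unsigned loader_files,
                    long elapsed_ms, long threshold_ms)
{
    return files_loaded >= loader_files || elapsed_ms >= threshold_ms;
}
```

Either trigger alone is enough: the file-count cap bounds work per batch even when files are tiny and fast, while the time cap bounds latency even when a few files are slow to read.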
These are all the contents of the article "How nginx implements the shared memory mechanism". Thank you for reading!