Nginx module usage explained in detail
1. Socket-related configuration:
(1) server {...}: configures a virtual host.
server {
    listen ADDRESS[:PORT] | PORT;
    server_name SERVER_NAME;
    root /PATH/TO/DOCUMENT_ROOT;
}
(2) listen PORT | address[:port] | unix:/PATH/TO/SOCKET_FILE;
listen address[:port] [default_server] [ssl] [http2 | spdy] [backlog=number] [rcvbuf=size] [sndbuf=size];
default_server: make this the default virtual host;
ssl: restrict this port to serving only ssl connections;
backlog=number: length of the backlog queue;
rcvbuf=size: receive buffer size;
sndbuf=size: send buffer size.
(3) server_name name ...; specifies the host name(s) of the virtual host; several names may follow, separated by whitespace; the * wildcard matches a string of any length, e.g. server_name *.magedu.com www.magedu.*; names starting with ~ are treated as regular expression patterns, e.g. server_name ~^www\d+\.magedu\.com$;
Matching order: (a) exact string match first; (b) wildcard starting with *; (c) wildcard ending with *; (d) regular expression.
(4) tcp_nodelay on | off; whether to enable the TCP_NODELAY option on keep-alive connections, i.e. whether small packets are sent out one by one on a persistent connection instead of being merged and sent together.
tcp_nopush on | off; whether to enable the TCP_CORK option when sendfile is used; the http response header is then sent in the same packet as the beginning of the file.
(5) sendfile on | off; whether to enable sendfile, which transmits files directly in kernel space instead of copying them through user space.
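Pulling these directives together, a minimal virtual-host sketch might look like the following (host names, paths, and buffer sizes are illustrative placeholders, not values from the original):
server {
    listen 80 default_server backlog=511 rcvbuf=64k sndbuf=64k;   # default vhost on port 80
    server_name www.example.com *.example.com;                     # exact name first, then a wildcard
    root /data/nginx/html;                                          # document root of this vhost
    sendfile on;                                                    # send files from kernel space
    tcp_nopush on;                                                  # with sendfile: header and file start in one packet
    tcp_nodelay on;                                                 # on keep-alive connections: send small packets immediately
}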
2. Path-related configuration:
(1) root path; sets the web resource path mapping: it points to the directory on the local file system that corresponds to the url requested by the user; contexts: http, server, location, if in location.
(2) location [ = | ~ | ~* | ^~ ] uri {...}; a server may contain multiple location blocks, each mapping a uri to a path on the file system. Nginx checks the URI requested by the user against all defined locations, finds the best match, and applies that location's configuration.
=: exact match of the URI, e.g. location = / {...};
~: regular expression match of the URI, case-sensitive;
~*: regular expression match of the URI, case-insensitive;
^~: prefix match on the leading (left) part of the URI, without regular expression processing;
no modifier: matches every url that begins with this uri.
Matching priority: =, ^~, ~ / ~*, no modifier.
Example:
server {
    root /vhosts/www/htdocs/;
    # http://HOSTNAME/index.html -> /vhosts/www/htdocs/index.html
    location /admin/ {
        root /webapps/app1/data/;
    }
}
(3) alias path; defines a path alias, another mechanism for document mapping; can only be used in a location context.
Note: inside a location, the root directive and the alias directive have different meanings:
(a) root: the given path corresponds to the "/" on the left side of the /uri/ in the location;
(b) alias: the given path corresponds to the "/" on the right side of the /uri/ in the location.
(4) index file ...; default resource; contexts: http, server, location.
(5) error_page code ... [=[response]] uri; Defines the URI that will be shown for the specified errors.
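To make the root-versus-alias distinction concrete, here is a hedged sketch (all paths are made up for illustration):
server {
    server_name www.example.com;
    root /vhosts/www/htdocs;          # /index.html         -> /vhosts/www/htdocs/index.html
    location /admin/ {
        root /webapps/app1/data;      # root:  /admin/x.html  -> /webapps/app1/data/admin/x.html
    }
    location /images/ {
        alias /data/pictures/;        # alias: /images/a.jpg  -> /data/pictures/a.jpg
    }
    index index.html;                 # default resource when a directory is requested
    error_page 404 /404.html;         # URI shown for 404 errors
}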
3. Client request configuration:
(1) keepalive_timeout timeout [header_timeout]; sets the timeout for keep-alive (persistent) connections; 0 disables persistent connections; default is 75s (keep it short on a load balancer and longer on a backend server);
(2) keepalive_requests number; the maximum number of requests allowed on one persistent connection; default is 100;
(3) keepalive_disable none | browser ...; disables persistent connections for the named browsers;
(4) send_timeout time; timeout for sending a response to the client; it applies to the interval between two successive write operations, not to the whole response;
(5) client_body_buffer_size size; size of the buffer used to receive the body of a client request (relevant when POSTing large files); default is 16k; bodies larger than this are written temporarily to disk, to the location defined by the client_body_temp_path directive;
(6) client_body_temp_path path [level1 [level2 [level3]]]; sets the temporary storage path, and the number and depth of its subdirectories, used to store client request bodies.
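A hedged example that combines these client-request directives (every value here is a placeholder chosen for illustration):
http {
    keepalive_timeout 30s;                           # idle timeout for persistent connections (0 disables them)
    keepalive_requests 100;                          # at most 100 requests per persistent connection
    keepalive_disable msie6;                         # no keep-alive for old MSIE clients
    send_timeout 10s;                                # max interval between two writes to the client
    client_body_buffer_size 16k;                     # in-memory buffer for request bodies
    client_body_temp_path /var/tmp/nginx/body 1 2;   # spill-over path with two levels of subdirectories
}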
4. Configuration related to restrictions on clients:
(1) limit_rate rate; limits the rate at which the response is transmitted to the client, in bytes/second; 0 means no limit;
(2) limit_except method ... {...}; restricts the use of all request methods except the listed ones, e.g. (see also the sketch below):
limit_except GET {
    allow 192.168.1.0;
    deny all;
}
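For instance, a download location might cap per-connection bandwidth with limit_rate while limiting modifying methods with limit_except (a hedged sketch; the network addresses are illustrative):
location /downloads/ {
    limit_rate 128k;            # at most 128 KB/s per connection; 0 would mean unlimited
    limit_except GET HEAD {     # all methods other than GET/HEAD are restricted
        allow 192.168.1.0/24;   # only the internal subnet may use them
        deny all;
    }
}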
5. Configuration optimized for file operations:
(1) aio on | off | threads[=pool]; whether to enable aio (asynchronous file I/O);
(2) directio size | off; enables the O_DIRECT flag on Linux hosts; direct I/O is used for files whose size is greater than or equal to the given value, e.g. directio 4m;
(3) open_file_cache off;
open_file_cache max=N [inactive=time];
Caches file metadata; nginx can cache the following three kinds of information:
(a) file descriptors, file sizes and last modification times;
(b) open directory structures;
(c) information about files that were not found or could not be accessed for lack of permission.
max=N: the maximum number of cache entries; when the limit is reached, the LRU (least recently used) algorithm evicts entries;
inactive=time: inactivity period of a cache entry; an entry that is not hit within this time, or is hit fewer times than specified by the open_file_cache_min_uses directive, is considered inactive and removed;
(4) open_file_cache_valid time; how often the validity of cache entries is checked; default is 60s;
(5) open_file_cache_min_uses number; the minimum number of hits within the inactive period of the open_file_cache directive for an entry to be classified as active;
(6) open_file_cache_errors on | off; whether to cache information about files for which errors occurred during lookup.
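A hedged sketch of the file-I/O optimizations above (sizes and counts are illustrative; aio threads additionally assumes nginx was built with thread-pool support):
aio threads;                                  # offload blocking file reads to a thread pool
directio 4m;                                  # use O_DIRECT for files of 4 MB and larger
open_file_cache max=10000 inactive=60s;       # cache metadata for up to 10000 files
open_file_cache_valid 60s;                    # re-validate cached entries every 60 seconds
open_file_cache_min_uses 2;                   # entries hit fewer than 2 times within 'inactive' are dropped
open_file_cache_errors on;                    # cache lookup errors as well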
6.ngx_http_access_module module:
Implements ip-based access control.
(1) allow address | CIDR | unix: | all;
(2) deny address | CIDR | unix: | all;
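A hedged example (the subnet and host address are placeholders); rules are evaluated in order and the first match wins:
location /admin/ {
    allow 192.168.1.0/24;   # the internal subnet may access
    allow 10.1.0.8;         # plus a single management host
    deny all;               # everyone else gets 403
}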
7.ngx_http_auth_basic_module module:
(1) auth_basic string | off;
(2) auth_basic_user_file file;
Example:
location /admin/ {
    alias /webapps/app1/data/;
    auth_basic "Admin Area";
    auth_basic_user_file /etc/nginx/.ngxpasswd;
}
Note: the htpasswd command is provided by the httpd-tools package.
8.ngx_http_stub_status_module module:
Used to output basic status information of nginx
Sample output:
Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
Field meanings:
Active connections: number of currently active connections;
accepts: total number of accepted client connections;
handled: total number of handled client connections;
requests: total number of client requests;
Reading: number of connections currently reading the client request header;
Writing: number of connections currently writing a response back to the client;
Waiting: number of idle connections waiting for the client to send a request.
Configuration example:
location /basic_status {
    stub_status;
}
9.ngx_http_log_module module
(1) log_format name string ...; the string may use variables embedded in the nginx core module and other modules;
(2) access_log path [format [buffer=size] [gzip[=level]] [flush=time] [if=condition]];
access_log off;
Configures the access log file path, format and related buffering (buffer=size, flush=time);
(3) open_log_file_cache max=N [inactive=time] [min_uses=N] [valid=time];
open_log_file_cache off;
Caches metadata (file descriptors) of log files;
max: maximum number of cached file descriptors;
min_uses: minimum number of accesses within the inactive period for a descriptor to remain cached;
inactive: inactivity period;
valid: interval at which each cached entry is checked for validity.
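A hedged example of a custom log format built from core-module variables, written through a buffer and flushed periodically (the format name and file path are placeholders):
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log main buffer=32k flush=5s;    # buffered, flushed at least every 5s
open_log_file_cache max=1000 inactive=20s min_uses=2 valid=60s;   # cache log file descriptors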
10.ngx_http_gzip_module:
(1) gzip on | off; Enables or disables gzipping of responses.
(2) gzip_comp_level level; Sets a gzip compression level of a response. Acceptable values are in the range from 1 to 9.
(3) gzip_disable regex ...; Disables gzipping of responses for requests with "User-Agent" header fields matching any of the specified regular expressions.
(4) gzip_min_length length; minimum response size for which compression is enabled;
(5) gzip_buffers number size; the number of buffers used for compression and the size of each buffer;
(6) gzip_proxied off | expired | no-cache | no-store | private | no_last_modified | no_etag | auth | any; when nginx, acting as a proxy server, receives a response from the proxied server, the conditions under which compression is enabled;
off: do not compress responses to proxied requests;
no-cache, no-store, private: enable compression if the Cache-Control header of the response received from the proxied server contains any of these three values;
(7) gzip_types mime-type ...; enable compression only for content of the MIME types listed here.
Example:
gzip on;
gzip_comp_level 6;
gzip_min_length 64;
gzip_proxied any;
gzip_types text/xml text/css application/javascript;
11.ngx_http_ssl_module module:
(1) ssl on | off; Enables the HTTPS protocol for the given virtual server.
(2) ssl_certificate file; the certificate file, in PEM format, used by the current virtual host;
(3) ssl_certificate_key file; the private key file on the current virtual host that matches its certificate;
(4) ssl_protocols [SSLv2] [SSLv3] [TLSv1] [TLSv1.1] [TLSv1.2]; supported ssl protocol versions; defaults to the last three;
(5) ssl_session_cache off | none | [builtin[:size]] [shared:name:size];
builtin[:size]: use OpenSSL's built-in cache, which is private to each worker process;
shared:name:size: use a cache shared among all worker processes;
(6) ssl_session_timeout time; how long a client may reuse the ssl parameters cached in the ssl session cache.
Configuration example:
server {
    listen 443 ssl;
    server_name www.magedu.com;
    root /vhosts/ssl/htdocs;
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_session_cache shared:sslcache:20m;
}
12.ngx_http_rewrite_module module:
(1) rewrite regex replacement [flag]; checks the URI requested by the user against the pattern given by regex and, if it matches, replaces it with the new URI given by replacement.
Note: if there are several rewrite rules in the same configuration block, they are checked one by one from top to bottom; once a rule has replaced the URI, a new round of checking starts on the new URI, so there is an implicit loop. The flag controls this loop:
last: after this rewrite completes, stop processing any further rewrite directives in the current location and start a new round of checking on the new URI (restart the loop);
break: after this rewrite completes, stop processing any further rewrite directives in the current location and jump directly to the configuration that follows the rewrite block (end the loop);
redirect: after the rewrite completes, return the new URI to the client as a temporary redirect (302) and let the client issue a new request; the replacement must not start with http:// or https://;
permanent: after the rewrite completes, return the new URI to the client as a permanent redirect (301) and let the client issue a new request.
(2) return code [text]; return code URL; return URL; Stops processing and returns the specified code to a client.
(3) rewrite_log on | off; whether to log rewrite operations;
(4) if (condition) {...}; introduces a new configuration context; when the condition is met, the directives in the block are executed; contexts: server, location.
condition:
comparison operators: =, !=
~: pattern match, case-sensitive; ~*: pattern match, case-insensitive; !~: pattern does not match, case-sensitive; !~*: pattern does not match, case-insensitive;
file and directory existence tests: -e, !-e, -f, !-f, -d, !-d, -x, !-x
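A hedged sketch of the flags and an if block (all URIs and patterns are made up for illustration):
server {
    location /images/ {
        rewrite ^/images/(.*)$ /imgs/$1 break;    # internal rewrite, then serve /imgs/... directly
    }
    location /old/ {
        rewrite ^/old/(.*)$ /new/$1 permanent;    # tell the client to re-request via a 301 redirect
    }
    if ($request_method = POST) {                 # comparison condition inside an if block
        return 405;
    }
}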
13.ngx_http_referer_module module:
(1) valid_referers none | blocked | server_names | string ...; defines the legal values of the Referer request header;
none: the request has no Referer header;
blocked: the request has a Referer header but its value has been stripped;
server_names: host names or host name patterns given as parameters;
arbitrary string: a literal string, in which * may be used as a wildcard;
regular expression: a pattern the value is matched against; it must start with ~, for example ~.*\.magedu\.com
Configuration example:
valid_referers none blocked server_names *.magedu.com *.mageedu.com magedu.* mageedu.* ~\.magedu\.;
if ($invalid_referer) {
    return http://www.magedu.com/invalid.jpg;
}
or:
if ($invalid_referer) {
    return 403;
}
14.ngx_http_proxy_module module:
(1) proxy_pass URL; Context: location, if in location, limit_except.
Note: if the URL after proxy_pass contains no uri, the uri of the location is passed to the backend host unchanged:
server {
    ...
    server_name HOSTNAME;
    location /uri/ {
        proxy_pass http://host[:port];
    }
    ...
}
http://HOSTNAME/uri --> http://host/uri
If the URL after proxy_pass contains a uri, the part of the request uri matched by the location is replaced by the uri of proxy_pass:
server {
    ...
    server_name HOSTNAME;
    location /uri/ {
        proxy_pass http://host/new_uri/;
    }
    ...
}
http://HOSTNAME/uri/ --> http://host/new_uri/
If the location defines its uri with a regular expression pattern, or proxy_pass is used inside an if block or limit_except, then the URL after proxy_pass must not contain a uri; the uri requested by the user is passed to the proxied server as-is:
server {
    ...
    server_name HOSTNAME;
    location ~|~* /uri/ {
        proxy_pass http://host;
    }
    ...
}
(2) proxy_set_header field value; sets the value of a header field in the request sent to the backend host; Context: http, server, location;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
(3) proxy_cache_path; defines a cache usable by the proxy function; Context: http;
proxy_cache_path path [levels=levels] [use_temp_path=on | off] keys_zone=name:size [inactive=time] [max_size=size] [manager_files=number] [manager_sleep=time] [manager_threshold=time] [loader_files=number] [loader_sleep=time] [loader_threshold=time] [purger=on | off] [purger_files=number] [purger_sleep=time] [purger_threshold=time];
(4) proxy_cache zone | off; selects the cache zone to use, or turns caching off; Context: http, server, location;
(5) proxy_cache_key string; the key used for cache entries; default: proxy_cache_key $scheme$proxy_host$request_uri;
(6) proxy_cache_valid [code ...] time; defines how long responses with specific response codes are cached.
The cache itself is defined in http {...}:
proxy_cache_path /var/cache/nginx/proxy_cache levels=1:1:1 keys_zone=pxycache:20m max_size=1g;
and invoked in the configuration segment that needs caching, e.g. in server {...}:
proxy_cache pxycache;
proxy_cache_key $request_uri;
proxy_cache_valid 200 302 301 1h;
proxy_cache_valid any 1m;
proxy_cache_use_stale http_502;
(7) proxy_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_502 | http_503 | http_504 | http_403 | http_404 | off ...; (when the backend server has a problem, the reverse proxy answers from its cache) Determines in which cases a stale cached response can be used when an error occurs during communication with the proxied server.
(8) proxy_cache_methods GET | HEAD | POST ...; If the client request method is listed in this directive then the response will be cached. "GET" and "HEAD" methods are always added to the list, though it is recommended to specify them explicitly.
(9) proxy_hide_header field; By default, nginx does not pass the header fields "Date", "Server", "X-Pad", and "X-Accel-..." from the response of a proxied server to a client. The proxy_hide_header directive sets additional fields that will not be passed.
(10) proxy_connect_timeout time; Defines a timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds. The default is 60s.
(11) proxy_read_timeout time; Defines a timeout for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.
(12) proxy_send_timeout time; Sets a timeout for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request. If the proxied server does not receive anything within this time, the connection is closed.
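Pulling the proxy directives together, a hedged reverse-proxy sketch with header forwarding and response caching (the backend address, zone name and cache times are placeholders):
http {
    proxy_cache_path /var/cache/nginx/proxy_cache levels=1:1:1 keys_zone=pxycache:20m max_size=1g;
    server {
        listen 80;
        server_name www.example.com;
        location / {
            proxy_pass http://10.1.0.10:8080;                             # no uri part: the client uri is passed as-is
            proxy_set_header X-Real-IP $remote_addr;                      # real client address for the backend
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # append to the forwarding chain
            proxy_cache pxycache;                                         # use the zone defined above
            proxy_cache_key $request_uri;
            proxy_cache_valid 200 302 10m;
            proxy_cache_valid any 1m;
            proxy_cache_use_stale error timeout http_502;                 # serve stale entries when the backend fails
            proxy_connect_timeout 30s;
        }
    }
}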
15.ngx_http_headers_module module:
Adds custom headers to the response message sent to the client, or modifies the value of specified headers.
(1) add_header name value [always]; adds a custom header, e.g.:
add_header X-Via $server_addr;
add_header X-Accel $server_name;
(2) expires [modified] time;
expires epoch | max | off;
used to define the value of the Expires or Cache-Control header.
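For instance, static assets are often given a cache lifetime via expires while a custom header records which server answered (a hedged sketch; the header name and the 30-day lifetime are illustrative):
location ~* \.(jpg|png|css|js)$ {
    add_header X-Via $server_addr;    # which server/proxy produced the response
    expires 30d;                      # sets Expires and a matching Cache-Control: max-age
}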
16.ngx_http_fastcgi_module module:
(1) fastcgi_pass address; address is the address of the fastcgi server; contexts: location, if in location;
e.g. http://www.ilinux.io/admin/index.php --> /admin/index.php (uri) --> /data/application/admin/index.php
(2) fastcgi_index name; the default home page resource of the fastcgi application;
(3) fastcgi_param parameter value [if_not_empty]; Sets a parameter that should be passed to the FastCGI server. The value can contain text, variables, and their combination.
Configuration example 1 (prerequisite: configure the fpm server and mariadb-server services):
location ~* \.php$ {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
    include fastcgi_params;
}
Configuration example 2 (get fpm server status information through /pm_status and /ping):
location ~* ^/(pm_status|ping)$ {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
}
(4) fastcgi_cache_path path [levels=levels] [use_temp_path=on | off] keys_zone=name:size [inactive=time] [max_size=size] [manager_files=number] [manager_sleep=time] [manager_threshold=time] [loader_files=number] [loader_sleep=time] [loader_threshold=time] [purger=on | off] [purger_files=number] [purger_sleep=time] [purger_threshold=time];
defines a fastcgi cache; the cache lives in the file system on disk, at the location given by path;
levels=levels: the number of cache-directory levels and the length of the directory name at each level, e.g. levels=ONE:TWO:THREE, such as levels=1:2:2;
keys_zone=name:size: name and size of the shared memory zone that maps the keys;
inactive=time: inactivity period;
max_size=size: upper limit of the disk space used for cached data;
(5) fastcgi_cache zone | off; use the named cache zone to cache data; contexts: http, server, location;
(6) fastcgi_cache_key string; defines the string used as the key of cache entries;
(7) fastcgi_cache_methods GET | HEAD | POST ...; which request methods to cache;
(8) fastcgi_cache_min_uses number; a cache entry must be accessed at least this many times within the inactive period before it is considered an active entry;
(9) fastcgi_cache_valid [code ...] time; cache duration for different response codes; example:
http {
    ...
    fastcgi_cache_path /var/cache/nginx/fastcgi_cache levels=1:2:1 keys_zone=fcgi:20m inactive=120s;
    ...
    server {
        ...
        location ~* \.php$ {
            ...
            fastcgi_cache fcgi;
            fastcgi_cache_key $request_uri;
            fastcgi_cache_valid 200 302 10m;
            fastcgi_cache_valid 301 1h;
            fastcgi_cache_valid any 1m;
            ...
        }
        ...
    }
}
(10) fastcgi_keep_conn on | off; By default, a FastCGI server will close a connection right after sending the response. However, when this directive is set to the value on, nginx will instruct a FastCGI server to keep connections open.
17.ngx_http_upstream_module module:
(1) upstream name {...}; defines a backend server group, introducing a new context; Context: http;
upstream httpdsrvs {
    server ...;
    server ...;
    ...
}
(2) server address [parameters]; defines a server member and its parameters inside the upstream context; Context: upstream;
address formats: unix:/PATH/TO/SOME_SOCK_FILE, IP[:PORT], HOSTNAME[:PORT]
parameters:
weight=number: weight, default is 1;
max_fails=number: maximum number of failed attempts; the server is marked unavailable once the number given here is exceeded;
fail_timeout=time: the period for which the server is marked unavailable;
max_conns=number: maximum number of concurrent connections to this server;
backup: marks the server as a "backup", i.e. it is used only when all other servers are unavailable;
down: marks the server as "unavailable";
(3) least_conn; least-connections scheduling algorithm; when the servers have different weights it behaves like wlc;
(4) ip_hash; source-address hashing scheduling method;
(5) hash key [consistent]; schedules requests based on a hash table of the specified key; the key can be literal text, variables, or a combination of both. Purpose: classify requests so that requests of the same kind are sent to the same upstream server. If the consistent parameter is specified the ketama consistent hashing method will be used instead.
Examples: hash $request_uri consistent; (improves the hit ratio) hash $remote_addr;
(6) keepalive connections; the number of idle persistent connections to the backend service kept open by each worker process.
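A hedged sketch tying an upstream group to proxy_pass (the backend addresses and weights are placeholders):
http {
    upstream httpdsrvs {
        server 192.168.10.11:80 weight=2 max_fails=3 fail_timeout=30s;  # preferred backend
        server 192.168.10.12:80;                                         # weight defaults to 1
        server 192.168.10.13:80 backup;                                  # used only when the others are down
        keepalive 32;                                                    # idle connections kept per worker
    }
    server {
        listen 80;
        location / {
            proxy_pass http://httpdsrvs;    # requests are scheduled across the group
        }
    }
}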
18.ngx_stream_core_module module:
(1) proxy_pass address;
(2) proxy_timeout timeout; default is 10m; the timeout between two successive read or write operations on the established connection;
(3) proxy_connect_timeout time; sets the timeout for nginx to establish a connection with the proxied server; default is 60s;
Example: stream {...} defines stream-related services; Context: main
stream {
    upstream sshsrvs {
        server 192.168.22.2:22;
        server 192.168.22.3:22;
        least_conn;
    }
    server {
        listen 10.1.0.6:22022;
        proxy_pass sshsrvs;
    }
}