2025-01-19 Update From: SLTechnology News & Howtos
This article walks through the commonly used Nginx configuration parameters with annotated descriptions, followed by a working load-balancing example. I hope you get something out of it; let's discuss it together.
PS: I recently read a chapter on Nginx whose introduction was very detailed, so I have excerpted the frequently used configuration parameters and copied down my own load-balancing demonstration for later review.
Detailed description of Nginx configuration parameters
# define the user and group that the Nginx worker processes run as
user www www;
#
# number of nginx worker processes; recommended to equal the total number of CPU cores
worker_processes 8;
#
# global error log: path and level, one of [debug | info | notice | warn | error | crit]
error_log /var/log/nginx/error.log info;
#
# pid file
pid /var/run/nginx.pid;
#
# maximum number of file descriptors a worker process may open. The theoretical value is the system's maximum number of open files (`ulimit -n`) divided by the number of worker processes, but nginx does not distribute requests evenly, so it is recommended to keep this consistent with the value of `ulimit -n`.
worker_rlimit_nofile 65535;
#
# working mode and connection limits
events
{
    # event model, one of [kqueue | rtsig | epoll | /dev/poll | select | poll]. The epoll model is the high-performance network I/O model in Linux kernels 2.6 and later; on FreeBSD, use the kqueue model.
    use epoll;
    # maximum number of connections per worker process (total maximum connections = worker_connections * worker_processes)
    worker_connections 65535;
}
#
# http server configuration
http
{
    include mime.types; # file extension to MIME type mapping table
    default_type application/octet-stream; # default MIME type
    # charset utf-8; # default encoding
    server_names_hash_bucket_size 128; # hash table size for server names
    client_header_buffer_size 32k; # buffer size for client request headers
    large_client_header_buffers 4 64k; # buffers for large client request headers
    client_max_body_size 8m; # maximum size of a client request body (upload size limit)
    #
    # enable directory listings; suitable for a download server, off by default.
    autoindex on; # show directory listings
    autoindex_exact_size on; # show exact file sizes; default on. When set to off, sizes are shown approximately in kB, MB or GB instead of bytes.
    autoindex_localtime on; # show file times in server local time; default off, which shows GMT time.
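As a minimal sketch of how the three autoindex directives above fit together in a download server, assuming a hypothetical server name and root path not taken from the original article:

```nginx
# Sketch of a download directory using autoindex; server_name and paths are placeholders.
server {
    listen      80;
    server_name downloads.example.com;

    location /files/ {
        root                 /data;  # a request for /files/x is served from /data/files/x
        autoindex            on;     # show the directory listing
        autoindex_exact_size off;    # human-readable sizes (kB / MB / GB)
        autoindex_localtime  on;     # show file times in server local time
    }
}
```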
#
    sendfile on; # enable efficient file transfer. The sendfile directive specifies whether nginx calls the sendfile() function to output files. Set it to on for common applications; for disk-IO-heavy applications such as downloads it can be set to off, to balance disk and network I/O and reduce system load. Note: if images do not display properly, change this to off.
    tcp_nopush on; # send response headers in one packet to reduce network congestion
    tcp_nodelay on; # disable Nagle's algorithm to reduce latency for small packets
    #
    keepalive_timeout 120; # timeout (in seconds) for keeping client connections alive; the server closes the connection after it expires
#
    # FastCGI parameters for improving site performance: reduce resource consumption and increase access speed. The names below can be understood literally.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
#
    # gzip module settings
    gzip on; # enable gzip compressed output
    gzip_min_length 1k; # minimum response size to compress, taken from the Content-Length header. The default is 0, which compresses everything regardless of size. Values above 1k are recommended; compressing responses smaller than 1k may increase their size.
    gzip_buffers 4 16k; # request 4 buffers of 16k each to hold the compressed result stream. By default, nginx requests memory of the same size as the original data to store the gzip result.
    gzip_http_version 1.1; # minimum HTTP version to compress (default 1.1; most browsers already support gzip decompression. If the front end is squid 2.5, use 1.0)
    gzip_comp_level 2; # compression level: 1 has the smallest ratio and is the fastest; 9 has the largest ratio, consumes the most CPU and is the slowest, but because the ratio is largest the packets are smallest and transfer fastest.
    gzip_types text/plain application/x-javascript text/css application/xml;
    # compression types; text/html is already included by default, so it need not be listed. Listing it still works, but produces a warning.
    gzip_vary on; # lets front-end cache servers cache gzip-compressed pages, e.g. squid caching data compressed by nginx
#
    # limit the number of connections per IP (old syntax, commented out)
    # limit_zone crawler $binary_remote_addr 10m;
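Note that limit_zone is the old syntax: it was superseded by limit_conn_zone in Nginx 1.1.8 and later removed. A sketch of the current-syntax equivalent, with an illustrative zone name and limit:

```nginx
# Current-syntax sketch of per-IP connection limiting; zone name and limit are illustrative.
http {
    # 10m of shared memory keyed by the binary client address
    limit_conn_zone $binary_remote_addr zone=crawler:10m;

    server {
        location /download/ {
            # allow at most 2 concurrent connections per client IP in this location
            limit_conn crawler 2;
        }
    }
}
```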
#
## upstream load balancing, scheduling algorithms (full example below) ##
#
    # virtual host configuration
    server
    {
        # listening port
        listen 80;
        # there can be multiple domain names, separated by spaces
        server_name wangying.sinaapp.com;
        index index.html index.htm index.php;
        root /data/www/;
        location ~ .*\.(php|php5)?$
        {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }
        # image cache time settings
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 10d;
        }
        # JS and CSS cache time settings
        location ~ .*\.(js|css)?$ {
            expires 1h;
        }
        # log format settings
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" $http_x_forwarded_for';
        # access log for this virtual host
        access_log /var/log/nginx/access.log access;
        #
        # address for viewing Nginx status. The stub_status module reports Nginx's working state since its last start. It is not a core module; it must be enabled explicitly when compiling Nginx before it can be used.
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd;
            # the htpasswd file can be generated with the htpasswd tool shipped with apache.
        }
}
}
Load balancing across multiple servers with Nginx
Nginx load-balancing server:
IP: 192.168.1.1
Web server list:
Web1: 192.168.1.2
Web2: 192.168.1.3
Goal: when users access the 192.168.1.1 server, Nginx load-balances their requests to the Web1 and Web2 servers
http {
    ## upstream load balancing, scheduling algorithms ##
    # Scheduling algorithm 1: round robin (default). Each request is assigned to a different
    # back-end server in turn; if a back-end server goes down it is removed automatically,
    # so user access is not affected.
    upstream webhost {
        server 192.168.1.2:80;
        server 192.168.1.3:80;
    }
    # Scheduling algorithm 2: weight. Weights can be set according to machine capacity;
    # the higher the weight, the greater the probability of being chosen.
    upstream webhost {
        server 192.168.1.2:80 weight=2;
        server 192.168.1.3:80 weight=3;
    }
    # Scheduling algorithm 3: ip_hash. Each request is assigned according to the hash of the
    # client IP, so visitors from the same IP always reach the same back-end server, which
    # effectively solves the session-sharing problem of dynamic pages.
    upstream webhost {
        ip_hash;
        server 192.168.1.2:80;
        server 192.168.1.3:80;
    }
    # Scheduling algorithm 4: url_hash (requires a third-party module). Requests are assigned
    # according to the hash of the requested URL, so each URL is directed to the same back-end
    # server, which further improves the efficiency of back-end caches. Nginx itself does not
    # support url_hash; install Nginx's hash package to use this algorithm.
    upstream webhost {
        server 192.168.1.2:80;
        server 192.168.1.3:80;
        hash $request_uri;
    }
    # Scheduling algorithm 5: fair (requires a third-party module). A smarter algorithm than
    # the ones above: it balances load intelligently according to page size and load time,
    # i.e. requests are assigned according to back-end response time. Nginx itself does not
    # support fair; download Nginx's upstream_fair module to use this algorithm.

    ## Virtual host configuration (using scheduling algorithm 3: ip_hash) ##
    server {
        listen 80;
        server_name wangying.sinaapp.com;
        # enable reverse proxying for "/"
        location / {
            proxy_pass http://webhost;
            proxy_redirect off;
            # let back-end web servers obtain the user's real IP via X-Real-IP / X-Forwarded-For
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # The following reverse-proxy settings are optional.
            proxy_set_header Host $host;
            client_max_body_size 10m;       # maximum single file size allowed in a client request
            client_body_buffer_size 128k;   # maximum bytes of a client request buffered by the proxy
            proxy_connect_timeout 90;       # timeout for nginx to connect to the back-end server
            proxy_send_timeout 90;          # time for the back-end server to return data (send timeout)
            proxy_read_timeout 90;          # response time of the back-end server (receive timeout)
            proxy_buffer_size 4k;           # buffer size of the proxy server (nginx)
            proxy_buffers 4 32k;            # proxy buffers; set for pages below 32k on average
            proxy_busy_buffers_size 64k;    # buffer size under heavy load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k; # temp file size; responses larger than this are passed
                                            # from the upstream server via temp files
        }
    }
}
Testing
Domain name: wangying.sinaapp.com
The domain name resolves to 192.168.1.1
When clients visit the site, Nginx balances the load to the Web1 and Web2 servers according to the ip_hash of the visiting client's address.
Virtual host configuration
Dynamic/static separation and multi-port reverse proxying on a single local server
Nginx load-balancing server:
IP: 192.168.1.1:80
Web server list (same machine):
Web1: 192.168.1.1:8080
Web2: 192.168.1.1:8081
Web3: 192.168.1.1:8082
Goal:
Users access http://wangying.sinaapp.com and their load is balanced to ports 8080, 8081 and 8082 of the local server
http {
    # Because load is balanced to the local ports 8080, 8081 and 8082, a server block on
    # port 8080 is added for PHP script parsing.
    server {
        listen 8080;
        server_name wangying.sinaapp.com;
        root /mnt/hgfs/vmhtdocs/fastdfs/;
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
        # As you can see from port 80 below, 8080 is only responsible for parsing PHP
        # dynamic programs, so no static-file configuration is needed here.
    }
    server {
        listen 8081;
        server_name wangying.sinaapp.com;
        root /mnt/hgfs/vmhtdocs/fastdfs/;
        index index.php index.html index.htm;
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }
    # Port 8082 can be modeled on the server block above; just change the listen directive.

    ## Local multi-port load-balancing configuration ##
    # Because this is a single server, its private IP can stand in for the host name.
    # The upstream name is just an identifier; it can be a word or a domain name, and it
    # must match the proxy_pass http://webhost below.
    upstream webhost {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }
    # Local port 80 accepts requests and load-balances them.
    server {
        listen 80;
        server_name wangying.sinaapp.com;
        # Local dynamic/static separation reverse-proxy configuration:
        # all PHP pages are handed to the local fastcgi back ends.
        location ~ \.php$ {
            proxy_pass http://webhost;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        # All static files are read directly by nginx.
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 10d;
        }
        # JS and CSS cache time settings
        location ~ .*\.(js|css)?$ {
            expires 1h;
        }
    }
}
Here are some additions from other netizens
I. Main configuration section
1. Configuration necessary for normal operation
# run user and group; the group may be omitted
user nginx nginx;
# pid file of the nginx daemon
pid path/to/nginx.pid;
# maximum number of file handles that can be opened by all worker processes
worker_rlimit_nofile 100000;
2. Performance-related configuration
# number of worker processes, usually slightly less than the number of physical CPU cores; auto lets nginx choose automatically
worker_processes auto;
# CPU affinity binding (it cannot avoid CPU context switches entirely)
# advantage: improves the cache hit rate
# context switches: unnecessary CPU consumption
# http://blog.chinaunix.net/uid-20662363-id-2953741.html
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
# timer resolution (after a request arrives, nginx obtains the system time in order to log it; under high concurrency the time may be fetched many times per second)
# lowering this value reduces the number of gettimeofday() system calls
timer_resolution 100ms;
# nice value of the worker processes: the smaller the number, the higher the priority
# nice value range: -20 to 19
# corresponding priority range: 100 to 139
worker_priority number;
II. Event-related configuration
events {
    # load-balancing lock used by the master to dispatch user requests to worker processes: on means multiple workers take turns responding to new requests, serialized
    accept_mutex {off | on};
    # delay before retrying the lock; default is 500ms
    accept_mutex_delay time;
    # path of the lock file used by accept_mutex
    lock_file file;
    # event model to use; it is recommended to let Nginx choose
    use [epoll | rtsig | select | poll];
    # maximum number of concurrent connections per worker process; total = worker_processes * worker_connections
    worker_connections 2048;
    # tell nginx to accept as many connections as possible after a new-connection notification
    multi_accept on;
}
III. Debugging and locating problems
# whether to run nginx as a daemon; set to off when debugging
daemon {on | off};
# whether to run with the master/worker model; can be set to off when debugging
master_process {on | off};
# error_log location and level; to use debug, nginx must be compiled with the --with-debug option
error_log file | stderr | syslog:server=address[,parameter=value] | memory:size [debug | info | notice | warn | error | crit | alert | emerg];
Summary: parameters that often need adjusting: worker_processes, worker_connections, worker_cpu_affinity, worker_priority
How to make configuration changes take effect:
nginx -s reload; other signals (stop, quit, reopen) can be found with nginx -h
IV. Configuring nginx as a web server
http {}: provided by the ngx_http_core_module module
Configuration framework:
http {
    upstream {
        ...
    }
    server {
        location URL {
            root "/path/to/somedir";
            ...
        } # similar to <Location> in httpd: defines the mapping between a URL and the local file system
        location URL {
            if ... {
                ...
            }
        }
    } # each server is similar to a <VirtualHost> in httpd
    server {
        ...
    }
}
Note: http-related directives may only appear in the http, server, location, upstream and if contexts, and some directives apply only to a subset of these five contexts.
http {
    # show or hide the nginx version number in error pages
    server_tokens on;
    #! server_tag on
    #! server_info on
    # optimize disk IO: specifies whether nginx calls the sendfile() function to output files. Set to on for general applications; for downloads and other disk-IO-heavy applications it can be set to off
    sendfile on;
    # send all header files in one packet instead of one by one
    tcp_nopush on;
    # timeout of persistent connections; default is 75s
    keepalive_timeout 30;
    # maximum number of resources that may be requested over one persistent connection
    keepalive_requests 20;
    # disable persistent connections for the given User-Agent types
    keepalive_disable [msie6 | safari | none];
    # use the TCP_NODELAY option on persistent connections: do not buffer data, send it segment by segment without merging small packets
    tcp_nodelay on;
    # timeout for reading the headers of an http request
    client_header_timeout #
    # timeout for reading the body of an http request
    client_body_timeout #
    # timeout for sending the response
    send_timeout #
    # shared-memory zone for tracking connections per key; 5m means 5 megabytes
    limit_conn_zone $binary_remote_addr zone=addr:5m;
    # maximum number of connections for a given key. Here the key is addr and the value is 100, meaning each IP address is allowed at most 100 simultaneous connections
    limit_conn addr 100;
    # include inserts the contents of another file into the current file
    include mime.types;
    # default MIME type for files
    default_type text/html;
    # default character set
    charset UTF-8;
    # send data gzip-compressed: reduces the amount of data sent but increases request processing time and CPU time, so it is a tradeoff
    gzip on;
    # add a Vary header for proxy servers: some browsers support compression and some do not, so whether to compress is decided from the client's HTTP headers
    gzip_vary on;
    # let nginx look for resources already pre-compressed with gzip before compressing them
    #! gzip_static on
    # disable gzip for the specified clients
    gzip_disable "MSIE [1-6]\.";
    # allow or disable compression based on the request and the response; any compresses all proxied requests
    gzip_proxied any;
    # minimum data size to enable compression; requests smaller than 10240 bytes are not compressed, as compressing them would slow them down
    gzip_min_length 10240;
    # compression level, between 1 and 9; 9 is the slowest with the highest compression ratio
    gzip_comp_level 2;
    # data formats to compress
    gzip_types text/plain text/css text/xml text/javascript application/json application/x-javascript application/xml application/xml+rss;
    # maximum number of cached open-file entries; entries not requested for 20s are removed from the cache
    open_file_cache max=100000 inactive=20s;
    # how often to re-validate cached entries
    open_file_cache_valid 60s;
    # minimum number of accesses before a file is cached; only files accessed more than 5 times are cached
    open_file_cache_min_uses 5;
    # whether to cache errors encountered when looking up a file
    open_file_cache_errors on;
    # maximum body size the client is allowed to request
    client_max_body_size 8m;
    # buffer size for client request headers
    client_header_buffer_size 32k;
    # include all configuration files under /etc/nginx/vhosts; with many host names, one file per host makes management easier
    include /etc/nginx/vhosts/*;
}
}
V. Virtual host configuration module
# list of load-balanced servers (I usually configure the upstream in the configuration file of the corresponding virtual host)
upstream fansik {
    # back-end server selection rule
    ip_hash;
    # the weight parameter sets the weight; the higher the weight, the greater the probability of being chosen
    server 192.168.1.101:8081 weight=5;
    server 192.168.1.102:8081 max_fails=3 fail_timeout=10s;
}
server {
    # listen on port 80
    listen 80;
    # define the host name; there can be several, and names may use regular expressions (~) or wildcards
    # (1) exact match is checked first
    # (2) left wildcard match: *.fansik.com
    # (3) right wildcard match: mail.*
    # (4) regular expression match: e.g. ~^.*\.fansik\.com$
    # (5) default_server
    server_name www.jb51.net;
    # access log for this virtual host
    access_log logs/www.jb51.net.access.log;
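As a sketch of the server_name lookup order listed above, assuming hypothetical host names: for a request with Host: mail.fansik.com, nginx checks in this order: exact name, longest leftmost wildcard, longest rightmost wildcard, regular expressions in order of appearance, then the default server.

```nginx
# Illustrative server_name matching order; host names are placeholders.
server { listen 80; server_name mail.fansik.com; }     # 1. exact match
server { listen 80; server_name *.fansik.com; }        # 2. left wildcard
server { listen 80; server_name mail.*; }              # 3. right wildcard
server { listen 80; server_name ~^.*\.fansik\.com$; }  # 4. regular expression
server { listen 80 default_server; server_name _; }    # 5. fallback default server
```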
location [ = | ~ | ~* | ^~ ] uri { ... }
Function: matches the URI requested by the user against the defined locations; the request is then handled by the configuration in the matching location block
=: exact match
~: regular expression match, case-sensitive
~*: regular expression match, case-insensitive
^~: prefix match on the first part of the URI; regular expressions are not checked afterwards
!~: case-sensitive non-matching regular expression
!~*: case-insensitive non-matching regular expression
/: universal match; any request will match
location / {
    # default site root of the server
    root html;
    # names of the index files for the home page
    index index.html index.htm;
    # include the reverse-proxy configuration; the configuration file directory depends on the compile parameters
    # if --conf-path=/etc/nginx/nginx.conf was given at compile time to set the configuration file path, put proxy.conf in the /etc/nginx/ directory
    # if no configuration file path was given, put proxy.conf in nginx's conf directory
    include proxy.conf;
    # back-end load-balanced server group
    proxy_pass http://fansik;
}
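As a sketch of how the match operators above interact, with illustrative URIs and paths not taken from the original configuration:

```nginx
# Sketch of location modifier precedence; URIs and paths are illustrative.
server {
    listen 80;
    # = wins for the exact URI /, before any prefix or regex check
    location = / { return 200 "exact"; }
    # ^~ stops the search for anything under /static/; regex locations are skipped
    location ^~ /static/ { root /data; }
    # ~* matches image extensions case-insensitively, unless ^~ matched first
    location ~* \.(gif|jpg|png)$ { expires 10d; }
    # plain prefix match: used only when nothing more specific matches
    location / { root html; }
}
```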
The difference between alias path and root path
location /images/ {
    root "/data/images";
}
# request: http://www.jb51.net/images/a.jpg
With root, the matched location prefix is kept when building the file path, so this request is served from /data/images/images/a.jpg; with alias, the matched prefix is replaced by the alias path instead.
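Side by side, the two directives resolve the same request differently (only one of these blocks would appear in a real server; paths are illustrative):

```nginx
# root appends the full URI to the path:
# /images/a.jpg is served from /data/images/images/a.jpg
location /images/ {
    root /data/images;
}

# alias replaces the matched prefix with the alias path:
# /images/a.jpg is served from /data/images/a.jpg
location /images/ {
    alias /data/images/;
}
```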