Nginx Configuration File nginx.conf Explained

2025-01-18 Update From: SLTechnology News&Howtos


This article walks through the Nginx configuration file nginx.conf in detail. The approach is simple, fast, and practical, so let's go through nginx.conf directive by directive.

user nginx;
# user the worker processes run as

worker_processes 8;
# number of worker processes; tune to the hardware, greater than or equal to the number of CPU cores

error_log logs/nginx_error.log crit;
# error log

pid logs/nginx.pid;
# where the pid file is placed

worker_rlimit_nofile 204800;
# maximum number of file descriptors a worker process may open

This directive sets the maximum number of file descriptors an nginx worker process may open. In theory the value should be the system limit on open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep the value consistent with ulimit -n.

Under the Linux 2.6 kernel the open-file limit is 65535, so worker_rlimit_nofile should be set to 65535 accordingly.

This matters because nginx does not balance requests across processes perfectly: if you set this to 10240 and total concurrency reaches 30,000-40,000, an individual process may exceed 10240 descriptors and a 502 error will be returned.
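A minimal sketch of the relationship described above (the values assume `ulimit -n` reports 65535; adjust to your system):

```nginx
# top-level context of nginx.conf
worker_processes 8;
# keep worker_rlimit_nofile consistent with `ulimit -n` rather than
# dividing it by worker_processes, since requests are not spread evenly
worker_rlimit_nofile 65535;
```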

events
{
use epoll;
# use the epoll I/O event model

Supplementary note:

Similar to apache, nginx has different event models for different operating systems.

A) Standard event model

Select and poll belong to the standard event model. If there is no more effective method in the current system, nginx will choose select or poll.

B) Efficient event model

Kqueue: used in FreeBSD 4.1, OpenBSD 2.9, NetBSD 2.0 and MacOS X. Using kqueue on MacOS X systems with dual processors may cause the kernel to crash.

Epoll: used in Linux kernel version 2.6 and later.

/dev/poll: used in Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.

Eventport: used in Solaris 10. To prevent kernel crashes, it is necessary to install security patches.

worker_connections 204800;
# maximum number of connections per worker process; tune to the hardware together with worker_processes above. Make it as large as practical, but do not drive the CPU to 100%.

This is the maximum number of connections allowed per process; in theory the maximum number of connections for one nginx server is worker_processes * worker_connections.
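As a worked example of that formula (the figures are illustrative, not a recommendation):

```nginx
worker_processes 8;

events {
    worker_connections 204800;
}

# theoretical ceiling: 8 workers * 204800 connections = 1,638,400
# concurrent connections for the whole server
```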

keepalive_timeout 60;
# keepalive timeout in seconds

client_header_buffer_size 4k;
The buffer size for client request headers can be set according to your system's page size. A request header will generally not exceed 1k, but since the system page size is usually at least 1k, it is set to the page size here.

The page size can be obtained with the command getconf PAGESIZE:

[root@web001 ~]# getconf PAGESIZE
4096

However, there are cases where client_header_buffer_size needs to exceed 4k; in any case, the value must be an integral multiple of the system page size.

open_file_cache max=65535 inactive=60s;
This enables the cache for open files, which is off by default. max specifies the number of cache entries; it is recommended to match the limit on open files. inactive is how long a file can go unrequested before its cache entry is removed.

open_file_cache_valid 80s;
This sets how often the validity of cached entries is checked.

open_file_cache_min_uses 1;
The minimum number of times a file must be used within the inactive window of the open_file_cache directive. If that count is reached, the file descriptor stays open in the cache. As in the example above, if a file is not used even once within the inactive time, it is removed.

}

# configure the http server and use its reverse proxy function to provide load balancing support

http
{
include mime.types;
# set MIME types; the types are defined by the mime.types file

default_type application/octet-stream;

log_format main '$host $status [$time_local] $remote_addr [$time_local] $request_uri '
                '"$http_referer" "$http_user_agent" "$http_x_forwarded_for" '
                '$bytes_sent $request_time $sent_http_x_cache_hit';

log_format log404 '$status [$time_local] $remote_addr $host$request_uri $sent_http_location';

$remote_addr and $http_x_forwarded_for: record the client's IP address

$remote_user: records the client user name

$time_local: records the access time and time zone

$request: records the request's URL and HTTP protocol

$status: records the request status; success is 200

$body_bytes_sent: records the size of the response body sent to the client

$http_referer: records the page the request was linked from

$http_user_agent: records information about the client's browser

A web server is usually placed behind a reverse proxy, so the client's real IP address cannot be obtained directly: the address in $remote_addr is that of the reverse proxy server. The reverse proxy can, however, add an x_forwarded_for header to the forwarded request to record the original client's IP address and the address the client originally requested.
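A minimal sketch of that idea, assuming a hypothetical upstream group named `backend`:

```nginx
location / {
    proxy_pass http://backend;  # "backend" is an assumed upstream name
    # pass the original client address through to the real server
    proxy_set_header X-Real-IP $remote_addr;
    # append the client IP to any existing X-Forwarded-For chain
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```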

access_log /dev/null;
# after defining a log format with the log_format directive, use the access_log directive to specify the log file's storage path
# access_log /usr/local/nginx/logs/access_log main;

server_names_hash_bucket_size 128;
# The hash table holding server names is controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size is always equal to the hash table size and is a multiple of a processor's cache-line size; this reduces memory accesses and speeds up hash key lookups. If the bucket size equals one processor cache line, the worst-case number of memory lookups for a key is two: one to determine the address of the bucket and a second to find the key inside it. Therefore, if nginx reports that hash max size or hash bucket size needs to be increased, increase server_names_hash_max_size first.

client_header_buffer_size 4k;
The buffer size for client request headers, which can be set according to your system's page size. A request header will generally not exceed 1k, but since the system page size is usually at least 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.

large_client_header_buffers 8 128k;
Buffers for large client request headers.

By default nginx reads header values with the client_header_buffer_size buffer; if a header is too large, it falls back to large_client_header_buffers.

If these buffers are set too small for the HTTP headers (for example, an oversized Cookie), nginx returns a 400 error (Bad Request).

If the request line exceeds one of these buffers, nginx returns a 414 error (Request-URI Too Large).

The longest HTTP header line nginx will accept must fit into one of these buffers, otherwise a 400 error (Bad Request) is returned.

open_file_cache max=102400;
Contexts: http, server, location. Specifies whether caching is enabled. If enabled, the following information is recorded per file: the open file descriptor, size information, and modification time; existing directory information; and errors encountered while looking up a file (file does not exist, cannot be read, and so on; see the open_file_cache_errors directive).

max: the maximum number of cache entries; on overflow, the least recently used (LRU) entries are removed.

Example: open_file_cache max=1000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on;

open_file_cache_errors
Syntax: open_file_cache_errors on | off  Default: open_file_cache_errors off  Contexts: http, server, location. Specifies whether file-lookup errors are cached.

open_file_cache_min_uses
Syntax: open_file_cache_min_uses number  Default: open_file_cache_min_uses 1  Contexts: http, server, location. Specifies the minimum number of times a file must be used within the inactive window of the open_file_cache directive; with a larger value, file descriptors stay open in the cache.

open_file_cache_valid
Syntax: open_file_cache_valid time  Default: open_file_cache_valid 60  Contexts: http, server, location. Specifies how often to check the validity of cached entries in open_file_cache.

client_max_body_size 300m;
Maximum size of files uploaded through nginx.

sendfile on;
# the sendfile directive specifies whether nginx calls the sendfile system call (zero-copy mode) to send out files

For ordinary applications it should be set to on.

For heavy disk-I/O applications such as file downloads, it can be set to off to balance disk and network I/O processing speed and reduce system load.
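For instance, a download area might turn sendfile off while the rest of the site keeps it on (the `/downloads/` path is hypothetical):

```nginx
sendfile on;  # default for ordinary requests

location /downloads/ {
    # heavy disk I/O: trade zero-copy speed for balanced disk/network load
    sendfile off;
}
```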

tcp_nopush on;
This option enables or disables the TCP_CORK socket option; it only takes effect when sendfile is used.

proxy_connect_timeout 90;
# timeout for establishing a connection to the backend server (handshake and waiting for a response)

proxy_read_timeout 180;
# after a successful connection, how long to wait for the backend server to respond; in effect, the time the request spends queued at the backend for processing (that is, the time the backend server takes to handle the request)

proxy_send_timeout 180;
# time allowed for the backend server to return data; that is, the backend server must transmit all the data within this time

proxy_buffer_size 256k;
# buffer size for the first part of the response read from the proxied server; this part usually contains a small response header. By default this value is the size of one buffer specified by the proxy_buffers directive, but it can be set smaller.

proxy_buffers 4 256k;
# number and size of buffers used to read the response from the proxied server. The default size is one page, 4k or 8k depending on the operating system.

proxy_busy_buffers_size 256k;

proxy_temp_file_write_size 256k;
# size of data written to proxy_temp_path at a time, to prevent a worker process from blocking too long while spooling files

proxy_temp_path /data0/proxy_temp_dir;
# the paths specified by proxy_temp_path and proxy_cache_path must be on the same partition

proxy_cache_path /data0/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;
# set the in-memory cache zone to 200MB, automatically evict content not accessed for one day, and cap the on-disk cache at 30GB

keepalive_timeout 120;
# keepalive timeout in seconds

tcp_nodelay on;

client_body_buffer_size 512k;
If you set this to a reasonably large value, such as 256k, submitting any image smaller than 256k works normally in both Firefox and IE. If you comment the directive out and use the default client_body_buffer_size, which is twice the operating system page size (8k or 16k), problems appear: whether with Firefox 4.0 or IE 8.0, submitting a somewhat large image of around 200k returns a 500 Internal Server Error.

proxy_intercept_errors on;
Makes nginx intercept responses with an HTTP status code of 400 or higher.

upstream img_relay {
server 127.0.0.1:8027;
server 127.0.0.1:8028;
server 127.0.0.1:8029;
hash $request_uri;
}

Nginx's upstream currently supports five allocation modes.

1. Round robin (default)

Each request is assigned to a different backend server in turn; if a backend server goes down, it is removed automatically.

2. weight

Specifies the polling probability. The weight is proportional to the access ratio and is used when backend server performance is uneven.

For example:

upstream bakend {
server 192.168.0.14 weight=10;
server 192.168.0.15 weight=10;
}

3. ip_hash

Each request is assigned according to the hash of the client IP, so each visitor consistently reaches the same backend server, which can solve session-affinity problems.

For example:

upstream bakend {
ip_hash;
server 192.168.0.14:88;
server 192.168.0.15:80;
}

4. fair (third party)

Requests are assigned according to the backend servers' response times, with shorter response times served first.

upstream backend {
server server1;
server server2;
fair;
}

5. url_hash (third party)

Requests are assigned according to the hash of the requested URL, so each URL is directed to the same backend server; this is most effective when the backend servers run caches.

Example: add a hash statement to the upstream; other parameters such as weight cannot be written in the server statements. hash_method is the hash algorithm used.

upstream backend {
server squid1:3128;
server squid2:3128;
hash $request_uri;
hash_method crc32;
}

Tips:

upstream bakend { # define the IP and state of the load-balanced devices
ip_hash;
server 127.0.0.1:9090 down;
server 127.0.0.1:8080 weight=2;
server 127.0.0.1:6060;
server 127.0.0.1:7070 backup;
}

In the server that needs load balancing, add:

proxy_pass http://bakend/;

The state of each device can be set to:

1. down: the server does not participate in the load for the time being.

2. weight: defaults to 1; the larger the weight, the larger the share of the load.

3. max_fails: the number of failed requests allowed, defaulting to 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.

4. fail_timeout: how long to pause the server after max_fails failures.

5. backup: requests go to the backup machine only when all other non-backup machines are down or busy, so this machine carries the lightest load.

nginx supports configuring multiple upstream groups at the same time, so that different servers can use different groups.

If client_body_in_file_only is set to on, the data from client POST requests is recorded into files, which can be used for debugging.

client_body_temp_path sets the directory for these files; up to three levels of subdirectories can be configured.
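A small sketch of those two directives together (the path is an assumption, not from the original configuration):

```nginx
# write each POST body to its own file for debugging
client_body_in_file_only on;
# spread the files over three levels of subdirectories
client_body_temp_path /spool/nginx/client_temp 1 2 3;
```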

location matches URLs; it can redirect or hand requests off to proxying and load balancing.

server
# configure a virtual host
{
listen 80;
# listening port

server_name image.***.com;
# access domain name

location ~* \.(mp3|exe)$ {
# load-balance requests for addresses ending in "mp3" or "exe"
proxy_pass http://img_relay$request_uri;
# set the proxied server's port or socket, and the URL
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# the three lines above pass the user information received by the proxy server on to the real server
}

location /face {
if ($http_user_agent ~* "xnp") {
rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;
}
proxy_pass http://img_relay$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
error_page 404 502 = @fetch;
}

location @fetch {
access_log /data/logs/face.log log404;
# access log for this server
rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;
}

location /image {
if ($http_user_agent ~* "xnp") {
rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;
}
proxy_pass http://img_relay$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
error_page 404 502 = @fetch;
}

location @fetch {
access_log /data/logs/image.log log404;
rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;
}
}

server
{
listen 80;
server_name *.com *.cn;

location ~* \.(mp3|exe)$ {
proxy_pass http://img_relay$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

location / {
if ($http_user_agent ~* "xnp") {
rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;
}
proxy_pass http://img_relay$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# error_page 404 http://i1.***img.com/help/noimg.gif;
error_page 404 502 = @fetch;
}

location @fetch {
access_log /data/logs/baijiaqi.log log404;
rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;
}

# access_log off;
}

server
{
listen 80;
server_name *.***img.com;

location ~* \.(mp3|exe)$ {
proxy_pass http://img_relay$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

location / {
if ($http_user_agent ~* "xnp") {
rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif;
}
proxy_pass http://img_relay$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# error_page 404 http://i1.***img.com/help/noimg.gif;
error_page 404 = @fetch;
}

# access_log off;

location @fetch {
access_log /data/logs/baijiaqi.log log404;
rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;
}
}

server
{
listen 8080;
server_name ngx-ha.***img.com;

location / {
stub_status on;
access_log off;
}
}

server {
listen 80;
server_name imgsrc1.***.net;
root html;
}

server {
listen 80;
server_name *.com w.***.com;
# access_log /usr/local/nginx/logs/access_log main;

location / {
rewrite ^(.*)$ http://www.***.com/;
}
}

server {
listen 80;
server_name *.com w.*.com;
# access_log /usr/local/nginx/logs/access_log main;

location / {
rewrite ^(.*)$ http://www.*******.com/;
}
}

server {
listen 80;
server_name *.com;
# access_log /usr/local/nginx/logs/access_log main;

location / {
rewrite ^(.*)$ http://www.******.com/;
}
}

location /NginxStatus {
stub_status on;
access_log on;
auth_basic "NginxStatus";
auth_basic_user_file conf/htpasswd;
}
# location for viewing the status of Nginx

location ~ /\.ht {
deny all;
}
# prohibit access to .ht* files
}

Comments: variables

The ngx_http_core_module module supports built-in variables whose names match Apache's built-in variables.

First are the variables describing the lines in the client request header, such as $http_user_agent, $http_cookie, and so on.

There are also some other variables.

$args: the parameters in the request line

$content_length: the value of the request's "Content-Length" header

$content_type: the value of the request's "Content-Type" header

$document_root: the value of the root directive for the current request

$document_uri: the same as $uri

$host: the value of the "Host" request header, or the name of the server the request arrived at if there is no Host header

$limit_rate: the allowed connection rate limit

$request_method: the request method, usually "GET" or "POST"

$remote_addr: client IP

$remote_port: client port

$remote_user: the user name authenticated by ngx_http_auth_basic_module

$request_filename: the path of the file currently requested, a combination of root or alias and the request URI

$request_body_file

$request_uri: the complete initial URI including parameters

$query_string: the same as $args

$scheme: the HTTP scheme (http, https), evaluated as needed, for example:

rewrite ^(.+)$ $scheme://example.com$1 redirect;

$server_protocol: the request's protocol, usually "HTTP/1.0" or "HTTP/1.1"

$server_addr: the IP of the server the request arrived at; obtaining this value normally requires a system call. To avoid the system call, specify the IP in the listen directive and use the bind parameter.

$server_name: the name of the server the request arrived at

$server_port: the port of the server the request arrived at

$uri: the URI of the current request, which can differ from the initial value, for example after an internal redirect or when using index files
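As an illustration, several of the variables above can be combined into a custom log format (this format is an example, not part of the original configuration):

```nginx
log_format proxied '$remote_addr - $remote_user [$time_local] '
                   '"$request" $status $body_bytes_sent '
                   '"$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
```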

At this point, you should have a deeper understanding of the Nginx configuration file nginx.conf. Try it out in practice and keep learning.
