Detailed explanation of nginx.conf file of online nginx_cache server

2025-01-18 Update, from: SLTechnology News&Howtos (shulou)


Shulou (Shulou.com) 06/03 report

# user and group to run the worker processes as

user www www;

# number of worker processes (generally twice the total number of CPU cores; for example, two quad-core CPUs give 8 cores in total)

worker_processes 4;

# path of the error log; the logging level is one of: debug | info | notice | warn | error | crit

error_log /usr/local/nginx/logs/nginx_error.log crit;

# path of the pid file

pid /usr/local/nginx/logs/nginx.pid;

# maximum number of file descriptors a worker process may open

# Working mode and connection limits: the theoretical value is the system-wide open-file limit (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly across workers, so it is best to keep this value equal to ulimit -n.

# On a Linux 2.6 kernel with the open-file limit set to 65535, enter 65535 here accordingly. If you instead entered, say, 10240, and total concurrency reached 30,000-40,000, a single process could exceed 10240 open descriptors and return a 502 error.

worker_rlimit_nofile 65535;

events

{

# the epoll model is recommended on Linux; the kqueue model on FreeBSD

use epoll;

multi_accept on;

# maximum number of connections per worker process (maximum clients = worker_connections * worker_processes)

# Tune this to the hardware, together with the worker process count above: make it as large as possible without driving the CPU to 100%. With worker_processes 4 and worker_connections 65535, the server can in theory handle 4 * 65535 simultaneous connections.

worker_connections 65535;

}

http

{

# file extension to MIME type mapping table

include mime.types;

# default MIME type

default_type application/octet-stream;

# default character set. If a site uses multiple character sets, do not set this here; let the developers set the encoding via the Meta tag in the HTML code instead.

# charset utf-8;

server_names_hash_bucket_size 128;

client_header_buffer_size 32k;

# buffer size for the client request header. This can be set according to your system's page size: a request header is normally under 1k, but since the system page size is generally at least 1k, the page size is used here.

# The page size can be obtained with the command getconf PAGESIZE.

# Note that some setups require client_header_buffer_size to be an integral multiple of the system page size.

client_body_buffer_size 512k;

# buffers for large client request headers. By default nginx reads the header into client_header_buffer_size; only if the header is too large for that buffer does it fall back to large_client_header_buffers.

large_client_header_buffers 4 32k;

# maximum size of a client request body (i.e. the maximum file size that can be uploaded through nginx)

client_max_body_size 300m;

# enable efficient file transfer mode: sendfile tells nginx whether to use the sendfile() system call to output files. Set it to on for typical applications; for download services and other disk-IO-heavy applications, set it to off to balance disk and network IO and reduce system load.

sendfile on;

# enable or disable the TCP_CORK socket option; only effective when sendfile is on

tcp_nopush on;

tcp_nodelay on;

server_tokens off;

keepalive_timeout 60; # keepalive timeout in seconds

client_header_timeout 15;

client_body_timeout 15;

send_timeout 15;

# backend server connection timeout: from initiating the handshake to waiting for a response

# i.e. the connection timeout between nginx and the backend server (proxy connect timeout)

proxy_connect_timeout 180;

# response time of the backend server after a successful connection (proxy read timeout)

# i.e. how long nginx waits for the backend to respond once the connection is established and the request has entered backend processing; in effect, the time the backend server takes to process the request

proxy_read_timeout 180;

# backend server data return time (proxy send timeout)

# i.e. the backend server must transmit all of its data within this time

proxy_send_timeout 180;

# buffer used by the proxy server (nginx) for the first part of the response read from the proxied server

# This part of the reply usually contains just a small response header. By default the size is that of one buffer from the proxy_buffers directive, but it can be set smaller.

proxy_buffer_size 16k;

# proxy_buffers: size the buffers for an average page of under 32k

# number and size of the buffers used to read the reply from the proxied server. The default is the page size, 4k or 8k depending on the operating system.

proxy_buffers 4 64k;

# buffer size under high load (typically proxy_buffers * 2)

proxy_busy_buffers_size 128k;

# size of the data written to proxy_temp_path at one time, to prevent a worker process from blocking too long while transferring files

# Responses larger than the in-memory buffers are spooled to the temporary file directory rather than held in memory.

proxy_temp_file_write_size 128k;

# gzip module settings

gzip on; # enable gzip compressed output

gzip_min_length 1k; # minimum file size to compress

gzip_buffers 4 32k; # compression buffers

gzip_http_version 1.1; # protocol version to compress for (default 1.1; use 1.0 if the front end is squid 2.5)

gzip_comp_level 6; # compression level

# gzip_types text/plain application/x-javascript text/css application/xml

gzip_types text/xml text/plain text/css application/javascript application/x-javascript application/rss+xml; # compression types. text/html is already included by default, so you need not list it; adding it again causes no error but does produce a warning.

gzip_disable "MSIE [1-6]\.";

gzip_vary on;

# waf

# lua_package_path "/usr/local/nginx/conf/waf/?.lua";

# lua_shared_dict limit 10m;

# init_by_lua_file /usr/local/nginx/conf/waf/init.lua;

# access_by_lua_file /usr/local/nginx/conf/waf/waf.lua;

# cache

# the paths given by proxy_temp_path and proxy_cache_path must be on the same partition

proxy_temp_path /data/proxy_cache/proxy_temp_dir;

# a 500MB shared memory zone holds the cache keys; content not accessed for 7 days is purged automatically, and the on-disk cache space is capped at 30GB

proxy_cache_path /data/proxy_cache/qmcaifu.com/www levels=1:2 keys_zone=www:500m inactive=7d max_size=30g;

proxy_cache_path /data/proxy_cache/qmcaifu.com/m levels=1:2 keys_zone=m:500m inactive=7d max_size=30g;
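proxy_cache_path only declares the storage; nothing is cached until a location opts in to a zone by name with proxy_cache. A minimal sketch, assuming the validity times (which are not in the original config):

```nginx
# hypothetical location using the "www" zone declared above
location / {
    proxy_cache www;                  # keys_zone name from proxy_cache_path
    proxy_cache_valid 200 304 12h;    # assumed: cache successful responses for 12h
    proxy_cache_valid any 1m;         # assumed: cache everything else briefly
    proxy_pass http://backend_www;    # upstream defined later in this file
}
```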

# log format settings

# $remote_addr and $http_x_forwarded_for: the client IP address

# $remote_user: the client user name

# $time_local: access time and time zone

# $request: the requested URL and HTTP protocol

# $body_bytes_sent: size of the response body sent to the client

# $http_referer: the page the request was linked from

# $http_user_agent: client browser information

log_format qmcaifu.com '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'"$upstream_cache_status" $request_time $upstream_addr $http_host $upstream_response_time';
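Defining a log_format by itself writes nothing; an access_log directive must reference it by name. For example (the log file path is an assumption):

```nginx
# hypothetical: write access logs using the "qmcaifu.com" format defined above
access_log /usr/local/nginx/logs/access.log qmcaifu.com;
```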

# load balancer configuration

upstream backend_www {

# In an upstream load balancer, weight is the per-server weight; define it according to each machine's configuration. The higher the weight, the greater the probability of that server being assigned requests.

server 10.161.158.176:80;

# health check (the check directive comes from the third-party nginx_upstream_check_module)
check interval=3000 rise=2 fall=3 timeout=5000;

}

include vhost/*.conf;

}
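The include vhost/*.conf line pulls per-site server blocks into the http context. A minimal sketch of what one such vhost file might look like, tying together the upstream, cache zone, and log format defined above — the server_name and cache validity times are assumptions, not taken from the original config:

```nginx
# hypothetical vhost/www.conf
server {
    listen 80;
    server_name www.qmcaifu.com;   # assumed name, matching the cache directory

    access_log /usr/local/nginx/logs/www.access.log qmcaifu.com;

    location / {
        proxy_pass http://backend_www;               # upstream from nginx.conf
        proxy_cache www;                             # keys_zone "www" from proxy_cache_path
        proxy_cache_valid 200 304 12h;               # assumed validity
        proxy_set_header Host $host;                 # pass the original Host header
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        add_header X-Cache $upstream_cache_status;   # expose HIT/MISS for debugging
    }
}
```

The add_header X-Cache line is a common debugging aid: it surfaces the $upstream_cache_status variable (also written to the access log above) so a client can see whether a response was served from the cache.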


© 2024 shulou.com SLNews company. All rights reserved.
