2025-01-18 Update From: SLTechnology News&Howtos
# user and group the worker processes run as; default is nobody nobody
user nobody;
# number of worker processes; tune to the hardware, usually equal to the number of CPU cores or twice that number
worker_processes 1;
# error log path and level; may be set in the main, http, and server contexts. Levels: debug | info | notice | warn | error | crit | alert | emerg
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
# file that records the PID of the nginx master process
pid logs/nginx.pid;
# working mode and connection limits
events {
    # serialize accept() across worker processes to prevent thundering-herd wakeups; default is on
    accept_mutex on;
    # whether a worker process accepts multiple new connections at once; default is off
    multi_accept on;
    # epoll is an I/O multiplexing method
    # available only on Linux kernels 2.6 and later, where it greatly improves nginx performance
    # recommendations: epoll on Linux, kqueue on FreeBSD; on Windows, do not specify a method
    # note: like apache, nginx provides different event models for different operating systems
    # A) standard event models: select and poll. nginx falls back to select or poll when the system offers nothing more efficient
    # B) efficient event models:
    #    kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0+, and Mac OS X. On dual-processor Mac OS X systems, kqueue may cause a kernel crash.
    #    epoll: used on Linux kernel version 2.6 and later.
    #    /dev/poll: used on Solaris 7 11/99+, IRIX 6.5.15+, and Tru64 UNIX 5.1A+.
    #    eventport: used on Solaris 10; security patches are required to prevent kernel crashes.
    # use [ kqueue | rtsig | epoll | /dev/poll | select | poll ];
    use epoll;
    # maximum number of simultaneous connections per worker process
    worker_connections 1024;
    # total concurrency is the product of worker_processes and worker_connections,
    # i.e. max_clients = worker_processes * worker_connections
    # with reverse proxying, max_clients = worker_processes * worker_connections / 4
    # the divisor of 4 for the reverse-proxy case is an empirical value
    # under those figures (for example 4 workers with 8000 connections each), the maximum number of connections an nginx server can normally handle is 4 * 8000 = 32000
    # the right worker_connections value also depends on the amount of physical memory:
    # since concurrency is constrained by IO, max_clients must stay below the maximum number of files the system can open,
    # which is roughly proportional to memory size; a machine with 1GB of memory can open about 100,000 files
    # as an example, the open-file-handle limit of a 360MB VPS:
    # $ cat /proc/sys/fs/file-max
    # output: 34336
    # 32000 < 34336, i.e. total concurrency stays below the number of file handles the system can open, which is within operating-system limits
    # therefore worker_connections should be set according to the number of worker processes and the system's maximum number of open files,
    # so that total concurrency stays below the maximum number of files the operating system can open;
    # in essence it is configured from the host's physical CPUs and memory.
    # of course, the theoretical total may deviate from reality, because other processes on the host also consume system resources.
    # ulimit -SHn 65535
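As a quick check of the arithmetic in the comments above, here is a short shell sketch. It uses the article's example figures of 4 workers and 8000 connections, not the values configured in this file:

```shell
# recompute the example from the comments: 4 workers * 8000 connections,
# and the empirical /4 divisor for the reverse-proxy case
worker_processes=4
worker_connections=8000
max_clients=$((worker_processes * worker_connections))
echo "max_clients = $max_clients"              # prints: max_clients = 32000
echo "as reverse proxy = $((max_clients / 4))" # prints: as reverse proxy = 8000
# on Linux, compare the total against the kernel's open-file limit:
# cat /proc/sys/fs/file-max
```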
}
http {
    # note: keepalive_timeout, client_header_buffer_size, and the open_file_cache directives below are valid in the http context, not events, so they belong here
    # keepalive timeout
    keepalive_timeout 60;
    # buffer size for client request headers. This can be set according to your system's page size; a request header normally does not exceed 1k.
    # however, since the system page size is generally larger than 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
    # client_header_buffer_size may exceed 4k, but the value must be an integral multiple of the system page size.
    client_header_buffer_size 4k;
    # cache for open file descriptors, disabled by default. max sets the number of cache entries and is recommended to match the number of open files; inactive sets how long after a file's last request its entry is removed.
    open_file_cache max=65535 inactive=60s;
    # how often to revalidate the cached entries
    open_file_cache_valid 80s;
    # minimum number of times a file must be used within the inactive period of the open_file_cache directive; above this threshold
    # the file descriptor stays open in the cache. As in the example above, a file not used even once within the inactive time is removed.
    open_file_cache_min_uses 1;
    # hide the nginx version number
    server_tokens off;
    # mime type mappings, defined in the mime.types file
    include mime.types;
    # default file type
    default_type application/octet-stream;
    # default character encoding
    charset utf-8;
    # disable the access log
    access_log off;
    # access log format
    # $remote_addr and $http_x_forwarded_for: record the client's ip address
    # $remote_user: the client user name
    # $time_local: access time and time zone
    # $request: the request's url and http protocol
    # $status: the request status; 200 on success
    # $body_bytes_sent: size of the response body sent to the client
    # $http_referer: the page the request was linked from
    # $http_user_agent: information about the client's browser
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    # combined is the default log format
    access_log logs/access.log main;
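A line written in the main format above can be picked apart with standard tools; this sketch runs awk over a made-up log line (the address, user, and request are invented for illustration):

```shell
# parse one sample access-log line in the "main" format defined above;
# field 1 is the client address, field 9 the status code
line='203.0.113.7 - alice [18/Jan/2025:12:00:00 +0800] "GET /index.html HTTP/1.1" 200 512 "-" "curl/8.0" "-"'
addr=$(printf '%s\n' "$line" | awk '{print $1}')
status=$(printf '%s\n' "$line" | awk '{print $9}')
echo "$addr -> $status"   # prints: 203.0.113.7 -> 200
```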
    # hash table size for server names
    # the hash tables holding server names are controlled by the server_names_hash_max_size and server_names_hash_bucket_size directives. The hash bucket size parameter is always equal to the hash table size and is a multiple of one processor's cache line size; by reducing memory accesses, this speeds up hash-key lookups in the processor. If the hash bucket size equals one processor cache line, the worst-case number of memory lookups for a key is 2: the first determines the address of the storage unit, the second finds the key value within it. Therefore, if nginx hints that hash max size or hash bucket size needs to be increased, increase hash max size first.
    server_names_hash_bucket_size 128;
    # client request header buffer size. By default nginx reads header values into the client_header_buffer_size buffer;
    # if a header is too large, it falls back to large_client_header_buffers.
    client_header_buffer_size 32k;
    large_client_header_buffers 8 128k;
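Since header buffers are expected to be whole multiples of the system page size, a quick shell check (assuming a POSIX system where getconf is available):

```shell
# the page size reported by getconf; header buffer sizes such as 32k
# should be an integral multiple of it
pagesize=$(getconf PAGESIZE)
echo "page size: $pagesize bytes"
if [ $((32 * 1024 % pagesize)) -eq 0 ]; then
    echo "32k is a whole number of pages"
fi
```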
    # enables the open-file cache; when enabled, this also sets the maximum number of cached entries and how long they are kept.
    # we can set a relatively high maximum and clear entries after they have been inactive for more than 20 seconds.
    open_file_cache max=100000 inactive=20s;
    # interval between revalidations of open_file_cache entries.
    open_file_cache_valid 30s;
    # minimum number of uses within the inactive period of open_file_cache for an entry to stay cached. Contexts: http, server, location
    open_file_cache_min_uses 2;
    # whether to cache errors encountered when looking up a file, including files added back to the configuration.
    # contexts: http, server, location
    open_file_cache_errors on;
    # maximum size of a client request body (e.g. an upload) through nginx
    client_max_body_size 300m;
    # whether nginx calls the sendfile function (zero-copy) to send files
    # for ordinary applications it should be on
    # for heavy disk-IO workloads such as download services it can be set to off
    # to balance disk and network I/O processing speed and lower the system load
    sendfile on;
    # keepalive connection timeout
    keepalive_timeout 65;
    # backend server connection timeout: how long to wait for the handshake response after initiating a connection
    proxy_connect_timeout 90;
    # after a successful connection, how long to wait for the backend server's response, i.e. the time the backend takes to process the request once it enters its queue
    proxy_read_timeout 180;
    # backend data return time, i.e. the backend server must transmit all of its data within this time
    proxy_send_timeout 180;
    # buffer size for the first part of the reply from the proxied server (normally the response headers)
    proxy_buffer_size 256k;
    # number and size of buffers for reading the reply from the proxied server. The default is one page, i.e. 4k or 8k depending on the operating system
    proxy_buffers 4 256k;
    # buffer size under high load (proxy_buffers * 2)
    proxy_busy_buffers_size 256k;
    # buffer size for the client request body
    #client_body_buffer_size
    # size of data written to proxy_temp_path at a time, to keep a worker process from blocking too long while transferring files;
    # replies larger than the buffers are spooled to this temporary-file area while being relayed from the upstream server.
    proxy_temp_file_write_size 256k;
    # send the response headers in one packet rather than one after another
    tcp_nopush on;
    # do not buffer small writes; send data as soon as possible
    tcp_nodelay on;
    # enable gzip compression
    gzip on;
    # disable gzip for the specified clients; matching IE6 and below keeps the setup broadly compatible
    gzip_disable "MSIE [1-6].";
    # look for pre-gzipped resources before compressing on the fly. This requires you to pre-compress your files,
    # which lets you use the highest compression ratio and spares nginx from compressing those files again (see the gzip_static documentation for details).
    gzip_static on;
    # allow or disallow compressing responses to proxied requests; any compresses all such requests.
    gzip_proxied any;
    # minimum file size to compress
    gzip_min_length 1k;
    # compression buffers
    gzip_buffers 4 16k;
    # compression protocol version (default 1.1; use 1.0 if the frontend is squid 2.5)
    gzip_http_version 1.0;
    # compression level
    gzip_comp_level 2;
    # mime types to compress; text/html is already included by default, so it need not be listed, and listing it merely produces a warning.
    gzip_types text/plain application/x-javascript text/css application/xml;
    # related to the Vary header: adds a Vary: Accept-Encoding header for proxy servers. Some browsers support compression and others do not,
    # so this lets caches decide from the client's request headers whether to serve the compressed variant instead of wasting it on unsupporting clients.
    gzip_vary on;
    # limit_zone crawler $binary_remote_addr 10m;    # used to limit the number of connections per IP
    upstream mysvr {
        # nginx's upstream module currently supports the following allocation methods:
        # 1. round robin (default): requests are distributed across the backend servers one by one in order; if a backend server goes down, it is removed automatically.
        # 2. weight: sets the polling probability; weight is proportional to the share of traffic, used when backend servers have uneven performance.
        # 3. ip_hash: each request is assigned by the hash of the client ip, so a given visitor always reaches the same backend server, which solves the session problem.
        # 4. fair (third party): requests are assigned by backend response time, shortest first.
        # 5. url_hash (third party): requests are assigned by the hash of the requested url, so each url goes to the same backend server; most effective when the backends cache content.
        # per-server parameters:
        # 1. down: this server temporarily does not participate in the load.
        # 2. weight: the larger the weight, the larger its share of the load.
        # 3. max_fails: the number of failed requests allowed, default 1. When exceeded, the error defined by the proxy_next_upstream module is returned.
        # 4. fail_timeout: how long the server is paused after max_fails failures.
        # 5. backup: used only when all non-backup machines are down or busy, so this machine carries the least load.
        server 127.0.0.1:7878 down;
        server 192.168.10.121:3333 backup;    # hot standby
        server 192.168.10.122:3333 weight=2;
        server 192.168.10.123:3333 max_fails=2 fail_timeout=3s;
    }
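To illustrate the stickiness idea behind ip_hash, here is a toy shell sketch. It is not nginx's actual algorithm (which for IPv4 hashes the first three octets of the address); cksum merely stands in as a hash so the mapping is deterministic:

```shell
# toy ip_hash-style mapping: the same client IP always lands on the same
# backend index; cksum is a stand-in hash, not nginx's real one
pick_backend() {
    ip="$1"
    servers=3
    hash=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)
    echo $((hash % servers))
}
pick_backend 192.168.10.55   # same input, same index on every call
```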
    # virtual host configuration
    server {
        # maximum number of requests per keepalive connection
        keepalive_requests 120;
        # listen on port 80
        listen 80;
        # accessed as localhost
        server_name localhost;
        # default site root for this server
        root html;
        # access log for this virtual host
        access_log logs/nginx.access.log main;
        # default request handling: location matches the URL and can redirect or proxy for load balancing.
        location / {
            # let backend web servers obtain the user's real IP via X-Forwarded-For
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # forward requests to the server list defined by the mysvr upstream
            proxy_pass http://mysvr;
            proxy_redirect default;
            # index file names for the site's home page
            index index.php index.html index.htm;
        }
        # block web crawlers
        if ($http_user_agent ~* "qihoobot|Baiduspider|Googlebot|Googlebot-Mobile|Googlebot-Image|Mediapartners-Google|Adsbot-Google|Feedfetcher-Google|Yahoo! Slurp|Yahoo! Slurp China|YoudaoBot|Sosospider|Sogou spider|Sogou web spider|MSNBot|ia_archiver|Tomato Bot") {
            return 403;
        }
        # error pages
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
        # static files are served by nginx itself and matched by regex; ~ is case-sensitive, ~* is case-insensitive.
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            # let clients cache data that changes infrequently:
            # static files expire after 30 days; if they are rarely updated, the expiry can be set longer,
            # and if they change frequently, it can be set shorter.
            expires 30d;
        }
        # forward all PHP script requests to FastCGI, using the default FastCGI configuration.
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
        # deny access to .htaccess-style files
        location ~ /\.ht {
            # default page
            index vv.txt;
            #root path;    # root directory
            deny all;
            # denied ip
            deny 127.0.0.1;
            # allowed ip
            allow 172.18.5.54;
        }
    }
}