Detailed explanation and principle of nginx configuration


1. The nginx configuration file

The overall structure of the nginx configuration file:

user  nobody nobody;            # specifies the user and group the Nginx worker processes run as; by default they run
                                # as nobody, a special-purpose system account that cannot log in

# worker processes: usually set equal to the number of CPU cores
worker_processes  1;            # number of worker processes Nginx starts; each process consumes on average 10M~12M of
                                # memory, so specifying the same number as CPUs is recommended

# Global error log and PID file. Available log levels are debug, info, notice, warn, error and crit;
# debug produces the most detailed output, crit the least.
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;     # pid is a main-module directive that specifies where the process pid file is stored

worker_rlimit_nofile 65535;     # maximum number of open file descriptors allowed per worker process

# Working mode and upper limit on connections
events {
    # epoll is a form of multiplexed IO (I/O multiplexing);
    # it is only available on Linux kernels 2.6 and later, and it greatly improves nginx performance
    use epoll;

    # maximum number of concurrent connections per worker process
    worker_connections 1024;
    # The total concurrency is the product of worker_processes and worker_connections,
    # i.e. max_clients = worker_processes * worker_connections.
    # When nginx acts as a reverse proxy, max_clients = worker_processes * worker_connections / 4;
    # why divide by 4 for a reverse proxy? It should be regarded as an empirical value.
    # Under the conditions above, the maximum number of connections a normal Nginx server can handle is 4 * 8000 = 32000.
    # The worker_connections value is also related to the amount of physical memory: because concurrency is constrained
    # by IO, max_clients must be less than the maximum number of files the system can open, and that maximum is roughly
    # proportional to memory size; a machine with 1GB of memory can open about 100,000 files.
    # For example, the number of file handles that can be opened on a VPS with 360M of memory:
    #   $ cat /proc/sys/fs/file-max
    #   34336
    # 32000 < 34336, i.e. the total number of concurrent connections is less than the total number of file handles the
    # system can open, which keeps it within what the operating system can bear.
    # So the worker_connections value should be set appropriately according to the number of worker_processes and the
    # maximum number of files the system can open, so that the total concurrency stays below that maximum;
    # in essence, the configuration follows from the host's physical CPUs and memory.
    # Of course, the theoretical total may deviate from the actual value, because the host also has other working
    # processes that consume system resources.
    # ulimit -SHn 65535
}

http {
    # set MIME types; the types are defined by the mime.types file
    include       mime.types;
    # default_type is an HTTP core-module directive; it sets the default type to a binary stream, which is used when
    # the file type is undefined. For example, if no PHP environment is configured, Nginx will not parse PHP files,
    # and visiting a PHP file in a browser brings up a download window.
    default_type  application/octet-stream;

    # set the log format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;

    # The sendfile directive specifies whether nginx calls the sendfile function (zero-copy) to output files.
    # For ordinary applications it must be set to on; for disk-IO-heavy applications such as downloads it can be set
    # to off, to balance disk and network I/O processing speed and reduce system load.
    sendfile        on;
    #tcp_nopush     on;

    # connection timeout
    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay        on;

    # enable gzip compression
    gzip  on;
    gzip_disable "MSIE [1-6].";

    # request buffers
    client_header_buffer_size    128k;
    large_client_header_buffers  4 128k;

    # load balancing settings
    upstream cszhi.com {
        ip_hash;
        server 192.168.8.11:80;
        server 192.168.8.12:80 down;
        server 192.168.8.13:8009 max_fails=3 fail_timeout=20s;
        server 192.168.8.146:8080;
    }

    # virtual host configuration
    server {
        # listen on port 80
        listen       80;
        # serve requests for www.nginx.cn
        server_name  www.nginx.cn;
        # default document root of this virtual host
        root         html;
        # access log for this virtual host
        access_log   logs/nginx.access.log  main;

        # default request
        location / {
            # names of the index files
            index index.php index.html index.htm;
        }

        # error pages
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
        }

        # static files are handled by nginx itself
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            # expire after 30 days; static files are rarely updated, so the expiry can be set large,
            # and if they are updated frequently it can be set smaller
            expires 30d;
        }

        # forward all PHP script requests to FastCGI, using the default FastCGI configuration
        location ~ .php$ {
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }

        # reverse proxy
        location /proxy {
            proxy_pass http://192.168.33.10;
        }

        # deny access to .htxxx files
        location ~ /.ht {
            deny all;
        }
    }
}
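After editing a configuration like the one above, it is worth validating and applying it without a full restart. A minimal sketch of the usual commands, assuming the nginx binary is on the PATH and the file above is the active nginx.conf:

    # Check the configuration for syntax errors before applying it
    nginx -t

    # Gracefully reload the configuration: workers finish current requests, then pick up the new settings
    nginx -s reload

    # Inspect the limits referred to in the comments above
    cat /proc/sys/fs/file-max    # system-wide maximum number of open file handles
    ulimit -n                    # per-process open-file limit for the current shell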

1.1 About the nobody user

About the nobody user under Linux:

nobody is a system user: an account that cannot log in, with a special-purpose user ID. Some service processes, such as apache, squid and so on, all run under special accounts like nobody, news, games and so on. Generally speaking, a uid below 500 is a system ID.

For security, many operations and services on a Linux system do not run as root but under a dedicated ID, usually nobody, so that each service runs in isolation from the others.

This guarantees that a flaw in the server program does not become a direct entry point for taking over the host: even if the server program is compromised, the attacker only obtains the nobody user rather than root, and other users' data is not affected.

It is a dedicated way of limiting what a maliciously exploited server program can do.

In addition to nobody there are also ftp, ssh and so on. Some of these accounts are not used to run a service at all but only to reserve the name: permissions are managed through the user group, and an ID with the same name is added to the group purely as a placeholder. This appears to be mainly for compatibility.
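To see this in practice, the nobody account and the user the worker processes actually run as can be checked from a shell. A minimal sketch; the exact output and nginx paths vary by distribution:

    # Show the nobody account's uid/gid and its /etc/passwd entry (note the nologin shell)
    id nobody
    grep '^nobody:' /etc/passwd

    # Show which user the nginx master and worker processes are running as
    ps -eo user,pid,cmd | grep '[n]ginx'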

Problem description:

In the morning it was reported that the system was responding very slowly and pages took a long time to refresh. Nothing was wrong with the backend. Our system uses nginx for load balancing; out of habit we bypassed the load balancer and accessed a single application node directly, and the response was fast and perfectly normal. So the problem was initially located in nginx.

We then checked the nginx log and found many errors containing "13: Permission denied". This is obviously a permission problem, which was strange, because everything had worked normally before. Only later did we learn that the operations staff had made a change.

nginx had been installed and started as the root user, so modifying the configuration also required root, which was inconvenient for management. The operations staff therefore changed the ownership of the nginx directory on a whim (we later learned he used chown -R user:group $nginxdir).

That is, the owner and group of the nginx files were changed. But why would this cause the slow responses?
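The article leaves the question open. A plausible explanation, offered here as an assumption rather than the author's diagnosis, is that after the chown the worker processes (running as the account named in the user directive) could no longer write to nginx's temporary directories such as proxy_temp and client_body_temp, so buffered proxied responses failed with "13: Permission denied". A hedged sketch of how one might confirm and repair that; the /usr/local/nginx prefix and the nobody user are assumptions:

    # Find the permission errors and the paths they complain about (log path is an assumption)
    grep 'Permission denied' /usr/local/nginx/logs/error.log | tail

    # Check who owns the temp directories the workers must write to (paths are assumptions)
    ls -ld /usr/local/nginx/proxy_temp /usr/local/nginx/client_body_temp

    # Restore ownership so the worker user can write there again, then reload
    chown -R nobody:nobody /usr/local/nginx/proxy_temp /usr/local/nginx/client_body_temp
    nginx -s reload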

1.2 Load balancing settings

Nginx's load balancing module currently supports four scheduling algorithms (weight being a variant of the default round robin), described below together with configuration examples; the latter two are provided by third-party modules.

- Round robin (the default): requests are assigned to the backend servers one by one in the order they arrive. If a backend server goes down, the faulty machine is removed automatically so that user access is not affected.
- weight: specifies a polling weight; the larger the weight value, the higher the probability that a request is assigned to that server. It is mainly used when the backend servers have uneven performance.
- ip_hash: each request is assigned according to the hash of the client IP, so that visitors from the same IP always reach the same backend server, which effectively solves the session-sharing problem of dynamic pages.
- fair: a more intelligent load-balancing algorithm than the two above. It balances the load according to page size and load time, i.e. requests are assigned according to the backend servers' response times, with shorter response times given priority. Nginx itself does not support fair; to use this scheduling algorithm you must download Nginx's upstream_fair module.
- url_hash: requests are assigned according to the hash of the requested URL, so that each URL is directed to the same backend server, which further improves the efficiency of backend cache servers. Nginx itself does not support url_hash; to use this scheduling algorithm you must install Nginx's hash package.
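For illustration, here are minimal upstream sketches for the built-in round-robin/weight and ip_hash methods; the addresses and upstream names are made-up examples:

    # Weighted round robin: 192.168.8.12 receives roughly twice as many requests as 192.168.8.11
    upstream backend_weighted {
        server 192.168.8.11:80 weight=1;
        server 192.168.8.12:80 weight=2;
    }

    # ip_hash: requests from the same client IP always go to the same backend server
    upstream backend_iphash {
        ip_hash;
        server 192.168.8.11:80;
        server 192.168.8.12:80;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend_weighted;   # or http://backend_iphash
        }
    }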

In the HTTP upstream module, the server directive specifies the IP address and port of a backend server, and it can also set the state of each backend server in the load-balancing scheduling. Common states are:

- down: the server temporarily does not participate in load balancing.
- backup: a reserved backup machine. It receives requests only when all other non-backup machines are down or busy, so it carries the least load.
- max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
- fail_timeout: the time the server is suspended after max_fails failures. max_fails can be used together with fail_timeout.

Note that when the load scheduling algorithm is ip_hash, the state of a backend server in the load-balancing scheduling cannot be weight or backup. A combined example follows.
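A minimal sketch combining these state parameters in one upstream block; the addresses and timings are made-up values:

    upstream backend_states {
        server 192.168.8.11:80 weight=3;                        # preferred server, weighted round robin
        server 192.168.8.12:80 max_fails=3 fail_timeout=20s;    # taken out for 20s after 3 failed requests
        server 192.168.8.13:80 down;                            # temporarily excluded from load balancing
        server 192.168.8.14:80 backup;                          # used only when all non-backup servers are down or busy
    }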

2. References

https://blog.csdn.net/wangbin_0729/article/details/82109693
http://baijiahao.baidu.com/s?id=1604485941272024493&wfr=spider&for=pc
https://www.jianshu.com/p/6215e5d24553
