

Introduction and configuration of NGINX

2025-01-21 Update From: SLTechnology News & Howtos



Nginx proxy server

Nginx is a lightweight, high-performance web server, reverse proxy server, and IMAP/POP3 proxy server.

Forward proxy

A forward proxy is a server located between the client and the origin server. To get content from the origin server, the client sends a request to the proxy naming the target (the origin server); the proxy then forwards the request to the origin server and returns the obtained content to the client. The client must be configured to use the forward proxy.

A typical use of forward proxies is to give LAN clients inside a firewall access to the Internet. Forward proxies can also use caching features (provided by mod_cache in Apache) to reduce network usage.

Through a forward proxy, the client can access any website while hiding itself.
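As a minimal sketch, nginx can act as a simple forward proxy for plain HTTP (HTTPS forward proxying needs third-party modules such as ngx_http_proxy_connect_module; the port and resolver here are arbitrary illustrative choices):

server {
    listen 8888;                                   # arbitrary port for the proxy to listen on
    resolver 8.8.8.8;                              # a DNS resolver is required to look up requested hosts
    location / {
        proxy_pass http://$http_host$request_uri;  # forward the request to whatever host the client asked for
    }
}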

Reverse proxy

A reverse proxy (Reverse Proxy) means that the proxy server accepts connection requests from the Internet, forwards them to a server on the internal network, and returns the result obtained from that server to the client on the Internet that requested the connection. In this role, the proxy server acts as a reverse proxy server.
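A minimal sketch of this setup (the domain example.com and the internal address 192.168.0.10:8080 are illustrative assumptions, not from the original):

server {
    listen 80;
    server_name example.com;                    # hypothetical public name
    location / {
        proxy_pass http://192.168.0.10:8080;    # assumed internal application server
        proxy_set_header Host $host;            # pass the original Host header to the back end
        proxy_set_header X-Real-IP $remote_addr;
    }
}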

CDN

CDN stands for Content Delivery Network. The basic idea is to avoid, as far as possible, the bottlenecks and links on the Internet that may affect the speed and stability of data transmission, so as to make content delivery faster and more stable. By placing reverse-proxy node servers throughout the network, forming a layer of intelligent virtual network on top of the existing Internet, a CDN can redirect a user's request in real time to the nearest service node, based on comprehensive information such as network traffic, the connections and load status of each node, the distance to the user, and response time. The goal is to let users get the content they need from nearby nodes, relieving Internet congestion and improving response times when visiting websites.

Nginx maximum number of connections configuration

To make nginx support more concurrent connections, adjust the number of worker processes and the maximum number of connections per worker according to the actual situation. For example, with "worker_processes 10" and "worker_connections 10240", the maximum number of connections supported by the server is 10 × 10240 = 102,400.

worker_processes 10;

events {
    use epoll;
    worker_connections 10240;
}

Access control configuration

Involved module: ngx_http_access_module

Module Overview: allows access from certain client IP addresses to be restricted.

allow

Syntax: allow address | CIDR | unix: | all

Default value:-

Scope: http, server, location, limit_except

Allows access from an IP address or IP range.

deny

Syntax: deny address | CIDR | unix: | all

Default value:-

Scope: http, server, location, limit_except

Denies access from an IP address or IP range.

Configuration example:

location / {
    deny 192.168.1.1;
    allow 192.168.1.0/24;
    allow 10.1.1.0/16;
    allow 2001:0db8::/32;
    deny all;
}

With this configuration, for IPv4 only the listed networks are allowed (except 192.168.1.1), and for IPv6 only 2001:0db8::/32 is allowed; all other addresses are denied.

Response rate limiting

Combining limit_rate_after and limit_rate enables throttling (gradual slowdown) once the amount of data downloaded reaches the configured size.

limit_rate

Syntax: limit_rate rate

Default value: limit_rate 0

Scope: http, server, location, if in location

Command Overview: limits the rate at which responses are sent to the client. The rate parameter is in bytes per second; setting it to 0 turns the limit off. Nginx limits speed per connection, so if a client opens two connections at the same time, the client's overall rate is twice the configured value.

limit_rate_after

Syntax: limit_rate_after size

Default value: limit_rate_after 0

Scope: http, server, location, if in location

Command Overview: sets the amount of a response that is transmitted without speed limiting. Once the transferred volume exceeds this value, the remainder is transmitted at the limited rate.
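For instance, a minimal sketch combining the two directives (the /downloads/ path and the 1m/100k values are illustrative assumptions):

location /downloads/ {
    limit_rate_after 1m;   # send the first 1 MB at full speed
    limit_rate 100k;       # then throttle each connection to 100 KB/s
}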

Limiting the number of concurrent connections

This configuration is based on the ngx_http_limit_conn_module module (formerly ngx_http_limit_zone_module). To implement a simple concurrency limit, two directives are involved, limit_conn_zone and limit_conn:

limit_conn_zone

Syntax: limit_conn_zone $variable zone=zone_name:the_size

Scope: http

This directive defines a shared memory zone in which session state is recorded. $variable defines the key that identifies a session; the_size defines the total capacity of the zone.

limit_conn

Syntax: limit_conn zone_name number

Scope: http, server, location

Specifies the maximum number of concurrent connections for one session. When the specified maximum is exceeded, the server returns "Service Unavailable" (503).

Configuration example:

http {
    limit_conn_zone $binary_remote_addr zone=one:10m;
    ...
    server {
        ...
        location /seven/ {
            limit_conn one 1;
            ...
        }
    }
}

This defines a record zone called "one" with a total capacity of 10m, using the variable $binary_remote_addr as the key identifying a session (i.e., one session per address). Under the /seven/ directory, each session may hold only one connection; put simply, each IP can open only one connection under /seven/.

The length of $binary_remote_addr is 4 bytes (for IPv4), and one session's state occupies 32 bytes. With a zone size of 1m, about 32,000 sessions can be recorded (1,048,576 / 32 = 32,768).

http {
    limit_rate 500k;                      # speed limit per connection; 0 means unlimited
    limit_rate_after 800k;                # apply the speed limit only after this amount has been sent
    limit_zone to_vhost $server_name 1m;  # 1m zone keyed by domain name
    limit_conn to_vhost 30;               # at most 30 concurrent connections per domain name
}

"limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s" enables limiting the number of requests per unit time for a single IP and a single session. Here zone works the same as in limit_conn_zone. rate is the limited rate: 1r/s means at most one request per second, and 5r/m can be used to mean at most 5 requests per minute. limit_req_zone is also configured in the http section, and it takes effect only when used together with the limit_req directive. "limit_req zone=one burst=5" means the location block uses the zone "one" defined by limit_req_zone; if the number of requests exceeds rate=1r/s, the remaining requests are delayed, and if it exceeds the number defined by burst, the excess requests directly return a 503 error.

If nodelay is enabled, requests exceeding rate=1r/s are not delayed: those within the burst are processed immediately, and the rest get a 503 at once.

Detailed official rules:

http://wiki.nginx.org/NginxChsHttpLimit_zoneModule

DDoS prevention configuration

DDoS attacks are distributed and target bandwidth and services, i.e., layer-4 traffic attacks and layer-7 application attacks. The corresponding defense bottlenecks are bandwidth and layer-7 throughput. Against layer-7 application attacks, we can still configure some defenses: using nginx's http_limit_conn and http_limit_req modules to limit the number of connections and requests provides a relatively effective defense.

ngx_http_limit_conn_module can limit the number of connections from a single IP

ngx_http_limit_req_module can limit the number of requests per second from a single IP

Active defense

Limit the number of requests per second

Involved module: ngx_http_limit_req_module

The number of requests per unit time is limited by the leaky bucket principle. Once the number of requests per unit time exceeds the limit, a 503 error will be returned.

Sample configuration:

http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
    ...
    server {
        ...
        location ~ \.php$ {
            limit_req zone=one burst=5 nodelay;
        }
    }
}

Configuration instructions:

$binary_remote_addr: the client address in binary form

rate=10r/s limits the frequency to 10 requests per second.

burst=5 allows up to 5 requests beyond the frequency limit to queue. Suppose seconds 1 through 4 each see 9 requests; then 15 requests in the fifth second are still allowed. Conversely, if 15 requests arrive in the first second, 5 are carried over to the second second, and any requests beyond 10 in the second second get a 503 directly. This behaves like an average rate limit over multiple seconds.

With nodelay, queued requests are not delayed: after setting it, 5 (burst, no delay) + 10 (rate) requests can be processed within 1 second. (These are theoretical figures for the most common case.)

Limit the number of IP connections

http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    ...
    server {
        ...
        location /directory_to_protect/ {
            limit_conn addr 1;
        }
    }
}

Passive defense methods

Blocking IP addresses

Visitors browsing the website normally through a browser generally establish no more than 20 connections with the server. We can block IPs that open too many connections with a script. The following script enumerates all connections with the netstat command and, if the IP with the highest connection count exceeds 150 connections, blocks that IP with iptables:

#!/bin/sh
status=`netstat -na | awk '$5 ~ /[0-9]+:[0-9]+/ {print $5}' | awk -F ":" '{print $1}' | sort -n | uniq -c | sort -n | tail -n 1`
NUM=`echo $status | awk '{print $1}'`
IP=`echo $status | awk '{print $2}'`
result=`echo "$NUM > 150" | bc`
if [ $result = 1 ]
then
    echo "IP:$IP is over $NUM, BAN IT!"
    iptables -I INPUT -s $IP -j DROP
fi

Run crontab -e and add the above script to crontab to run automatically every minute:

* * * * * /root/xxxx.sh

Persistent connection (keepalive) configuration in nginx

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

Usage: keepalive connections

upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;
}

server {
    ...
    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}

Sets the maximum number of idle keepalive connections to upstream servers that are preserved (no default). When this number is exceeded, the least recently used (LRU) connections are closed.

http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests

Usage: keepalive_requests number

Sets the maximum number of requests that can be served over one keepalive connection (default 100; tune it based on the total number of requests a client sends during the keepalive lifetime). Once the number of requests exceeds this value, the connection is closed.

keepalive_requests 8192;

http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout

Usage: keepalive_timeout timeout [header_timeout]

The first parameter sets the timeout during which a keep-alive client connection stays open on the server side; the optional second parameter sets the value of the "Keep-Alive: timeout=time" response header field.
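A hedged sketch using both parameters (the values are illustrative only):

keepalive_timeout 65 60;   # keep idle client connections open for 65 s; advertise "Keep-Alive: timeout=60" to clients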

http {
    keepalive_timeout 20;   # keep-alive timeout for client connections
}

Load balancing and distribution policy

Configure nginx's load balancing and distribution policy by adding parameters after each application server address in the upstream block, for example:

upstream tomcatserver1 {
    server 192.168.72.49:8080 weight=3;
    server 192.168.72.49:8081;
}

server {
    listen 80;
    server_name 8080.max.com;
    # charset koi8-r;
    # access_log logs/host.access.log main;

    location / {
        proxy_pass http://tomcatserver1;
        index index.html index.htm;
    }
}

With the above configuration, when 8080.max.com is visited, every request first passes through the nginx reverse proxy because of the proxy_pass setting. When forwarding to the destination host, nginx reads the upstream named tomcatserver1 and its distribution policy: tomcat1 is configured with weight 3, so nginx sends most requests to tomcat1 on the .49 server (port 8080) and a smaller share to tomcat2 (port 8081). This achieves weighted load balancing, which should of course reflect the request-handling capacity of the two servers' hardware.

upstream myServer {
    server 192.168.72.49:9090 down;
    server 192.168.72.49:8080 weight=2;
    server 192.168.72.49:6060;
    server 192.168.72.49:7070 backup;
}

down: marks the preceding server as temporarily not participating in the load.

weight: defaults to 1; the larger the weight, the larger the share of the load.

max_fails: the number of failed requests allowed, default 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.

fail_timeout: how long to pause the server after max_fails failures.

backup: requests go to the backup machine only when all other non-backup machines are down or busy, so this machine carries the least load.
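An illustrative sketch combining these parameters (the pool name and values are assumptions, not from the original):

upstream resilient_pool {
    server 192.168.72.49:8080 max_fails=3 fail_timeout=30s;   # mark down for 30 s after 3 failures
    server 192.168.72.49:8081 max_fails=3 fail_timeout=30s;
    server 192.168.72.49:9090 backup;                         # used only when the others are unavailable
}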

High availability using Nginx

The load-balancing server itself also needs to be highly available: if the load balancer dies, the application servers behind it can no longer be reached.

Solution for high availability: add redundancy. Deploy multiple nginx servers to avoid the single point of failure described above; keepalived + nginx together provide load balancing and high availability.

High Availability monitoring programs run on both the master and the standby machine, monitoring each other's health through heartbeat messages. When the standby cannot receive a normal heartbeat from the master within a certain period, it takes over the master's service IP and continues to provide the load-balancing service; when the standby later receives "I am alive" messages from the master again, it releases the service IP, and the master resumes providing the load-balancing service.

1. Provide two nginx load-balancing servers

2. Install keepalived on both servers

3. Configure keepalived

The virtual IP points to both A and B. When A goes down, the virtual IP drifts to B; when A is started again, it automatically becomes master.

By configuring keepalived on the two servers, the master and the standby are distinguished: state MASTER marks the master, whose priority value is greater than the standby's, and state BACKUP marks the standby.
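A minimal keepalived.conf sketch for the master (the interface name, virtual_router_id, password, and virtual IP are illustrative assumptions; the standby would use state BACKUP and a lower priority):

vrrp_instance VI_1 {
    state MASTER               # BACKUP on the standby server
    interface eth0             # assumed network interface
    virtual_router_id 51
    priority 100               # the standby uses a lower value, e.g. 90
    advert_int 1               # heartbeat interval in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.72.100         # hypothetical virtual service IP
    }
}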

Caching of Nginx

The http_proxy module of nginx can implement a caching function similar to Squid's. Nginx keeps a local copy, on the nginx server, of content that clients have already visited, so that if the same data is requested again within a certain period, the request does not need to be forwarded to the back-end server. This reduces the network traffic between nginx and the back end, eases network congestion, lowers data-transmission delay, and improves user access speed. Moreover, when the back-end server goes down, the copies on the nginx server can still answer the relevant user requests.
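A hedged sketch of such a cache (the paths, zone name, upstream name, and durations are illustrative assumptions):

http {
    # keep copies on local disk, with a 10m index zone named my_cache
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m;

    server {
        location / {
            proxy_cache my_cache;
            proxy_pass http://backend_pool;                          # assumed upstream
            proxy_cache_valid 200 302 10m;                           # cache successful responses for 10 minutes
            proxy_cache_use_stale error timeout http_500 http_502;   # serve stale copies when the back end is down
        }
    }
}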

Nginx directory index (autoindex)

Map directly to the directory /home/test

location /home/test {
    root /;
    autoindex on;               # enable directory listing
    autoindex_localtime on;     # show file times in the local time zone
    autoindex_exact_size off;   # show rounded sizes rather than exact bytes
    autoindex_format html;      # output format of the listing
}

Map to the directory /home

location /home {
    root /;
    rewrite ^/home/(.*)$ /home/$1 break;
    autoindex on;               # enable directory listing
    autoindex_localtime on;     # whether times are shown in the local time zone or UTC
    autoindex_exact_size off;   # for HTML output, whether to show exact file sizes
    autoindex_format html;      # output format of the directory listing: html | xml | json | jsonp
}

Or

location /home/eyou/Downloads {
    alias /home/;
    fancyindex on;                       # enable fancy index
    fancyindex_exact_size off;           # use rounded sizes instead of exact sizes
    fancyindex_default_sort date_desc;
    fancyindex_localtime on;             # use local time
}

Or

location / {
    root /home/eayou;
    fancyindex on;                       # enable fancy index
    fancyindex_exact_size off;           # use rounded sizes instead of exact sizes
    fancyindex_default_sort date_desc;
}

fancyindex

Since the file indexing provided by nginx is relatively simple, the Fancy Index plug-in is generally used. To install the Fancy Index plug-in under Ubuntu:

sudo apt-get install nginx-extras

location /test/ {
    root /;
    rewrite ^/test/(.*)$ /home/$1 break;
    fancyindex on;                       # enable fancy index
    fancyindex_exact_size off;           # use rounded sizes instead of exact sizes
    fancyindex_default_sort date_desc;
    fancyindex_localtime on;             # use local time
    fancyindex_footer "myfooter.shtml";  # use myfooter.shtml under the current path as the page footer
}

nginx redirect (Redirect)

Configuration:

location /home/test-redirect {
    rewrite ^/home/test-redirect/(.*) https://www.baidu.com$1 redirect;
}

Accessing 127.0.0.1/home/test-redirect redirects to https://www.baidu.com, with status code 302 Moved Temporarily.

Forcing HTTP links to HTTPS

rewrite ^(.*)$ https://$host$1 permanent;
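A sketch of a complete port-80 server block doing this (example.com is illustrative; "return 301 https://$host$request_uri;" is a common modern equivalent):

server {
    listen 80;
    server_name example.com;                    # hypothetical domain
    rewrite ^(.*)$ https://$host$1 permanent;   # 301-redirect every request to HTTPS
}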

Hotlink protection

You may not want others to hotlink your images: first because of copyright issues, and second because hotlinking increases the load on your server and generates unnecessary traffic.

Hotlink protection is implemented for files with the suffixes gif, jpg, png, swf and flv:

location ~* \.(gif|jpg|png|swf|flv)$ {
    valid_referers none blocked www.kenmy.com kenmy.com;
    if ($invalid_referer) {
        rewrite ^/ http://www.kenmy.com/retrun.html;
        # return 403;
    }
}

First line: gif | jpg | png | swf | flv

Indicates hotlink protection for files with suffixes of gif, jpg, png, swf and flv

The second line checks the referrer against the two allowed sources, www.kenmy.com and kenmy.com.

The if {} block means: when the referrer is invalid, the request is rewritten to the http://www.kenmy.com/retrun.html page. Of course, returning 403 directly is also fine.

Another hotlink-prevention example:

location /images/ {
    alias /data/images/;
    valid_referers none blocked server_names *.xok.la xok.la;
    if ($invalid_referer) {
        return 403;
    }
}

Load balancer

http {
    upstream myproject {
        server 127.0.0.1:8000 weight=3;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
    }

    server {
        listen 80;
        server_name www.domain.com;
        location / {
            proxy_pass http://myproject;
        }
    }
}

Check whether the configuration file is correct

nginx -t   # if the output shows "ok" and "successful", the configuration is fine and nginx can be restarted

Restart nginx:

nginx -s reload

nginx HTTP basic functions

Flexible and simple configuration

Handle static files, index files, and automatic indexing

Reverse proxy acceleration, simple load balancing and fault tolerance

Modular structure. Filters include gzipping, byte ranges, chunked responses, and SSI-filter.

SSL and TLS SNI support

Support for keep-alive and pipelined connections

Reconfigure and upgrade online without interrupting the customer's work process

Customizable access logs, log write caching, and fast log rotation

4xx-5xx error code redirection

PCRE-based rewrite module

Bandwidth limit

Advantages of nginx

Low CPU and memory usage, strong concurrency handling

Support for load balancing, fault tolerance and health check

Support hot deployment, fast startup, and software and configuration upgrades with uninterrupted service

Improve access speed

Acting as a cache, especially for popular sites, it can significantly improve request speed.

Firewall function (access security control)

Limits can be set on the proxy server to filter unsafe content and prevent malicious traffic from the external network from reaching intranet servers.

Access an inaccessible target site through a proxy server

There are many open proxy servers on the Internet. When access is restricted, clients can reach the target site through an unrestricted proxy server. Generally speaking, the circumvention browsers we use take advantage of proxy servers.

nginx load balancing

1. Forwarding function

According to a certain algorithm, the client request is forwarded to different application servers to improve the concurrency of the system.

Distribution strategies include round-robin, weight, and the ip_hash algorithm.

weight: specifies the round-robin probability, proportional to the access ratio; used when the application servers' performance is uneven.

ip_hash: each request is allocated according to the hash of the client IP, so each visitor consistently lands on the same application server, which solves the session-sharing problem (see the sketch after this list).

2. Fault removal (determine whether the application server is currently working properly by heartbeat detection, and automatically send the request to other application servers if the server fails)

3. Recovery and re-addition (if a previously failed application server is detected to have resumed working, it is automatically added back to the pool handling user requests)
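A hedged sketch of the ip_hash strategy mentioned above (the pool name and server addresses are illustrative assumptions):

upstream backend_pool {
    ip_hash;                      # hash the client IP so each visitor sticks to one server
    server 192.168.72.50:8080;    # assumed application servers
    server 192.168.72.51:8080;
}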
