
NGINX security configuration and restricted access

2025-02-02 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/01 Report--

Speaking of network attacks, many people only know of the famous DDoS attack: cheap and effective, it works at the layer-4 network level, flooding your bandwidth directly until the network is congested and the attack is effectively impossible to block. Even giants like Tencent have been plagued by heavy DDoS traffic. There are only three stopgap solutions. The first takes money: buy a powerful high-end firewall, or enough bandwidth to simply absorb the traffic. The second takes strong engineering: experts at companies like Alibaba use high-performance packet-processing frameworks such as DPDK to build traffic-scrubbing services that filter out junk packets, though these also touch legitimate packets and add latency. The third, used mostly by companies with little money, is to change the server's IP (which obviously only buys time). Nowadays many vendors sell traffic scrubbing as a service, billed by the hour, which is flexible and can be purchased on demand.

However, another kind of attack appears even more often than DDoS: the CC (Challenge Collapsar) attack. It typically exploits weaknesses in website code, sending a flood of requests that the server must respond to until its resources are exhausted and it crashes. This traffic lives at layer 7, and at the server level each request looks normal, so the only fundamental fix is in the code itself. On the other hand, other measures can still restrict such access; for example, a few nginx configuration settings can provide some protection.

Nginx basic security configuration

First, some basic security settings. Nginx's out-of-the-box security has improved considerably over the years, but a few points are still worth emphasizing.

By default, Nginx does not list directory contents, but for safety's sake it is best to make sure this really is disabled; otherwise your source files could be downloaded wholesale.

http {
    autoindex off;
}

By default, nginx reports its version number in response headers. That is not a big problem in itself, but it becomes one if someone with bad intentions targets known vulnerabilities in that specific version. So we had better hide it.

http {
    server_tokens off;
}

Other parameters for restricting request handling:

http {
    # Read timeout for the client request header; if no data is sent
    # within this time, Nginx returns a "Request timeout" (408) error
    client_header_timeout 15;
    # Read timeout for the client request body; same 408 error on expiry
    client_body_timeout 15;
    # Upload file size limit
    client_max_body_size 100m;
    # Timeout for transmitting the response to the client, measured between
    # two write operations; if the client is inactive longer than this,
    # Nginx closes the connection
    send_timeout 600;
    # Keep-alive session timeout, after which the server closes the connection
    keepalive_timeout 60;
}

There is no doubt that nginx can do IP-based access control: allow lists the IPs and ranges permitted to connect, deny those that are blocked. Whether this suits you depends on your site; with today's dynamic home-broadband addresses, nobody can promise an IP stays the same forever.

# Set the access permissions for the root directory of the website
location / {
    allow 192.168.1.0/24;
    allow 120.76.147.159;
    deny 119.23.19.240;
    deny 192.168.3.0/24;
    deny all;
}

So, let's go a little further and restrict access to individual directories or file suffixes.

# When accessing the uploads or p_w_picpaths directories, requests for files
# ending in php|php5|jsp return code 403, so uploaded code is never executed
location ~ ^/(uploads|p_w_picpaths)/.*\.(php|php5|jsp)$ {
    allow 192.168.1.0/24;
    return 403;
}

# Forbid access to sql|log|txt|jar|war|py files in any directory
location ~ .*\.(sql|log|txt|jar|war|py)$ {
    deny all;
}

# Sometimes you do not want certain accesses logged, e.g. static images
location ~ .*\.(js|jpg|JPG|jpeg|css|bmp|gif|GIF|png)$ {
    access_log off;
}

# For a better user experience, define an error page and redirect it elsewhere
error_page 403 http://www.example.com/errorfile/404.html;

Going a level further, you can test for specific conditions and deny service when they match:

# If $http_user_agent contains the keyword UNAVAILABLE, return a 403 error
location / {
    if ($http_user_agent ~ UNAVAILABLE) {
        return 403;
    }
}

Again, all of this should match the actual situation of your site; otherwise the blast radius grows and inexplicable breakage follows, which is no good. A 403 you return yourself is at least under your control and easy to recognize, so it is usually best not to reach straight for deny all.

Nginx Advanced Security configuration

Access control:

For more precise access control there is the auth_basic directive, which requires users to enter a valid user name and password before they can access the site. The user names and passwords are listed in the file named by the auth_basic_user_file directive.

server {
    ...
    auth_basic "closed website";
    auth_basic_user_file conf/htpasswd;
}
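The password file can be generated with Apache's htpasswd tool, but the nginx documentation also accepts entries using the "{SHA}" scheme, which can be produced with nothing but the standard library. Below is a minimal Python sketch; htpasswd_sha_line is a hypothetical helper name, and {SHA} (unsalted SHA-1) is a weak hash shown only as a dependency-free illustration, not a recommendation.

```python
import base64
import hashlib

def htpasswd_sha_line(user: str, password: str) -> str:
    """Build one '{SHA}' entry for an htpasswd-style file, a scheme
    nginx's auth_basic_user_file understands. Weak hash; for real
    deployments prefer htpasswd with bcrypt (-B)."""
    digest = base64.b64encode(hashlib.sha1(password.encode()).digest()).decode()
    return f"{user}:{{SHA}}{digest}"

# Append a user to the password file referenced by auth_basic_user_file:
print(htpasswd_sha_line("alice", "s3cret"))
```

In practice you would redirect that output into conf/htpasswd, one line per user.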

Setting auth_basic to off turns authentication back off, for example for some public resources:

server {
    ...
    auth_basic "closed website";
    auth_basic_user_file conf/htpasswd;

    location /public/ {
        auth_basic off;
    }
}

The satisfy directive combines IP-based access control with HTTP basic authentication. The default, all, allows access only when both the IP check and the HTTP authentication pass. Set to any, access is allowed when either one passes.

location / {
    satisfy any;
    allow 192.168.1.0/24;
    deny all;
    auth_basic "closed site";
    auth_basic_user_file conf/htpasswd;
}

It seems a little complicated, so it still depends on the demand.

Connection permission control:

In fact, nginx's maximum number of connections is worker_processes multiplied by worker_connections.

In other words, the configuration below allows 4 × 65535 connections. We usually stress that worker_processes should equal the number of CPU cores, while worker_connections gets no such guidance. But a setting like this also gives attackers room: they can open that many connections at once and overwhelm your server. So both parameters deserve a more deliberate choice.

user www;
worker_processes 4;
error_log /data/logs/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;

events {
    use epoll;
    worker_connections 65535;
}

Limiting is not entirely impossible, however. Since nginx 0.7 there are two modules for it:

HttpLimitReqModule: limits the number of requests per second from a single IP

HttpLimitZoneModule: limits the number of concurrent connections from a single IP

Both modules are defined at the http level first and then applied in location, server, or http context. They use a leaky-bucket algorithm to limit a single IP: whatever exceeds the defined limit gets a 503 error, so a burst of CC-attack traffic is contained. Of course, dozens of people at one company may share the same outbound IP and get caught by accident, so it is worth setting up a friendly 503 error page.

Let's take a look at HttpLimitReqModule:

http {
    limit_req_zone $binary_remote_addr zone=test_req:10m rate=20r/s;
    ...
    server {
        ...
        location /download/ {
            limit_req zone=test_req burst=5 nodelay;
        }
    }
}

The http-level line is the definition: a limit_req_zone named test_req that stores session state in 10 MB of memory; 1 MB can hold roughly 16,000 IP states, so size it by your traffic. The key is $binary_remote_addr, the client IP; it could instead be $server_name or another variable. The average request rate is limited to 20 per second (written 20r/m it would be per minute), again depending on your traffic.
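Sizing the zone is simple arithmetic; per the nginx documentation, one megabyte holds about 16,000 64-byte states (32-bit platforms) or about 8,000 128-byte states (64-bit platforms). A small sketch, where limit_zone_size_mb is a hypothetical helper and not any nginx API:

```python
import math

def limit_zone_size_mb(expected_clients: int, state_bytes: int = 128) -> int:
    """Rough megabytes needed for a limit_req_zone/limit_conn_zone that must
    track `expected_clients` concurrent IP states; state_bytes is 64 on
    32-bit builds, 128 on 64-bit builds (per the nginx docs)."""
    states_per_mb = (1024 * 1024) // state_bytes
    return max(1, math.ceil(expected_clients / states_per_mb))

print(limit_zone_size_mb(16000, state_bytes=64))  # -> 1
print(limit_zone_size_mb(100000))                 # -> 13
```

So the 10m in the example above comfortably covers tens of thousands of distinct client IPs.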

The location block applies the limit: requests for the download directory are capped at 20 per second per IP. The leaky bucket's burst is 5, meaning up to 5 requests beyond the rate may queue, so 25 requests spread over the first second are all accepted, but anything beyond rate plus burst gets a 503 error. Without nodelay, the queued excess requests are delayed and served in later seconds; with nodelay, all accepted requests are served immediately in the first second.
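The accounting behind this can be illustrated with a minimal Python sketch of the leaky bucket, mirroring the fixed-point style (thousandths of a request) of nginx's limit_req module; this is an illustrative model, not the actual nginx source:

```python
class LimitReq:
    """Leaky-bucket sketch of nginx-style limit_req accounting.

    `excess` tracks how far a client is ahead of the allowed rate,
    in thousandths of a request; a request is rejected (503) when
    excess would exceed burst.
    """

    def __init__(self, rate_rps: int, burst: int):
        self.rate = rate_rps * 1000   # requests/second, scaled by 1000
        self.burst = burst * 1000
        self.excess = None            # None = no state yet for this client
        self.last_ms = 0

    def request(self, now_ms: int) -> bool:
        """Return True if the request is accepted, False for a 503."""
        if self.excess is None:       # first request from this client
            self.excess, self.last_ms = 0, now_ms
            return True
        elapsed = now_ms - self.last_ms
        # Drain the bucket at `rate`, then charge one request (1000).
        excess = max(0, self.excess - self.rate * elapsed // 1000) + 1000
        self.last_ms = now_ms
        if excess > self.burst:       # over rate + burst -> reject
            return False              # rejected requests are not accounted
        self.excess = excess
        return True


# 10 simultaneous requests against rate=20r/s, burst=5:
# the first request plus the 5-deep burst queue pass, the rest get 503.
bucket = LimitReq(rate_rps=20, burst=5)
results = [bucket.request(0) for _ in range(10)]
print(results.count(True))  # -> 6
```

After an idle second the bucket drains completely at 20 requests per second, so the same client is accepted again.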

As limits go, this one caps requests per IP, and against a flood of CC-attack requests the effect is obvious; limiting to, say, 1r/s makes it more obvious still. But as mentioned at the beginning, at large companies many people may share one outbound IP, so accidental blocking is hard to avoid and deserves extra thought.

Then look at HttpLimitZoneModule:

http {
    limit_conn_zone $binary_remote_addr zone=test_zone:10m;
    server {
        location /download/ {
            limit_conn test_zone 10;
            limit_rate 500k;
        }
    }
}

Much as before, the http-level line is the definition: a limit_conn_zone named test_zone, again 10 MB, keyed by the client IP address. This time there is no rate in the definition; the actual limit is set where the zone is used, below.

The location block does the real limiting: because the key is the client IP, limit_conn test_zone 10 allows 10 connections per IP (keyed by $server_name it would be 10 connections per domain). limit_rate then caps the bandwidth of each connection: an IP with two connections gets 2 × 500k, and with the limit of 10 here, at most 5000k in total.

If you dislike the bad user experience of 503, you can also add a return page:

error_page 503 /errpage/503.html;

The 503 page itself can be any simple static placeholder; the original shows a page that just says "The page is about to load…".
