2025-01-16 Update From: SLTechnology News&Howtos
This article explains what Nginx can do. The techniques introduced here are simple, fast, and practical, so feel free to follow along.
What can Nginx do?
1. Reverse proxy
2. Load balancing
3. HTTP server (including static/dynamic separation)
4. Forward proxy
These are the capabilities Nginx provides without relying on third-party modules. Each one is described in detail below.
Reverse proxy
Reverse proxying is probably the most common thing Nginx does. What is a reverse proxy? Baidu Baike defines it like this: a reverse proxy (Reverse Proxy) is a proxy server that accepts connection requests from the Internet, forwards them to servers on an internal network, and returns the results obtained from those servers to the client that requested the connection; to the outside world, the proxy server appears to be the real server. Simply put, the real server cannot be reached directly from the external network, so a proxy server is needed: the proxy is reachable from outside while sitting in the same network environment as the real server (it may even be the same machine on a different port).
Here is a simple configuration that implements a reverse proxy:
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host:$server_port;
    }
}
Save the configuration file and start Nginx; accessing localhost is now equivalent to accessing localhost:8080.
Load balancing
Load balancing is another common Nginx feature. It means spreading work across multiple operating units, such as web servers, FTP servers, or critical enterprise application servers, so that they complete the tasks together. Simply put, when there are two or more servers, requests are distributed among the specified servers according to configured rules. Configuring load balancing usually also requires a reverse proxy, through which requests reach the load-balanced upstream. Nginx currently supports three built-in load-balancing strategies plus two commonly used third-party ones.
1. RR (round robin, default)
Each request is assigned to a backend server in turn, in order; if a backend server goes down, it is automatically removed from the rotation.
A simple configuration:
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       81;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}
The core of the load-balancing configuration is:
upstream test {
    server localhost:8080;
    server localhost:8081;
}
Two servers are configured here. In practice they are the same machine with different ports, and the server on port 8081 does not actually exist, i.e. it cannot be reached. Yet accessing http://localhost causes no problems: requests go to http://localhost:8080 because Nginx automatically detects server state and will not route to an unreachable (down) server, so one dead server does not affect use. Since RR is Nginx's default policy, no further settings are needed.
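The round-robin idea can be illustrated with a short simulation. This is a conceptual sketch in Python, not Nginx's actual implementation; the backend names simply mirror the upstream block above.

```python
from itertools import cycle

# Backend list mirroring the upstream block above
backends = ["localhost:8080", "localhost:8081"]

# Round robin: hand out servers one by one, in order, wrapping around
picker = cycle(backends)

# Four consecutive requests alternate between the two backends
assigned = [next(picker) for _ in range(4)]
print(assigned)
# ['localhost:8080', 'localhost:8081', 'localhost:8080', 'localhost:8081']
```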
2. Weight
Specifies the polling probability; the weight is proportional to the share of requests a server receives. It is used when backend servers have uneven performance.
For example:
upstream test {
    server localhost:8080 weight=9;
    server localhost:8081 weight=1;
}
Then, on average, only 1 out of every 10 requests goes to 8081, and the other 9 go to 8080.
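The effect of weights can be sketched with a naive expanded round-robin list. This is only an illustration of the 9:1 ratio, not Nginx's actual smooth weighted round-robin algorithm.

```python
# Weights mirroring the upstream block above
weights = {"localhost:8080": 9, "localhost:8081": 1}

# Naive weighted round robin: repeat each server proportionally to its weight
schedule = [srv for srv, w in weights.items() for _ in range(w)]

# Over each full cycle of 10 requests, 9 hit 8080 and 1 hits 8081
counts = {srv: schedule.count(srv) for srv in weights}
print(counts)
# {'localhost:8080': 9, 'localhost:8081': 1}
```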
3. ip_hash
Both approaches above share a problem: the next request may be distributed to a different server. When the application is not stateless (e.g. it uses a session to store data), this is a big issue: if login information is saved in a session, the user has to log in again whenever a request lands on another server. In many cases we therefore need a given client to always reach the same server, and that is what ip_hash is for. With ip_hash, each request is assigned according to the hash of the client IP, so every visitor consistently accesses the same backend server, which solves the session problem.
upstream test {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}
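The core idea of ip_hash, that the same client IP always maps to the same backend, can be sketched like this. This is a simplified illustration (Nginx's real implementation hashes only part of an IPv4 address); the IP addresses used are placeholders.

```python
import hashlib

backends = ["localhost:8080", "localhost:8081"]

def pick_backend(client_ip: str) -> str:
    # Hash the client IP and map it onto the backend list;
    # the same IP therefore always selects the same server.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# Repeated requests from one IP stick to one backend
first = pick_backend("203.0.113.7")
print(all(pick_backend("203.0.113.7") == first for _ in range(5)))
# True
```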
4. Fair (third party)
Requests are allocated according to the response time of the back-end server, and priority is given to those with short response time.
upstream backend {
    fair;
    server localhost:8080;
    server localhost:8081;
}
5. url_hash (third party)
Assigns requests according to the hash of the requested URL, so that each URL is always directed to the same backend server; this is most effective when the backend servers cache content. Add the hash directive to the upstream block; other parameters such as weight cannot then be written in the server directives. hash_method specifies the hash algorithm to use.
upstream backend {
    hash $request_uri;
    hash_method crc32;
    server localhost:8080;
    server localhost:8081;
}
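The crc32-based URL hashing can be sketched in the same way. This is an illustration only; the request paths are made up, and the mapping is a simplification of what the module does.

```python
import zlib

backends = ["localhost:8080", "localhost:8081"]

def pick_backend(request_uri: str) -> str:
    # crc32 of the URI, mapped onto the backend list: the same URL
    # always lands on the same (cache-warm) server.
    return backends[zlib.crc32(request_uri.encode()) % len(backends)]

# A given URL is always routed to the same backend
print(pick_backend("/static/logo.png") == pick_backend("/static/logo.png"))
# True
```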
These five load-balancing strategies suit different situations, so choose the policy that fits your actual needs. Note that fair and url_hash require installing third-party modules. Since this article focuses on what Nginx can do, installing third-party modules is not covered here.
HTTP server
Nginx is itself a server for static resources: when you only have static content, Nginx can serve it directly. Separating static and dynamic content is also very popular nowadays, and it can be done with Nginx. First, look at Nginx as a static file server.
server {
    listen       80;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        root   e:\wwwroot;
        index  index.html;
    }
}
Now visiting http://localhost serves index.html from the e:\wwwroot directory by default; a website that is just static pages can be deployed this way.
Dynamic and static separation
Static/dynamic separation means splitting a dynamic website's resources, according to certain rules, into those that rarely change and those that change frequently. Once split, the static resources can be cached according to their characteristics; this is the core idea behind static site optimization.
upstream test {
    server localhost:8080;
    server localhost:8081;
}

server {
    listen       80;
    server_name  localhost;

    location / {
        root   e:\wwwroot;
        index  index.html;
    }

    # All static requests are handled by Nginx itself, from wwwroot
    location ~ \.(gif|jpg|jpeg|png|bmp|css|js)$ {
        root e:\wwwroot;
    }

    # All dynamic requests are forwarded to Tomcat
    location ~ \.(jsp|do)$ {
        proxy_pass http://test;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root e:\wwwroot;
    }
}
This way we can put HTML, images, CSS, and JS in the wwwroot directory, while Tomcat handles only jsp and do requests. For example, a request with a gif suffix is served by Nginx as a static file from wwwroot. Here the static files are on the same server as Nginx, but they could also live on another server, configured via reverse proxy and load balancing. Once you understand the basic flow, many configurations become simple; location is followed by a regular expression, which makes it very flexible.
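The routing decision in the configuration above boils down to matching the extension of the request path. The rule can be sketched like this (a conceptual model with hypothetical paths; the extension lists mirror the location blocks above):

```python
import re

# Same extension lists as the location blocks in the config
STATIC = re.compile(r"\.(gif|jpg|jpeg|png|bmp|css|js)$")
DYNAMIC = re.compile(r"\.(jsp|do)$")

def route(path: str) -> str:
    # Mirror the location blocks: static files served by Nginx from
    # wwwroot, dynamic requests proxied to the Tomcat upstream.
    if STATIC.search(path):
        return "nginx: serve from e:\\wwwroot"
    if DYNAMIC.search(path):
        return "proxy_pass http://test (Tomcat)"
    return "location /: serve index.html"

print(route("/img/logo.gif"))   # static file, served by Nginx
print(route("/user/login.do"))  # dynamic request, forwarded to Tomcat
```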
Forward proxy
A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the retrieved content to the client. Only clients use a forward proxy. When you need to use your own server as a proxy, Nginx can act as a forward proxy. At present, however, Nginx has a limitation: it does not support HTTPS. I have searched for ways to configure an HTTPS forward proxy, but the proxying never worked correctly; of course, I may be doing it wrong, so if you know the correct method, please leave a comment.
resolver 114.114.114.114 8.8.8.8;

server {
    resolver_timeout 5s;
    listen 81;
    access_log e:\wwwroot\proxy.access.log;
    error_log  e:\wwwroot\proxy.error.log;

    location / {
        proxy_pass http://$host$request_uri;
    }
}
resolver configures the DNS servers used by the forward proxy, and listen sets the proxy's port. Once configured, you can use the server's IP and port as the proxy address in IE or other proxy plug-ins.
A few final words.
Nginx supports hot reloading: after modifying the configuration file, you can make it take effect without shutting Nginx down. Not everyone knows this; I certainly didn't at first, and I often killed the Nginx process and started it again. The command to make Nginx re-read its configuration is
nginx -s reload
On Windows, it is
nginx.exe -s reload

By this point you should have a deeper understanding of what Nginx can do. Try it out in practice.