2025-04-05 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
I. Definition
Nginx is a reverse proxy server. A reverse proxy sits between the client and the backend server: when the client makes a request, the request first reaches the proxy, and the proxy then connects through the firewall to the backend server on the client's behalf.
The proxy server stands between the client and the server and acts as a middleman, or intermediary.
(Figure: user A → reverse proxy server Z → original server B)
User A always believes it is accessing the original server B rather than proxy server Z, but in fact the reverse proxy server Z accepts user A's request, fetches the required resources from the original server B, and sends them back to user A. Because of the firewall, only proxy server Z is allowed to reach the original server B. In this setup, the firewall and the reverse proxy together shield the original server B, yet user A is unaware of any of it.
To put it simply:
Forward proxy: the client knows which server it wants and connects to it through a proxy; the proxy acts on behalf of the client, so the server may never know who the real client is.
Reverse proxy: the "reverse" is relative to the forward case. The client does not know which backend server it is really talking to; it connects to the proxy, and the proxy acts on behalf of the server. Because the party being represented is the opposite one, it is called a reverse proxy.
II. Simple Use of Nginx
I will not repeat the Nginx installation steps here; they are covered in another blog post. This walkthrough uses two machines:
Server A (the backend being proxied)
Proxy server B (running Nginx)
Step 1: configure the firewall.
On Ubuntu: sudo ufw enable (use sudo ufw disable to turn it off)
If ufw is not installed: sudo apt-get install ufw
Enable the firewall on server A and deny all access except from proxy server B.
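The ufw rules on server A might look like the following. This is only a sketch: the proxy address 192.168.0.1 and backend port 8080 are assumed values, so substitute your own.

```shell
# On server A: deny everything by default, then allow only proxy server B.
# 192.168.0.1 (proxy B's address) and port 8080 are assumptions.
sudo ufw default deny incoming
sudo ufw allow from 192.168.0.1 to any port 8080 proto tcp
sudo ufw enable
sudo ufw status verbose   # verify the resulting rules
```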
Install Nginx on proxy server B and modify the nginx.conf configuration file:
Under the http block, add:
upstream server1 {
    server 192.168.0.134:8080;  # address of the server being proxied
}
In the server block, modify the location:
location / {
    proxy_pass http://server1;
}
After the modification is complete, restart (or reload) nginx.
Now enter http://<proxy server IP>:<port>/ in the browser; the request is transparently forwarded to http://192.168.0.134:8080.
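Putting the two fragments together, a minimal nginx.conf for proxy server B might look like this. It is a sketch: the backend address 192.168.0.134:8080 comes from the example above, and the proxy_set_header lines are common optional additions, not something the steps above require.

```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream server1 {
        server 192.168.0.134:8080;   # the backend being proxied
    }

    server {
        listen 80;

        location / {
            proxy_pass http://server1;                # forward everything to the upstream
            proxy_set_header Host $host;              # preserve the original Host header
            proxy_set_header X-Real-IP $remote_addr;  # pass the real client IP to the backend
        }
    }
}
```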
Load-balancing configuration:
upstream server1 {
    server 192.168.0.134:8080;  # first backend
    server 192.168.0.125:8081;  # second backend
}
When one backend goes down, requests are routed to the other address.
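Failover behavior can be tuned per backend with standard nginx upstream parameters. The weights, counts, and timeouts below are illustrative values, not recommendations, and the third address is a made-up example:

```nginx
upstream server1 {
    server 192.168.0.134:8080 weight=2 max_fails=3 fail_timeout=30s;  # gets a larger share of traffic
    server 192.168.0.125:8081 weight=1;                               # gets a smaller share
    server 192.168.0.126:8081 backup;  # used only when the others are unavailable
}
```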
III. The Process Model of Nginx
After nginx starts, it runs in the background on Unix systems. The background processes consist of one master process and several worker processes. You can also run nginx in the foreground, and although it uses the multi-process model by default, it supports a multi-threaded mode as well.
As mentioned above, after nginx starts there is one master process and multiple worker processes. The master process manages the workers: it monitors their status and starts a new worker when one dies, while external requests are handled by the workers themselves. The workers are peers of one another, and each request is processed by exactly one worker. The nginx process model looks like this:
To restart nginx gracefully, run ./nginx -s reload. When the master process receives this command, it reloads the configuration file, starts new worker processes, and signals the old workers that they may exit. The new workers begin accepting requests, while each old worker exits only after finishing the requests it is currently handling, so the service restarts without interruption.
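The -s flag is shorthand for sending the master process a signal, so the same effects can be achieved with kill. The pid-file path below is illustrative; check where your build writes nginx.pid:

```shell
nginx -s reload                      # graceful reload: re-read config, swap in new workers
kill -HUP  $(cat /run/nginx.pid)     # same effect via the HUP signal
nginx -s quit                        # graceful shutdown: workers finish in-flight requests
kill -QUIT $(cat /run/nginx.pid)     # same effect via the QUIT signal
```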
How a worker handles requests:
The master process first creates the socket(s) to listen on, then forks the worker processes. When a new connection arrives, the listenfd of every worker becomes readable. To ensure that only one process handles the connection, all workers compete for the accept_mutex before registering a read event on listenfd; the worker that grabs the mutex registers the event and calls accept in its handler to take the connection. After accepting, that worker reads the request, parses it, processes it, generates the response, returns it to the client, and finally closes the connection. That is the life of one complete request: it is handled entirely by, and only within, a single worker process.
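The master/worker accept pattern described above can be sketched in Python. This is an illustration of the idea, not nginx's actual implementation: a multiprocessing.Lock stands in for accept_mutex, lock-based serialization replaces nginx's event registration, and the fork start method is assumed, so the sketch is POSIX-only.

```python
import os
import socket
import multiprocessing

def worker(lsock, accept_mutex):
    """Worker loop: serialize accept() with a lock, then handle the
    connection entirely inside this one process, as nginx workers do."""
    while True:
        with accept_mutex:              # stand-in for nginx's accept_mutex
            conn, _ = lsock.accept()    # only the lock holder accepts
        with conn:
            conn.recv(1024)             # read (and ignore) the request
            conn.sendall(f"handled by worker {os.getpid()}".encode())

def main():
    ctx = multiprocessing.get_context("fork")   # POSIX-only assumption
    # The master creates the listening socket before forking, so every
    # worker inherits the same listenfd.
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
    lsock.listen(16)
    port = lsock.getsockname()[1]
    accept_mutex = ctx.Lock()
    workers = [ctx.Process(target=worker, args=(lsock, accept_mutex), daemon=True)
               for _ in range(2)]       # master forks multiple worker processes
    for w in workers:
        w.start()
    # Act as a client: the connection is served by exactly one worker.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"GET / HTTP/1.0\r\n\r\n")
        reply = c.recv(1024).decode()
    for w in workers:
        w.terminate()
    lsock.close()
    return reply

if __name__ == "__main__":
    print(main())   # e.g. "handled by worker 12345"
```

Running it shows the reply carries the pid of a single worker, mirroring the point above that one request lives entirely inside one worker process.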
© 2024 shulou.com SLNews company. All rights reserved.