
How to build a reverse proxy server with the help of Nginx

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces how to build a reverse proxy server with Nginx. It should be a useful reference; I hope you gain a lot from reading it.

1. Reverse proxy: the "broker" of the Web server

1.1 A first impression of the reverse proxy

A reverse proxy (Reverse Proxy) is a proxy server that accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the client that requested the connection. To the outside world, the proxy server appears to be the server itself.

As the figure above shows: the reverse proxy server sits in the website's server room, receives HTTP requests on behalf of the website's Web servers, and forwards those requests to them.

1.2 The role of a reverse proxy

① Protecting the website: any request from the Internet must first pass through the proxy server.

② Accelerating Web requests through caching: the proxy can cache some of the real Web server's static resources, reducing its load.

③ Implementing load balancing: the proxy acts as a load-balancing server, distributing requests evenly and balancing the load across the servers in the cluster.
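All three roles hang off a single Nginx server block. Below is a minimal sketch of the forwarding part; the internal address 192.168.1.10:8080 is an illustrative assumption, not from this article's setup:

```nginx
server {
    listen 80;                                   # public-facing HTTP port
    location / {
        proxy_pass http://192.168.1.10:8080;     # internal Web server (illustrative address)
        proxy_set_header Host $host;             # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr; # pass the real client IP to the back end
    }
}
```

With only this much configuration, the proxy already shields the internal server from direct Internet access; caching and load balancing are added on top, as shown in the later sections.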

2. A first look at Nginx: simple but extraordinary

2.1 What is Nginx?

Nginx is a lightweight Web server, reverse proxy, and email proxy server. Its source code is released under a BSD-like license, and it is known for its stability, rich feature set, example configuration files, and low consumption of system resources.

Background: Nginx (pronounced "engine x") was developed by the Russian programmer Igor Sysoev. It was first used by Rambler, a large Russian portal and search engine. The software is distributed under a BSD-like license and runs on operating systems such as UNIX, GNU/Linux, BSD, Mac OS X, Solaris, and Microsoft Windows.

When it comes to Web servers, Apache and IIS are the two giants, but a faster and more flexible rival, Nginx, is catching up.

2.2 Where Nginx is used

Nginx has been running on the Russian portal Rambler Media (www.rambler.ru) for three years, and more than 20% of Russia's virtual hosting platforms use Nginx as their reverse proxy server.

In China, many websites, including Taobao, Sina Blog, Sina Podcast, NetEase News, 6.cn, 56.com, Discuz!, Shuimu Community, Douban, YUPOO, Haini, and Xunlei, use Nginx as a Web server or reverse proxy server.

2.3 Core Features of Nginx

(1) Cross-platform: Nginx can be compiled and run on most Unix-like operating systems, and a Windows port also exists.

(2) Extremely simple configuration: it is very easy to get started, and the configuration file reads much like program code.

(3) Non-blocking, highly concurrent connections: the first phase of disk I/O during data copying is non-blocking. Officially, Nginx can support 50,000 concurrent connections; in real production environments it typically reaches 20,000 to 30,000. (This is because Nginx uses the epoll event model.)

PS: For a Web server, consider the basic flow of a request: establish the connection, receive data, send data. From the system's point of view, these steps are read and write events. ① With blocking calls, if a read/write event is not ready, the caller has to wait; when the event finally becomes ready, the request has already been delayed. ② Since blocking calls are unacceptable when events are not ready, non-blocking calls are used instead. Non-blocking means the call returns immediately to report that the event is not ready, so the caller can do other work and check again later, repeating until the event is ready. The thread is no longer blocked, but it has to poll the event's status repeatedly, and that polling is not cheap.

(4) Event-driven: the communication mechanism uses the epoll model to support a larger number of concurrent connections.

① Non-blocking calls determine readiness by constantly polling the state of events, which brings a lot of overhead, so an asynchronous non-blocking event handling mechanism is used instead. This mechanism monitors many events at once; the monitoring call itself blocks, but a timeout can be set, and within the timeout it returns as soon as any event is ready. This solves the problems of both blocking calls and plain non-blocking calls.

② Take the epoll model as an example: when an event is not ready, it is placed in epoll; when an event is ready, it is handled; if handling returns EAGAIN, it is put back into epoll. In this way, the process handles whichever events are ready and waits in epoll only when none are, and this is how large numbers of concurrent requests are handled. Of course, "concurrent" here means in-flight requests: there is only one thread, so only one request can be processed at any instant. The thread simply switches between requests, and each switch happens voluntarily because an asynchronous event was not ready. Switching here is essentially free; you can think of it as looping over the set of ready events, which is in fact what happens.

③ Compared with multithreading, this style of event handling has great advantages: no threads need to be created, each request occupies very little memory, there is no context switching, and handling an event is very lightweight. No matter how high the concurrency, it does not lead to unnecessary resource waste (context switches). With IIS, by contrast, each request gets a dedicated worker thread; when concurrency reaches the thousands, thousands of threads are processing requests at the same time. This is no small challenge for the operating system: threads consume a lot of memory, and thread context switches impose a high CPU overhead, so performance naturally cannot keep up and degrades badly under high concurrency.

Summary: through its asynchronous non-blocking event handling mechanism, Nginx lets a single process loop over ready events, thereby achieving high concurrency with light weight.

(5) Master/Worker structure: one master process spawns one or more worker processes.

PS: The core idea of the Master-Worker design pattern is to parallelize otherwise serial logic by splitting it into independent modules that execute in parallel. It has two main components: the Master splits the work into independent parts, maintains the Worker queue, and dispatches each part to multiple Workers for parallel execution; the Workers perform the actual computation and return their results to the Master. Q: What does Nginx gain from this process model? A: Independent processes do not affect one another. If one process exits, the others keep working and the service is not interrupted, and the Master process restarts a new Worker very quickly. An abnormal Worker exit certainly means there is a bug in the program, and it fails all requests on that Worker, but not all requests overall, so the risk is reduced.

(6) Small memory footprint: memory consumption is very low even when handling heavy concurrency. With 30,000 concurrent connections, 10 running Nginx processes consume only about 150 MB of memory (15 MB x 10 = 150 MB).

(7) Built-in health checks: if a back-end Web server behind the Nginx proxy goes down, front-end access is not affected.

(8) Bandwidth saving: GZIP compression is supported, and headers that let the browser cache content locally can be added.

(9) High stability: when acting as a reverse proxy, the probability of downtime is minimal.

3. Hands-on practice: building a load-balanced Web server cluster with Nginx + IIS

Here we work in a Windows environment: the same Web site is deployed to IIS on different servers, and a single Nginx reverse proxy server provides unified access, giving us a minimal reverse proxy and load-balancing service. Limited by the experimental conditions, we simulate the reverse proxy and the IIS cluster on one computer. The experimental environment is shown in the figure above: the Nginx service and the Web sites are all deployed on one machine, Nginx listens on HTTP port 80, and the Web site is deployed in the same IIS server under different port numbers (8050 and 8060 here). When a user visits localhost, Nginx acts as a reverse proxy and forwards requests evenly to the Web applications on the two IIS ports for processing. Although the experimental environment is simple and limited, it is enough to demonstrate a basic load-balancing effect.

3.1 Prepare an ASP.NET website for deployment to the IIS server cluster

(1) Create a new ASP.NET Web application in VS. To show the effect on a single computer, we make a copy of the Web application and modify Default.aspx in each copy so that the home pages display slightly different information: Web1 shows "The First Web:" and Web2 shows "The Second Web".

(2) Debug and run to see how the two websites look:

① The display effect of Web1:

② The display effect of Web2:

③ Deploy them to IIS, assigning each a different port number: here I chose 8050 for Web1 and 8060 for Web2.

(3) Summary: in a real environment, building a Web application server cluster means deploying the same Web application to multiple Web servers in the cluster.

3.2 Download Nginx and deploy it to the server as an auto-starting Windows service

(1) Download the Windows version of Nginx from the official site: http://nginx.org/en/download.html. Here we use the nginx/Windows-1.4.7 version for the experiment. (There is a download address at the bottom of this article.)

(2) Extract it to any directory on disk. For example, here I extracted it to D:\Servers\nginx-1.4.7.

(3) Starting, stopping, and reloading the service from cmd: start nginx in the background with "start nginx.exe"; stop the service with "nginx -s stop"; reload the configuration with "nginx -s reload".

(4) Starting the Nginx service from cmd does not meet real-world needs, so we want to register it as a Windows service set to start automatically. Here we use a handy little program, "Windows Service Wrapper", to register nginx.exe as a Windows service. The steps are as follows:

① Download the latest version of the Windows Service Wrapper program; the file I downloaded is named "winsw-1.8-bin.exe" (there is a download address at the bottom of this article). Then rename it as you like (for example "nginx-service.exe"; of course, you can also leave the name unchanged).

② Copy the renamed nginx-service.exe into the Nginx installation directory (here, D:\Servers\nginx-1.4.7).

③ In the same directory, create an XML configuration file for Windows Service Wrapper. Its name must match the executable's name from the first step (here "nginx-service.xml"; if you did not rename the program, it should be "winsw-1.8-bin.xml"). The content of the XML is as follows:

<service>
  <id>Nginx</id>
  <name>Nginx Service</name>
  <description>High Performance Nginx Service</description>
  <executable>D:\Servers\nginx-1.4.7\nginx.exe</executable>
  <logpath>D:\Servers\nginx-1.4.7\</logpath>
  <logmode>roll</logmode>
  <arguments>-p D:\Servers\nginx-1.4.7</arguments>
  <stoparguments>-p D:\Servers\nginx-1.4.7 -s stop</stoparguments>
</service>

④ From the command line, register it as a Windows service with: nginx-service.exe install

⑤ The Nginx service now appears in the Windows service list, where we can set it to start automatically:

(5) Summary: in a Windows environment, services you provide should generally have their startup type set to Automatic.

3.3 Modifying the Nginx core configuration file nginx.conf

(1) The number of worker processes and the maximum number of connections per process:

worker_processes: the number of Nginx worker processes; it is recommended to set this equal to the total number of CPU cores.

worker_connections: the maximum number of connections per process; the server's maximum number of connections = connections per process * number of processes.
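These two settings appear near the top of nginx.conf. A sketch, assuming a 4-core machine (the specific numbers are illustrative):

```nginx
worker_processes  4;             # recommended: equal to the total number of CPU cores

events {
    worker_connections  10240;   # maximum connections per worker process
}
# maximum server connections = worker_connections * worker_processes
```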

(2) basic configuration of Nginx:

The listening port is usually the HTTP port, 80.

There can be multiple domain names, separated by spaces: for example, server_name www.ha97.com ha97.com
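In nginx.conf, these basics form the skeleton of a server block. A minimal sketch using the article's own example domains:

```nginx
server {
    listen      80;                     # the usual HTTP port
    server_name www.ha97.com ha97.com;  # multiple domains, separated by spaces
}
```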

(3) basic configuration of load balancer list:

location / { }: load-balances all requests. If we want to load-balance only files with the aspx suffix, we can write: location ~ .*\.aspx$ { }

proxy_pass: directs requests to a custom server list. Here we direct requests to the load-balancing server list identified as http://cuitccol.com.

In the load-balancing server list, weight is the weight, which can be set according to each machine's configuration: a server with very good hardware that can handle more requests gets a higher weight, and a server with poor hardware gets a lower one. For example, the first server can be configured with weight=2 and the second with weight=1. The higher the weight, the greater the probability of being assigned a request.
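Putting these pieces together, a sketch that matches this article's setup (ports 8050 and 8060 on the local IIS, and the upstream name cuitccol.com as quoted above; the 127.0.0.1 addresses are an assumption for the single-machine experiment):

```nginx
upstream cuitccol.com {                # the load-balancing server list
    server 127.0.0.1:8050 weight=2;    # better-equipped server: higher weight
    server 127.0.0.1:8060 weight=1;    # weaker server: lower weight
}

server {
    listen 80;
    location ~ .*\.aspx$ {             # load-balance all .aspx requests
        proxy_pass http://cuitccol.com;    # hand them to the upstream list
    }
}
```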

(4) Summary: the most basic Nginx configuration is roughly the above; of course, it is only the most basic. (See the nginx-1.4.7 download at the bottom for a detailed configuration.)

3.4 Adding an Nginx cache configuration for static files

To improve response speed and reduce the load on the real servers, we can cache static resources on the reverse proxy server; this is one of a reverse proxy server's important roles.

(1) Caching static image files

root /nginx-1.4.7/staticresources/image: for files such as jpg/png matched by this configuration, look under the /nginx-1.4.7/staticresources/image folder for a match and return the file.

expires 7d: the expiration time is 7 days. Static files are not updated very often, so the expiration time can be set fairly long; if they are updated frequently, it can be set smaller.
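The two directives combine into one cache location. A sketch (the jpg|png extension list is an assumption based on the file types the text mentions):

```nginx
location ~ .*\.(jpg|png)$ {                      # match static image requests
    root    /nginx-1.4.7/staticresources/image;  # look up files under this folder
    expires 7d;                                  # browsers may cache them for 7 days
}
```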

TIPS: the style and script cache configurations below are the same as this one except for the folder location, so I won't repeat the explanation.

(2) Caching static style files

(3) Caching static script files

(4) Create a static resource folder inside the Nginx service folder and copy in the static files to be cached: here I mainly copied the image, css, and js files used by the Web application.

(5) Summary: with caching configured for static files, requests for them can be answered directly by the reverse proxy server instead of being forwarded to a specific Web server, which improves response speed and reduces the load on the real Web servers.

Thank you for reading this article carefully. I hope "How to build a reverse proxy server with Nginx" is helpful to everyone.


© 2024 shulou.com SLNews company. All rights reserved.
