What can Nginx do?

This article mainly explains what Nginx can do. The content is simple and clear, and easy to learn and understand.

The origin of Nginx

Never heard of Nginx? Then you have surely heard of its "peer" Apache! Nginx, like Apache, is a web server. Based on the REST architectural style, it uses the Uniform Resource Identifier (URI) or Uniform Resource Locator (URL) as the basis for communication and provides various network services over the HTTP protocol.

However, when these servers were first designed, they were constrained by the environment of the time, such as the scale of users, network bandwidth, and product characteristics, so their positioning and development took different paths. This also gives each web server its own distinct characteristics.

Apache has a long development history and is the undisputed number one web server in the world. It has many advantages: it is stable, open source, and cross-platform. But it has been around for a long time, and when it rose to prominence the Internet industry was nothing like it is today, so it was designed as a heavyweight server that does not handle high concurrency well. Running tens of thousands of concurrent connections on Apache consumes a great deal of memory, and the operating system's switching between processes or threads also consumes a lot of CPU, which drags down the average response speed of HTTP requests.

All of this makes it impossible for Apache to be a high-performance web server, and so the lightweight, high-concurrency server Nginx arrived at just the right moment.

Igor Sysoev, a Russian engineer, developed Nginx in C while working for Rambler Media, where it provided excellent and stable service as a web server.

Igor Sysoev later opened up the Nginx source code and released it under a free software license.

Why did it take off? Because:

Nginx uses an event-driven architecture that enables it to support millions of TCP connections

Its high degree of modularity and free software license mean third-party modules keep emerging (this is the era of open source)

Nginx is a cross-platform server that can run on operating systems such as Linux, Windows, FreeBSD, Solaris, AIX, and Mac OS

These excellent designs bring great stability.

So, Nginx is hot!

Nginx's chance to show its strengths

Nginx is a free, open-source, high-performance HTTP server and reverse proxy server; it is also an IMAP/POP3/SMTP proxy server. Nginx can be used as an HTTP server to publish websites, and as a reverse proxy to implement load balancing.
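
As a rough illustration of the first use, a minimal configuration for publishing a static website could look like the sketch below; the server name and root path are placeholder values, not details from this article.

    # Minimal sketch: serving a static website with Nginx.
    # example.com and /var/www/example are placeholders.
    server {
        listen       80;
        server_name  example.com;

        location / {
            root   /var/www/example;   # directory that holds the site's files
            index  index.html;         # default document
        }
    }

After editing the configuration, nginx -t checks the syntax and nginx -s reload applies the change.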

About proxies

When it comes to proxies, we first need to be clear about one concept: a proxy is a representative, an intermediary channel.

Two roles are involved here: the party being represented and the target. The process in which the represented party reaches the target and gets some task done through the proxy is the proxy operation. It is just like a specialty store in everyday life: a customer buys a pair of shoes at an adidas store; the store is the proxy, the party being represented is the adidas manufacturer, and the target is the customer.

Forward proxy

Before we talk about the reverse proxy, let's first look at the forward proxy. The forward proxy is the proxy model people come into contact with most often. We will explain what a forward proxy is from two angles: software and everyday life.

In today's network environment, if for technical reasons we need to visit certain foreign websites, we may find there is no way to reach them directly through a browser. In that case, people resort to "climbing over the wall": the main approach is to find a proxy server that can reach the foreign site, send our request to that proxy server, let the proxy server visit the foreign website, and then have it pass the data back to us.

The most important feature of a forward proxy is that the client knows exactly which server it wants to reach; the server only knows which proxy server the request came from, not which specific client sent it. The forward proxy model therefore shields or hides the real client's information. Let's look at a schematic diagram (I have drawn the client and the forward proxy in the same box because they belong to the same environment, which I will come back to later):

The client must be configured to use the forward proxy server; all it needs is the proxy server's IP address and port, as shown in the figure.

To sum up: a forward proxy "proxies the client and sends requests on its behalf". It is a server that sits between the client and the origin server. To obtain content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the retrieved content to the client. The client has to apply some special settings to use a forward proxy.
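
To make this concrete, here is a minimal sketch of Nginx acting as a plain-HTTP forward proxy. The listening port and resolver address are assumptions, and note that stock Nginx does not handle the HTTPS CONNECT method, so this only illustrates the plain-HTTP case:

    # Minimal sketch: Nginx as a plain-HTTP forward proxy.
    # Port 3128 and the resolver address are placeholder choices.
    server {
        listen   3128;
        resolver 8.8.8.8;                          # required because the upstream host is a variable

        location / {
            proxy_pass http://$host$request_uri;   # forward to whatever host the client asked for
        }
    }

A client would then be pointed at this proxy, for example curl -x http://proxy-host:3128 http://example.org/, which matches the "IP address and port" setup described above (proxy-host is a placeholder).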

The purposes of a forward proxy:

(1) Access resources that are otherwise unreachable, such as Google.

(2) Cache content to speed up access to resources.

(3) Authorize client access to the Internet and perform authentication.

(4) Record user access (Internet behavior management) and hide user information from the outside.

Reverse proxy

Now that we understand what a forward proxy is, let's continue with how a reverse proxy works. Suppose the number of visitors connecting to a website at the same time explodes every day: a single server is nowhere near enough to satisfy people's ever-growing shopping appetite. This is where a familiar term comes in: distributed deployment, that is, deploying multiple servers to remove the limit on the number of visitors. Most of the features of the Taobao site are implemented directly with Nginx acting as a reverse proxy, and after wrapping Nginx and other components, the result earned a fancier name: Tengine. Interested readers can visit Tengine's official website for details: http://tengine.taobao.org/. So how does a reverse proxy implement distributed cluster operation? Let's first look at a schematic diagram (I have drawn the server and the reverse proxy together because they belong to the same environment, which I will explain later):

From the diagram above you can see clearly that requests sent by multiple clients are received by the Nginx server and distributed, according to certain rules, to back-end business servers for processing. At this point the source of each request, the client, is known, but it is not clear which server actually handles the request. Nginx here plays the role of a reverse proxy.

The client is unaware of the proxy's existence; the reverse proxy is transparent to the outside world, and visitors do not know they are talking to a proxy, because the client needs no configuration at all.

A reverse proxy "proxies the server and receives requests on its behalf". It is mainly used when a server cluster is deployed in a distributed fashion, and it hides the servers' information.
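
A minimal reverse-proxy sketch in Nginx configuration looks like the following; the backend address 127.0.0.1:8080 and the server name are assumed placeholders, not details from this article:

    # Minimal sketch: Nginx as a reverse proxy in front of one backend.
    # 127.0.0.1:8080 stands in for the real business server.
    server {
        listen       80;
        server_name  example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;           # hand the request to the backend
            proxy_set_header Host $host;                # preserve the original Host header
            proxy_set_header X-Real-IP $remote_addr;    # tell the backend who the real client is
        }
    }

The client only ever sees the proxy's address, which is exactly the "hides the servers' information" behavior described above.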

The role of a reverse proxy:

(1) To protect the private network: the reverse proxy is usually the public-facing address, while the web servers stay on the private network.

(2) Load balancing: the reverse proxy server optimizes how load is spread across the website's servers.

Project scenario

Usually, in a real project, the forward proxy and the reverse proxy are likely to coexist in the same application scenario: the forward proxy handles the client's requests to reach the target server, while the target server is itself a reverse proxy server that fronts multiple real business servers. The specific topology diagram is as follows:

The difference between the two

One picture makes the difference between a forward proxy and a reverse proxy clear, as shown in the figure.

Illustration:

In a forward proxy, the Proxy and the Client belong to the same LAN (the box in the figure), and the client's information is hidden.

In a reverse proxy, the Proxy and the Server belong to the same LAN (the box in the figure), and the server's information is hidden.

In fact, in both cases the Proxy does the same job, sending and receiving requests and responses on behalf of another party; structurally the two are simply mirror images of each other, which is why the latter is called a reverse proxy.

Load balancing

We have now clarified the concept of a proxy server. So when Nginx plays the role of a reverse proxy server, by what rules does it distribute requests? And for different project scenarios, can those distribution rules be controlled?

The requests sent by clients and received by the Nginx reverse proxy server are what we call the load.

The rule by which those requests are distributed to different servers for processing is the balancing rule.

So the process of distributing the requests received by the server according to certain rules is called load balancing.

In real projects, load balancing comes in two forms: hardware load balancing and software load balancing. Hardware load balancing, also called hard load, such as F5 load balancers, is relatively expensive, but its stability and data security are very well guaranteed; companies such as China Mobile and China Unicom tend to choose hard load. For cost reasons, more companies choose software load balancing, which implements request distribution using existing technology combined with the host's hardware.

The load balancing scheduling policies supported by Nginx are as follows (a configuration sketch follows the list):

Weighted round robin (the default, commonly used): incoming requests are assigned to different backend servers according to their weight. If a backend server goes down during operation, Nginx automatically removes it from the rotation, and the handling of requests is not affected. You can set a weight value for each backend server to adjust its share of requests: the larger the weight, the higher the probability of receiving a request. In practice the weight is mainly tuned to match the differing hardware of the backend servers.

ip_hash (commonly used): each request is assigned according to the hash of the originating client's IP. Under this algorithm, a client with a fixed IP address always reaches the same backend server, which to some extent solves the session-sharing problem in a clustered deployment.

fair: an adaptive scheduling algorithm that dynamically balances requests according to each backend server's time from request to response. Servers that respond quickly and efficiently receive requests with higher probability, while servers that respond slowly receive fewer, combining the advantages of the two algorithms above. Note, however, that Nginx does not support the fair algorithm by default; to use it you need to install the upstream_fair module.

url_hash: requests are assigned according to the hash of the requested URL, so each URL is always directed to the same backend server, which improves cache efficiency when Nginx is used as a static server. Again, Nginx does not support this scheduling algorithm by default; you need to install Nginx's hash module to use it.
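
The configuration sketch below shows how these policies are declared in an upstream block; the server addresses are placeholders, and the fair and url_hash variants are left as comments because, as noted above, they rely on extra modules:

    # Sketch of an upstream block for load balancing; addresses are placeholders.
    upstream backend {
        # Weighted round robin (the default policy; weight adjusts each server's share).
        server 192.168.0.11:8080 weight=3;
        server 192.168.0.12:8080 weight=1;

        # ip_hash;              # uncomment to pin each client IP to one backend
        # fair;                 # requires the third-party upstream_fair module
        # hash $request_uri;    # url_hash-style policy via the hash directive
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;   # distribute requests according to the policy above
        }
    }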

Thank you for reading. That is the content of "what can Nginx do". After studying this article, I believe you have a deeper understanding of what Nginx can do; the specifics still need to be verified in practice. Welcome to follow for more related articles.
