What is the function of Nginx?


This article introduces the relevant knowledge of "what is the role of Nginx". Many people run into exactly these questions in real projects, so let the editor walk you through how to handle them. I hope you read it carefully and come away with something useful!

How Nginx came about

Nginx, like Apache, is a Web server. Following the REST architectural style, it uses the Uniform Resource Identifier (URI) or Uniform Resource Locator (URL) as the basis for communication and delivers a variety of network services over the HTTP protocol.

However, when these servers were first designed, each was constrained by the environment of its time, such as user scale, network bandwidth, and product requirements, so their positioning and evolution differed. This is also what gives each Web server its own distinct character.

Apache has a long history of development and is the undisputed number one Web server in the world. It has many advantages: it is stable, open source, and cross-platform, among others.

But it has been around for a long time, and when it rose to prominence the Internet industry was nothing like it is today, so it was designed as a heavyweight server.

It does not handle high concurrency well: tens of thousands of concurrent accesses cause an Apache server to consume a large amount of memory.

The operating system's constant switching between processes or threads also consumes a lot of CPU, which drags down the average response speed of HTTP requests.

All of this means that Apache cannot become a high-performance Web server, and the lightweight, high-concurrency Nginx emerged at just the right historical moment.

Igor Sysoev, a Russian engineer, developed Nginx in C while working for Rambler Media.

As a Web server, Nginx provided excellent and stable service for Rambler Media. Igor Sysoev later opened up the Nginx code and released it under a free software license.

Nginx became popular because of the following points:

Nginx uses an event-driven architecture that enables it to support millions of TCP connections.

A highly modular design and a free software license led to a flourishing ecosystem of third-party modules (this is the era of open source, after all).

Nginx is a cross-platform server that can run on Linux, Windows, FreeBSD, Solaris, AIX, Mac OS and other operating systems.

These excellent designs bring great stability.

The opportunity for Nginx to show its strengths

Nginx is a free, open source, high-performance HTTP server and reverse proxy server; it is also an IMAP, POP3, SMTP proxy server.

Nginx can act as an HTTP server to publish a website, and it can act as a reverse proxy to achieve load balancing.
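
As a concrete picture of the first role, here is a minimal sketch of a server block that publishes a static website; the domain name and file paths are placeholders of my own, not taken from this article:

    # hypothetical file: /etc/nginx/conf.d/static-site.conf
    server {
        listen       80;                     # accept plain HTTP on port 80
        server_name  example.com;            # placeholder domain
        location / {
            root   /var/www/example;         # directory holding the static files
            index  index.html;               # default page to serve
        }
    }

After reloading the configuration (nginx -s reload), requests for http://example.com/ are answered directly from the files under /var/www/example.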

About proxies

When it comes to proxies, we first need to be clear about a concept: a proxy is a representative, an intermediary channel. Two roles are involved: the party being represented (the proxied role) and the target role.

The process in which the proxied role completes some task with the target role through this intermediary is the proxying process. It is just like a brand store in everyday life: a customer goes to an adidas store to buy a pair of shoes; the store is the proxy, the proxied role is the adidas manufacturer, and the target role is the customer.

Forward proxy

Before we talk about reverse proxies, let's look at forward proxies, the proxy model most people encounter first. We will explain what a forward proxy is from two angles: software and everyday life.

In today's network environment, if we need to visit certain foreign websites for technical reasons, we may find that there is simply no way to reach them directly through a browser.

In that case we may resort to FQ ("climbing over the wall") to get access. The main approach is to find a proxy server that can reach those foreign websites: we send our request to the proxy server, the proxy server visits the foreign website on our behalf, and the data is then passed back to us.

The most important feature of a forward proxy is that the client knows exactly which server it wants to reach; the server only knows which proxy server the request came from, not which specific client. The forward proxy mode thus shields or hides the real client's information.

Picture a schematic in which the client and the forward proxy sit in the same box, that is, the same environment (I will come back to this later).

The client must be configured to use the forward proxy server; all it needs to know is the proxy's IP address and port.


To sum up: a forward proxy "proxies the client". It is a server that sits between the client and the origin server; to get content from the origin server, the client sends a request to the proxy and names the target (the origin server).

The proxy then forwards the request to the origin server and returns the content it obtains to the client. The client must apply some special settings in order to use a forward proxy.
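
Nginx is used far more often as a reverse proxy, but purely to illustrate the forward-proxy idea, a bare-bones plain-HTTP forward proxy can be sketched like this; the listening port and resolver address are placeholders, and proxying HTTPS traffic would additionally need a third-party module such as ngx_http_proxy_connect_module:

    server {
        listen   3128;                             # port that clients point their proxy settings at
        resolver 8.8.8.8;                          # needed because the target host differs per request
        location / {
            proxy_pass http://$host$request_uri;   # forward to whatever host the client asked for
            proxy_set_header Host $host;
        }
    }

The client's "special settings" then amount to declaring this machine as its proxy, for example export http_proxy=http://<proxy-ip>:3128 for the shell, or curl -x http://<proxy-ip>:3128 http://example.com/ for a single request.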

What a forward proxy is used for:

Accessing resources that are otherwise unreachable, such as Google.

Caching, to speed up access to those resources.

Authorizing client access and authenticating users before they reach the Internet.

Recording user access (Internet behavior management) and hiding the user's information from the outside world.

Reverse proxy

Now that we know what a forward proxy is, let's look at how a reverse proxy is handled. Take a site like Taobao: the number of visitors connecting to the site at the same time has exploded, and a single server is nowhere near enough to satisfy people's ever-growing appetite for shopping.

This is where a familiar term comes in: distributed deployment, that is, deploying multiple servers to relieve the limit on how many visitors can be served.

Most of Taobao's functionality is likewise delivered through a reverse proxy built directly on Nginx; after Nginx and other components were wrapped together, the result was given a fancier name: Tengine.

If you are interested, you can visit the Tengine website for more information:

http://tengine.taobao.org/

So how does a reverse proxy support distributed cluster operation? Picture a schematic in which the servers and the reverse proxy sit in the same box, that is, the same environment (again, more on this later).

In that picture, the requests sent by multiple clients are all received by the Nginx server and then distributed, according to certain rules, to back-end business servers for processing.

At this point the source of the request, the client, is clear, but it is not clear which server will actually handle the request. Nginx is playing the role of a reverse proxy.

The client is unaware that a proxy exists: a reverse proxy is transparent to the outside world, and visitors do not know they are talking to a proxy, because the client needs no configuration at all to access it.

Reverse proxy, "it proxies the server", is mainly used in the case of distributed deployment of the server cluster, the reverse proxy hides the information of the server.

What a reverse proxy is used for:

Protecting the internal network: the reverse proxy usually serves as the public network access address, while the Web servers themselves stay on the intranet.

Load balancing: the reverse proxy server spreads the website's load across the back-end servers.

Project scenario

In practice, it is quite likely that a forward proxy and a reverse proxy coexist in the same application scenario: the forward proxy carries the client's requests to the target server, and that target server is in fact a reverse proxy server, which in turn proxies several real business servers behind it.

The difference between a forward proxy and a reverse proxy can be summed up as follows:

In a forward proxy, the proxy and the client belong to the same LAN, and the client's information is hidden.

In a reverse proxy, the proxy and the server belong to the same LAN, and the server's information is hidden.

In both cases the proxy does the same thing: it sends and receives requests and responses on someone else's behalf. Structurally the two are simply mirror images, with left and right swapped, which is why the latter arrangement is called a reverse proxy.

Load balancing

Now that the concept of a proxy server is clear, the next question is: when Nginx plays the role of a reverse proxy server, by what rules does it distribute requests? And can those rules be tuned for different project scenarios?

The requests that clients send and the Nginx reverse proxy server receives are what we call the load. The rule by which those requests are distributed to different servers for processing is the balancing rule.

Therefore, the process of distributing requests received by the server according to the rules is called load balancing.

In real projects there are two kinds of load balancing: hardware load balancing and software load balancing. Hardware load balancing, also called "hard load", for example an F5 load balancer, is relatively expensive and costly to run.

However, it offers strong guarantees of stability and data security, so companies such as China Mobile and China Unicom tend to choose hard load balancing.

For cost reasons, more companies choose software load balancing, which uses existing technology combined with ordinary host hardware to implement a message-queue style distribution mechanism.

The load balancing scheduling algorithms supported by Nginx are as follows:

① Weighted round robin (the default): incoming requests are handed to the back-end servers one by one, in order. If a back-end server goes down, Nginx automatically removes it from the rotation, and request handling is not affected at all.

With this approach you can also set a weight for each back-end server to adjust the share of requests each one receives.

The larger the weight, the higher the probability that a request is sent to that server; in practice the weights are tuned to match the hardware configuration of each back-end server.
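
A minimal weighted round robin sketch; the upstream name and server addresses are placeholders:

    upstream backend {
        server 192.168.1.10:8080 weight=3;   # stronger machine, receives roughly 3/5 of the requests
        server 192.168.1.11:8080 weight=2;   # weaker machine, receives roughly 2/5
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backend;       # requests are spread across the upstream group
        }
    }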

② ip_hash: each request is routed according to a hash of the client's IP address. Under this algorithm a client with a fixed IP always reaches the same back-end server, which to some extent solves the session-sharing problem in a clustered deployment.
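
A minimal ip_hash sketch, using the same placeholder addresses:

    upstream backend {
        ip_hash;                             # pin each client IP to one back-end server
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
    }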

③ fair: a smarter scheduling algorithm that dynamically allocates requests based on each back-end server's response time.

Servers that respond quickly and process efficiently receive a larger share of requests, while servers that respond slowly and process inefficiently receive fewer; it combines the strengths of the two algorithms above.

Note, however, that Nginx does not support the fair algorithm out of the box; to use it you need to install the upstream_fair module.
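
Assuming the third-party upstream_fair module has been compiled in, the configuration would look roughly like this:

    upstream backend {
        fair;                                # directive provided by upstream_fair, not by stock Nginx
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
    }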

④ url_hash: requests are assigned according to a hash of the requested URL, so every request for a given URL is directed to the same fixed back-end server, which can improve cache efficiency when Nginx is used as a static server.

Note as well that Nginx does not support this scheduling algorithm by default; you need to install Nginx's hash package to use it.
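
As a sketch: recent Nginx releases also ship a built-in hash directive in the upstream module, which provides the same URL-based stickiness without a separate package (the addresses below are placeholders):

    upstream backend {
        hash $request_uri consistent;        # every request for a given URL goes to the same back end
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
    }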

Comparison of Web servers

Pulling together the points above: Apache is mature, stable, open source and cross-platform, but its process- and thread-based design makes it a heavyweight that struggles under high concurrency, whereas Nginx is a lightweight, event-driven server that handles very large numbers of concurrent connections and also serves as a reverse proxy, load balancer and IMAP/POP3/SMTP mail proxy.

That concludes "what is the role of Nginx". Thank you for reading. If you want to learn more about the industry, keep following this site; the editor will keep producing high-quality, practical articles for you!
