
What are the application scenarios of Nginx?


This article explains the main application scenarios of Nginx. The content is kept simple and clear, so it is easy to follow and learn.

The origin of Nginx

Nginx, like Apache, is a web server. Following the REST architectural style, it uses Uniform Resource Identifiers (URIs) or Uniform Resource Locators (URLs) as the basis of communication and provides a variety of network services over the HTTP protocol.

However, each of these servers was shaped by the environment in which it was designed, such as user scale, network bandwidth, and product characteristics, so their positioning and evolution differ. This gives each web server its own distinct characteristics.

Apache has a long history and is one of the most widely deployed web servers in the world. It has many advantages: stability, open source, cross-platform support, and so on.

It has been around for a long time and rose to prominence in an era when the Internet was far smaller than it is today, so it was designed as a heavyweight server.

It does not handle high concurrency well: serving tens of thousands of concurrent connections on Apache consumes a large amount of memory.

The operating system's switching between processes or threads also consumes a lot of CPU, which drives up the average response time of HTTP requests.

All of this keeps Apache from being a high-performance web server under heavy load, and Nginx, a lightweight, high-concurrency server, came into being.

Russian engineer Igor Sysoev developed Nginx in C while working for Rambler Media.

As a web server, Nginx provided excellent, stable service for Rambler Media. Igor Sysoev later open-sourced the Nginx code under a free software license.

Nginx took off for the following reasons:

Nginx uses an event-driven architecture that allows it to support millions of TCP connections.

Its high degree of modularity and its free software license have led to a proliferation of third-party modules (this is the era of open source).

Nginx is a cross-platform server that can run on Linux, Windows, FreeBSD, Solaris, AIX, Mac OS and other operating systems.

These excellent designs bring great stability.

Where Nginx works

Nginx is a free, open source, high-performance HTTP server and reverse proxy server; it is also an IMAP, POP3, SMTP proxy server.

Nginx can be used as an HTTP server to publish a website, and it can be used as a reverse proxy to implement load balancing.
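
For example, a minimal server block for publishing a static website might look like the hedged sketch below (the domain name and document root are placeholders, not values from this article):

    # Minimal sketch: nginx publishing a static website (placeholder domain and paths).
    server {
        listen       80;
        server_name  example.com;

        location / {
            root   /var/www/html;    # directory holding the site's files
            index  index.html;       # default page to serve
        }
    }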

About proxies

When it comes to proxies, we first need to clarify a concept: a proxy is a representative, an intermediary channel. Two roles are involved: the proxy role and the target role.

The process in which a client works through this intermediary to reach the target and get something done is called proxying. It is just like a shop in everyday life: a customer buys a pair of shoes at an adidas store; the store is the proxy, the adidas manufacturer is the target role, and the customer is the user.

forward proxy

Before discussing reverse proxies, let's first look at forward proxies, the proxy mode we encounter most often. We will explain forward proxies from two angles: software and everyday life.

In today's network environment, if we need to visit certain foreign websites for technical reasons, we may find that a site hosted abroad cannot be reached directly through a browser.

In that case we typically go through a proxy: we find a proxy server that can reach the foreign site, send our requests to that proxy server, the proxy server accesses the foreign site on our behalf, and the retrieved data is passed back to us.

This proxy mode is called a forward proxy. Its defining characteristic is that the client knows exactly which server it wants to reach; the server only knows which proxy server the request came from, not which specific client is behind it. A forward proxy therefore shields, or hides, the real client's information.

Let's look at a schematic diagram (the client and the forward proxy are drawn in the same box because they belong to the same environment; more on this later):

The client must be configured to use the forward proxy, which of course requires knowing the forward proxy server's IP address and the port of the proxy program.

To summarize: a forward proxy ("it proxies the client") is a server that sits between the client and the origin server. To retrieve content from the origin server, the client sends a request to the proxy that names the destination (the origin server).

The proxy then forwards the request to the origin server and returns the retrieved content to the client. The client must apply some special configuration to use a forward proxy (a configuration sketch follows the list of uses below).

Uses of a forward proxy:

Access resources that are otherwise unreachable, such as Google.

Cache content to speed up access to resources.

Authorize client access to the Internet and perform authentication.

Record user activity (Internet behavior management) and hide client information from the outside world.
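
Putting the pieces together, here is a minimal, hedged sketch of nginx acting as a plain-HTTP forward proxy; the listen port and DNS resolver below are placeholder choices, not values from this article, and HTTPS (CONNECT) traffic would need a different setup:

    # Minimal sketch: nginx as a plain-HTTP forward proxy (placeholder port and resolver).
    # Clients point their HTTP proxy setting at this host on port 8888.
    # Note: this does not handle HTTPS (CONNECT) tunneling.
    server {
        listen 8888;
        resolver 8.8.8.8;                         # DNS resolver used to look up $host

        location / {
            proxy_pass http://$host$request_uri;  # forward to whatever host the client requested
        }
    }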

reverse proxy

Having understood what a forward proxy is, let's look at how a reverse proxy is handled. Take a large shopping site such as Taobao: the number of visitors connecting to the site every day has exploded, and a single server comes nowhere near meeting the demand.

A familiar term comes into play: distributed deployment, that is, deploying multiple servers to overcome the capacity limits of a single machine.

Much of such a site's functionality is implemented with Nginx acting as a reverse proxy, and after wrapping Nginx together with other components, the result was given a grand name: Tengine.

Interested readers can visit Tengine's official website for details:

http://tengine.taobao.org/

So how does a reverse proxy support a distributed cluster? Let's first look at a schematic diagram (the server and the reverse proxy are drawn in the same box because they belong to the same environment; more on this later):

From the diagram above you can see that after the requests sent by multiple clients are received by the Nginx server, they are distributed to the backend business servers for processing according to certain rules.

Here the source of each request is known, but it is not clear which server will ultimately handle it; Nginx plays the role of a reverse proxy.

The client is unaware of the proxy: a reverse proxy is transparent to the outside world, and visitors do not know they are talking to a proxy, because clients need no special configuration to access it.

A reverse proxy ("it proxies the server side") is mainly used when a server cluster is deployed in a distributed fashion; the reverse proxy hides the servers' information (a minimal configuration sketch appears after the list below).

The role of a reverse proxy:

It protects the intranet: the reverse proxy usually serves as the public-facing address, while the web servers stay on the internal network.

It enables load balancing: the reverse proxy server spreads the website's load across the backend servers.
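
As a rough, hedged sketch (the upstream name and backend addresses are placeholders, not taken from this article), a reverse proxy in nginx typically looks like this:

    # Minimal sketch: nginx reverse-proxying to two backend servers (placeholder addresses).
    upstream backend_pool {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen      80;
        server_name www.example.com;

        location / {
            proxy_pass http://backend_pool;             # hand the request to the pool
            proxy_set_header Host $host;                # preserve the original Host header
            proxy_set_header X-Real-IP $remote_addr;    # pass the real client IP to the backend
        }
    }

Clients simply visit www.example.com; they never learn which backend actually served them.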

Project scenario

Usually, in a real project, the forward proxy and the reverse proxy are likely to coexist in the same scenario: the forward proxy forwards client requests to the target server, the target server is itself a reverse proxy server, and behind the reverse proxy sit multiple real business servers.

The detailed topology diagram is as follows:

A diagram illustrating the difference between forward proxy and reverse proxy is shown below:

Illustration:

In a forward proxy, the Proxy and Client belong to the same LAN (box in the figure), hiding the client information.

In a reverse proxy, the Proxy and Server belong to the same LAN (box in the figure), hiding server-side information.

In fact, in both cases the Proxy does the same thing: it sends and receives requests and responses on behalf of another party. Structurally the two are simply mirror images of each other, which is why the arrangement that appeared later is called a reverse proxy.

Load balancing

We have now clarified the concept of a proxy server. The next question is: when Nginx plays the role of a reverse proxy server, by what rules does it distribute requests? And can those rules be adjusted for different project scenarios?

The volume of requests sent by clients and received by the Nginx reverse proxy server is what we call the load.

The rule by which these requests are distributed to different servers for processing is a balancing rule.

So the process of distributing the requests a server receives according to such rules is called load balancing.

In real projects, load balancing comes in two forms: hardware load balancing and software load balancing. Hardware load balancing, also called hard load, uses devices such as the F5 load balancer and is relatively expensive.

However, it offers very strong guarantees of stability and data security, so companies such as China Mobile and China Unicom choose hard load for their operations.

For cost reasons, more companies choose software load balancing, a request-distribution mechanism implemented with existing software technology on ordinary host hardware.

The load-balancing scheduling algorithms supported by Nginx are as follows (a configuration sketch follows the list):

① Weighted round robin (default): incoming requests are distributed to the backend servers one by one in order. If a backend server goes down, Nginx automatically removes it from the rotation, and request handling is not affected.

On top of this, you can assign a weight to each backend server to adjust how requests are distributed among them.

The larger the weight, the larger the share of requests a server receives; in practice the weight is tuned to match each backend server's hardware configuration.

② ip_hash: each request is assigned according to a hash of the originating client's IP address. With this algorithm, a client with a fixed IP always reaches the same backend server, which to some extent also solves the session-sharing problem in a clustered deployment.

③ fair: an adaptive scheduling algorithm that allocates requests dynamically according to each backend server's time from request to response.

Servers with short response times and high processing efficiency receive a larger share of requests, while servers with long response times and low efficiency receive fewer. It combines the advantages of the previous two algorithms.

Note, however, that Nginx does not support the fair algorithm out of the box; to use it, install the upstream_fair module.

④ url_hash: requests are assigned according to a hash of the requested URL, so each URL always goes to the same backend server. This can improve cache efficiency when Nginx is used as a static server.

Likewise, Nginx does not support this algorithm by default; to use it, you need to install the Nginx hash module.
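
As a hedged sketch (the upstream names, addresses, and weights below are placeholders, not values from this article), the weighted round-robin and ip_hash policies map to upstream blocks like the following; url_hash-style behavior is shown with the hash directive, while the fair policy would require the third-party module mentioned above:

    # Weighted round robin (the default policy, here with explicit weights).
    upstream app_weighted {
        server 10.0.0.11:8080 weight=3;   # receives roughly three times the traffic
        server 10.0.0.12:8080 weight=1;
    }

    # ip_hash: requests from the same client IP stick to the same backend.
    upstream app_iphash {
        ip_hash;
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    # url_hash-style distribution keyed on the request URI
    # (recent nginx releases provide the hash directive; older setups used a separate module).
    upstream app_urlhash {
        hash $request_uri consistent;
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

A location would then reference one of these pools with proxy_pass http://app_weighted; just as in the reverse proxy sketch earlier.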

Thank you for reading. That concludes this overview of the application scenarios of Nginx; how it is used in your own environment still needs to be verified in practice.
