
Mastering the method and process of configuring an Nginx load balancer by example


The following walks through the method and process of configuring an Nginx load balancer by example, in the hope that it helps you in practical use. Load balancing involves many practical details but not much theory, and there is plenty of material about it online; here we answer the question from accumulated hands-on experience.

Load balancing is one of the things a high-traffic website has to do. Below is an introduction to configuring load balancing on an Nginx server, which I hope will help those who need it.

Load balancing

First of all, let's briefly understand what load balancing is. Literally, it means that N servers share the load evenly, so that no single server goes down from excessive load while the others sit idle. The premise of load balancing is therefore that there are multiple servers, that is, at least two.

Test environment

Since no real servers are available, this test points the domain name at the test machines via the hosts file and uses three CentOS systems installed in VMware.

Test domain name: a.com

A server IP: 192.168.5.149 (master)

B server IP: 192.168.5.27

C server IP: 192.168.5.126

Deployment idea

Server A acts as the main server. The domain name resolves directly to server A (192.168.5.149), which balances the load to server B (192.168.5.27) and server C (192.168.5.126).

Domain name resolution

Since this is not a real environment, a.com is just a made-up test domain, so its resolution can only be set in the hosts file.

Open: C:\Windows\System32\drivers\etc\hosts

Add at the end

192.168.5.149 a.com

Save and exit, then open a command prompt and ping the domain to check whether the resolution works.
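A minimal check from the Windows command prompt (exact output varies by system; the address shown in the replies is what matters):

    ping a.com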

If the hosts entry took effect, the replies come from 192.168.5.149, which shows that a.com now resolves to the master server.

A server nginx.conf settings

Open nginx.conf; the file is located in the conf directory of the nginx installation directory.

Add the following code to the http section

upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
}

server {
    listen 80;
    server_name a.com;
    location / {
        proxy_pass http://a.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
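By default nginx distributes requests across the upstream servers in round-robin fashion. If B and C have different capacity, a weight can be given to each server line; this is standard upstream syntax, and the values below are purely illustrative:

    upstream a.com {
        server 192.168.5.126:80 weight=2;   # receives roughly twice as many requests
        server 192.168.5.27:80 weight=1;
    }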

Save the file and restart (or reload) nginx.
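Assuming the nginx binary is on the PATH, a common way to check and apply the change is shown below (on systems with a service manager, the equivalent service or systemctl commands work too):

    nginx -t          # test the configuration for syntax errors
    nginx -s reload   # apply the new configuration without dropping connections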

B and C server nginx.conf settings

Open nginx.conf and add the following code to the http section:

server {
    listen 80;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}

Save the file and restart nginx.

Test

When accessing a.com, in order to tell which server handled the request, I put index.html files with different contents on server B and server C.
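One way to create distinguishable test pages, reusing the root path from the B/C configuration above (the page text is arbitrary):

    # on server B (192.168.5.27)
    echo "Served by B (192.168.5.27)" > /data0/htdocs/www/index.html

    # on server C (192.168.5.126)
    echo "Served by C (192.168.5.126)" > /data0/htdocs/www/index.html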

Open a browser and visit a.com, then refresh a few times; you will find that all requests are distributed by the master server (192.168.5.149) to server B (192.168.5.27) and server C (192.168.5.126), achieving the load-balancing effect.

Refreshing alternates between the page served by server B and the page served by server C.

What if one of the servers goes down?

When a server goes down, will access be affected?

Let's look at an example based on the setup above. Suppose server C (192.168.5.126) goes down (I simply shut down server C, since I can't simulate a real crash), and then visit a.com again.

Access results:

We find that even though server C (192.168.5.126) is down, website access is not affected. With load balancing, you therefore don't have to worry about one machine's downtime dragging the whole site down.
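Nginx takes a failed backend out of rotation on its own: its passive health checking stops sending requests to a server after repeated connection failures. The max_fails and fail_timeout parameters of the server directive tune this behavior; the values below are illustrative only:

    upstream a.com {
        # skip a server for 30 seconds after 3 failed attempts
        server 192.168.5.126:80 max_fails=3 fail_timeout=30s;
        server 192.168.5.27:80 max_fails=3 fail_timeout=30s;
    }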

What if b.com also wants to set up load balancing?

It's simple, just like the a.com setup, as follows.

Suppose b.com's primary server IP is 192.168.5.149, and the load is balanced to the 192.168.5.150 and 192.168.5.151 machines.

The domain name b.com is now resolved to the IP 192.168.5.149.

Add the following code to the nginx.conf of the primary server (192.168.5.149):

upstream b.com {
    server 192.168.5.150:80;
    server 192.168.5.151:80;
}

server {
    listen 80;
    server_name b.com;
    location / {
        proxy_pass http://b.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Save the file and restart nginx.

Set up nginx on the 192.168.5.150 and 192.168.5.151 machines, open nginx.conf, and add the following code to the http section:

server {
    listen 80;
    server_name b.com;
    index index.html;
    root /data0/htdocs/www;
}

Save the file and restart nginx.

After completing these steps, the load-balancing configuration for b.com is in place.

Can the primary server also provide services?

In the examples above, the master server only balances the load to other servers. Can the master server itself be added to the server list, so that it isn't used purely as a forwarder but also participates in serving requests?

As in the above case, there are three servers:

A server IP: 192.168.5.149 (master)

B server IP: 192.168.5.27

C server IP: 192.168.5.126

We resolve the domain name to server A, which then forwards requests to servers B and C, so server A only performs a forwarding function. Now let's have server A provide the site service as well.

Let's analyze first. If the primary server is added to upstream, two situations can occur:

1. The main server forwards the request to another server's IP, and that server handles it normally.

2. The master server forwards the request to its own IP, which lands back on the master server and gets distributed again. If it keeps being assigned to this machine, it causes an endless loop.

How do we solve this? Since port 80 is already used to listen for the load-balancing traffic, it can no longer be used on this server to handle a.com site requests; a new port is needed. So we add the following code to the main server's nginx.conf:

server {
    listen 8080;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}

Restart nginx, then enter a.com:8080 in the browser to see whether it can be accessed. The result: it can be accessed normally.
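If curl is installed, the same check can be run from the command line of any machine that can reach server A (a quick sketch; the response headers will vary):

    curl -I http://192.168.5.149:8080/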

Now that the site is reachable, we can add the master server to the upstream, but using the new port, as follows:

upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
    server 127.0.0.1:8080;
}

Here you can add either the master server's IP 192.168.5.149 or 127.0.0.1; both point back to this machine.

Restart Nginx, and then visit a.com to see if it will be assigned to the primary server.

The main server now also joins in serving requests normally.
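Putting the pieces together, the relevant parts of server A's http section now look roughly like this (a sketch that simply combines the snippets above; other settings in nginx.conf are omitted):

    # upstream pool: B, C, and the A server itself on port 8080
    upstream a.com {
        server 192.168.5.126:80;
        server 192.168.5.27:80;
        server 127.0.0.1:8080;
    }

    # port 80: load balancer that forwards requests to the pool
    server {
        listen 80;
        server_name a.com;
        location / {
            proxy_pass http://a.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    # port 8080: the A server's own copy of the site
    server {
        listen 8080;
        server_name a.com;
        index index.html;
        root /data0/htdocs/www;
    }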

Final notes

1. Load balancing is not unique to nginx; the famous Apache has it too, but its performance may not be as good as nginx's.

2. Multiple servers provide the service, but the domain name resolves only to the master server, and the real servers' IPs cannot be obtained by pinging the domain, which adds a degree of security.

3. The IPs in upstream do not have to be on the internal network; public IPs work as well. The classic setup, however, exposes only one LAN IP to the public network, resolves the domain name to that IP, and has the master server forward requests to the intranet servers' IPs.

4. If a server goes down, the site keeps running normally, because Nginx will not forward requests to an IP that is already down.
