Example Analysis of Installing an nginx Server and Configuring Load Balancing on a Linux System


This article walks through an example of installing an nginx server and configuring load balancing on a Linux system. It is quite detailed and has reference value; interested readers should read it to the end!

Nginx (engine x) is a high-performance HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server. It is known for being lightweight (low system resource usage), stable, extensible (modular architecture), strong under concurrency, and simple to configure.

This article mainly demonstrates basic load balancing with nginx in a test environment.

Nginx can provide HTTP services, including serving static files, SSL and TLS SNI support, GZIP page compression, virtual hosts, and URL rewriting, and it can be combined with FastCGI, uwsgi, and similar programs to handle dynamic requests.

In addition, nginx can act as a proxy, reverse proxy, load balancer, and cache server, spreading load and improving availability in a cluster environment.

I. Setting up the test environment

The test environment is two Lubuntu 19.04 virtual machines installed in VirtualBox; installing the Linux systems themselves is not covered here.

To allow the two Linux virtual machines to reach each other, each VM's network configuration uses the Internal Network mode provided by VirtualBox in addition to the default NAT adapter.

In addition, the NICs attached to the internal network in the two virtual machines must be bound to static IP addresses in the same subnet; the two hosts then form a local area network and can reach each other directly.

Network configuration

Open VirtualBox, enter the settings of each of the two virtual machines, and add a network adapter whose mode is Internal Network. The screenshot is as follows (both virtual machines are configured the same way):

[Figure: internal network adapter settings]
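If you prefer the command line, the same adapter can be attached with VBoxManage. This is a sketch that assumes the VMs are named server1 and server2 and are powered off; the internal network name intnet is an arbitrary placeholder:

# Attach NIC 2 of each VM to the same VirtualBox internal network
$ VBoxManage modifyvm "server1" --nic2 intnet --intnet2 "intnet"
$ VBoxManage modifyvm "server2" --nic2 intnet --intnet2 "intnet"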

Log in to the virtual machine system and use the ip addr command to view the current network connection information:

$ ip addr
...
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:38:65:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp0s3
       valid_lft 86390sec preferred_lft 86390sec
    inet6 fe80::9a49:54d3:2ea6:1b50/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:0d:0b:de brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2329:85bd:937e:c484/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

As you can see, the enp0s8 NIC is not yet bound to an IPv4 address, so a static IP needs to be assigned to it manually.

Note that since Ubuntu 17.10, a new tool called netplan has been introduced, and the original network configuration file /etc/network/interfaces no longer takes effect.

Therefore, to set a static IP for the NIC you need to modify the /etc/netplan/01-network-manager-all.yaml configuration file, as in the following example:

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp0s8:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.1.101/24]
      # gateway4: 192.168.1.101
      # nameservers:
      #   addresses: [192.168.1.101, 8.8.8.8]

Because the two hosts are on the same subnet, they can still reach each other even without a gateway or DNS server configured, so those items are commented out for now (you can try building your own DNS server later).

After editing, run the sudo netplan apply command and the static IP configured above takes effect:

$ ip addr
...
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:0d:0b:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.101/24 brd 192.168.1.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe0d:bde/64 scope link
       valid_lft forever preferred_lft forever
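As an aside, if you are applying network changes over a remote session and worry about locking yourself out, netplan also has a try subcommand that applies the configuration and rolls it back automatically unless you confirm it within a timeout (an optional safeguard, not part of the original steps):

$ sudo netplan try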

Log in to the other virtual machine and perform the same operation (note that the addresses entry in its configuration file changes to [192.168.1.102/24]). The network configuration of the two virtual machines is then complete.

At this point there are two Linux virtual machines: server1 with IP address 192.168.1.101 and server2 with IP address 192.168.1.102, and the two hosts can reach each other. The test is as follows:

starky@server1:~$ ping 192.168.1.102 -c 2
PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
64 bytes from 192.168.1.102: icmp_seq=1 ttl=64 time=0.951 ms
64 bytes from 192.168.1.102: icmp_seq=2 ttl=64 time=0.330 ms

--- 192.168.1.102 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 0.330/0.640/0.951/0.310 ms

skitar@server2:~$ ping 192.168.1.101 -c 2
PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data.
64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=0.223 ms
64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=0.249 ms

--- 192.168.1.101 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 29ms
rtt min/avg/max/mdev = 0.223/0.236/0.249/0.013 ms

II. Installing the nginx server

There are two main ways to install nginx:

Install a precompiled binary package. This is the easiest and fastest way: all major operating systems can install nginx through a package manager (such as Ubuntu's apt-get). This method installs almost all of the official modules and plug-ins.

Compile and install from source. This approach is more flexible than the former: you can choose which modules or third-party plug-ins to include.

This example has no special requirements, so the first installation method is used. The commands are as follows:

$ sudo apt-get update
$ sudo apt-get install nginx

After the installation is successful, check the running status of the nginx service through the systemctl status nginx command:

$ systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: en
   Active: active (running) since Tue 2019-07-02 01:22:07 CST; 26s ago
     Docs: man:nginx(8)
 Main PID: 3748 (nginx)
    Tasks: 2 (limit: 1092)
   Memory: 4.9M
   CGroup: /system.slice/nginx.service
           ├─3748 nginx: master process /usr/sbin/nginx -g daemon on; master_pro
           └─3749 nginx: worker process

Verify that the web server can be accessed with the curl -I 127.0.0.1 command:

$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.15.9 (Ubuntu)
...

III. Load balancing configuration

Load balancing (load-balancing) means distributing load across multiple operating units according to certain rules, so as to improve the availability and response speed of a service.

A simple example diagram is as follows:

[Figure: load balancing diagram]

For example, suppose a website application is deployed on a cluster of several hosts. The load balancing server sits between the end users and the cluster: it receives all incoming user traffic and distributes requests to the back-end hosts according to certain rules, improving response speed under high concurrency.

Load balancing server

Nginx configures load balancing with the upstream directive. Here the virtual machine server1 is used as the load balancing server.

Modify the configuration file of the default site on server1 (sudo vim /etc/nginx/sites-available/default) to read as follows:

upstream backend {
    server 192.168.1.102;
    server 192.168.1.102:8000;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
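After editing, the configuration can be syntax-checked before reloading. This is standard nginx tooling shown as an optional extra step, not something the original article calls for; the expected output is included for reference:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
$ sudo systemctl reload nginx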

For testing purposes, there are currently only two virtual machines. Server1 (192.168.1.101) already serves as the load balancing server, so server2 (192.168.1.102) is used as the application server.

Here, with the help of nginx's virtual host feature, 192.168.1.102:80 and 192.168.1.102:8000 are simulated as two different application servers.

Application server

Modify the configuration file of the default site on server2 (sudo vim /etc/nginx/sites-available/default) to read as follows:

server {
    listen 80;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    server_name 192.168.1.102;

    location / {
        try_files $uri $uri/ =404;
    }
}

Create an index.html file in the /var/www/html directory as the index page of the default site, with the following content:

<html>
    <head>
        <title>Index Page From Server1</title>
    </head>
    <body>
        This is Server1, Address 192.168.1.102.
    </body>
</html>

Run the sudo systemctl restart nginx command to restart the nginx service, then visit http://192.168.1.102 to get the index.html page just created:

$ curl 192.168.1.102
<html>
    <head>
        <title>Index Page From Server1</title>
    </head>
    <body>
        This is Server1, Address 192.168.1.102.
    </body>
</html>

Next, configure the site for the simulated "other host": create an /etc/nginx/sites-available/server2 configuration file on server2 with the following content:

server {
    listen 8000;

    root /var/www/html;
    index index2.html index.htm index.nginx-debian.html;

    server_name 192.168.1.102;

    location / {
        try_files $uri $uri/ =404;
    }
}

Note the changes to the listening port and the index page. Create an index2.html file in the /var/www/html directory as the index page of the server2 site, with the following content:

<html>
    <head>
        <title>Index Page From Server2</title>
    </head>
    <body>
        This is Server2, Address 192.168.1.102:8000.
    </body>
</html>

PS: for testing purposes, the default site and the server2 site are configured on the same host, server2, with slightly different pages. In a real environment, the two sites would usually be configured on different hosts with identical content.

Run the sudo ln -s /etc/nginx/sites-available/server2 /etc/nginx/sites-enabled/ command to enable the server2 site just created.
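You can verify that the symlink is in place before restarting; this check is an optional addition, not part of the original steps (the non-link fields of the listing are elided here):

$ ls -l /etc/nginx/sites-enabled/
... default -> /etc/nginx/sites-available/default
... server2 -> /etc/nginx/sites-available/server2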

Restart the nginx service, and visit http://192.168.1.102:8000 to get the index2.html page you just created:

$ curl 192.168.1.102:8000
<html>
    <head>
        <title>Index Page From Server2</title>
    </head>
    <body>
        This is Server2, Address 192.168.1.102:8000.
    </body>
</html>

Load balancing test

Go back to the load balancing server, virtual machine server1, whose configuration file sets the reverse proxy URL to http://backend.

The URL http://backend cannot be resolved to the right location yet, because no domain name resolution has been configured for it.

You can modify the /etc/hosts file on server1 and add the following record:

127.0.0.1 backend

The domain name then resolves to the local IP, so requests to http://backend reach the local load balancing server.
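To confirm that the mapping took effect, the record can be queried with getent; this check is an optional extra, not part of the original article:

$ getent hosts backend
127.0.0.1       backend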

Restart the nginx service and access http://backend on server1. The results are as follows:

$ curl http://backend
<html>
    <head>
        <title>Index Page From Server1</title>
    </head>
    <body>
        This is Server1, Address 192.168.1.102.
    </body>
</html>

$ curl http://backend
<html>
    <head>
        <title>Index Page From Server2</title>
    </head>
    <body>
        This is Server2, Address 192.168.1.102:8000.
    </body>
</html>

$ curl http://backend
<html>
    <head>
        <title>Index Page From Server1</title>
    </head>
    <body>
        This is Server1, Address 192.168.1.102.
    </body>
</html>

$ curl http://backend
<html>
    <head>
        <title>Index Page From Server2</title>
    </head>
    <body>
        This is Server2, Address 192.168.1.102:8000.
    </body>
</html>

As the output shows, requests from server1 to the load balancing server http://backend are polled in turn between the two web sites on the application server server2, achieving load balancing.

IV. Load balancing methods

The open source version of nginx provides four ways to implement load balancing, which are briefly introduced below.

1. Round Robin

User requests are distributed evenly across the back-end server cluster (a polling weight can be set with the weight parameter). This is the default load balancing method used by nginx:

upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com;
}

2. Least Connections

User requests are forwarded preferentially to the server with the fewest active connections in the cluster. The weight parameter is also supported:

upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}

3. IP Hash

User requests are forwarded based on the client IP address; this approach ensures that a particular client always ends up on the same server host:

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
}

4. Generic Hash

The forwarding destination is determined by a user-defined key, which can be a text string, a variable, or a combination of the two (such as source IP address and port). In the example below, the optional consistent parameter enables ketama consistent hashing, so adding or removing a server remaps only a small share of keys:

upstream backend {
    hash $request_uri consistent;
    server backend1.example.com;
    server backend2.example.com;
}

Weight

Refer to the following example configuration:

upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com;
    server 192.0.0.1 backup;
}

The default weight is 1. The backup server receives requests only when all of the other servers are down.

In the example above, 5 out of every 6 requests are forwarded to backend1.example.com and 1 to backend2.example.com; 192.0.0.1 receives and processes requests only when both backend1 and backend2 are down.

The above is the whole of "Example Analysis of Installing an nginx Server and Configuring Load Balancing on a Linux System". Thank you for reading! I hope the content shared here is helpful to you.
