
Nginx+docker+nfs deployment


I. Architecture

In the Keepalived + Nginx high-availability load-balancing architecture, keepalived provides the high-availability (HA) function by controlling the front-end VIP (virtual IP address). When a device fails, the hot-standby server automatically takes over the VIP almost instantly; in practice the switchover takes only about 2 seconds. A DNS service can then handle load balancing across the front-end VIPs.

Nginx controls load balancing for the backend web servers, forwarding each client request to a backend Real Server according to the configured algorithm and relaying the Real Server's response back to the client.

The NFS server performs real-time backup and provides the web content to the web servers.

II. Simple principle

Through keepalived, both the NGINX_MASTER and NGINX_BACKUP servers bind a virtual IP (VIP), 192.168.1.40, to the ens33 network card; at any moment the VIP is bound on whichever node is carrying the service. NGINX_BACKUP checks NGINX_MASTER at the heartbeat interval set by advert_int 1 in the /etc/keepalived/keepalived.conf file (one VRRP advertisement per second). If it cannot obtain NGINX_MASTER's normal state, NGINX_BACKUP instantly binds the VIP and takes over nginx_master's work. When NGINX_MASTER recovers, keepalived compares the priority parameters and rebinds the VIP 192.168.1.40 to NGINX_MASTER's ens33 NIC.

Advantages of using this scheme

1. The architecture is flexible: when the load increases, web servers can be added to it temporarily.

2. upstream provides load balancing, automatically probes the backend machines, and automatically kicks out machines that can no longer serve requests.

3. Rule-based distribution and redirection are more flexible than with LVS, and keepalived keeps the single nginx load balancer effective, avoiding a single point of failure.

4. With nginx doing the load balancing, no changes of any kind are needed on the backend machines.

5. nginx is deployed in a docker container, which not only saves a lot of time in development, testing, and deployment, but also lets the business recover quickly from the image in case of failure.

III. System environment

Two load-balancing machines are installed, named NGINX_MASTER and NGINX_BACKUP.

The backend web servers can be of any architecture that provides web services; they are named WEB_1 and WEB_2.

The backend database machine can be built in any way, as long as it provides database service.

Server        IP address      Installed software
NGINX_MASTER  192.168.1.10    nginx+keepalived
NGINX_BACKUP  192.168.1.20    nginx+keepalived
WEB_1         192.168.1.11    docker+nginx
WEB_2         192.168.1.13    docker+nginx
nfs_MASTER    192.168.1.30    nfs+rsync+inotify
nfs_BACKUP    192.168.1.10    nfs+rsync+inotify

Nginx deployment (both)

Install nginx

[root@nginx01 ~]# tar zxf nginx-1.14.0.tar.gz    // extract the nginx source package
[root@nginx01 ~]# cd nginx-1.14.0/
[root@nginx01 nginx-1.14.0]# yum -y install openssl-devel pcre-devel zlib-devel    // install nginx build dependencies
[root@nginx01 nginx-1.14.0]# ./configure --prefix=/usr/local/nginx1.14 --with-http_dav_module --with-http_stub_status_module --with-http_addition_module --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre --with-http_ssl_module --with-http_gzip_static_module --user=nginx --group=nginx && make && make install    // configure, compile, and install nginx
[root@nginx01 nginx-1.14.0]# useradd nginx -s /sbin/nologin -M    // create the user nginx runs as
[root@nginx01 nginx-1.14.0]# ln -s /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/    // link the nginx command into PATH
[root@nginx01 nginx-1.14.0]# nginx    // start nginx
[root@nginx01 nginx-1.14.0]# netstat -anpt | grep nginx    // check whether nginx is listening
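One quick way to confirm the build picked up the intended modules is nginx's own -V flag (standard nginx CLI, not shown in the original):

[root@nginx01 nginx-1.14.0]# nginx -V    // prints the version and the configure arguments the binary was built with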

Deploy nginx

[root@nginx01 ~]# cd /usr/local/nginx1.14/conf/
[root@nginx01 conf]# vim nginx.conf

Add the following inside the http module (the :90 port matches the docker port mapping used below):

upstream backend {
    server 192.168.1.11:90 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.1.13:90 weight=1 max_fails=2 fail_timeout=10s;
}
location / {
    # root   html;
    # index  index.html index.htm;
    proxy_pass http://backend;    # add this line
}
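After saving the change, the configuration can be checked and reloaded in place (standard nginx command-line flags, not shown in the original):

[root@nginx01 conf]# nginx -t    // test the edited configuration for syntax errors
[root@nginx01 conf]# nginx -s reload    // reload nginx without dropping connections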

High availability environment

Install keepalived

[root@nginx02 nginx-1.14.0]# yum -y install keepalived

Configure keepalived

Modify the keepalived configuration file /etc/keepalived/keepalived.conf on both the primary and standby nginx servers.

Master nginx

Modify the /etc/keepalived/keepalived.conf file on the master nginx:

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.40
    }
}

Standby nginx

Modify the /etc/keepalived/keepalived.conf file on the standby nginx.

Note when configuring the standby nginx: state must be changed to BACKUP, priority must be lower than the master's, and virtual_router_id must have the same value as on the master.

! Configuration File for keepalived
global_defs {
    router_id TWO
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.40
    }
}

Test (after the docker deployment below is complete)

Start keepalived on both the active and standby nginx servers:

[root@nginx01 conf]# systemctl start keepalived
[root@nginx01 conf]# curl 192.168.1.40
wsd666    // the content of the test page, served through the VIP
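Which node currently holds the VIP can be verified with ip addr (a standard iproute2 command; the original article illustrated this with screenshots in the test section):

[root@nginx01 conf]# ip addr show ens33 | grep 192.168.1.40    // the VIP line appears only on the node that owns it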

NFS deployment (both)

NFS operation

[root@localhost ~]# yum -y install nfs-utils    // install the nfs service
[root@nfs ~]# mkdir /database    // create the shared directory
[root@nfs02 ~]# chmod 777 /database/    // open up the directory permissions
[root@nfs ~]# vim /etc/exports    // configure the export as follows:
/database *(rw,sync,no_root_squash)

Start and enable the services

[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# systemctl enable rpcbind
[root@nfs ~]# systemctl start nfs-server
[root@nfs ~]# systemctl enable nfs-server

Test NFS from docker01 and docker02
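A minimal check from the docker hosts that the export is visible (showmount ships with nfs-utils; the server IP is from the table above):

[root@docker01 ~]# showmount -e 192.168.1.30    // should list /database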

[root@nfs01 ~]# vim /etc/rsyncd.conf    // create the rsync configuration file
uid = nobody
gid = nobody
use chroot = yes
address = 192.168.1.30
port 873
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
hosts allow = 192.168.1.0/24
[wwwroot]
path = /database
read only = no
dont compress = *.gz *.bz2 *.zip
[root@nfs01 ~]# mkdir /database    // create the shared directory if it does not exist yet
[root@nfs01 ~]# rsync --daemon    // start rsync
[root@nfs01 ~]# netstat -anpt | grep rsync    // check the listening port

If you need to restart the rsync service:

[root@localhost ~]# kill $(cat /var/run/rsyncd.pid)    // stop the service
[root@localhost ~]# rsync --daemon    // start the service
[root@localhost ~]# kill -9 $(cat /var/run/rsyncd.pid)

Alternatively, use the "netstat -anpt | grep rsync" command to find the process ID and then kill that process ID directly.

To stop the rsync service with the first method, you must also delete the file that holds the rsync process ID:

[root@localhost ~]# rm -rf /var/run/rsyncd.pid

Using the rsync backup tool

Once the rsync synchronization source server is configured, the client can use the rsync tool to perform remote synchronization.

Common options when synchronizing with an rsync host:

-r: recursive mode, includes all files in the directory and its subdirectories
-l: copy symbolic link files as symbolic links
-p: preserve file permission marks
-t: preserve file timestamps
-g: preserve file group ownership (superuser only)
-o: preserve file ownership (superuser only)
-D: preserve device files and other special files
-a: archive mode, recursive and attribute-preserving, equivalent to -rlptgoD
-v: display detailed (verbose) information about the synchronization process
-z: compress files during transfer (compress)
-H: preserve hard-linked files
-A: preserve ACL attribute information
--delete: delete files that exist at the destination but not at the source
--checksum: decide whether to skip files based on checksums

Rsync is a fast incremental backup tool that supports:

(1) Local replication

(2) Synchronization over SSH

(3) Synchronization with an rsync host.

Manually synchronize with an rsync host:

[root@localhost ~]# rsync -avz 192.168.1.1::wwwroot /root
or
[root@localhost ~]# rsync -avz rsync://192.168.1.1/wwwroot /root
[root@nfs01 database]# vim index.html    // create a test page with the content below
xgp666

Configure inotify+rsync real-time synchronization (both)

(1) Software installation

rpm -q rsync    // check whether rsync is installed
yum install rsync -y    // if it is not installed, install it with yum

Install the inotify package

[root@nfs02 ~]# tar zxf inotify-tools-3.14.tar.gz
[root@nfs02 ~]# cd inotify-tools-3.14/
[root@nfs02 inotify-tools-3.14]# ./configure && make && make install
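If inotifywait later fails because it cannot find its shared library, refreshing the linker cache usually fixes it, since make install places the library under /usr/local/lib by default (a common follow-up step with this tool, not shown in the original):

[root@nfs02 inotify-tools-3.14]# echo "/usr/local/lib" > /etc/ld.so.conf.d/inotify.conf
[root@nfs02 inotify-tools-3.14]# ldconfig    // rebuild the shared-library cache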

(2) Adjust the inotify kernel parameters

[root@nfs02 ~]# vim /etc/sysctl.conf
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 1048576
[root@nfs02 ~]# sysctl -p    // apply the settings

(3) Write the trigger synchronization script

#!/bin/bash
# Watch /database/ for changes and push them to the peer's wwwroot rsync module.
A="inotifywait -mrq -e modify,move,create,delete /database/"
B="rsync -avz /database/ 192.168.1.40::wwwroot"
$A | while read DIRECTORY EVENT FILE
do
    # Sync only while the local rsync daemon is running.
    if [ $(pgrep rsync | wc -l) -gt 0 ]; then
        $B
    fi
done

Note that the directories being synchronized on the two servers both need maximally open permissions; otherwise the sync can fail because of the directory's own permissions.

[root@nfs01 inotify-tools-3.14]# chmod +x /opt/ino.sh

Set the script to start automatically at boot

[root@nfs01 database]# vim /etc/rc.d/rc.local
/opt/ino.sh &
/usr/bin/rsync --daemon

Source server side test

After the script is executed, the current terminal becomes a real-time monitoring session, so open a new terminal for further operations. Perform file operations in the shared module directory on the source server, then check the backup server: the files are synchronized in real time.

Docker deployment (both)

[root@docker01 ~]# docker pull nginx
[root@docker01 ~]# mkdir -p /www    // create the mount directory

After NFS is set up, mount the shared directory on the docker hosts:

[root@docker01 ~]# mount -t nfs 192.168.1.30:/database /www
[root@docker01 ~]# docker run -itd --name nginx -p 90:80 -v /www/index.html:/usr/share/nginx/html/index.html nginx:latest
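A quick sanity check on each docker host that the container is up and serving the NFS-mounted page (standard docker and curl usage; port 90 per the run command above):

[root@docker01 ~]# docker ps | grep nginx    // the container should show as Up
[root@docker01 ~]# curl -s localhost:90    // should return the shared index.html content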

Test

1. With nginx working normally on both NGINX_MASTER and NGINX_BACKUP

On NGINX_MASTER:

On NGINX_BACKUP:

The master server's ens33 network card is bound to the VIP while the backup's is not, and the website can be accessed normally through a browser.

2. Close the nginx container of NGINX_MASTER

When the nginx container stops, it is started again immediately, so the nginx startup script works as intended.
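That behavior implies a restart policy or a supervising script; if the container was not created with one, a restart policy can be added afterwards (standard docker flag; the container name is from the run command above):

[root@docker01 ~]# docker update --restart=always nginx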

3. Disable the keepalived service of NGINX_MASTER

On NGINX_MASTER:

On NGINX_BACKUP:

NGINX_BACKUP's ens33 network card is instantly bound to the VIP, and the website is still accessible normally through a browser.

4. Start the keepalived service of NGINX_MASTER

On NGINX_MASTER:

On NGINX_BACKUP:

NGINX_MASTER's ens33 network card is re-bound to the VIP, and the website is accessible normally through a browser.

5. Shut down the WEB_1 server; the website can still be accessed normally through the browser.

Troubleshooting

First check whether there is a problem with the nginx configuration file.

Check whether the parameters of the two keepalived instances are correct.

Check whether nginx inside docker has its port mapped and the NFS shared directory mounted.

Check whether the NFS directory permissions are set, and whether rsync+inotify is configured with a shell script doing the real-time backup.

Summary:

First come the images: pull the nginx image, then rebuild it into what we need, mainly by changing the configuration file, and push all the images to harbor.

Set up nginx and configure the reverse proxy.

Set up docker and run the nginx image to serve the test page, which is shared from NFS.

Build NFS to achieve data sharing, including the database, for persistence, and achieve real-time backup through rsync+inotify.
