
How to build highly available RabbitMQ Cluster and HAProxy soft load


In this article the editor shares how to build a highly available RabbitMQ cluster with HAProxy soft load balancing. Most readers may not be familiar with the topic, so it is shared here for reference; hopefully you will learn a lot from it.

RabbitMQ High availability Cluster Architecture

Two RabbitMQ disk nodes and one RabbitMQ RAM node form the built-in cluster. Two disk nodes are used so that queues and exchanges do not have to be rebuilt if the only disk node goes down. HAProxy acts as the load balancer for the RabbitMQ cluster. To avoid a single point of failure in HAProxy, two HAProxy nodes are set up as master and backup with Keepalived. Applications access the HAProxy service through a VIP (virtual IP), which by default points to the HAProxy on the master node. When the HAProxy on the master fails, the VIP drifts to the backup node and connections go to the HAProxy service there.

Preparatory work

Install docker and docker-compose on each server, and prepare the offline image tarballs rabbitmq.tar and haproxy.tar.

Make sure the server nodes can ping each other.
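If the offline image tarballs have to be prepared first on a machine with internet access, a minimal sketch looks like this (the image tags are the ones used in the compose files below):

# on a machine with internet access
$ docker pull rabbitmq:3-management
$ docker pull haproxy:2.1
$ docker save -o rabbitmq.tar rabbitmq:3-management
$ docker save -o haproxy.tar haproxy:2.1
# on each cluster node, verify connectivity, for example
$ ping -c 3 192.168.1.203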

RabbitMQ cluster

With RabbitMQ's built-in clustering, a durable (persistent) queue cannot automatically be recreated on another node when the node hosting it crashes, whereas a non-durable queue can automatically be recreated on an available node. Our project uses non-durable queues.

Keep at least two disk nodes; otherwise, if the only disk node crashes, metadata such as queues and exchanges can no longer be created in the cluster.
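For reference, a non-durable queue can be declared from inside a node container with rabbitmqadmin (a hedged sketch: it assumes the rabbitmqadmin tool from the management plugin is available on the PATH, and the queue name test_queue is purely illustrative; the credentials and vhost are the ones defined in the compose files below):

$ rabbitmqadmin -u mcst_admin -p mcst_admin_123 -V mcst_vhost declare queue name=test_queue durable=false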

Service distribution

The 192.168.1.213 server deploys RabbitMQ Disc Node1. The 192.168.1.203 server deploys RabbitMQ Disc Node2. The 192.168.1.212 server deploys RabbitMQ RAM Node3.

Create the first RabbitMQ node

Log in to the server and create the directory /app/mcst/rabbitmq.

Upload the image tar package rabbitmq.tar and the service orchestration file mcst-rabbitmq-node1.yaml to the directory you just created via sftp.

Import the image

$ docker load -i rabbitmq.tar
$ docker images    # check whether the import succeeded

View the service orchestration file mcst-rabbitmq-node1.yaml

version: '3'
services:
  rabbitmq:
    container_name: mcst-rabbitmq
    image: rabbitmq:3-management
    restart: always
    ports:
      - 4369:4369
      - 5671:5671
      - 5672:5672
      - 15672:15672
      - 25672:25672
    environment:
      - TZ=Asia/Shanghai
      - RABBITMQ_ERLANG_COOKIE=iweru238roseire
      - RABBITMQ_DEFAULT_USER=mcst_admin
      - RABBITMQ_DEFAULT_PASS=mcst_admin_123
      - RABBITMQ_DEFAULT_VHOST=mcst_vhost
    hostname: rabbitmq1
    extra_hosts:
      - rabbitmq1:192.168.1.213
      - rabbitmq2:192.168.1.203
      - rabbitmq3:192.168.1.212
    volumes:
      - ./data:/var/lib/rabbitmq

Deployment command

$ docker-compose -f mcst-rabbitmq-node1.yaml up -d

Note: the RABBITMQ_ERLANG_COOKIE must be the same on all three nodes. The extra_hosts configuration is required; without it, the nodes cannot reach each other by hostname while building the cluster. This node acts as the root node of the cluster.
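An optional sanity check after the container starts (a sketch assuming the container name above; the cookie path is the image default, and getent is used to confirm that the extra_hosts entries resolve):

$ docker exec mcst-rabbitmq cat /var/lib/rabbitmq/.erlang.cookie
$ docker exec mcst-rabbitmq getent hosts rabbitmq2 rabbitmq3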

Deploy the second RabbitMQ node

The method is the same as above; also upload the rabbitmq.sh script to the ./rabbitmq.sh path configured under volumes. View mcst-rabbitmq-node2.yaml:

version: '3'
services:
  rabbitmq:
    container_name: mcst-rabbitmq
    image: rabbitmq:3-management
    restart: always
    ports:
      - 4369:4369
      - 5671:5671
      - 5672:5672
      - 15672:15672
      - 25672:25672
    environment:
      - TZ=Asia/Shanghai
      - RABBITMQ_ERLANG_COOKIE=iweru238roseire
      - RABBITMQ_DEFAULT_USER=mcst_admin
      - RABBITMQ_DEFAULT_PASS=mcst_admin_123
      - RABBITMQ_DEFAULT_VHOST=mcst_vhost
    hostname: rabbitmq2
    extra_hosts:
      - rabbitmq1:192.168.1.213
      - rabbitmq2:192.168.1.203
      - rabbitmq3:192.168.1.212
    volumes:
      - ./rabbitmq.sh:/home/rabbitmq.sh
      - ./data:/var/lib/rabbitmq

Deployment command

$ docker-compose -f mcst-rabbitmq-node2.yaml up -d

After the node starts, enter the container of the rabbitmq2 node and execute the /home/rabbitmq.sh script. If a permission error is reported, run chmod +x /home/rabbitmq.sh inside the container to make it executable, then run bash /home/rabbitmq.sh to join the cluster.

Command to enter the container:

$ docker exec -it mcst-rabbitmq /bin/bash

The content of the script is as follows (disk node):

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbitmq1
rabbitmqctl start_app

Deploy the third RabbitMQ node

The method is the same as above. Check the mcst-rabbitmq-node3.yaml.

version: '3'
services:
  rabbitmq:
    container_name: mcst-rabbitmq
    image: rabbitmq:3-management
    restart: always
    ports:
      - 4369:4369
      - 5671:5671
      - 5672:5672
      - 15672:15672
      - 25672:25672
    environment:
      - TZ=Asia/Shanghai
      - RABBITMQ_ERLANG_COOKIE=iweru238roseire
      - RABBITMQ_DEFAULT_USER=mcst_admin
      - RABBITMQ_DEFAULT_PASS=mcst_admin_123
      - RABBITMQ_DEFAULT_VHOST=mcst_vhost
    hostname: rabbitmq3
    extra_hosts:
      - rabbitmq1:192.168.1.213
      - rabbitmq2:192.168.1.203
      - rabbitmq3:192.168.1.212
    volumes:
      - ./rabbitmq-ram.sh:/home/rabbitmq-ram.sh
      - ./data:/var/lib/rabbitmq

Deployment command

$ docker-compose -f mcst-rabbitmq-node3.yaml up -d

After starting the rabbitmq3 node, go inside the container and execute the bash /home/rabbitmq-ram.sh script to add the memory node to the cluster.

Content of the script:

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram rabbit@rabbitmq1
rabbitmqctl start_app

Use the command inside the container to view the cluster status: rabbitmqctl cluster_status.

Cluster status of node rabbit@rabbitmq1 ...
[{nodes,[{disc,[rabbit@rabbitmq1,rabbit@rabbitmq2]},
         {ram,[rabbit@rabbitmq3]}]},
 {running_nodes,[rabbit@rabbitmq2,rabbit@rabbitmq3,rabbit@rabbitmq1]},
 {cluster_name,},
 {partitions,[]},
 {alarms,[{rabbit@rabbitmq2,[]},{rabbit@rabbitmq3,[]},{rabbit@rabbitmq1,[]}]}]

You can also enter the management side to view the cluster status through http://192.168.1.213:15672.

HAProxy load balancing

Create the directory /app/mcst/haproxy, and upload the image tar package, the haproxy configuration file, and the docker service orchestration file to this directory.

The image is imported in the same way as above.

View the contents of the service orchestration file:

version: '3'
services:
  haproxy:
    container_name: mcst-haproxy
    image: haproxy:2.1
    restart: always
    ports:
      - 8100:8100
      - 15670:5670
    environment:
      - TZ=Asia/Shanghai
    extra_hosts:
      - rabbitmq1:192.168.1.213
      - rabbitmq2:192.168.1.203
      - rabbitmq3:192.168.1.212
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro

The key points are extra_hosts (the rabbitmq cluster node IPs) and volumes (mounting the custom configuration file).

The contents of the haproxy configuration file:

global
    log 127.0.0.1 local0 info
    maxconn 4096

defaults
    log global
    mode tcp
    option tcplog
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 5s
    timeout client 120s
    timeout server 120s

# ssl for rabbitmq
# frontend ssl_rabbitmq
#     bind *:5673 ssl crt /root/rmqha_proxy/rmqha.pem
#     mode tcp
#     default_backend rabbitmq

# web management interface
listen stats
    bind *:8100
    mode http
    stats enable
    stats realm Haproxy\ Statistics
    stats uri /stats
    stats auth admin:admin123

# configure load balancing
listen rabbitmq
    bind *:5670
    mode tcp
    balance roundrobin
    server rabbitmq1 rabbitmq1:5672 check inter 5s rise 2 fall 3
    server rabbitmq2 rabbitmq2:5672 check inter 5s rise 2 fall 3
    server rabbitmq3 rabbitmq3:5672 check inter 5s rise 2 fall 3

Deployment command

$ docker-compose -f mcst-haproxy.yaml up -d

Service distribution

The 192.168.1.212 server deploys HAProxy Master. The 192.168.1.203 server deploys HAProxy Backup.

Set up the HAProxy service on the above two nodes respectively.

Log in to the HAProxy stats page to check the backend status: http://192.168.1.212:8100/stats (the /stats URI and credentials are set in the configuration above).
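The stats page can also be checked from the command line (a sketch; the credentials and the /stats URI come from the haproxy.cfg above):

$ curl -u admin:admin123 http://192.168.1.212:8100/stats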

Use Keepalived as the master and standby for HAProxy

Preparatory work

Obtain an IP address on the same local area network as the service nodes to use as the VIP (virtual IP); it must not already be in use.

Install Keepalived

Download the package from the Keepalived official website; this installation uses version 2.0.20.

The downloaded file is: keepalived-2.0.20.tar.gz.

Upload to the server and extract the tar package.

$ tar -xf keepalived-2.0.20.tar.gz

Check for dependencies

$ cd keepalived-2.0.20
$ ./configure

Building Keepalived requires the gcc and openssl-devel packages.

Installation command

$ yum install -y gcc
$ yum install -y openssl-devel

Because the intranet servers cannot reach the external yum repositories, switch to a local yum source.

Upload the Linux installation CD image to the /mnt/iso directory and mount it to the /mnt/cdrom directory as an installation source for yum.

$ mkdir /mnt/iso
$ mkdir /mnt/cdrom
$ mv /ftp/rhel-server-7.3-x86_64-dvd.iso /mnt/iso

Mount the CD image

$ mount -r -o loop /mnt/iso/rhel-server-7.3-x86_64-dvd.iso /mnt/cdrom
$ mv /ftp/myself.repo /etc/yum.repos.d
$ yum clean all
$ yum makecache
$ yum update

Attached: contents of myself.repo file:

[base]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=file:///mnt/cdrom
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

After this change, whenever packages need to be installed from the Linux installation disc, you only need to run the mount command again to load the CD-ROM ISO file.

$ mount -r -o loop /mnt/iso/rhel-server-7.3-x86_64-dvd.iso /mnt/cdrom

At this point, gcc and openssl-devel can be installed with yum without problems.

If using a local yum source is not an option, you can use yum's downloadonly plug-in.

Download the required dependencies on a machine that has internet access and runs the same system version, then install them locally on the target intranet machine.
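A minimal sketch of that approach, assuming the downloadonly option is available in yum on the connected machine and /tmp/rpms is an arbitrary staging directory:

# on the machine with internet access
$ yum install --downloadonly --downloaddir=/tmp/rpms gcc openssl-devel
# copy /tmp/rpms to the intranet machine, then install locally
$ yum localinstall -y /tmp/rpms/*.rpm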

Even so, using the local yum source is the recommended approach.

After installing gcc and openssl-devel, run ./configure again; it reports a warning:

"this build will not support IPVS with IPv6. Please install libnl/libnl-3 dev libraries to support IPv6 with IPVS."

Install the following dependencies to resolve it:

$ yum install -y libnl libnl-devel

After the installation completes, ./configure runs cleanly again.

Then run make to compile.

Finally, run make install to install.
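In command form, these two steps are simply:

$ make
$ make install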

After the installation is complete, run keepalived --version; if the version number is printed, the installation was successful.

Create a Keepalived profile

Create the configuration file /etc/keepalived/keepalived.conf.

Master node configuration:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify that a haproxy process exists
    interval 5                    # check every 5 seconds
    weight -2                     # if the check fails, priority is reduced by 2
}

vrrp_instance VI_1 {
    # state of this instance: MASTER on the host, BACKUP on the standby
    state MASTER
    # NIC the instance is bound to; use the ip a command to find the NIC name
    interface ens192
    # virtual router ID, a number (1-255); must be the same on master and backup
    virtual_router_id 51
    # priority; the larger the number, the higher the priority; the master must be higher than the backup
    priority 101
    # virtual IP addresses, one per line; there can be more than one
    virtual_ipaddress {
        192.168.1.110
    }
    # scripts we monitor
    track_script {
        chk_haproxy
    }
}

ens192 is the name of the network card; use ifconfig to inspect the server NICs and find the one that carries the local service IP. The value of virtual_router_id must match the configuration on the backup node. The killall -0 haproxy command does nothing if the haproxy process exists; if the process does not exist, it prints haproxy: no process found.

# NIC information
ens192: flags=4163  mtu 1500
        inet 192.168.1.203  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::250:56ff:fe94:bceb  prefixlen 64  scopeid 0x20
        ether 00:50:56:94:bc:eb  txqueuelen 1000  (Ethernet)
        RX packets 88711011  bytes 12324982140 (11.4 GiB)
        RX errors 0  dropped 272  overruns 0  frame 0
        TX packets 88438149  bytes 10760989492 (10.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# haproxy service does not exist
[root@localhost ~]# killall -0 haproxy
haproxy: no process found
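What the vrrp_script actually evaluates is the exit code of that command; a quick way to see it (assuming haproxy is currently running):

$ killall -0 haproxy
$ echo $?
0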

After subtracting the weight, the master node's priority must be lower than the backup node's priority (here 101 - 2 = 99 < 100), otherwise the master/backup switchover will not succeed.

Backup node configuration:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify that a haproxy process exists
    interval 5                    # check every 5 seconds
    weight -2                     # if the check fails, priority is reduced by 2
}

vrrp_instance VI_1 {
    # state of this instance: MASTER on the host, BACKUP on the standby
    state BACKUP
    # NIC the instance is bound to; use the ip a command to find the NIC name
    interface ens192
    # virtual router ID, a number (1-255); must be the same on master and backup
    virtual_router_id 51
    # priority; the larger the number, the higher the priority; the master must be higher than the backup
    priority 100
    # virtual IP addresses, one per line; there can be more than one
    virtual_ipaddress {
        192.168.1.110
    }
    # scripts we monitor
    track_script {
        chk_haproxy
    }
}

After creating the configuration, start keepalived.

$ systemctl restart keepalived

Test Keepalived

On the Master and Backup nodes, use the ip addr command to see which machine currently holds the VIP on the ens192 network card.

2: ens192:  mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:94:c1:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.212/24 brd 192.168.1.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet 192.168.1.110/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe94:c179/64 scope link
       valid_lft forever preferred_lft forever

By default the VIP sits on the master host. Stop the haproxy service on the master host, then use ip addr to see which machine the virtual IP is on; if it has drifted to the backup host, the hot standby is working.

When the haproxy service on the master host is started again, ip addr should show the virtual IP drifting back to the master host.
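In command form, a possible test sequence (a sketch based on the docker-based HAProxy deployed above; mcst-haproxy is the container name from the compose file):

# on the master: stop HAProxy and check that the VIP leaves
$ docker stop mcst-haproxy
$ ip addr show ens192
# on the backup: the VIP 192.168.1.110 should now appear here
$ ip addr show ens192
# on the master: start HAProxy again and the VIP should drift back
$ docker start mcst-haproxy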

Finally, test the service itself by accessing the HAProxy service through the virtual IP plus the service port number.
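For example, a simple TCP reachability check against the load-balanced AMQP port (a sketch; nc is assumed to be installed, and 15670 is the host port mapped to HAProxy's 5670 listener in the compose file above):

$ nc -zv 192.168.1.110 15670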

That is all of the content of the article "How to build highly available RabbitMQ clusters and HAProxy soft loads". Thank you for reading! Hopefully the content shared here is helpful; if you want to learn more, welcome to follow the industry information channel.
