Containerized deployment and application integration of a Consul cluster


This article walks through the containerized deployment of a Consul cluster and how to integrate applications with it, explaining the analysis and solution in detail, in the hope of helping readers who face the same problem find a simpler way to solve it.

Background

Our company's main products currently use Consul as the registration center. Consul needs to run as a cluster to be highly available, and the traditional approach (fronting it with Nginx/HAProxy) introduces a single point of failure. To solve this, I started investigating how to achieve cluster registration relying on Consul alone. After a day of twists and turns, I finally verified that cluster registration works through the cluster version of ConsulClient. I ran into a few problems during deployment and implementation, so I am recording and sharing them here in the hope that they help anyone who needs this.

Comparison between host deployment and Docker deployment

There are two options for deploying the cluster in client+server forwarding mode. The first is direct host deployment: 2 clients + 3 servers, with one host per Consul instance (for those with machines to spare). The advantage of this mode is that it is brute-force simple and relatively easy to operate and maintain. The architecture diagram for this mode is as follows:

We chose the other, more economical mode: Docker deployment. The advantage is that it saves resources; the disadvantage is that there are many Docker containers to manage, and until a container management platform such as Kubernetes is introduced, day-to-day operation of those containers is troublesome. The architecture diagram for this mode is as follows:

From the two architecture diagrams above it is clear that host deployment is the simplest and most direct, while the Docker mode saves resources at the cost of extra complexity and harder operation and maintenance. Even so, the Docker mode is a good choice in today's containerized environments, for a simple reason: it makes full use of resources, and container operation and maintenance can be handed off to a platform such as Kubernetes. Let's now practice how to implement a containerized Consul cluster deployment.

Environment preparation

We have two virtual hosts. Because they are virtual hosts behind the same external IP, we distinguish them by port.

Host A: intranet ip 192.168.236.3; host B: intranet ip 192.168.236.5. Both share the external ip 192.168.23.222 and are distinguished by port (for example 10385 for host A).

Deployment configuration

Step 1: install the Docker environment on the hosts (CentOS as an example)

yum install docker
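If Docker is not started automatically after installation (a small supplementary note, assuming a systemd-based CentOS 7), start and enable the service:

systemctl start docker
systemctl enable docker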

Step 2: pull the Consul image for deployment

docker pull consul
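The member listings later in this article show Consul build 1.6.2, so if you want to reproduce exactly the same environment you can pin the image tag instead of pulling the latest one (an optional step, not part of the original procedure):

docker pull consul:1.6.2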

Step 3: assign an IP segment to each host's Docker daemon to prevent duplicate IPs across hosts

Edit the /etc/docker/daemon.json file on host A and add the following:

"bip": "172.17.1.252Candle 24"

Edit the /etc/docker/daemon.json file on host B and add the following:

"bip": "172.17.2.252Candle 24"

This configuration assigns an IP range to each host's Docker instances. Because the containers will later register with each other across hosts, the default behavior (each host using the same private Docker network) would produce duplicate IPs, so we assign the ranges manually here. You can of course customize the IP ranges above.
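For reference, here is a minimal sketch of the complete /etc/docker/daemon.json on host A, assuming no other daemon options are configured; after editing it, restart Docker so the new bridge address takes effect:

{
  "bip": "172.17.1.252/24"
}

systemctl restart docker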

Step 4: deploy Consul on host A

Node1:

docker run -d --name=node_31 --restart=always \
 -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
 -p 11300:8300 \
 -p 11301:8301 \
 -p 11301:8301/udp \
 -p 11302:8302/udp \
 -p 11302:8302 \
 -p 11400:8400 \
 -p 11500:8500 \
 -p 11600:8600 \
 consul agent -server -bootstrap-expect=3 -node=node31 \
 -data-dir=/consul/data/ -client 0.0.0.0 -ui

Several parameters are highlighted here:

--name: the name of the Docker container; it must be different for each container instance.

-node: the name of the Consul node; it must be different for each node.

-bootstrap-expect: the minimum number of server nodes expected before the cluster bootstraps; it is set to 3 here.

-data-dir: the directory for Consul's data. Consul must have read and write permission on it, otherwise startup will fail with an error.

After a successful start, run the following command to view the Consul cluster members:

docker exec -t node_31 consul members

The display results are as follows:

Node    Address          Status  Type    Build  Protocol  DC   Segment
node31  172.17.1.1:8301  alive   server  1.6.2  2         dc1

This shows that the first node has started normally. Next, start the remaining nodes on host A.

Node2:

docker run -d --name=node_32 --restart=always \
 -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
 -p 9300:8300 \
 -p 9301:8301 \
 -p 9301:8301/udp \
 -p 9302:8302/udp \
 -p 9302:8302 \
 -p 9400:8400 \
 -p 9500:8500 \
 -p 9600:8600 \
 consul agent -server -join=172.17.1.1 -bootstrap-expect=3 -node=node32 \
 -data-dir=/consul/data/ -client 0.0.0.0 -ui

Node3:

docker run -d --name=node_33 --restart=always \
 -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
 -p 10300:8300 \
 -p 10301:8301 \
 -p 10301:8301/udp \
 -p 10302:8302/udp \
 -p 10302:8302 \
 -p 10400:8400 \
 -p 10500:8500 \
 -p 10600:8600 \
 consul agent -server -join=172.17.1.1 -bootstrap-expect=3 -node=node33 \
 -data-dir=/consul/data/ -client 0.0.0.0 -ui

After the three nodes are started, execute the command to check the status of the nodes:

docker exec -t node_31 consul operator raft list-peers

The results are as follows:

Node    ID                                    Address          State     Voter  RaftProtocol
node32  ee186aef-5f8a-976b-2a33-b20bf79e7da9  172.17.1.2:8300  follower  true   3
node33  d86b6b92-19e6-bb00-9437-f988b6dac4b2  172.17.1.3:8300  follower  true   3
node31  0ab60093-bed5-be77-f551-6051da7fe790  172.17.1.1:8300  leader    true   3

This shows that the three server nodes have formed a cluster and elected node31 as the leader. Finally, deploy a client on this host and host A is done.

Node4 (client node)

docker run -d --name=node_34 --restart=always \
 -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' \
 -p 8300:8300 \
 -p 8301:8301 \
 -p 8301:8301/udp \
 -p 8302:8302/udp \
 -p 8302:8302 \
 -p 8400:8400 \
 -p 8500:8500 \
 -p 8600:8600 \
 consul agent -retry-join=172.17.1.1 \
 -node-id=$(uuidgen | awk '{print tolower($0)}') \
 -node=node34 -client 0.0.0.0 -ui

Execute the docker exec -t node_31 consul members command again; the result is as follows:

Node    Address          Status  Type    Build  Protocol  DC   Segment
node31  172.17.1.1:8301  alive   server  1.6.2  2         dc1
node32  172.17.1.2:8301  alive   server  1.6.2  2         dc1
node33  172.17.1.3:8301  alive   server  1.6.2  2         dc1
node34  172.17.1.4:8301  alive   client  1.6.2  2         dc1

At this point, all the Consul nodes on host A are up and the cluster deployment on that host is complete. You could call this a single-host Consul cluster, so the next thing to do is join host B's Consul nodes to host A's cluster.

Step 5: deploy Consul on host B

Node5

docker run -d --name=node_51 --restart=always \
 -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
 -p 11300:8300 \
 -p 11301:8301 \
 -p 11301:8301/udp \
 -p 11302:8302/udp \
 -p 11302:8302 \
 -p 11400:8400 \
 -p 11500:8500 \
 -p 11600:8600 \
 consul agent -server -join=172.17.1.1 -bootstrap-expect=3 -node=node_51 \
 -data-dir=/consul/data/ -client 0.0.0.0 -ui

Node6

docker run -d --name=node_52 --restart=always \
 -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
 -p 9300:8300 \
 -p 9301:8301 \
 -p 9301:8301/udp \
 -p 9302:8302/udp \
 -p 9302:8302 \
 -p 9400:8400 \
 -p 9500:8500 \
 -p 9600:8600 \
 consul agent -server -join=172.17.1.1 -bootstrap-expect=3 -node=node_52 \
 -data-dir=/consul/data/ -client 0.0.0.0 -ui

Node7

docker run -d --name=node_53 --restart=always \
 -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
 -p 10300:8300 \
 -p 10301:8301 \
 -p 10301:8301/udp \
 -p 10302:8302/udp \
 -p 10302:8302 \
 -p 10400:8400 \
 -p 10500:8500 \
 -p 10600:8600 \
 consul agent -server -join=172.17.1.1 -bootstrap-expect=3 -node=node_53 \
 -data-dir=/consul/data/ -client 0.0.0.0 -ui

After the three server nodes on host B are deployed, we execute the command docker exec -t node_51 consul members to check the status of the cluster nodes:

Node     Address          Status  Type    Build  Protocol  DC   Segment
node_51  172.17.2.1:8301  alive   server  1.6.2  2         dc1

Why is there only a single node, node_51? Is something wrong with the node? Let's run the same query on host A; the results are as follows:

node31  172.17.1.1:8301  alive  server  1.6.2  2  dc1
node32  172.17.1.2:8301  alive  server  1.6.2  2  dc1
node33  172.17.1.3:8301  alive  server  1.6.2  2  dc1
node34  172.17.1.4:8301  alive  client  1.6.2  2  dc1

Host A's member list contains only its own nodes, and none of host B's nodes have registered. Why? The reason is that each Consul instance is bound to its container's private network IP: communication within a host works, but those private addresses are not reachable across hosts. So what do we do? We can forward the traffic with routing rules: on host A, route the address range of host B's containers to host B, and vice versa. This is exactly why we assigned distinct IP ranges to the containers at the start. Execute the following command on host A:

route add -net 172.17.2.0 netmask 255.255.255.0 gw 192.168.236.5

This command adds a routing rule that forwards all traffic destined for 172.17.2.1 through 172.17.2.254 to 192.168.236.5, which is our host B. Similarly, execute the following command on host B:

route add -net 172.17.1.0 netmask 255.255.255.0 gw 192.168.236.3
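As a quick sanity check (an optional step using the addresses from this setup), you can verify cross-host container connectivity before re-checking the cluster, for example from host A:

ping -c 3 172.17.2.1

Note that routes added with route add do not survive a reboot; persist them through your distribution's network configuration if you need them permanently.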

After adding the routes, execute the docker exec -t node_53 consul members command:

Node     Address          Status  Type    Build  Protocol  DC   Segment
node31   172.17.1.1:8301  alive   server  1.6.2  2         dc1
node32   172.17.1.2:8301  alive   server  1.6.2  2         dc1
node33   172.17.1.3:8301  alive   server  1.6.2  2         dc1
node_51  172.17.2.1:8301  alive   server  1.6.2  2         dc1
node_52  172.17.2.2:8301  alive   server  1.6.2  2         dc1
node_53  172.17.2.3:8301  alive   server  1.6.2  2         dc1
node34   172.17.1.4:8301  alive   client  1.6.2  2         dc1

The cluster join has succeeded, which completes joining Docker containers across hosts. Finally, deploy a client on host B.

Node8 (client node)

docker run -d --name=node_54 --restart=always \
 -e 'CONSUL_LOCAL_CONFIG={"leave_on_terminate": true}' \
 -p 8300:8300 \
 -p 8301:8301 \
 -p 8301:8301/udp \
 -p 8302:8302/udp \
 -p 8302:8302 \
 -p 8400:8400 \
 -p 8500:8500 \
 -p 8600:8600 \
 consul agent -retry-join=172.17.1.1 \
 -node-id=$(uuidgen | awk '{print tolower($0)}') \
 -node=node54 -client 0.0.0.0 -ui

All the cluster nodes have now joined successfully, and the final result is as follows:

node31   172.17.1.1:8301  alive  server  1.6.2  2  dc1
node32   172.17.1.2:8301  alive  server  1.6.2  2  dc1
node33   172.17.1.3:8301  alive  server  1.6.2  2  dc1
node_51  172.17.2.1:8301  alive  server  1.6.2  2  dc1
node_52  172.17.2.2:8301  alive  server  1.6.2  2  dc1
node_53  172.17.2.3:8301  alive  server  1.6.2  2  dc1
node34   172.17.1.4:8301  alive  client  1.6.2  2  dc1
node54   172.17.2.4:8301  alive  client  1.6.2  2  dc1

Execute the node status command docker exec -t node_31 consul operator raft list-peers:

node32   ee186aef-5f8a-976b-2a33-b20bf79e7da9  172.17.1.2:8300  follower  true  3
node33   d86b6b92-19e6-bb00-9437-f988b6dac4b2  172.17.1.3:8300  follower  true  3
node31   0ab60093-bed5-be77-f551-6051da7fe790  172.17.1.1:8300  leader    true  3
node_51  cfac3b67-fb47-8726-fa31-158516467792  172.17.2.1:8300  follower  true  3
node_53  31679abe-923f-0eb7-9709-1ed09980ca9d  172.17.2.3:8300  follower  true  3
node_52  207eeb6d-57f2-c65f-0be6-079c402f6afe  172.17.2.2:8300  follower  true  3

With that, a containerized Consul cluster with 6 servers + 2 clients is deployed. The Consul web UI looks as follows:

Application integration

Now that the cluster version of Consul is deployed, how do we integrate it with an application? All we need to do is integrate the cluster-aware Consul registration client. First, add the dependency:

<dependency>
    <groupId>com.github.penggle</groupId>
    <artifactId>spring-cloud-starter-consul-cluster</artifactId>
    <version>2.1.0.RELEASE</version>
</dependency>

The second step is to set spring.cloud.consul.host to multiple nodes in bootstrap.yml or bootstrap.properties, as shown below:

spring.cloud.consul.host=192.168.23.222:10385,192.168.23.222:10585
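If you prefer bootstrap.yml over the properties format, a minimal sketch of the equivalent configuration would be:

spring:
  cloud:
    consul:
      host: 192.168.23.222:10385,192.168.23.222:10585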

If you want to see registration-related log output, you can also add a logger to your logback configuration.
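For example, a logger entry like the following in logback.xml (the DEBUG level is illustrative; adjust to what you want to see) surfaces the registration activity of Spring Cloud Consul:

<logger name="org.springframework.cloud.consul" level="DEBUG"/>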

With the configuration complete, you can see that the application registers successfully. The figure below shows the result of my registration test:

This shows that my application instances are registered through the cluster's two clients, and the clients proxy requests on to the healthy servers, which gives us a highly available Consul.

This article does not dig into any deep technical details; it simply shares working experience, mainly about how to deploy a Consul cluster. The traditional approach can achieve cluster deployment through HAProxy, but its drawbacks are obvious: the virtual IP may still point to a failed node. We therefore used Consul's client+server cluster deployment mode and made full use of machine resources through Docker, achieving a highly available cluster with only two machines.

That is the answer to the question of containerized deployment and application integration of a Consul cluster. I hope the content above is of some help to you.
