How to deploy Consul Cluster in Linux Environment

2025-01-22 Update From: SLTechnology News&Howtos

This article explains how to deploy a Consul cluster in a Linux environment. The content is straightforward and easy to follow; work through it step by step to learn how to deploy a Consul cluster on Linux.

1. Consul concepts

1.1 What is Consul?

Consul is a service mesh solution, an open source component developed by HashiCorp. It is written in Go and is easy to deploy: it ships as a single executable and requires very little configuration. Consul is also a distributed, highly available system. It comes with a simple built-in proxy that works out of the box, supports integration with third-party proxies such as Envoy, and provides service discovery, configuration, and segmentation functions.

1.2 Characteristics of Consul

Service Discovery: Consul provides a way to register services and to discover them through DNS or HTTP interfaces. Applications can easily find the services they depend on through Consul.

Health Checking: a Consul client can run any number of health checks, associated either with an application service ("does the web server return 200 OK?") or with the local node ("is memory utilization below 90%?"). Operators can use this information to monitor cluster health, and the service discovery layer uses it to prevent traffic from being routed to unhealthy hosts.

Key/Value Store: applications can use Consul's key/value storage for their own needs. Consul exposes it through an easy-to-use HTTP interface, and combined with other tools it can implement dynamic configuration, feature flagging, coordination, leader election, and other functions.

Secure Service Communication: Consul can generate and distribute TLS certificates for services so that they can establish mutual TLS connections. Intentions can be used to define which services are allowed to communicate, and service segmentation can be managed easily and changed in real time, rather than through complex network topologies and static firewall rules.

Multi Datacenter: Consul supports multiple data centers out of the box, so users do not need to build additional layers of abstraction in order to expand to multiple regions.
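As an illustration of the key/value store's HTTP interface mentioned above: a GET on /v1/kv/<key> returns JSON in which the stored value is base64-encoded. The key name, address, and sample response below are hand-constructed for illustration; the decoding step recovers the stored string.

```shell
# Store a value (requires a running agent; shown for reference only):
#   curl -X PUT --data "192.168.113.200" http://127.0.0.1:8500/v1/kv/config/db/host
# Fetch it back:
#   curl http://127.0.0.1:8500/v1/kv/config/db/host
# A fetch returns JSON like this hand-made sample; "Value" is base64-encoded:
RESPONSE='[{"Key":"config/db/host","Value":"MTkyLjE2OC4xMTMuMjAw","Flags":0}]'
# Decode the Value field with a short Python helper:
python3 -c 'import json, base64, sys
doc = json.loads(sys.argv[1])
print(base64.b64decode(doc[0]["Value"]).decode())' "$RESPONSE"
```

Decoding the sample prints the original string, 192.168.113.200.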

1.3 Consul architecture

The architecture diagram (not shown here) contains two data centers, DataCenter1 and DataCenter2; Consul has first-class support for multiple data centers.

Every data center contains both clients and servers. Three to five servers is recommended: this strikes a balance between availability and performance in the event of a failure, since consensus slows down as more servers are added. There is no limit on the number of clients, however, and they can easily scale to thousands or tens of thousands.

Consul's multi-data-center support relies on the gossip protocol. This serves several purposes: first, there is no need to configure clients with server addresses, since server discovery happens automatically; second, the work of detecting failures is not placed on the servers but is distributed, which makes failure detection far more scalable than a simple heartbeat scheme; third, gossip provides failure detection for the nodes themselves: if an agent cannot be reached, the node may have failed.

The servers in each data center are part of a single Raft peer set. This means that they work together to elect a single leader, a selected server with additional responsibilities. The leader is responsible for handling all queries and transactions, and transactions must also be replicated to all peers as part of the consensus protocol. Because of this requirement, when a non-leader server receives an RPC request, it forwards it to the cluster leader.

1.4 Application scenarios of Consul

Consul's application scenarios include service discovery, service isolation, and service configuration.

In the service discovery scenario, Consul acts as the registry. After service addresses are registered in Consul, they can be queried through the DNS and HTTP interfaces Consul provides. Consul also supports health checks.
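Registration is typically done with a small JSON service definition placed in the agent's configuration directory. The sketch below writes a hypothetical "web" service definition with an HTTP health check (the name, port, and health endpoint are made up for illustration) and validates that the file is well-formed JSON.

```shell
# Hypothetical service definition; the agent would load it via -config-dir
# and register a "web" service with an HTTP health check.
mkdir -p /tmp/consul.d
cat > /tmp/consul.d/web.json <<'EOF'
{
  "service": {
    "name": "web",
    "port": 80,
    "check": {
      "http": "http://localhost:80/health",
      "interval": "10s"
    }
  }
}
EOF
# Sanity-check that the file parses as JSON:
python3 -m json.tool /tmp/consul.d/web.json > /dev/null && echo "valid"
```

With a definition like this in place, the agent would be started with (or reloaded after adding) -config-dir=/tmp/consul.d, and the service becomes queryable over DNS and HTTP.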

In the service isolation scenario, Consul supports setting access policies on a service-by-service basis, supports both classic and emerging platforms, and supports TLS certificate distribution and service-to-service encryption.

In the service configuration scenario, Consul provides key-value storage and can propagate changes quickly. Configuration sharing can be achieved with Consul: services that need configuration read accurate, up-to-date configuration information from it.

Consul can also help system administrators understand the internal architecture of complex systems more clearly; operators can treat Consul as a kind of monitoring software or as an asset (resource) management system.

2. Cluster deployment of Consul on Linux

Download the latest Linux build of Consul from the official website (https://www.consul.io/downloads). The version used in this example is 1.8.5:
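For servers without a browser, the release archive can also be fetched directly. The sketch below constructs the download URL following the pattern used by releases.hashicorp.com (adjust VERSION as needed); the actual download lines are commented out since they require network access.

```shell
# Construct the download URL for a given Consul version:
VERSION=1.8.5
URL="https://releases.hashicorp.com/consul/${VERSION}/consul_${VERSION}_linux_amd64.zip"
echo "$URL"
# Then download and unpack (requires network access):
#   wget "$URL"
#   unzip "consul_${VERSION}_linux_amd64.zip" -d "consul_${VERSION}_linux_amd64"
```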

2.1 Preliminary preparation

Prepare three CentOS virtual machines:

CentOS 7 No.1: 192.168.113.128
CentOS 7 No.2: 192.168.113.129
CentOS 7 No.3: 192.168.113.130

Upload the extracted Consul component to the three Linux servers through Xftp (or another tool). The installation directory is up to you; here it is placed in the /root directory:
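If you prefer the command line to Xftp, scp can copy the unpacked directory to all three servers. The sketch below is a dry run that only prints the commands (IPs and paths are the ones used in this article); remove the inner echo to actually copy.

```shell
# Dry-run sketch: print the scp command for each of the three servers.
gen_copy_cmds() {
  for ip in 192.168.113.128 192.168.113.129 192.168.113.130; do
    echo "scp -r consul_1.8.5_linux_amd64 root@${ip}:/root/"
  done
}
gen_copy_cmds
```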

2.2 Cluster deployment

Connect to the three servers through Xshell (or another tool), and first change to the installation directory on each:

cd /root/consul_1.8.5_linux_amd64

Then enter the following commands on the three servers to start the corresponding Consul components:

192.168.113.128:
./consul agent -server -bootstrap-expect=3 -data-dir=/root/consul_1.8.5_linux_amd64 -node=server1 -bind=192.168.113.128 -client=0.0.0.0 -datacenter=myservicedc1 -ui

192.168.113.129:
./consul agent -server -bootstrap-expect=3 -data-dir=/root/consul_1.8.5_linux_amd64 -node=server2 -bind=192.168.113.129 -client=0.0.0.0 -datacenter=myservicedc1

192.168.113.130:
./consul agent -server -bootstrap-expect=3 -data-dir=/root/consul_1.8.5_linux_amd64 -node=server3 -bind=192.168.113.130 -client=0.0.0.0 -datacenter=myservicedc1
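Note that the three startup commands differ only in -node and -bind, so a small loop can generate them. The sketch below is a dry run that prints each node's command rather than executing it (append -ui on the node that should serve the web UI):

```shell
# Dry-run sketch: print the startup command for each server.
DATA_DIR=/root/consul_1.8.5_linux_amd64
gen_agent_cmds() {
  i=1
  for ip in 192.168.113.128 192.168.113.129 192.168.113.130; do
    echo "./consul agent -server -bootstrap-expect=3 -data-dir=$DATA_DIR" \
         "-node=server$i -bind=$ip -client=0.0.0.0 -datacenter=myservicedc1"
    i=$((i+1))
  done
}
gen_agent_cmds
```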

If a permission error is reported (Permission denied), execute the following command to grant execute permission:

# grant execute permission
chmod +x consul

The agent configuration parameters are as follows. For more detailed configuration parameters, refer to the official documentation (https://www.consul.io/docs/agent/options):

-server: this flag controls whether the agent runs in server or client mode. When provided, the agent acts as a Consul server.

-bootstrap-expect: the number of servers the cluster expects. The cluster will not bootstrap (elect a leader) until this many servers have joined.

-data-dir: the directory path where data is stored.

-node: the node name; each node in the cluster must have a unique name. By default, Consul uses the machine's hostname.

-bind: the address Consul listens on. It defaults to 0.0.0.0 and may be left unspecified, but it must be reachable by every other node in the cluster. Consul binds to the first private IP by default; it is best to specify one explicitly, since production servers often have several network interfaces and an explicit address avoids errors.

-client: the address to which client interfaces are bound. 0.0.0.0 means anyone can access them (without this, the UI on port 8500 mentioned below cannot be reached).

-ui: enables access to the Consul UI management interface.

-config-dir: specifies the configuration directory; Consul loads all configuration files in it.

-datacenter: specifies the name of the datacenter. The default is dc1.
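The command-line flags above can equivalently be expressed in a JSON file loaded via -config-dir. The sketch below writes such a file for server1 (key names follow Consul's configuration file conventions; the directory path is chosen arbitrarily for illustration) and validates that it parses.

```shell
# JSON equivalent of server1's command-line flags:
mkdir -p /tmp/consul-server.d
cat > /tmp/consul-server.d/server.json <<'EOF'
{
  "server": true,
  "bootstrap_expect": 3,
  "data_dir": "/root/consul_1.8.5_linux_amd64",
  "node_name": "server1",
  "bind_addr": "192.168.113.128",
  "client_addr": "0.0.0.0",
  "datacenter": "myservicedc1",
  "ui": true
}
EOF
# Sanity-check that the file parses as JSON:
python3 -m json.tool /tmp/consul-server.d/server.json > /dev/null && echo "valid"
```

The agent would then be started simply as ./consul agent -config-dir=/tmp/consul-server.d, which is easier to maintain than a long command line.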

After starting the Consul cluster components, each server will log a "No cluster leader" error:

This is because no leader has been elected yet: the three servers have not joined into a single cluster. Here server 128 is used as the contact point, so enter the following command on servers 129 and 130 to join the cluster through 128:

./consul join 192.168.113.128

After entering the command, you will see a success message:

This shows that the Consul components on servers 129 and 130 have joined the cluster successfully. The following commands show how to view cluster members and status:

View cluster members:

./consul members

View cluster status:

./consul operator raft list-peers

Enter the address of the leader node's Consul UI in a browser to access it (here http://192.168.113.128:8500):

Thank you for reading. The above covers how to deploy a Consul cluster in a Linux environment; after working through this article, you should have a deeper understanding of the process.
