This article explains how to build an etcd cluster for Docker microservices. The walkthrough is simple and easy to follow, so read on and work through it step by step.
Etcd is a highly available key-value storage system, used mainly for shared configuration and service discovery. It is developed and maintained by CoreOS and was inspired by ZooKeeper and Doozer. It is written in Go and handles log replication through the Raft consensus algorithm to ensure strong consistency. Raft is a consensus algorithm from Stanford designed for log replication in distributed systems; it achieves consistency through leader election, and any node may become the leader. Etcd is widely used in Google's container cluster management system Kubernetes, the open source PaaS platform Cloud Foundry, and CoreOS's Fleet.
Characteristics of etcd
Simple: a well-defined, user-facing API (gRPC), plus an HTTP+JSON API that can be accessed with curl
Security: optional SSL client certificate authentication
Fast: a single instance supports 1000 writes per second
Reliable: uses Raft to ensure consistency
Etcd supports three main ways to build a highly available cluster:
1) Static discovery: the nodes in the etcd cluster are known in advance, and each node is started with the addresses of all member nodes specified directly.
2) Etcd dynamic discovery: an existing etcd cluster serves as the data interaction point, and new members are discovered through it when the cluster is expanded.
3) DNS dynamic discovery: the address information of the other nodes is obtained through DNS queries.
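As a quick illustration, these modes map onto etcd's startup flags roughly as follows (a sketch only; the node names, addresses, and discovery token are placeholders, and a real invocation needs the usual listen/advertise URLs as shown later in this article):
Static discovery, listing every member up front:
# etcd --name node1 --initial-cluster "node1=http://10.0.0.1:2380,node2=http://10.0.0.2:2380" --initial-cluster-state new
Etcd dynamic discovery, pointing new members at a discovery URL:
# etcd --name node1 --discovery https://discovery.etcd.io/<token>
DNS dynamic discovery, resolving members from a domain's SRV records:
# etcd --name node1 --discovery-srv example.com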
The basic environment for this walkthrough:
OS: CentOS 7
Docker version: 18.06.1-ce
IPs:
Server A: 192.167.0.168
Server B: 192.167.0.170
Server C: 192.167.0.172
First, pull the latest etcd image on each server:
# docker pull quay.io/coreos/etcd
Because machines are limited, I configured all three containers on one machine and created a subnet there, so that the three containers share one network.
# docker network create --subnet=192.167.0.0/16 etcdnet
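To confirm that the network was created with the intended subnet:
# docker network inspect etcdnet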
I then created the cluster in two ways: 1) adding the three servers to the cluster one by one; 2) adding all three servers to the cluster at once. In what follows, a command marked A is executed on machine A, and likewise for B and C.
1. Add servers to the cluster one by one
A. In a container on server A, run an etcd instance named autumn-client0. Note that its state is new and that --initial-cluster contains only its own IP.
# docker run -d -p 2379:2379 -p 2380:2380 --net etcdnet --ip 192.167.0.168 --name etcd0 \
    quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client0 \
    --advertise-client-urls http://192.167.0.168:2379 --listen-client-urls http://0.0.0.0:2379 \
    --initial-advertise-peer-urls http://192.167.0.168:2380 --listen-peer-urls http://0.0.0.0:2380 \
    --initial-cluster-token etcd-cluster \
    --initial-cluster "autumn-client0=http://192.167.0.168:2380" \
    --initial-cluster-state new
Parameter description
--data-dir: the node's data storage directory, holding the node ID, cluster ID, cluster initialization configuration, and snapshot files; if --wal-dir is not specified, WAL files are stored here as well
--wal-dir: the storage directory for the node's WAL files; if specified, WAL files are kept separately from the other data files
--name: the node name
--initial-advertise-peer-urls: the peer URLs this node advertises to the other nodes in the cluster
--listen-peer-urls: the URLs to listen on for communication with other nodes
--advertise-client-urls: the client URLs this node advertises to clients
--initial-cluster: the initial cluster configuration, listing name=peer-URL for every node in the cluster
--initial-cluster-token: a token identifying the cluster
The same settings can also be supplied through a configuration file, for example:
#[member]
# node name
ETCD_NAME=node1
# data storage location
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
# addresses to listen on for other etcd instances
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
# addresses to listen on for clients
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
# peer address advertised to the other etcd instances
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://node1:2380"
# if you use a different ETCD_NAME (e.g. test), set the ETCD_INITIAL_CLUSTER entry for that name, i.e. "test=http://..."
# initial node addresses of the cluster
ETCD_INITIAL_CLUSTER="node1=http://node1:2380,node2=http://node2:2380,etcd2=http://etcd2:2380"
# initial cluster state; "new" means a new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
# client addresses advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://node1:2379,http://node1:4001"
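Etcd reads its configuration from ETCD_* environment variables, so a file in this key=value form can also be handed to the container with docker's --env-file flag instead of command-line flags. A minimal sketch, assuming the file is saved as ./etcd.conf (note that --env-file passes values literally, so the surrounding quotes should be stripped from the values first):
# docker run -d -p 2379:2379 -p 2380:2380 --net etcdnet --ip 192.167.0.168 --name etcd0 \
    --env-file ./etcd.conf quay.io/coreos/etcd /usr/local/bin/etcd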
A. On server A's etcd service, add a new node (192.167.0.170) by calling the members API:
# curl http://127.0.0.1:2379/v2/members -XPOST -H "Content-Type: application/json" -d '{"peerURLs": ["http://192.167.0.170:2480"]}'
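On success, the v2 members API answers with the entry for the newly added member, roughly in this shape (the id is illustrative; the name stays empty until the member actually joins):
{"id":"b7d510356ee2ab0f","name":"","peerURLs":["http://192.167.0.170:2480"],"clientURLs":[]}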
B. In a container on server B, run an etcd instance named autumn-client1. Note that its state is existing, and --initial-cluster contains the previous node's IP as well as its own.
# docker run -d -p 2479:2479 -p 2480:2480 --net etcdnet --ip 192.167.0.170 --name etcd1 \
    quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client1 \
    --advertise-client-urls http://192.167.0.170:2479 --listen-client-urls http://0.0.0.0:2479 \
    --initial-advertise-peer-urls http://192.167.0.170:2480 --listen-peer-urls http://0.0.0.0:2480 \
    --initial-cluster-token etcd-cluster \
    --initial-cluster "autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480" \
    --initial-cluster-state existing
A. On server A's etcd service, add another node (192.167.0.172) by calling the members API:
# curl http://127.0.0.1:2379/v2/members -XPOST -H "Content-Type: application/json" -d '{"peerURLs": ["http://192.167.0.172:2580"]}'
C. On server C, run an etcd instance named autumn-client2. Note that its state is existing, and --initial-cluster contains the IPs of all previous nodes as well as its own.
# docker run -d -p 2579:2579 -p 2580:2580 --net etcdnet --ip 192.167.0.172 --name etcd2 \
    quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client2 \
    --advertise-client-urls http://192.167.0.172:2579 --listen-client-urls http://0.0.0.0:2579 \
    --initial-advertise-peer-urls http://192.167.0.172:2580 --listen-peer-urls http://0.0.0.0:2580 \
    --initial-cluster-token etcd-cluster \
    --initial-cluster "autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580" \
    --initial-cluster-state existing

2. Add all servers to the cluster at once
("- initial-cluster" contains the IP of all nodes, with a status of new)
Execute on A
# docker run -d -p 2379:2379 -p 2380:2380 --restart=always --net etcdnet --ip 192.167.0.168 --name etcd0 \
    quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client0 \
    --advertise-client-urls http://192.167.0.168:2379 --listen-client-urls http://0.0.0.0:2379 \
    --initial-advertise-peer-urls http://192.167.0.168:2380 --listen-peer-urls http://0.0.0.0:2380 \
    --initial-cluster-token etcd-cluster \
    --initial-cluster "autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580" \
    --initial-cluster-state new
Execute on B
# docker run -d -p 2479:2479 -p 2480:2480 --restart=always --net etcdnet --ip 192.167.0.170 --name etcd1 \
    quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client1 \
    --advertise-client-urls http://192.167.0.170:2479 --listen-client-urls http://0.0.0.0:2479 \
    --initial-advertise-peer-urls http://192.167.0.170:2480 --listen-peer-urls http://0.0.0.0:2480 \
    --initial-cluster-token etcd-cluster \
    --initial-cluster "autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580" \
    --initial-cluster-state new
Execute on C
# docker run -d -p 2579:2579 -p 2580:2580 --restart=always --net etcdnet --ip 192.167.0.172 --name etcd2 \
    quay.io/coreos/etcd /usr/local/bin/etcd --name autumn-client2 \
    --advertise-client-urls http://192.167.0.172:2579 --listen-client-urls http://0.0.0.0:2579 \
    --initial-advertise-peer-urls http://192.167.0.172:2580 --listen-peer-urls http://0.0.0.0:2580 \
    --initial-cluster-token etcd-cluster \
    --initial-cluster "autumn-client0=http://192.167.0.168:2380,autumn-client1=http://192.167.0.170:2480,autumn-client2=http://192.167.0.172:2580" \
    --initial-cluster-state new
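Before the verification steps below, each member can also be given a quick liveness probe through etcd's /health endpoint (adjust the client port to 2379/2479/2579 per node; the exact response body varies by etcd version):
# curl http://127.0.0.1:2379/health
{"health": "true"}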
Cluster verification. A cluster created by either method can be verified in the following ways.
1. Verify the cluster members. The member list on every machine in the cluster should be the same:
[root@localhost ~]# curl -L http://127.0.0.1:2379/v2/members
{"members":[{"id":"1a661f2b9997ba39","name":"autumn-client0","peerURLs":["http://192.167.0.168:2380"],"clientURLs":["http://192.167.0.168:2379"]},{"id":"4932c8ea462e079c","name":"autumn-client2","peerURLs":["http://192.167.0.172:2580"],"clientURLs":["http://192.167.0.172:2579"]},{"id":"c1dbdde07e61741e","name":"autumn-client1","peerURLs":["http://192.167.0.170:2480"],"clientURLs":["http://192.167.0.170:2479"]}]}
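The same check can also be done with the etcdctl client shipped inside the image (assuming the v2 API; the output format differs across versions):
# docker exec etcd0 etcdctl member list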
2. If you add data to one machine and view it on other machines, the result should be the same.
Execute on A
[root@localhost ~]# curl -L http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello autumn"
{"action":"set","node":{"key":"/message","value":"Hello autumn","modifiedIndex":13,"createdIndex":13},"prevNode":{"key":"/message","value":"Hello world1","modifiedIndex":11,"createdIndex":11}}
Execute on B and C
[root@localhost ~]# curl -L http://127.0.0.1:2379/v2/keys/message
{"action":"get","node":{"key":"/message","value":"Hello autumn","modifiedIndex":13,"createdIndex":13}}

Basic operations of the etcd API:
key-value API: https://github.com/coreos/etcd/blob/6acb3d67fbe131b3b2d5d010e00ec80182be4628/Documentation/v2/api.md
cluster configuration API: https://github.com/coreos/etcd/blob/6acb3d67fbe131b3b2d5d010e00ec80182be4628/Documentation/v2/members_api.md
authentication API: https://github.com/coreos/etcd/blob/6acb3d67fbe131b3b2d5d010e00ec80182be4628/Documentation/v2/auth_api.md
configuration options: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/configuration.md
https://coreos.com/etcd/docs/latest/runtime-configuration.html
https://coreos.com/etcd/docs/latest/clustering.html
https://coreos.com/etcd/docs/latest/
https://coreos.com/etcd/docs/latest/admin_guide.html#disaster-recovery
The API is a standard RESTful interface, supporting both the HTTP and HTTPS protocols.

Service registration and discovery
Traditionally, service invocation reads IPs from a configuration file, which has many limitations: it is inflexible, it cannot perceive the state of a service, and load balancing of service calls becomes complex. Introducing etcd greatly simplifies the problem:
After a service starts, it registers with etcd, reporting its listening address, current weight factor, and other information, and sets a TTL on that record.
The service periodically re-reports its weight factor and other information within the TTL window.
When a client invokes the service, it reads this information from etcd, makes the call, and listens for changes to the service (implemented with the watch mechanism).
When a service instance is added, the watch picks up the change and the instance is added to the invocation list; when an instance dies, its TTL expires, the client detects the change and removes the instance from the call list. This enables dynamic scaling of services.
Meanwhile, each change in the weight factors drives the client's weighted invocation strategy, ensuring load balancing across the back-end services.
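A minimal sketch of this registration/watch flow with the v2 HTTP API, assuming a hypothetical key layout under /services/web and a service instance at 10.0.0.5:8080 (the refresh=true option requires a reasonably recent etcd):
Register the instance with a 30-second TTL:
# curl -L http://127.0.0.1:2379/v2/keys/services/web/10.0.0.5:8080 -XPUT -d value='{"weight":10}' -d ttl=30
Heartbeat: refresh the TTL periodically without changing the value:
# curl -L http://127.0.0.1:2379/v2/keys/services/web/10.0.0.5:8080 -XPUT -d ttl=30 -d refresh=true -d prevExist=true
Client side: block until anything under the directory changes:
# curl -L 'http://127.0.0.1:2379/v2/keys/services/web?wait=true&recursive=true'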
Thank you for reading. That covers how to build an etcd cluster for Docker microservices. After working through this article you should have a deeper understanding of the approach; the details still need to be verified in practice.