Configuring Consul+registrator Real-time Service Discovery based on docker Service

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Consul is a tool for service discovery and configuration. It is distributed, highly available, and highly scalable.

Consul services provide the following key features:

Service discovery: a Consul client can register a service, such as api or mysql, and other clients can use Consul to discover the providers of a given service. Applications can easily find the services they depend on via DNS or HTTP.

Health checking: a Consul client can register any number of health checks, tied either to a service (for example, does the webserver return a 200 OK status code?) or to the local node (for example, is memory usage above 90%?). Operators can use this information to monitor the health of the cluster, and the service discovery component uses it to avoid sending traffic to unhealthy hosts.

Key/Value storage: applications can use Consul's hierarchical Key/Value store for their own needs, such as dynamic configuration, feature flagging, coordination, and leader election. A simple HTTP API makes it easy to use.

Multiple data centers: Consul supports multiple data centers out of the box, which means users do not need to build additional layers of abstraction to expand the business to multiple regions.
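As a quick illustration of the Key/Value feature, Consul's HTTP API can store and fetch values with plain curl. This is a sketch, not part of the deployment below; it assumes a Consul agent is already reachable on localhost:8500, and the key name app/config/version is made up for the example:

```shell
# Write a value into Consul's KV store (the API returns "true" on success).
curl -s -X PUT -d 'v1.0' http://localhost:8500/v1/kv/app/config/version

# Read it back; "?raw" returns the bare value instead of the JSON envelope.
curl -s http://localhost:8500/v1/kv/app/config/version?raw
```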

There is not much to say about Consul services. If you want to know more about its functions, you can go to the Consul official website.

Blog outline:

I. Environment preparation

II. Deploy the consul service on docker01 from the binary package

III. Run the consul service as a container on the docker02 and docker03 hosts

IV. Run the registrator service as a container on the docker02 and docker03 hosts

V. Deploy the Nginx service on the host docker01 to provide a reverse proxy

VI. Install the consul-template command tool on docker01 and write the template

VII. Verify the real-time service discovery

I. Environment preparation

The overall workflow, originally illustrated with a schematic diagram, proceeds roughly as follows:

1. Deploy the consul service on the docker01 host from the binary package and run it in the background; it acts as the cluster leader.

2. docker02 and docker03 run the consul service as containers and join the cluster led by docker01.

3. Run a registrator container in the background on the docker02 and docker03 hosts to automatically discover the services provided by docker containers.

4. Deploy Nginx on docker01 to provide the reverse proxy service; run two web containers based on the Nginx image on each of the docker02 and docker03 hosts, serving different web page files so the effect can be tested.

5. Install the consul-template command on docker01; it writes the information registrator collects about the containers into a template and, finally, into the Nginx configuration file.

6. At this point, a client can access the Nginx reverse proxy server (docker01) and obtain the web page files served by the Nginx containers running on the docker02 and docker03 servers.

Note: registrator automatically discovers the services provided by docker containers and registers them in a back-end service registry (data center). Its main job here is to collect information about the services running in containers and send it to consul. Besides consul, other such registries include etcd, zookeeper, and so on.

Before you begin, please download the source code package required for the configuration in the blog post.

II. Deploy the consul service on docker01 from the binary package

[root@docker01 ~]# rz                                     # upload the package provided with this post
[root@docker01 ~]# unzip consul_1.5.1_linux_amd64.zip     # unpacking yields a single "consul" command
[root@docker01 ~]# mv consul /usr/local/bin/              # move it into the command search path
[root@docker01 ~]# chmod +x /usr/local/bin/consul         # grant execute permission
[root@docker01 ~]# nohup consul agent -server -bootstrap -ui -data-dir=/var/lib/consul-data -bind=192.168.20.6 -client=0.0.0.0 -node=master &
[1] 8330
[root@docker01 ~]# nohup: ignoring input and appending output to "nohup.out"
# After the command runs it prints the message above and holds the terminal; press Enter to get the prompt back.
# It creates a file named "nohup.out" in the current directory, which stores the running log of the consul service.
# consul now runs in the background and has returned its PID, which can be checked with the "jobs -l" command.

The relevant parameters of the above command are explained as follows:

-server: run the agent as a server;
-bootstrap: generally used when there is only a single server node, so that it elects itself leader;
-ui: enable the built-in web UI;
-bind: the IP address the service binds to (the host's own IP);
-client: the address clients may connect from (any address here);
-node: the node name used for communication within the cluster; it defaults to the hostname.

The open ports serve the following functions: 8300: server RPC between cluster nodes; 8301: gossip traffic within the cluster's LAN; 8302: gossip traffic between data centers; 8500: HTTP API and web UI; 8600: DNS interface.
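For example, once the "nginx" services from later in this post are registered, the DNS interface on port 8600 can be queried directly. This is a sketch; it assumes dig is installed and the agent is running on 192.168.20.6 as configured above:

```shell
# Ask Consul's DNS interface for the registered "nginx" service;
# it answers with the addresses of the healthy instances.
dig @192.168.20.6 -p 8600 nginx.service.consul
```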

Attach two query commands:

[root@docker01 ~]# consul info        # shows the leader and version information of this cluster
# for example: leader_addr = 192.168.20.6:8300
[root@docker01 ~]# consul members     # view the membership information of the cluster

At this point, a client can access port 8500 on docker01 to verify the deployment; the Consul web UI page appears.

III. Run the consul service as a container on the docker02 and docker03 hosts

# The docker02 server is configured as follows:
[root@docker02 ~]# docker run -d --name consul -p 8301:8301 -p 8301:8301/udp -p 8500:8500 -p 8600:8600/udp --restart=always progrium/consul -join 192.168.20.6 -advertise 192.168.20.7 -client 0.0.0.0 -node=node01
# In the command above, "-join" gives the IP address of the leader (that is, docker01),
# and "-advertise" gives this host's own IP address.

# The docker03 server is configured as follows:
[root@docker03 ~]# docker run -d --name consul -p 8301:8301 -p 8301:8301/udp -p 8500:8500 -p 8600:8600/udp --restart=always progrium/consul -join 192.168.20.6 -advertise 192.168.20.8 -client 0.0.0.0 -node=node02
# The same command as on docker02, but with this host's own IP address and a different node name.
# Note: node names within the cluster must be unique.

Note: the consul service of the host docker01 can also be deployed as a container, and this is just to demonstrate its multiple deployment methods.

At this point, on the docker01 host, execute the "consul members" command to view the information of docker02 and docker03, as follows:

[root@docker01 ~]# consul members
Node    Address            Status  Type    Build  Protocol  DC   Segment
master  192.168.20.6:8301  alive   server  1.5.1  2         dc1  <all>
node01  192.168.20.7:8301  alive   client  0.5.2  2         dc1  <default>
node02  192.168.20.8:8301  alive   client  0.5.2  2         dc1  <default>

A client can access port 8500 on 192.168.20.6; after the following steps, the service ports of the docker containers on the docker02 and docker03 hosts can be seen there as well:

IV. Run the registrator service as a container on the docker02 and docker03 hosts

# The docker02 host is configured as follows:
[root@docker02 ~]# docker run -d --name registrator -v /var/run/docker.sock:/tmp/docker.sock --restart=always gliderlabs/registrator consul://192.168.20.7:8500
# The command above sends the container information that registrator collects to port 8500 on this host for display.

# The docker03 host is configured as follows:
[root@docker03 ~]# docker run -d --name registrator -v /var/run/docker.sock:/tmp/docker.sock --restart=always gliderlabs/registrator consul://192.168.20.8:8500
# The same as on docker02: send the collected container information to port 8500 on this host for display.

V. Deploy the Nginx service on the host docker01 to provide a reverse proxy
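With registrator running on both hosts, consul's HTTP catalog API offers a quick sanity check that services are actually being registered. This is an optional sketch, assuming the leader's address from above:

```shell
# List every service name the cluster currently knows about.
curl -s http://192.168.20.6:8500/v1/catalog/services

# Show address/port details for a specific service, e.g. "nginx"
# (this name only appears once the Nginx containers below are running).
curl -s http://192.168.20.6:8500/v1/catalog/service/nginx
```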

The Nginx deployment steps are not annotated here. If you want to optimize the Nginx service, refer to the blog post: Nginx installation and Deep Optimization.

[root@docker01 ~]# tar zxf nginx-1.14.0.tar.gz -C /usr/src
[root@docker01 ~]# useradd -M -s /sbin/nologin www
[root@docker01 ~]# cd /usr/src/nginx-1.14.0/
[root@docker01 nginx-1.14.0]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-pcre --with-http_ssl_module && make && make install
[root@docker01 nginx-1.14.0]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/
[root@docker01 nginx-1.14.0]# nginx

VI. Install the consul-template command tool on docker01 and write the template

The purpose of consul-template is to write the information that registrator collects about the containers into a template, and ultimately into the Nginx configuration file.

1. Install the consul-template command tool (for a newer version, go to the consul-template releases page and download the latest):

[root@docker01 ~]# rz                                            # upload the package provided with this post
[root@docker01 ~]# unzip consul-template_0.19.5_linux_amd64.zip  # unpack
[root@docker01 ~]# mv consul-template /usr/local/bin/            # move into the command search path
[root@docker01 ~]# chmod +x /usr/local/bin/consul-template       # grant execute permission

2. Under the Nginx installation directory, write a template for the consul-template command tool to use, and configure the Nginx reverse proxy:

[root@docker01 ~]# cd /usr/local/nginx/
[root@docker01 nginx]# mkdir consul
[root@docker01 nginx]# cd consul/
[root@docker01 consul]# vim nginx.ctmpl        # create the template file
upstream http_backend {
    {{range service "nginx"}}
    server {{.Address}}:{{.Port}};
    {{end}}
}
# The "nginx" service name above is matched on the docker image name, not the container name.
# This block collects the IP addresses and ports of the Nginx-related services.
# Below is the reverse-proxy definition:
server {
    listen 8000;               # the listening port can be anything that does not conflict
    server_name localhost;
    location / {
        proxy_pass http://http_backend;
    }
}
# Save and exit after editing.
[root@docker01 consul]# nohup consul-template -consul-addr 192.168.20.6:8500 -template "/usr/local/nginx/consul/nginx.ctmpl:/usr/local/nginx/consul/vhost.conf:/usr/local/sbin/nginx -s reload" &
# The command above generates a vhost.conf file from the collected information.
# It must keep running in the background so that services are discovered and updated in real time.
[root@docker01 consul]# vim ../conf/nginx.conf        # include the generated vhost.conf in the main configuration file
    include /usr/local/nginx/consul/*.conf;
}
# Write the "include" line just above the closing curly brace at the end of the
# configuration file so that the vhost.conf file is loaded.

VII. Verify the real-time service discovery
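Before relying on the background process, the template can be checked with consul-template's dry-run mode, which prints the rendered output instead of writing the file. A sketch using the paths above (the /tmp preview path is arbitrary):

```shell
# Render the template once and print what would be written; nothing is
# actually written and no reload command runs, so this is safe at any time.
consul-template -consul-addr 192.168.20.6:8500 \
  -template "/usr/local/nginx/consul/nginx.ctmpl:/tmp/vhost.preview.conf" \
  -dry -once
```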

At this point, any Nginx-related container on docker02 or docker03 that runs in background ("-d") mode is added to the reverse proxy for scheduling. If a container stops, even accidentally, it is automatically removed from the reverse-proxy configuration file.

Now run two Nginx containers on docker02 and two more on docker03, with container names web01 through web04 and web page contents "this is a web01 test." through "this is a web04 test."

The purpose of preparing different web page files for it is to make it easier for the client to distinguish which container is being accessed.

Since the configuration process is similar, I'll write a process to run the Nginx container here, and the rest can follow suit.

The example configuration is as follows (run web01 and modify its home page file):

[root@docker02 ~]# docker run -d -P --name web01 nginx
[root@docker02 ~]# docker exec -it web01 /bin/bash
root@ff910228a2b2:/# echo "this is a web01 test." > /usr/share/nginx/html/index.html
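The remaining containers follow the same pattern; for instance, on docker03 both containers could be created with a short loop. A sketch, assuming the web03/web04 names used in the test output:

```shell
# On docker03: start two more Nginx containers with random host ports (-P)
# and give each one a distinguishable index page.
for name in web03 web04; do
  docker run -d -P --name "$name" nginx
  docker exec "$name" /bin/sh -c \
    "echo 'this is a $name test.' > /usr/share/nginx/html/index.html"
done
```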

Once docker02 and docker03 are running the four Nginx containers (they must run in the background, that is, with the "-d" option), accessing port 8000 on docker01 cycles through the web files provided by the four containers, as follows:

[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web02 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web03 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web04 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web02 test.
# Looking at the generated file, you will see the servers in the web pool:
[root@docker01 consul]# pwd
/usr/local/nginx/consul
[root@docker01 consul]# cat vhost.conf        # generated automatically from the template written earlier
upstream http_backend {
    server 192.168.20.7:32768;
    server 192.168.20.7:32769;
    server 192.168.20.8:32768;
    server 192.168.20.8:32769;
}
server {
    listen 8000;
    server_name localhost;
    location / {
        proxy_pass http://http_backend;
    }
}
# Since consul-template runs in the background, this file is regenerated
# and the Nginx service reloaded as soon as a container change is detected.

If you delete all the Nginx containers on docker02 and docker03 except web01 and then access port 8000 of the Nginx proxy server again, only the web01 page is served; in the vhost.conf file, the previously added server addresses and ports are gone as well (stop or delete the containers yourself):

[root@docker01 consul]# cat vhost.conf        # only the IP and port of the web01 container remain in the file
upstream http_backend {
    server 192.168.20.7:32768;
}
server {
    listen 8000;
    server_name localhost;
    location / {
        proxy_pass http://http_backend;
    }
}
# However many times you visit, only the web01 page is returned:
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.

At this point, the consul+registrator+docker real-time service discovery is configured.
