Many newcomers are unclear on how to use Docker, Docker Compose, and Rancher to build a deployment pipeline. To help with that, this article explains the process in detail. I hope you get something out of it.
Along the way, we will discuss how to implement service discovery with Consul in a Rancher environment.
In this final article of the series on building a deployment pipeline, we will explore some of the challenges that come with switching to Rancher for cluster scheduling. In the previous article, we handed scheduling over to Rancher, so operators are no longer responsible for choosing where each container runs. For this new scheme to work, the rest of the environment must be able to find out where the scheduler has placed services and how to reach them. We will also discuss how to use labels to influence the scheduler, adjusting container placement and avoiding port binding conflicts. Finally, we will streamline our upgrade process by taking advantage of Rancher's rollback feature.
Before introducing Rancher, our environment was fairly static. We always deployed containers to the same hosts, and deploying to a different host meant updating configuration files to reflect the new location. For example, if we wanted to add another instance of the 'java-service-1' application, we also had to update the load balancer to point to the IP of the additional instance. With a scheduler, we can no longer predict where containers will be deployed, so the environment has to configure itself dynamically and adapt to changes automatically. To do this, we use service registration and service discovery.
A service registry gives us a single source of truth for the location of every application in the environment. Instead of hard-coding service locations, our applications can query the registry through an API and reconfigure themselves automatically whenever the environment changes. Rancher provides service discovery out of the box through its DNS and metadata services. However, because we run a mix of Docker and non-Docker applications, we cannot rely on Rancher alone for service discovery. We need a separate tool to track the location of all our services, and Consul meets this requirement.
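For example, any application or operator can ask Consul for the current instances of a service through its HTTP catalog API. A minimal sketch, assuming the Consul address used later in this article and that jq is installed:

# ask Consul where "java-service-1" is currently running; the catalog API
# returns one entry per registered instance
curl -s http://consul.stage.abc.net:8500/v1/catalog/service/java-service-1 \
  | jq '.[] | {address: .ServiceAddress, port: .ServicePort}'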
We will not go into detail about how to set up Consul in your environment, but we will briefly describe how we use Consul at ABC. In each environment, we run a Consul cluster deployed as containers. We deploy a Consul agent on every host in the environment, and on hosts running Docker we also deploy a registrator container. Registrator watches each daemon's Docker event API and automatically updates Consul on container lifecycle events: when a new container is deployed, registrator registers the service with Consul, and when the container is removed, registrator deregisters it.
List of Consul services
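The article doesn't pin down the exact registrator image; a minimal sketch using the common gliderlabs/registrator image, pointed at the local Consul agent:

# run registrator on a Docker host: it watches the Docker event API through
# the mounted socket and registers/deregisters containers with the local Consul agent
docker run -d --name registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest \
  consul://localhost:8500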
With all of our services registered in Consul, we can run consul-template on the load balancer to dynamically populate its upstream list from the service data stored in Consul. For our NGINX load balancer, we can create a template that populates the backends for the 'java-service-1' application:
# upstreams.conf
upstream java-service-1 {
  {{range service "java-service-1"}}
  server {{.Address}}:{{.Port}};
  {{else}}
  server 127.0.0.1:65535; # force a 502
  {{end}}
}
This template looks up the list of services registered in Consul as "java-service-1". It then loops through that list, adding a server line with the IP address and port of each application instance. If no "java-service-1" instances are registered in Consul, we fall back to a dummy server that forces a 502, so that NGINX does not error out on an empty upstream block.
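Before wiring the template into NGINX, it can be rendered once to standard output to verify it. A quick sketch using consul-template's -dry and -once flags:

# render the template once to stdout, without writing the file or reloading NGINX
consul-template -consul consul.stage.abc.net:8500 \
  -template "/etc/nginx/upstreams.conf.tmpl" -dry -once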
We can run consul-template in daemon mode to monitor changes to Consul, re-render the template when changes occur, and then reload NGINX to apply the new configuration.
TEMPLATE_FILE=/etc/nginx/upstreams.conf.tmpl
RELOAD_CMD="/usr/sbin/nginx -s reload"

consul-template -consul consul.stage.abc.net:8500 \
  -template "${TEMPLATE_FILE}:${TEMPLATE_FILE//.tmpl/}:${RELOAD_CMD}"
With the load balancer dynamically adapting to changes in the rest of the environment, we can rely entirely on the Rancher scheduler to make the complex decisions about where our services should run. However, our "java-service-1" application binds TCP port 8080 on the Docker host, so if multiple application containers are scheduled onto the same host, the port bindings will conflict and the later containers will fail to start. To avoid this, we can steer the scheduler with scheduling rules.
Rancher lets us influence the scheduler by setting container labels in the docker-compose.yml file. Scheduling rules can include affinity, negation, and "soft" enforcement (meaning: avoid if possible). For our 'java-service-1' application, we know that only one container can run on a host at any given time, so we can set an anti-affinity rule based on the container name. The scheduler will then look for a Docker host that is not already running a container named "java-service-1". Our docker-compose.yml file looks like this:
java-service-1:
  image: registry.abc.net/java-service-1:${VERSION}
  container_name: java-service-1
  ports:
    - 8080:8080
  labels:
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=java-service-1
Note the introduction of the "labels" key. All scheduling rules are added as labels. Labels can be applied to both Docker hosts and containers. When we register hosts with Rancher, we can attach labels to them and later use those labels to target scheduling of deployments. For example, if we have a set of Docker hosts with SSD drives for storage-optimized workloads, we can add the host label storage=ssd.
Rancher host label
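One way to attach such a label is at host registration time, via the CATTLE_HOST_LABELS environment variable on the Rancher agent (labels can also be edited later in the UI). A sketch; the agent version and registration URL below are placeholders:

# register a host with the storage=ssd label; multiple labels can be
# joined with '&', e.g. 'storage=ssd&env=stage'
sudo docker run -d --privileged \
  -e CATTLE_HOST_LABELS='storage=ssd' \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.2.11 \
  http://rancher.abc.net:8080/v1/scripts/<registration-token>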
Containers that need a storage-optimized host can then be labeled so that the scheduler deploys them only on matching hosts. We will update our "java-service-1" application to deploy only on storage-optimized hosts:
java-service-1:
  image: registry.abc.net/java-service-1:${VERSION}
  container_name: java-service-1
  ports:
    - 8080:8080
  labels:
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=java-service-1
    io.rancher.scheduler.affinity:host_label: storage=ssd
By using labels, we can fine-tune where applications are deployed based on the capabilities they require, rather than pinning a specific set of containers to individual hosts. This makes it possible to switch to Rancher for cluster scheduling even when some applications must run on particular classes of hosts.
Finally, we can use Rancher's rollback feature to streamline our service upgrades. In our deployment workflow, we call rancher-compose to instruct Rancher to perform an upgrade on the service stack. The upgrade process is roughly as follows:
1. Start the upgrade by pulling the new image.
2. One by one, stop each existing container and start a new container in its place (the pace of this step can be tuned, as sketched after this list).
3. The upgrade completes when the deployer logs in to the UI and selects "Finish Upgrade".
4. The stopped old service containers are deleted.
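A sketch of tuning the pace of step 2, assuming rancher-compose's --batch-size and --interval options (interval in milliseconds); the values here are illustrative:

# upgrade one container at a time, waiting five seconds between replacements
/usr/local/bin/rancher-compose --verbose \
  -f ${docker_dir}/docker-compose.yml \
  -r ${docker_dir}/rancher-compose.yml \
  up -d --upgrade --batch-size 1 --interval 5000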
Rancher upgrade
This workflow is fine when deployments of a given service are infrequent. However, while a service has an upgrade pending (that is, until the deployer chooses to finish it), you cannot start a new upgrade until a "finish upgrade" or "rollback" action is performed. The rancher-compose utility lets us select that action programmatically instead of relying on the deployer. For example, if you have automated tests for the service, you can run them after the rancher-compose upgrade returns. Depending on the result of those tests, rancher-compose can be called again, this time telling the stack to either finish the upgrade or roll back. A rough example from our Jenkins deployment job might look like this:
# for the full job, see part 3 of this series
/usr/local/bin/rancher-compose --verbose \
  -f ${docker_dir}/docker-compose.yml \
  -r ${docker_dir}/rancher-compose.yml \
  up -d --upgrade

JAVA_SERVICE_1_URL=http://java-service-1.stage.abc.net:8080/api/v1/status
if curl -s ${JAVA_SERVICE_1_URL} | grep -q "OK"; then
  # looks good, confirm or "finish" the upgrade
  /usr/local/bin/rancher-compose --verbose \
    -f ${docker_dir}/docker-compose.yml \
    -r ${docker_dir}/rancher-compose.yml \
    up --confirm-upgrade
else
  # looks like there's an error, roll back the containers
  # to the previously deployed version
  /usr/local/bin/rancher-compose --verbose \
    -f ${docker_dir}/docker-compose.yml \
    -r ${docker_dir}/rancher-compose.yml \
    up --rollback
fi
This logic calls our application's status endpoint to perform a simple health check. If the output contains "OK", we finish the upgrade; otherwise we roll back to the previously deployed version. If you don't have automated tests, another option is to simply always finish, or "confirm", the upgrade:
# for the full job, see part 3 of this series
/usr/local/bin/rancher-compose --verbose \
  -f ${docker_dir}/docker-compose.yml \
  -r ${docker_dir}/rancher-compose.yml \
  up -d --upgrade --confirm-upgrade
If you then discover that you need to roll back, you can simply redeploy the previous version using the same deployment job. This is not as convenient as Rancher's built-in upgrade and rollback features, but it keeps the stack out of the pending-upgrade state and so leaves it free for future upgrades.
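Concretely, that can mean re-running the same job with VERSION pinned to the last known-good tag; the tag below is hypothetical:

# hypothetical: redeploy the previous version by pinning VERSION to the
# last known-good tag and running the normal upgrade-and-confirm flow
VERSION=1.0.1-21-9c4d210 \
/usr/local/bin/rancher-compose --verbose \
  -f ${docker_dir}/docker-compose.yml \
  -r ${docker_dir}/rancher-compose.yml \
  up -d --upgrade --confirm-upgrade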
When a service is rolled back in Rancher, its containers are redeployed at the previous version. This can have unintended consequences when services are deployed with a floating tag such as "latest" or "master". For example, suppose the 'java-service-1' application was previously deployed with the tag 'latest'. We make a change to the image and push it to the registry, which moves the "latest" tag to point at the new image. We upgrade using the tag "latest", and after testing we decide the application needs to be rolled back. Rolling back the stack in Rancher still redeploys the "latest" image, because the tag has not been moved back to the previous image. The rollback technically succeeds, but the intended effect of deploying the most recent working image is lost entirely. At ABC, we avoid this situation by always deploying with tags tied to a specific version of the application. So instead of deploying our "java-service-1" application with the tag "latest", we use a version tag such as "1.0.1-22-7e56158". This ensures that a rollback always points at the previous working deployment of the application in that environment.
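The article doesn't show how these version tags are produced; one plausible recipe, assuming a Jenkins BUILD_NUMBER, a VERSION file in the repository, and a git checkout (all hypothetical details):

# build a tag like "1.0.1-22-7e56158":
# <app version from a VERSION file>-<Jenkins build number>-<short git hash>
APP_VERSION=$(cat VERSION)
GIT_SHA=$(git rev-parse --short=7 HEAD)
VERSION="${APP_VERSION}-${BUILD_NUMBER}-${GIT_SHA}"

docker build -t registry.abc.net/java-service-1:${VERSION} .
docker push registry.abc.net/java-service-1:${VERSION}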