
How Rancher solves the problems that arise in deployment and container management

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains in detail how Rancher solves the problems that arise in deployment and container management. The editor shares it here for reference; we hope you come away with a solid understanding of the topic.

We will work through Rancher step by step and discuss in detail how it solves the problems that arise in deployment and container management. The result: developers can easily modify their application deployment logic, and operators can see when an application was deployed.

Challenges in using Docker-Compose

First, operations staff must manually plan the placement of every service. The deployer decides which application runs on which host, which means staying aware of each host's remaining resources at all times. If a host or container crashes, the operator must redeploy the affected applications by hand. In practice this means hosts are often load-imbalanced, and services take a long time to recover after a crash.

Second, when using Docker Compose it is very difficult to get the current state of your services. Operators, project managers, and developers constantly ask questions like: "which version of application XX is running in this environment?" When execution plans are managed by hand, answering usually means finding the designated engineer, who logs in to the server and runs `docker ps` to inspect the containers. Rancher removes this friction: everyone can see the state of deployed services without pulling in the operations staff.
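To make the pain concrete, here is a minimal sketch of that manual version check, run against a sample line of `docker ps`-style output (the container name, registry, and tag are made up for illustration, since there may be no live Docker host at hand):

```shell
# Without Rancher, "which version is running?" means logging in and parsing
# docker ps output by hand, e.g.:
#   docker ps --format '{{.Names}} {{.Image}}'
# A sample line of such output (hypothetical names):
line="java-service-1 registry.abc.net/java-service-1:1.0.1"
image=${line#* }      # strip the container name, keep the image reference
version=${image##*:}  # keep everything after the last colon: the tag
echo "$version"       # → 1.0.1
```

With Rancher, the same answer is one glance at the UI or one API call, for anyone with read access to the environment.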

Before adopting Rancher, we evaluated a number of other solutions for managing Docker hosts and clusters. None of them addressed managing hosts and clusters across multiple environments, which had become one of our biggest burdens. With services running in eight different environments under different loads, we needed a single, unified way to manage the clusters rather than eight separate control planes. We also wanted rebuilding an environment to be a matter of minutes, so that developers could reshape the development environment at will, while access to production stayed limited and read-only. For requirements like these, a centralized management scheme with role-based access control (RBAC) is essential. We first decided to try Rancher because it is very simple to deploy.

How Rancher meets these challenges

In just half a day, using AWS ELB, ElastiCache, RDS, and our existing Docker hosts, we had Rancher deployed and running. Convenient configuration of authentication is another of Rancher's strengths.

We will not go into the details of deploying Rancher itself, which the Rancher deployment documentation already covers well. Instead, starting from that initial setup, we will show how we migrated our original configuration (described in the first and second parts of this tutorial).

Let's start by creating the different environments. To keep things simple, we set up a development environment (dev), a staging environment (stage), and a production environment (prod). Each environment already has Docker hosts running Ubuntu, configured by our internal Ansible playbooks, which install Docker and our monitoring agent and apply some organization-specific changes. In Rancher, adding an existing Docker host to an environment takes a single command that registers the host with the Rancher server.

Add a Rancher host

In most cases, adding a host involves a series of manual steps: a few clicks in the web UI, switching to the right environment, and finally running a command on the target system. With the Rancher API, however, we can fully automate this using Ansible. Below is an excerpt of the relevant playbook (largely based on logic from Hussein Galas's repo).

- name: install dependencies for uri module
  apt: name=python-httplib2 update_cache=yes

- name: check if the rancher-agent is running
  command: docker ps --filter 'name=rancher-agent'
  register: containers

- name: get registration command from rancher
  uri:
    method: GET
    user: "{{ RANCHER_API_KEY }}"
    password: "{{ RANCHER_SECRET_KEY }}"
    force_basic_auth: yes
    status_code: 200
    url: "https://rancher.abc.net/v1/projects/{{ RANCHER_PROJECT_ID }}/registrationtokens"
    return_content: yes
    validate_certs: yes
  register: rancher_token_url
  when: "'rancher-agent' not in containers.stdout"

- name: register the host machine with rancher
  shell: >
    docker run -d --privileged
    -v /var/run/docker.sock:/var/run/docker.sock
    {{ rancher_token_url.json['data'][0]['image'] }}
    {{ rancher_token_url.json['data'][0]['command'].split() | last }}
  when: "'rancher-agent' not in containers.stdout"

With the environments created and the hosts registered with the Rancher server, let's look at how to integrate our deployment workflow into Rancher. Each Docker host runs containers that were deployed through Ansible, driven by Jenkins. Rancher provides the following out of the box:

Manage existing containers (e.g. start, modify, view logs, start an interactive shell)

Get information about running and stopped containers (such as image, initial command, port mappings, and environment variables)

View resource usage at the host and container levels (e.g. CPU usage, memory usage, and disk and network usage)

Run individual, standalone containers

Soon after registering the Docker hosts with the Rancher server, we could see container status across all environments. Better yet, sharing that information with another team only requires granting them limited permissions on an environment. This eliminated the need to ask an operator to log in to a Docker host and query it manually, and it reduced the volume of requests for environment information, since each team now has appropriate access of its own. Giving the development team read-only access to environment information, for example, builds a bridge between development and operations: both teams pay more attention to the state of the environment than before. On that basis, troubleshooting becomes a collaborative process between groups rather than a one-way flow that depends on relaying information, and that collaboration reduces the total time to resolve incidents.

So far we have added our existing Docker hosts to the Rancher server. Building on the first part of this tutorial covering Jenkins and Rancher, the next improvement is to our existing deployment pipeline: we will modify it so that Rancher Compose replaces the Docker Compose invocation previously driven by Ansible. Before moving on, though, we need some background on Rancher's applications, scheduling, Docker Compose, and Rancher Compose.

**Applications and services:** Rancher distinguishes standalone containers (that is, containers deployed outside Rancher, or one-off containers created through the Rancher UI) from applications and services. Simply put, an application is a set of services, and containers must belong to services to be part of an application (applications and services are described in more detail later). Standalone containers must be scheduled manually.

**Scheduling:** with our previous deployment approach, operations staff decided which host each container should run on. With a deployment script, the operator chose which host or hosts to run the script against; with Ansible, the operator chose which hosts or groups to target in Jenkins. Either way, a human had to make placement decisions, usually without reliable data to base them on, which hurt our deployments (for example, picking a host whose CPU is already at 100%). Solutions such as Docker Swarm, Kubernetes, Mesos, and Rancher use schedulers to solve this. For each operation to be performed, the scheduler gathers information about a set of hosts and determines which are suitable. It narrows the candidates using default requirements or user-defined constraints, such as CPU usage and affinity or anti-affinity rules (for example, forbidding two identical containers on the same host). For the operator in charge of deployment, a scheduler greatly reduces the workload (especially during late-night deployments), because it computes all of this far faster and more accurately than a human can. Rancher provides a scheduler out of the box when services are deployed through an application.
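As a sketch of how such constraints are expressed, Rancher 1.x reads scheduling hints from service labels in the compose file. The label keys below follow the Rancher scheduling documentation; the service name, image, and host label values are hypothetical:

```yaml
java-service-1:
  image: registry.abc.net/java-service-1:${VERSION}
  labels:
    # affinity: only schedule onto hosts carrying the label env=prod
    io.rancher.scheduler.affinity:host_label: env=prod
    # anti-affinity: never place two containers of this service on one host
    io.rancher.scheduler.affinity:container_label_ne: io.rancher.stack_service.name=$${stack_name}/$${service_name}
```

With labels like these in place, the placement decision the operator used to make by hand is encoded once and enforced by the scheduler on every deployment.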

**Docker Compose:** Rancher uses Docker Compose to create applications and define services. Since we had already converted our services to Docker Compose files, creating applications from them was straightforward. Applications can be created manually through the UI or quickly from the command line (CLI) through Rancher Compose.

**Rancher Compose:** Rancher Compose is a tool for managing applications and services in every Rancher environment from the command line (CLI). It also exposes additional Rancher functionality through the rancher-compose.yml file. This is a purely additive file; it does not replace the original docker-compose.yml. In rancher-compose.yml you can define, for example:

Upgrade strategy for each service

Health checks for each service

Desired scale for each service

These are all very useful Rancher features that you won't get with Docker Compose or the Docker daemon alone. For the full list of what Rancher Compose offers, see its documentation.
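As an illustration of those three settings, here is a hypothetical rancher-compose.yml. The key names follow the Rancher 1.x Rancher Compose documentation; the values, and the assumption that the service exposes a /health endpoint on port 8080, are examples only:

```yaml
java-service-1:
  scale: 3                    # desired number of containers
  upgrade_strategy:
    batch_size: 1             # replace one container at a time
    start_first: true         # start the new container before stopping the old
  health_check:
    port: 8080
    request_line: GET /health HTTP/1.0   # assumes the service serves /health
    interval: 2000            # milliseconds between checks
    response_timeout: 2000
    healthy_threshold: 2
    unhealthy_threshold: 3
```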

By handing deployment over to Rancher Compose in place of our previous Ansible tooling, we can easily migrate and deploy services as Rancher applications. We can then drop the DESTINATION parameter, but we keep the VERSION parameter because it is interpolated into the docker-compose.yml file. Below is a shell snippet of the deployment logic when deploying with Jenkins:

export RANCHER_URL=http://rancher.abc.net/
export RANCHER_ACCESS_KEY=...
export RANCHER_SECRET_KEY=...

if [ -f docker/docker-compose.yml ]; then
  docker_dir=docker
elif [ -f /opt/abc/dockerfiles/java-service-1/docker-compose.yml ]; then
  docker_dir=/opt/abc/dockerfiles/java-service-1
else
  echo "No docker-compose.yml found. Can't continue!"
  exit 1
fi

if ! [ -f ${docker_dir}/rancher-compose.yml ]; then
  echo "No rancher-compose.yml found. Can't continue!"
  exit 1
fi

/usr/local/bin/rancher-compose --verbose \
  -f ${docker_dir}/docker-compose.yml \
  -r ${docker_dir}/rancher-compose.yml \
  up -d --upgrade

Reading through the snippet, it does the following:

Define access to our Rancher server via environment variables.

Locate the docker-compose.yml file, or fail with an error.

Locate the rancher-compose.yml file, or fail with an error.

Run rancher-compose non-blocking (-d, so it does not sit tailing logs) with --upgrade to upgrade the existing service.
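The two file checks can be folded into one helper. A minimal sketch, demonstrated against a throwaway directory (the function name and the temporary directory are ours, not part of the original script):

```shell
# require_file: fail loudly when a compose file the deploy needs is missing.
require_file() {
  if ! [ -f "$1" ]; then
    echo "No $(basename "$1") found. Can't continue!" >&2
    return 1
  fi
}

# Demonstrate with a temp dir that has only one of the two files:
docker_dir=$(mktemp -d)
touch "${docker_dir}/docker-compose.yml"

require_file "${docker_dir}/docker-compose.yml" && echo "docker-compose.yml ok"
require_file "${docker_dir}/rancher-compose.yml" || echo "rancher-compose.yml missing"
```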

As you may have noticed, most of the logic is the same; the biggest differences are that rancher-compose performs the deployment instead of Ansible, and that each service gains a rancher-compose.yml file. For our java-service-1 application, the docker-compose and rancher-compose files now look like this:

docker-compose.yml:

java-service-1:
  image: registry.abc.net/java-service-1:${VERSION}
  container_name: java-service-1
  expose:
    - 8080
  ports:
    - 8080:8080

rancher-compose.yml:

java-service-1:
  scale: 3
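The ${VERSION} placeholder in docker-compose.yml is filled in from the environment at deploy time, which is why the Jenkins job keeps VERSION as a parameter. A minimal sketch of that substitution, previewed with plain shell and sed outside of compose (the version value is an example):

```shell
export VERSION=1.0.1   # in Jenkins this arrives as a job parameter
line='image: registry.abc.net/java-service-1:${VERSION}'
# Replace the literal ${VERSION} token the way compose's env interpolation would:
rendered=$(printf '%s\n' "$line" | sed 's/${VERSION}/'"$VERSION"'/')
echo "$rendered"   # → image: registry.abc.net/java-service-1:1.0.1
```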

Before we begin, let's review the deployment process:

Developers push code changes to git

Jenkins unit-tests the code and triggers the downstream job when the tests pass

The downstream job builds a Docker image from the new code and pushes it to our own Docker registry

A deployment ticket is created containing the application name, version number, and target environment

DEPLOY-111: App: JavaService1, Branch: "release/1.0.1", Environment: Production

The deployment engineer runs the application's Jenkins deployment job, passing the version number as a parameter. Rancher Compose runs, creating or updating the application in the target environment, and finishes once the desired scale is reached. The deployment engineer and the development engineer each validate the service manually, and the deployment engineer confirms the upgrade in the Rancher UI.

Deploying our services with Rancher gives us its built-in scheduling, scaling, recovery, upgrade, and rollback, so the deployment process itself takes little effort. At the same time, we found that migrating deployment from Ansible to Rancher was very little work: essentially just adding rancher-compose.yml files to what we already had. However, letting Rancher schedule our containers means it is harder to know which host an application runs on. For example, we no longer decide where java-service-1 runs, and for the backend the application has no static IP for load-balancing purposes, so we need a way for our applications to discover each other. Finally, java-service-1 explicitly binds port 8080 on whichever Docker host its container lands on; if another service is already bound to that port, the container will fail to start. Previously, the engineer making scheduling decisions handled such conflicts by hand; now it is best to express this information to the scheduler so it can avoid them.

That concludes how Rancher solves the problems in deployment and container management. We hope the content above was helpful. If you found the article useful, feel free to share it so more people can see it.
