How to create a continuous deployment pipeline for docker

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains how to create a continuous deployment pipeline for Docker. The approach described here is simple and practical, so let's walk through it step by step.

Create a continuous deployment pipeline

We have created the test environment, and in the previous article we also built the CI pipeline to create applications, package applications into containers, and conduct integration testing. Now let's move on to the final chapter of this series of articles, extending the CI pipeline to create a continuous deployment pipeline.

Publish a Docker image

We first publish the packaged image to a Docker registry. For convenience, we use a public DockerHub repository, but for real development projects we recommend pushing Docker images to a private registry. Let's create a new Freestyle Project in Jenkins: click the New Item button and name the task push-go-auth-image. After completing these steps, you will be taken to the Jenkins job configuration page, where you can define the steps required to push your go-auth image to DockerHub.

Because this job is a follow-up to the pipeline we built in the previous chapter, it will have a configuration similar to the go-auth-integration-test job. First, make the build parameterized and add the GO_AUTH_VERSION variable.

To push the image, select the Add build step drop-down menu, and then choose the Execute shell option. Add the following commands to the resulting text box. In these commands, we log in to DockerHub and push the image we built earlier. Here we push to the usman/go-auth repository; you will need to push to your own DockerHub repository.
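A hedged sketch of what those shell commands might look like. DOCKER_USER and DOCKER_PASS are assumed names for the masked credential variables (see the Mask Passwords note below); the guard lets the sketch be dry-run outside Jenkins:

```shell
#!/bin/sh
# Sketch of the push-go-auth-image "Execute shell" step.
# DOCKER_USER / DOCKER_PASS are assumed to be injected by the Mask
# Passwords plugin; GO_AUTH_VERSION is the build parameter.
set -e
GO_AUTH_VERSION="${GO_AUTH_VERSION:-develop}"
IMAGE="usman/go-auth:${GO_AUTH_VERSION}"

echo "Pushing ${IMAGE}"
# Only runs when credentials are provided and docker is available.
if [ -n "${DOCKER_USER:-}" ] && command -v docker >/dev/null 2>&1; then
    docker login -u "${DOCKER_USER}" -p "${DOCKER_PASS}"
    docker push "${IMAGE}"
fi
```

In the real job, GO_AUTH_VERSION is supplied by the parameterized build rather than defaulted.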

As mentioned in the previous chapter, we use the git-flow branching model, in which all feature branches are merged into the develop branch. To continuously deploy changes to the integration environment, we need a simple mechanism to generate the latest image from the develop branch. During packaging, we use GO_AUTH_VERSION to tag the Docker image (for example, docker build -t usman/go-auth:${GO_AUTH_VERSION} .). By default, this version is develop, but later in this chapter we will create new releases of the application and use the CI/CD pipeline to build, package, test, and deploy them to our integration environment. Note that in this scenario we always overwrite usman/go-auth:develop from the develop branch, which prevents us from referencing historical builds and performing rollbacks. A simple change to the pipeline fixes this: add the Jenkins build number to the version name, for example usman/go-auth:develop-14.
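For illustration, a build step that also tags with the Jenkins build number might look like the following sketch. BUILD_NUMBER is provided by Jenkins; the guard lets it be dry-run outside a checkout with a Dockerfile:

```shell
#!/bin/sh
# Sketch: tag each build with the Jenkins build number so historical
# images such as usman/go-auth:develop-14 remain available for rollback.
set -e
GO_AUTH_VERSION="${GO_AUTH_VERSION:-develop}"
BUILD_NUMBER="${BUILD_NUMBER:-14}"
TAG="${GO_AUTH_VERSION}-${BUILD_NUMBER}"

echo "Building usman/go-auth:${TAG}"
# Only runs where docker and a Dockerfile are present.
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
    docker build -t "usman/go-auth:${TAG}" .
fi
```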

You need to specify your DockerHub user name, password, and email address. You can supply them as build parameters at each run, or use the Jenkins Mask Passwords Plugin to define them safely in the main Jenkins configuration and inject them into the build. Make sure that "Mask passwords (and enable global passwords)" is enabled for your job under Build Environment.

Now we need to make sure that this job is triggered after the integration test job. To do this, we need to update the integration test job to trigger a parameterized build with the current build parameters. This means that after each successful run of the integration test job, we will push the tested image to DockerHub.

Finally, we need to trigger the deployment job after the image is successfully pushed to DockerHub. Just as with the other jobs, we can do this by adding a post-build action.

Deploy to an integrated environment

We will use the Rancher Compose CLI to stop the running environment, pull the latest image from DockerHub, and restart the environment. (As a reminder, the Updates API is still evolving and may change; new features will certainly be added in the coming weeks or months, so check the documentation for updates.) Before we create a Jenkins job for continuous deployment, let's walk through these steps manually.

The simplest approach would be to stop all services (the auth service, the load balancer, and mysql), pull the latest images, and start everything again. However, we only want to update the application, not stop the whole long-running environment, so that approach is not ideal. To update just the application, we first stop auth-service, which can be done with the stop command in Rancher Compose.
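A minimal sketch of that stop, assuming the stack is named go-auth and the service auth-service, as in the examples in this series:

```shell
#!/bin/sh
# Sketch: stop only the auth service, leaving the rest of the stack up.
set -e
STACK="go-auth"
SERVICE="auth-service"

echo "Stopping ${SERVICE} in stack ${STACK}"
# Guarded so the sketch can be dry-run without a Rancher environment.
if command -v rancher-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    rancher-compose --project-name "${STACK}" stop "${SERVICE}"
fi
```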

This stops all containers running the go-auth service; you can open the stack in the Rancher UI to verify that the service's state is now Inactive. Next, we have Rancher pull the image version we want to deploy. We can now specify the version to run dynamically, without updating the template every time. If the same image version is pushed multiple times, add the --pull switch to make sure we are using the latest copy of the image.
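As a sketch, starting the service at a dynamically chosen version might look like this. It assumes the compose template references the image as usman/go-auth:${GO_AUTH_VERSION}; stack and service names are carried over from this series:

```shell
#!/bin/sh
# Sketch: start the auth service at a chosen image version; --pull
# forces a fresh copy even if the same tag was pushed before.
set -e
GO_AUTH_VERSION="${GO_AUTH_VERSION:-develop}"
export GO_AUTH_VERSION

echo "Starting auth-service at usman/go-auth:${GO_AUTH_VERSION}"
# Guarded so the sketch can be dry-run without a Rancher environment.
if command -v rancher-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    rancher-compose --project-name go-auth up -d --pull auth-service
fi
```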

We can also use the upgrade function, with the following command, to achieve rolling updates of the environment with zero downtime. We will discuss rolling updates further in the next section. After the upgrade completes, you can use the --rollback flag to revert to the previous state or --confirm-upgrade to confirm the change.
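A sketch of that upgrade-then-confirm (or roll back) flow; the stack and service names are assumptions from this series, and the flags are those documented for rancher-compose:

```shell
#!/bin/sh
# Sketch: rolling upgrade, then confirm or roll back after verification.
set -e
STACK="go-auth"

echo "Upgrading auth-service in ${STACK}"
# Guarded so the sketch can be dry-run without a Rancher environment.
if command -v rancher-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    rancher-compose --project-name "${STACK}" up --upgrade auth-service
    # After verifying the new version:
    rancher-compose --project-name "${STACK}" up --confirm-upgrade auth-service
    # ...or, to revert to the previous state instead:
    # rancher-compose --project-name "${STACK}" up --rollback auth-service
fi
```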

Now that we know how to run our update, we create a Jenkins job in the pipeline to perform this operation. As before, create a new freestyle project named deploy-integration. Like other jobs, it is a parameterized build, using GO_AUTH_VERSION as a string parameter. Next, we will copy the artifacts from the upstream build-go-auth job.

Finally, we add the Rancher Compose up command we used earlier to an Execute Shell build step. Note that you also need to set up the Rancher Compose CLI on the Jenkins host in advance so that it is available on the system path for your builds. The content of the shell step is similar to the following code snippet. If you have multiple Rancher Compose nodes, the load balancer container may start on a different host, and your Route 53 record set may need to be updated accordingly.
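A hedged sketch of that Execute Shell step; it assumes the docker-compose.yml and rancher-compose.yml templates are the artifacts copied from the upstream build-go-auth job:

```shell
#!/bin/sh
# Sketch of the deploy-integration "Execute shell" step.
set -e
GO_AUTH_VERSION="${GO_AUTH_VERSION:-develop}"
export GO_AUTH_VERSION

echo "Deploying go-auth version ${GO_AUTH_VERSION}"
# Guarded so the sketch can be dry-run without a Rancher environment.
if command -v rancher-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    rancher-compose -f docker-compose.yml -r rancher-compose.yml \
        --project-name go-auth up -d --upgrade --pull --confirm-upgrade
fi
```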

With our two new Jenkins jobs, the pipeline we built in the previous chapter now looks like the figure below. Each check-in to our sample application is compiled to make sure it is free of syntax errors and passes the automated tests. The changes are then packaged, run through the integration tests, and finally deployed for manual testing. These five steps provide a good baseline template for building pipelines and help move code from development to testing and deployment. A continuous deployment pipeline ensures that code is not only vetted by the automated system but also quickly made available to testers. It can also serve as a model for automating production deployments, testing operational tools and code, and continuously deploying applications.

Release and deploy a new version

After we deploy the code to a continuously tested environment, the QA team tests the changes for some time. Once they determine the code is ready, we can create a release and deploy it to production. Releasing with git-flow is similar to the feature-branch workflow we discussed in the previous chapter: we start a release with the git flow release start [Release Name] command (shown below). This creates a new release branch with that name, in which we perform housekeeping such as bumping the version number and making final changes.

When this is done, we run the git flow release finish command to merge the release branch into the master branch. This way, master always reflects the latest released code. In addition, each release is tagged, so the release history is preserved. If we don't need to make any other changes, we can confirm the final release.
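The release steps above can be sketched as follows; "1.1" is the example release name used later in this article, and the commands are shown as a dry run (remove the echoes to execute for real in a git-flow-initialized repository):

```shell
#!/bin/sh
# Dry-run sketch of the git-flow release commands described above.
RELEASE="1.1"
echo "git flow release start ${RELEASE}"
#   ...bump version numbers, make final fixes, commit...
echo "git flow release finish ${RELEASE}"
```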

The final step is to push the release to the remote repository:

git push origin master
git push --tags   # pushes the v1 tag to the remote repository

If you are using GitHub to host the repository, you should see the new release there by now. It is also a good idea to push an image tagged with the release name to DockerHub. Let's run the first job to trigger our CD pipeline. As you may recall, we set up the Git Parameter plugin for the CI pipeline so that we can list all the git tags that match a filter. Instead of the default develop branch, when we manually trigger the pipeline we can now choose one of the git tags. For example, below we have two released versions of the application; we select one of them to start the integration and deployment pipeline.

Then, with just a few clicks, version 1.1 of our application is deployed to the long-running integration environment through the following steps:

1. Get the selected release from git
2. Build the application and run the unit tests
3. Create a new image tagged v1.1 (e.g. usman/go-auth:v1.1)
4. Run the integration tests
5. Push the image (usman/go-auth:v1.1) to DockerHub
6. Deploy this version to our integration environment

Deployment strategy

There are many challenges in managing a long-running environment, one of which is minimizing (ideally eliminating) downtime during releases. Making this process predictable and safe takes a considerable amount of work. Automation and quality assurance can greatly improve the predictability and safety of releases, but even so failures occur, and the goal of any good operations team is to recover quickly while minimizing impact. In this section, we introduce several strategies for deploying to a long-running environment, along with their advantages and disadvantages.

In-place updates

The first strategy is called in-place updates; as the name implies, the idea is to reuse the application environment and update the application in place. This is sometimes referred to as a rolling deployment. We will use the sample application (go-auth) we have discussed so far, and we assume the service is running on Rancher. To update in place, you can use the following upgrade command:
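A sketch of such an upgrade command; the batch size and interval values are illustrative (rancher-compose documents --interval in milliseconds), and the stack and service names are assumptions from this series:

```shell
#!/bin/sh
# Sketch: in-place rolling upgrade of the auth service, one container
# at a time, pausing between batches.
set -e
BATCH_SIZE=1
INTERVAL=5000   # milliseconds between batches

echo "Rolling upgrade of auth-service (batch ${BATCH_SIZE}, interval ${INTERVAL}ms)"
# Guarded so the sketch can be dry-run without a Rancher environment.
if command -v rancher-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    rancher-compose --project-name go-auth up --upgrade --pull \
        --batch-size "${BATCH_SIZE}" --interval "${INTERVAL}" auth-service
fi
```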

Invisibly to the user, the Rancher agent pulls the new image on each host running an auth service container (--pull forces a fresh download even if the image already exists locally). The agent then stops the old containers and starts the new ones in batches. You can control the batch size with the --batch-size flag, and specify a pause between batches with --interval; use a large enough interval to verify that the new containers behave as expected and the service remains healthy. By default, new containers are started in place as the old ones terminate. Alternatively, you can set the start_first flag in rancher-compose.yml to tell Rancher to start the new container before stopping the old one.

If you are not satisfied with the update and want to roll back, use the --rollback flag. If you want to proceed with the update instead, tell Rancher to finalize it with the --confirm-upgrade flag. You can also specify --confirm-upgrade in the original up command to do everything in one step. Updates can also be performed in the Rancher UI by selecting Upgrade from the service menu (shown in the figure below).

Updating in place is very simple and does not require extra resources to manage multiple stacks. However, this approach has its shortcomings. First, fine-grained control over a rolling update is difficult, and partial failures can get very confusing: you need to know which nodes have received the change, which have not, and which are still running the previous version. Second, you must ensure that all updates are both backward and forward compatible, because the old and new versions run side by side in the same environment. Finally, depending on usage, in-place updates may simply be impractical, for example when old clients must keep using the old environment while new clients move forward. In such cases it is easier to separate the clients using the approaches described below.

Blue-green deployment

One problem with in-place updates is the lack of predictability. To overcome this, another deployment strategy uses two parallel stacks for the application: one active and one standby. To release a new version, deploy it to the standby stack; once the new version has been verified, switch traffic from the active stack to the standby stack. At that point the previously active stack becomes the standby, and vice versa. This strategy lets you validate deployed code, roll back quickly (by switching back to the now-standby stack), and run and scale both stacks in parallel as needed. It is commonly referred to as blue-green deployment. To implement it with our sample application, simply create two stacks in Rancher: go-auth-blue and go-auth-green. (We assume the database is not part of these stacks but is managed independently.) Each stack runs the go-auth and auth-lb services. If go-auth-green is the active stack and we want to perform an update, all we have to do is deploy the latest version to the blue stack, verify it, and switch traffic over.
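That blue-stack deployment can be sketched as follows; the stack names match the example in the text, and v1.1 is the example release from earlier in this article:

```shell
#!/bin/sh
# Sketch: deploy a new version to the standby (blue) stack while the
# green stack stays active and keeps serving traffic.
set -e
STANDBY_STACK="go-auth-blue"
GO_AUTH_VERSION="v1.1"
export GO_AUTH_VERSION

echo "Deploying ${GO_AUTH_VERSION} to ${STANDBY_STACK}"
# Guarded so the sketch can be dry-run without a Rancher environment.
if command -v rancher-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    rancher-compose --project-name "${STANDBY_STACK}" up -d --pull
fi
```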

Traffic switching

There are two ways to switch traffic: change the DNS record to point to the new stack, or use a proxy or load balancer to route traffic to the active stack. We describe both methods below.

DNS record updates

A simple approach is to update the DNS record to point to the active stack. One advantage is that we can use weighted DNS records to shift traffic to the new version gradually. This is also an easy way to perform canary releases and is useful for safely rolling out updates or doing A/B testing in a live environment. For example, we can deploy an experimental feature to its own stack and then update DNS to forward only a small fraction of traffic to it. If there is a problem with the update, we can revert the DNS record to roll back. A gradual shift is also much safer than switching all traffic from one stack to another at once, which could overwhelm the new stack. DNS record updates are simple, but they are a poor fit when you need all traffic to switch to the new version at once: depending on the DNS clients, changes can take a long time to propagate, leaving many clients talking to the old version instead of switching immediately.
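As a hypothetical sketch of a weighted shift with Route 53 (mentioned earlier in this article): send 10% of traffic to the blue stack and keep 90% on green. The zone ID, domain, and load balancer hostnames below are made up for illustration:

```shell
#!/bin/sh
# Sketch: weighted Route 53 records for a canary shift to the blue stack.
set -e
cat > weights.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "go-auth-blue",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{"Value": "blue-lb.example.com"}]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "go-auth-green",
        "Weight": 90,
        "TTL": 60,
        "ResourceRecords": [{"Value": "green-lb.example.com"}]
      }
    }
  ]
}
EOF
echo "Wrote weighted record change batch to weights.json"
# Apply with (requires valid AWS credentials and a real hosted zone ID):
# aws route53 change-resource-record-sets \
#     --hosted-zone-id Z0000000000 --change-batch file://weights.json
```

Raising the blue weight step by step completes the shift; reversing the weights rolls it back.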

Use reverse proxy

With a proxy or load balancer, you can switch all traffic at once simply by updating it to point to the new stack. This approach is useful in a variety of scenarios, such as non-backward-compatible updates. To do this with Rancher, we first create a stack containing only a load balancer.

Next, we specify a port for the load balancer, configure SSL, and select the active stack's load balancer as the target service from the drop-down menu to complete the creation. In essence, we place a load balancer in front of the stack load balancers, which then route traffic to the actual service containers. With this external load balancer, you don't need to update the DNS record for each release; instead, you simply repoint the external load balancer at the updated stack.

At this point, you should have a deeper understanding of how to create a continuous deployment pipeline for Docker. The best way to cement it is to try these steps out in practice.
