How to Containerize Your Environment with Docker and Rancher


Introduction

As the Docker project and its ecosystem have matured, containers are being used by more enterprises in larger projects, so we need a coherent set of workflows and pipelines to simplify deploying them at scale. In this guide we walk through code development, continuous integration, continuous deployment, and zero-downtime updates. These workflows are already fairly standard in large organizations; in this series we focus on how to replicate them in a Docker-based environment, and describe in detail how to automate them with Docker and Rancher. Each step comes with concrete examples to help you implement your own CI system.

We hope that, with this guide, you will be able to take these ideas and use tools such as Docker and Rancher to build a continuous integration and continuous deployment pipeline for your own organization, adding custom steps to the CI/CD pipeline to fit your situation and needs.

Before we get started, one caveat: Docker and Rancher both change very quickly, so APIs and behavior may differ across versions. For reference, the working environment used in this guide is Golang 1.8, Docker 1.13.1, Docker Compose 1.11.1+, and Rancher 1.4.1+.

Part I: Continuous Integration

Let's start at the entrance to the pipeline: building the source code. Building and compiling is not much of a hassle at the start of a project, since most languages and tools have a well-defined, well-documented compilation process. However, as projects and teams grow and dependencies multiply, providing consistent, stable builds for all developers while maintaining code quality becomes an ever greater challenge. In this section we cover some common challenges, best practices, and how to achieve continuous integration with Docker.

The challenges of scaling a build system

Before sharing best practices, let's take a look at some of the challenges that often arise in maintaining build systems.

First, the earliest problem you will face when scaling a project is dependency management. Developers pull in third-party code from libraries and integrate it with their own source, so it becomes important to track which version of each library the code uses, make sure all parts of the project use the same version, test library upgrades, and push updates that pass testing out to all of your projects.

Second, managing environment dependencies is related to dependency management but subtly different. It covers IDEs and IDE configuration, tool versions (such as the Maven or Python version), and tool configuration (such as static-analysis rule files and code-formatting templates). Environment dependencies can become very tricky because different parts of the project may conflict with each other, and unlike code-level dependency conflicts, these are often very difficult or even impossible to resolve. For example, in a recent project we used fabric for automated deployment and s3cmd to upload artifacts to Amazon S3. Unfortunately, the latest version of fabric required Python 2.7, while s3cmd required Python 2.6. The fix required either switching to a pre-release version of s3cmd or falling back to an older version of fabric.

Finally, every large project eventually confronts build time. As the scope and complexity of a project grows, more languages are added, and the team must test a growing set of interdependent components. For example, with a shared database, tests that modify the same data cannot run at the same time; we also need to set up the expected state before each test and clean up afterwards. All of this can push builds from a few minutes to hours. Running the full test suite can slow development dramatically, yet skipping tests invites serious problems.

Solutions and best practices

To solve these problems, we need a build system that supports the following requirements:

Repeatability

We must be able to create identical (or near-identical) build environments, with the same dependencies, on every development machine and automated build server.

Centralized management

We must be able to drive the build environment of all developers and build servers from a central code repository or server, including rolling out updates to the build environment with minimal latency.

Isolation

Subcomponents of the project must build in isolation from one another, sharing nothing except explicitly declared dependencies.

Parallelization

We must be able to provide parallel builds for subcomponents.

To meet the repeatability requirement, we must use centralized dependency management. Most modern languages and development frameworks support automated dependency management: Maven is widely used for Java and several other languages, Python uses pip, and Ruby uses Bundler. All of these tools work in a very similar style: you commit an index file (pom.xml, requirements.txt, or Gemfile) to source control, then run the tool to download the dependencies onto the build machine. Once tested, the index file is managed centrally, and changes are made by updating the index in source control. However, environment dependencies remain a problem: we still need the correct versions of Maven, Python, and Ruby installed, and we need to make sure developers actually run these tools. Maven can check for dependency updates automatically, but for pip and Bundler we must wrap the build command in a script that triggers a dependency update before running.
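As a minimal illustration of this commit-the-index, run-the-tool pattern (the project itself is hypothetical; the file names are the index files mentioned above):

```bash
# Dependencies are pinned in a committed index file; every developer and
# build server restores the same versions from it. For a Python project:
pip install -r requirements.txt

# The Maven and Bundler equivalents read pom.xml and Gemfile respectively:
#   mvn dependency:resolve
#   bundle install
```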

For setting up dependency-management tools and scripts, most small teams rely on documentation alone and leave the task to developers. That approach does not scale to large teams, especially as dependencies change over time. To complicate matters, the installation commands for these tools vary by platform and operating system. You can use orchestration tools such as Puppet or Chef to manage tool installation and configuration files; both allow centralized management through a central server or shared configuration in source control, so you can test configuration changes in advance before handing them to developers. These tools have drawbacks, though: installing and configuring Puppet or Chef is itself a significant overhead, and their full-featured versions are not free. Each tool also has its own language for defining tasks, which adds yet another administrative cost for the IT team and developers. Furthermore, orchestration tools do not provide isolation, so tool version conflicts remain a problem, and parallel testing is still unsolved.

To isolate components and reduce build time, we can use automated virtualization systems such as Vagrant. Vagrant creates and runs virtual machines that isolate the build of each component and support parallel builds. For centralized management, the Vagrant configuration file can be committed to source control and handed to developers, and VM images can be tested and published to Atlas for all developers to download. The drawbacks: Vagrant requires further setup and configuration, and virtual machines are a heavyweight solution. Each VM runs a complete operating system and network stack in addition to the compilers and test runners, and memory and disk resources must be allocated to each VM in advance.

Despite these caveats and drawbacks, the combination of dependency management (Maven, pip, Bundler), orchestration (Puppet, Chef), and virtualization (Vagrant) gives us a stable, testable, centrally managed build system. Not every project requires the complete tool stack, but any long-running large project needs this level of automation.

Creating a containerized build system with Docker

With the advent of Docker, we no longer need to spend so much time and resources supporting the tools mentioned above; Docker and its ecosystem can meet these needs for us. In this section we create a containerized build environment for the application in the following steps:

Containerize your build environment

Package your application with Docker

Create a build environment using Docker Compose

We use an example application called go-messenger, which will reappear in later chapters, to demonstrate how to use Docker in the build pipeline. You can get the application from GitHub:

https://github.com/usmanismail/go-messenger/tree/golang-1.8

The application has two components: a RESTful authentication server written in Golang, and a session manager that accepts long-running TCP connections from clients and routes messages between them. For the purposes of this article, we will focus on the RESTful authentication service (go-auth). This subsystem consists of a set of stateless web servers and a database cluster for storing user information.

Containerize your build environment

The first step is to create a container image containing all the tools needed to build the project. The Dockerfile for our build image is shown below. Because our application is written in Go, we start from the official golang image and install the govendor dependency-management tool. Note that if your project uses Java, you can create a similar "build container" from a Java base image and install Maven instead of govendor.
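The original listing was embedded as an image; the sketch below is a plausible reconstruction from the description above (the build.sh path and entrypoint are assumptions):

```dockerfile
# Reconstructed sketch of the go-builder Dockerfile; script name and
# location are assumptions, not the original listing.
FROM golang:1.8

# Install the govendor dependency-management tool
RUN go get -u github.com/kardianos/govendor

# Add the compile script that drives the full build (shown next)
COPY build.sh /usr/local/bin/build.sh
RUN chmod +x /usr/local/bin/build.sh

ENTRYPOINT ["/usr/local/bin/build.sh"]
```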

We then add a compile script that gathers all the steps for building and testing our code. The script shown below uses govendor restore to download dependencies, go fmt to normalize formatting, go test to run the tests, and go build to compile the project.
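A sketch of that compile script, reconstructed from the steps just listed (the SOURCE_PATH layout under $GOPATH/src is an assumption based on how the image is invoked below):

```bash
#!/bin/bash
# Reconstructed sketch of the compile script; SOURCE_PATH is assumed to be
# the project directory under $GOPATH/src, passed in as an env variable.
set -e
cd "$GOPATH/src/$SOURCE_PATH"

govendor restore   # download the pinned dependencies
go fmt ./...       # normalize source formatting
go test ./...      # run the test suite
go build           # compile the project into the current directory
```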

To ensure repeatability, we can use Docker to package all the tools needed to build the component into a single, versioned container image. The image can be downloaded from DockerHub or built from the Dockerfile (docker build -t go-builder:1.8 .). From here, every developer (and every build machine) can build any Go project with the container, using the following command:
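The command itself was shown as an image in the original; based on the description in the next paragraph, it plausibly looked like this (the mount path and project name are illustrative):

```bash
# Mount the source into the container's GOPATH with -v, and tell the
# build script where it lives with -e:
docker run --rm \
  -v "$(pwd)":/go/src/myproject \
  -e SOURCE_PATH=myproject \
  usman/go-builder:1.8
```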

In the above command, we ran version 1.8 of the usman/go-builder image, installed our source code into the container with-v, and specified the SOURCE_PATH environment variable with-e. If you want to test go-builder in our sample project, you can run all the steps using the following command and create an executable called go-auth in the root directory of the go-auth project.
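The original snippet is not preserved; this is a plausible reconstruction for the sample project (the import path and branch are taken from the GitHub URL above):

```bash
# Check out the sample project and build it with the build container:
git clone -b golang-1.8 https://github.com/usmanismail/go-messenger.git
cd go-messenger
docker run --rm \
  -v "$(pwd)":/go/src/github.com/usmanismail/go-messenger \
  -e SOURCE_PATH=github.com/usmanismail/go-messenger/go-auth \
  usman/go-builder:1.8
# The go-auth binary is left in the repository's go-auth/ directory
```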

An interesting by-product of separating the sources from the build tools is that we can easily swap build tools and configurations. For example, the commands above use Golang 1.8; by changing go-builder:1.8 to go-builder:1.5, you can test how the project behaves on Golang 1.5. To centrally manage the image used by all developers, we can push the latest tested version of the builder container to a fixed tag (such as latest) and have all developers build their source with go-builder:latest. Similarly, if different parts of the project need different versions of the build tools, we can build each with a different container, without juggling multiple language versions in a single build environment. For example, the official Python images support various Python versions and would ease the fabric/s3cmd conflict described earlier.

Package your application with Docker

To package the executable into its own container, add a Dockerfile with the contents shown below and run docker build -t go-auth . In this Dockerfile, we add the binary produced in the previous step to a new container and expose port 9000 for the application to accept incoming connections. We also specify an entrypoint that runs the binary with any given arguments. Because Go binaries are self-contained, a vanilla Ubuntu base image is enough; if your project has runtime dependencies, you can package those in the container too. For example, if you produce a war file, you might use a Tomcat container instead.
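The original listing was an image; this sketch reconstructs it from the description above (base image tag and binary path are assumptions):

```dockerfile
# Reconstructed sketch of the application Dockerfile; build it from the
# go-auth project root with: docker build -t go-auth .
FROM ubuntu:16.04

# Add the binary produced by the build container
COPY go-auth /usr/local/bin/go-auth

# The service accepts incoming connections on port 9000
EXPOSE 9000

# Run the binary, passing through any arguments supplied at `docker run`
ENTRYPOINT ["/usr/local/bin/go-auth"]
```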

Create a build environment using Docker Compose

Now that we can repeatably build the project in a centrally managed container that isolates the various components, we can extend the build pipeline to run integration tests. This is also where Docker's support for parallelization pays off in build speed. One of the main reasons tests cannot run in parallel is shared data storage; this is especially true of integration tests, where we usually do not mock external databases. Our sample project has exactly this problem, since it stores users in a MySQL database. We want a test that registers a new user and, on registering the same user a second time, expects a conflict error. This forces the tests to run serially, so that registered users can be cleaned up after each test before the next one starts.

To set up isolated, parallel builds, we can define a Docker Compose template (docker-compose.yml) like the one below. We define a database service using the official MySQL image and its required environment variables, then a goauth service that packages the application using the container we created earlier and links it to the database container. Note the GO_AUTH_VERSION variable substitution: if the variable is set in the environment, Compose uses it as the tag for the go-auth image; otherwise it defaults to latest.
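The template was shown as an image in the original; the sketch below reconstructs it from the description (the credentials and database name are placeholders, not values from the article):

```yaml
# Reconstructed sketch of docker-compose.yml
version: '2'
services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: messenger
      MYSQL_USER: messenger
      MYSQL_PASSWORD: messenger
  goauth:
    # Tag defaults to latest unless GO_AUTH_VERSION is set in the environment
    image: go-auth:${GO_AUTH_VERSION:-latest}
    links:
      - database
    ports:
      - "9000:9000"
```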

With this docker-compose template, we can bring up the application environment by running docker-compose up, then run the curl commands below to simulate our integration test. The first request should return 200 OK, while the second should return 409 Conflict. If you are on Linux, use localhost as the service IP; on OSX, use the IP of the Docker virtual machine, which you can find as shown below.
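The original commands were images; the endpoint and parameters below are assumptions based on the registration flow described above:

```bash
# Register a user, then try to register the same user again:
curl -i -X PUT -d userid=testuser -d password=testpassword \
  http://${SERVICE_IP}:9000/user    # first call: expect 200 OK
curl -i -X PUT -d userid=testuser -d password=testpassword \
  http://${SERVICE_IP}:9000/user    # repeat: expect 409 Conflict

# On OSX with Docker Machine, find the VM's IP to use as SERVICE_IP:
docker-machine ip default
```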

Finally, after the tests have run, we can tear down the entire application environment with docker-compose rm:
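```bash
docker-compose stop   # stop the application containers
docker-compose rm -f  # remove them, leaving a clean slate for the next run
```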

To run multiple separate copies of the application, we update the docker-compose template with a second pair of services, database1 and goauth2, that mirror their counterparts with the same configuration; see the sketch below. The only change is that goauth2 maps host port 9001 instead of 9000, so the two copies' exposed ports do not conflict. The complete template is here. Now a single docker-compose run can execute two integration tests in parallel. The same approach can effectively speed up builds of projects with multiple independent subcomponents, such as a multi-module Maven project.
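A sketch of the additional services, extending the services: section of the earlier template (values mirror the placeholders used above):

```yaml
  # Second, parallel test environment; only the host port differs
  database1:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpass
      MYSQL_DATABASE: messenger
      MYSQL_USER: messenger
      MYSQL_PASSWORD: messenger
  goauth2:
    image: go-auth:${GO_AUTH_VERSION:-latest}
    links:
      - database1
    ports:
      - "9001:9000"   # host port 9001 so the two copies do not conflict
```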
