How to use Jenkins and Docker to achieve continuous delivery


How can Jenkins and Docker be used to achieve continuous delivery? Many people with little hands-on experience are unsure where to start, so this article walks through the concepts, tools, and steps involved. I hope that after reading it you will be able to set this up yourself.

The release cycle for traditional delivery can be represented as follows:

Disadvantages of traditional delivery:

Slow delivery: the customer receives the product long after the requirements were specified. This leads to an unsatisfactory time to market and delays customer feedback.

Long feedback cycle: the feedback cycle concerns not only customers but also developers. Suppose you accidentally introduced a bug and only learn about it during the UAT phase. How long will it take to fix something you worked on two months ago? Even small mistakes can take weeks.

Dangerous hotfixes: hotfixes usually cannot wait for the full UAT phase, so they are often tested differently (the UAT phase is shortened), or there is no testing at all.

Stress: unpredictable releases are stressful for the operations team. More importantly, the release cycle is usually tightly scheduled, which puts additional pressure on developers and testers.

In order to be able to deliver products continuously, rather than spending a lot of money on round-the-clock operations teams, we need automation. This is why continuous delivery means changing each stage of the traditional delivery process into a series of scripts called automatic deployment pipeline or continuous delivery pipeline.

Then, if manual steps are not required, we can run the process after each code change to continuously deliver the product to the user.

Advantages of continuous delivery:

Fast delivery: after the development is completed, customers can use the product, which greatly shortens the time to market. Keep in mind that software generates revenue only in the hands of users.

Quick feedback cycle: suppose you create a bug in your code and the bug goes into production on the same day. How long will it take to fix something you worked on that very day? Probably not long. This, together with a quick rollback strategy, is the best way to keep production stable.

Low-risk release: if you release every day, the process becomes repeatable and therefore much safer.

Flexible release options: if you need to release immediately, everything is ready, so there is no additional time / cost associated with the release decision.

Needless to say, we could achieve all of these benefits by simply eliminating all delivery phases and developing directly in production. However, that would cause quality to suffer. In fact, the whole difficulty of introducing continuous delivery is the fear that quality will deteriorate as manual steps are eliminated. We will show how to handle it in a safe way, delivering products that consistently have fewer bugs and adapt better to customer needs.

3. How to achieve continuous delivery

The automated deployment pipeline consists of the three phases shown in the following figure:

Each step corresponds to a stage in the traditional delivery process, as follows:

Continuous integration: checks that code written by different developers integrates together correctly

Automated acceptance testing: this will replace the manual QA phase and check whether the features implemented by the developer meet customer needs

Configuration management: this will replace the manual phase of configuring the environment and deploying the software
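As an illustration (a minimal sketch, not code from the original article), the three phases can be expressed as stages of a Jenkins declarative pipeline; the shell commands inside the steps are placeholders for whatever build, test, and deployment tooling a project actually uses:

pipeline {
    agent any
    stages {
        stage('Continuous Integration') {
            steps {
                sh './gradlew build test'        // compile and run unit tests (placeholder command)
            }
        }
        stage('Automated Acceptance Testing') {
            steps {
                sh './acceptance_tests.sh'       // placeholder acceptance-test script
            }
        }
        stage('Configuration Management') {
            steps {
                sh 'ansible-playbook deploy.yml' // placeholder deployment via a configuration management tool
            }
        }
    }
}

If any stage fails, the later stages are not executed, which mirrors the gating behaviour described above.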

1. Continuous integration

The continuous integration phase provides the first feedback to developers. It checks out the code from the repository (Git, SVN, etc.), compiles it, runs unit tests, and verifies code quality. If any step fails, the pipeline execution stops, and the first thing the developers should do is fix the continuous integration build.

2. Automated acceptance testing

The automated acceptance test phase is a set of tests, written together with QAs, that should replace the manual UAT phase. It acts as a quality gate that decides whether a product is ready for release. If any acceptance test fails, the pipeline execution stops and no further steps are run. This prevents moving on to the configuration management phase and, therefore, the release.

3. Configuration management

The configuration management phase is responsible for tracking and controlling changes in the software and its environment. It involves preparing and installing the necessary tools, expanding the number and distribution of service instances, infrastructure inventories, and all tasks related to application deployment.

Configuration management is a solution to the problems caused by manually deploying and configuring applications in a production environment. Configuration management tools, such as Ansible, Chef, or Puppet, support storing configuration files in a version control system and tracking every change made on the production server.
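As a tiny illustration of what such a tool looks like in use (a sketch assuming Ansible is installed, with hypothetical inventory and site.yml files kept under version control):

$ ansible all -i inventory -m ping          # verify that every host listed in the inventory is reachable
$ ansible-playbook -i inventory site.yml    # apply the configuration described in the playbook

Because both the inventory and the playbook are plain text files, every change to the production configuration can be reviewed and tracked just like application code.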

Another manual operations (O&M) task that is replaced is application monitoring. This is usually done by streaming the logs and metrics of the running system to a common dashboard that is monitored by developers (or the DevOps team, described in the following section).

4. Tools

1. The Docker ecosystem

As the leader of containerization, Docker has occupied a dominant position in the software industry in recent years. It allows applications to be packaged in environment-independent images, so the server is treated as a resource farm rather than a machine that must be configured for each application.

Docker is a clear choice because it fits well with the (micro)services world and the continuous delivery process.

2. Jenkins

Jenkins is the most popular automation server on the market. It helps to create continuous integration and continuous delivery pipelines and, in general, any other automated scripts. Highly plugin-oriented, it has a great community that keeps extending it with new features.

More importantly, it allows pipelines to be written as code and supports a distributed build environment.

3. Ansible

Ansible is an automation tool that helps with software provisioning, configuration management, and application deployment. It uses an agentless architecture and integrates well with Docker.

4. GitHub

GitHub is definitely the number one hosted version control system. It provides a very stable service, a web-based UI, and free hosting for public repositories.

Nonetheless, any source control management service or tool can be used for continuous delivery, whether cloud-hosted or self-hosted, and whether based on Git, SVN, Mercurial, or anything else.

5. Docker in practice

1. Docker overview

Docker is an open source project designed to help deploy applications using software containers. The following is quoted from the official Docker page:

A Docker container wraps a piece of software in a complete file system that contains everything needed to run it: code, runtime, system tools, system libraries, anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

As a result, Docker allows applications to be packaged into images that can run anywhere, in a manner similar to virtualization.

2. Virtualization and containerization

Without Docker, isolation and other benefits can be achieved by using hardware virtualization, often referred to as virtual machines. The most popular solutions are VirtualBox, VMware, and Parallels.

The virtual machine simulates the computer architecture and provides the functions of the physical computer. If each application is delivered and run as a separate virtual machine image, we can achieve complete isolation of the application. The following figure shows the concept of virtualization:

Each application starts as a separate image that contains all dependencies and guest operating systems. The image is run by the hypervisor, which simulates the physical computer architecture.

This deployment approach is supported by many tools, such as Vagrant, and is dedicated to development and test environments. However, virtualization has three significant disadvantages:

Low performance: the virtual machine simulates the entire computer architecture to run the guest operating system, so each operation has a lot of overhead.

High resource consumption: simulation requires a lot of resources and must be performed separately for each application. This is why on a standard desktop, only a few applications can run at the same time.

Large images: each application is delivered using a full operating system, so deployment on the server means sending and storing large amounts of data.

The following figure shows the difference made by docker:

3. Installing Docker

The installation process for Docker is quick and easy. Currently, most Linux operating systems support it, many of which provide dedicated binaries. Mac and Windows are also well supported by native applications.

However, it is important to understand that Docker is internally based on the Linux kernel and its specifics, which is why, on macOS and Windows, it uses a virtual machine (macOS uses xhyve, Windows uses Hyper-V) to run the Docker Engine environment.

Here we only cover the installation on Ubuntu 16.04 (official commands):

$ sudo apt-get update
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
$ sudo apt-add-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial main stable'
$ sudo apt-get update
$ sudo apt-get install -y docker-ce

If an error is reported, you can run the following commands instead:

$ cd /etc/apt/sources.list.d
$ sudo vi docker.list     # put this line in the file: deb https://download.docker.com/linux/ubuntu zesty edge
$ sudo apt update
$ sudo apt install docker-ce

This time no error is reported, but the download is very slow, because the docker-ce package is fairly large and the server is overseas. You can switch to a domestic (China) mirror instead. The commands are as follows:

sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce

To verify that the installation is complete, run docker -v or docker info; if you can see the basic information about Docker, the installation succeeded:
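For example:

$ docker -v       # prints the installed Docker version
$ docker info     # prints detailed information about the Docker daemon and host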

4. Run docker

Now that the docker environment has been installed, we can first run a very classic example: hello world:

$ docker run hello-world

When you see the following message, you are running correctly:

Let's look at what's going on under the hood step by step:

1. You run the Docker client with the run command.

2. The Docker client contacts the Docker daemon and asks it to create a container from the image named hello-world.

3. The Docker daemon checks whether it has the hello-world image locally and, because it does not, requests the hello-world image from the remote Docker Hub registry.

4. The Docker Hub registry contains the hello-world image, so it is pulled down into the Docker daemon.

5. The Docker daemon creates a new container from the hello-world image, which starts the executable that generates the output.

6. The Docker daemon streams this output to the Docker client.

7. The Docker client sends it to your terminal.

5. Build an image

There are two ways to build an image: with the docker commit command, or with an automated Dockerfile build. Let's look at how Docker builds images.

Here we will only cover the Dockerfile approach:

Manually creating each Docker image using the commit command can be laborious, especially if the build automation and continuous delivery process are involved. Fortunately, there is a built-in language that specifies all the instructions that need to be executed to build the Docker image.

1. Create a Dockerfile and enter the following:

FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y python

2. Execute the image build command:

$ docker build -t ubuntu_with_python .

3. With the following command we can see the image we created:

$ docker images
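As a quick sanity check (a small sketch; it assumes the image was built with the name used above), you can run a container from the new image and confirm that Python is present:

$ docker run --rm ubuntu_with_python python --version    # should print the Python version installed in the image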

6. Docker containers

We can view running containers with the command docker ps, and view all containers (including stopped ones) with docker ps -a. Containers are stateful.

Start a container from an image and check its status:

To stop a Docker container, use the command: docker stop <container id>
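A minimal walk-through of this lifecycle, assuming the ubuntu_with_python image built earlier (the sleep command just keeps the container alive for a while):

$ docker run -d ubuntu_with_python sleep 1000   # start a container in the background; its ID is printed
$ docker ps                                      # lists the running container
$ docker stop <container id>                     # stop it
$ docker ps -a                                   # the container now shows up with an Exited status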

7. Run tomcat with external access

1. Run the tomcat image:

$ docker run -d tomcat

However, an external browser cannot reach Tomcat's port 8080, because the container's network is isolated from the host (or from the virtual machine running Docker).

So when we start the container, we use the -p option to map a network port on the host to a port inside the Docker container.

2. Start with -p

$ docker run -d -p 8080:8080 tomcat

Enter the virtual machine's IP and the port in the browser to access it, as shown below:
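You can also check from the command line; this assumes Docker is running on the local machine, otherwise replace localhost with the virtual machine's IP:

$ curl http://localhost:8080    # should return the Tomcat welcome page HTML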

6. Jenkins in practice

1. Introduction to jenkins

Jenkins is an open source automation server written in Java. Due to very active community-based support and a large number of plug-ins, it is the most popular tool for implementing continuous integration and continuous delivery processes.

Jenkins is ahead of other continuous integration tools and is the most widely used product of its kind, thanks to its rich features and functionality.

2. Installing Jenkins

The installation process for Jenkins is quick and easy. There are many different ways to do this, but since we are already familiar with Docker tools and their benefits, we will start with Docker-based solutions. This is also the simplest, most predictable and smartest approach.

The installation of jenkins has some environmental requirements:

Java 8

256 MB+ free memory

1 GB+ free disk space

However, it is important to understand that the requirements depend heavily on what you plan to do with Jenkins. If Jenkins is used as a continuous integration server for a whole team, then even for a small team 1 GB+ of free memory and 50 GB+ of free disk space are recommended. Needless to say, Jenkins also performs computations and transfers large amounts of data over the network, so CPU and bandwidth are critical as well.

There are two ways to install Jenkins:

1. Using the Docker image

2. Without using the Docker image

1. Installing Jenkins using the Docker image

Use the command:

$ docker run -p <host port>:8080 -v <host volume>:/var/jenkins_home jenkins:2.60.1

where <host port> is the port to expose on the host and <host volume> is a host directory or named volume used to persist the Jenkins home.

Open the URL in a browser; the page shown in the figure indicates that the installation succeeded:

You will then be asked for the initial admin password, which can be found in the log output:
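If Jenkins was started as a Docker container as above, the same password can also be read directly from the Jenkins home volume; the container name below is a placeholder for whatever docker ps reports:

$ docker logs <container name>    # the initial admin password appears in the startup log
$ docker exec <container name> cat /var/jenkins_home/secrets/initialAdminPassword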

2. Installing Jenkins without the Docker image

Installation is also very simple, as long as the following command is executed:

$ wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
$ sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
$ sudo apt-get update
$ sudo apt-get install jenkins

3. A simple Jenkins application (hello world)

Following that tradition, let's walk through the steps to create the first Jenkins pipeline:

Click New Item.

Enter hello world as the project name, select Pipeline, and click OK.

There are a lot of options; for now we will skip them and go straight to the pipeline section.

In the script text box, we can enter a pipeline script:

pipeline {
    agent any
    stages {
        stage("Hello") {
            steps {
                echo 'Hello World'
            }
        }
    }
}

Click Save and then Build Now. We can see the following in the output log:

7. Continuous integration pipeline

1. Introduce the pipeline

A pipeline can be understood as a sequence of automated operations, a simple chain of scripts, which brings the following benefits:

Operation grouping: operations are grouped into stages (also known as gates or quality gates) that introduce structure into the process and clearly define the rule: if one stage fails, no further stages are executed

Visibility: all aspects of the process are visual, which facilitates rapid fault analysis and facilitates team collaboration

Feedback: team members can keep abreast of any problems when they occur, so they can react quickly

2. Pipeline structure

A Jenkins pipeline consists of two kinds of elements: stages and steps. The following figure shows how they are used:

3. Pipeline hello world

pipeline {
    agent any
    stages {
        stage('First Stage') {
            steps {
                echo 'Step 1. Hello World'
            }
        }
        stage('Second Stage') {
            steps {
                echo 'Step 2. Second time Hello'
                echo 'Step 3. Third time Hello'
            }
        }
    }
}

Immediately after the build is successful, you can see the following figure:

4. Pipeline syntax

agent: specifies where the execution takes place; it can define a label to match agents with the same label, or docker to specify a dynamically provisioned container that provides an environment for the pipeline execution

triggers: defines automated ways to trigger the pipeline; cron can be used for time-based scheduling, and pollSCM to check the repository for changes (covered in more detail in the triggers and notifications section)

options: specifies pipeline-specific options, such as timeout (the maximum time the pipeline may run) or retry (the number of times the pipeline should be re-run after a failure)

environment: defines a set of key-value pairs used as environment variables during the build

parameters: defines a list of user-input parameters

stage: allows logical grouping of steps

when: determines whether the stage should be executed, based on the given condition
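To see how these directives fit together, here is an illustrative sketch (not from the original article) of a declarative pipeline that uses several of them; the cron expression, timeout value, variable and parameter names are arbitrary examples:

pipeline {
    agent any
    triggers {
        pollSCM('H/15 * * * *')              // check the repository for changes roughly every 15 minutes
    }
    options {
        timeout(time: 10, unit: 'MINUTES')   // abort the build if it runs longer than 10 minutes
    }
    environment {
        APP_ENV = 'staging'                  // available as an environment variable in every step
    }
    parameters {
        string(name: 'VERSION', defaultValue: '1.0.0', description: 'Version to build')
    }
    stages {
        stage('Build') {
            steps {
                echo "Building version ${params.VERSION} for ${env.APP_ENV}"
            }
        }
        stage('Deploy') {
            when {
                branch 'master'              // only runs on the master branch (requires a multibranch pipeline)
            }
            steps {
                echo 'Deploying...'
            }
        }
    }
}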

After reading the above, have you mastered how to use Jenkins and Docker to achieve continuous delivery? If you want to learn more skills or related content, you are welcome to follow the industry information channel. Thank you for reading!
