How to use Kubernetes and Jenkins to create a CI/CD pipeline
This article walks through how to use Kubernetes and Jenkins to create a CI/CD pipeline, first explaining the concepts and then building a working pipeline step by step. I hope you get something out of it.
What problem is CI/CD trying to solve?
Like DevOps, Agile, Scrum, Kanban, and automation, CI/CD is a term that is often mentioned in the same breath as the others. Sometimes it is treated as part of the workflow without it being clear what it is or why it is adopted. For young DevOps engineers, using CI/CD is the norm; perhaps they have never seen the "traditional" software release process and therefore do not fully appreciate CI/CD.
CI/CD stands for continuous integration and continuous delivery and/or deployment. A team that has not adopted CI/CD must go through the following stages when producing a new software product:
The product manager (who represents the interests of the customer) specifies the features the product needs and the behavior it should follow. The documentation must be as detailed as possible.
Developers with business-analysis input begin coding the application, run unit tests, and then commit the results to a version control system such as Git.
Once the development phase is complete, the project is handed over to QA, which conducts multiple rounds of testing such as user acceptance testing, integration testing, and performance testing. During this period, no code changes are made until QA is complete. If any bugs are found, the code goes back to the developers for fixes, and the product is then handed over to QA again.
Once the QA is complete, the operations team will deploy the code to the production environment.
There are some drawbacks to the above workflow:
First of all, it takes too much time from the moment the product manager states the requirements until the product is ready for deployment.
It is really difficult for developers to locate problems in code that was written a month or more ago. Keep in mind that bugs are only discovered after the development phase completes and the QA phase begins.
When there is an urgent code fix, such as a serious bug that needs a hotfix, the QA phase tends to be shortened because of the pressure to deploy as soon as possible.
There is little collaboration between the different teams, and when bugs appear, people start blaming each other. Everyone cares only about their own part of the project and loses sight of the common goal.
CI/CD solves the above problems by introducing automation. Each change to the code is tested as soon as it is pushed to the version control system, then deployed to a pre-production/UAT environment for further testing before being deployed to the production environment used by customers. Automation ensures that the overall process is fast, reliable, repeatable, and far less error prone.
So, what is CI/CD?
Books have been written on the subject and on how, why, and when to use it in your architecture. However, we tend to favor practice over theory. That said, the following briefly describes the automation steps performed once modified code is committed:
Continuous integration (CI): this first step does not include QA. In other words, it does not verify that the code provides the features users need; rather, it ensures the quality of the code. Through unit tests and integration tests, developers can quickly find defects in code quality. We can add code-coverage checks and static analysis to further strengthen quality assurance.
User acceptance testing: this is the first part of the CD process. At this stage, automated tests run against the code to ensure it meets user expectations. For example, a web application may work without any errors, but the customer wants visitors to pass through a login page before reaching the home page. If the current code lets visitors navigate directly to the home page, it does not match the customer's requirements, and the UAT tests will point this out. In a non-CD environment, this is the job of manual QA testers.
Deployment: this is the second part of the CD process. It consists of making changes on the servers/pods/containers hosting the application so that the updated version is applied. This should be done in an automated way, preferably through a configuration management tool such as Ansible, Chef, or Puppet; a minimal sketch follows.
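For illustration only, here is what such an automated deployment step could look like as an Ansible task (the host group, container name, and image are hypothetical, and the sketch assumes the community.docker collection is installed; the lab's real playbook appears later in this article):

# Sketch: replace a running application container with a freshly pulled image.
- hosts: app_servers
  become: yes
  tasks:
    - name: Deploy the updated application container
      community.docker.docker_container:
        name: hello-world         # hypothetical container name
        image: example/app:latest # hypothetical image reference
        state: started
        pull: always              # always fetch the newest image for this tag
        recreate: true            # recreate the container so the new image is used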
What is a pipeline?
A pipeline is a fancy term for a simple concept: when you have several scripts that must be executed in sequence to achieve a common goal, the whole set is called a "pipeline". For example, in Jenkins a pipeline contains one or more stages that must all execute for a build to succeed. Using stages, you can visualize the entire process, see how long each stage takes, and pinpoint exactly where the build failed.
Lab: create a pipeline for a Golang application
In this lab, we build a continuous delivery (CD) pipeline using a simple application written in Go. For simplicity, we run only one type of test on the code. The prerequisites for the lab are as follows:
A running instance of Jenkins. It can be a cloud instance, a virtual machine, a bare-metal server, or a Docker container. It must be reachable from the Internet so that the repository can notify Jenkins through webhooks.
An image registry: you can use Docker Registry, a cloud-based offering such as ECR or GCR, or even a custom registry.
A GitHub account. Although we use GitHub in this example, the procedure also works with other repository hosts, such as Bitbucket, with minor modifications.
The pipeline can be illustrated with the following figure:
Step 1: application files
Our sample application replies with "hello world" to any GET request. Create a file named main.go and add the following code:
package main

import (
    "log"
    "net/http"
)

type Server struct{}

// ServeHTTP replies to every request with a JSON greeting.
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    // Set headers before writing the status line so they take effect.
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusOK)
    w.Write([]byte(`{"message": "hello world"}`))
}

func main() {
    s := &Server{}
    http.Handle("/", s)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
Since we are building a CD pipeline, we should run at least some tests. Our code is so simple that it needs only one test case: it ensures that we get the correct string when requesting the root URL. Create a file named main_test.go in the same directory and add the following code:
package main

import (
    "net/http"
    "net/http/httptest"
    "testing"
)

// TestServeHTTP ensures that a GET on the root URL returns the expected message.
func TestServeHTTP(t *testing.T) {
    server := &Server{}
    request, _ := http.NewRequest("GET", "/", nil)
    response := httptest.NewRecorder()
    server.ServeHTTP(response, request)
    expected := `{"message": "hello world"}`
    if response.Body.String() != expected {
        t.Errorf("expected %s, got %s", expected, response.Body.String())
    }
}
We also need a few other files to help us deploy the application:
Dockerfile
This is where we package our application:
FROM golang:alpine AS build-env
RUN mkdir /go/src/app && apk update && apk add git
ADD main.go /go/src/app/
WORKDIR /go/src/app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o app .

FROM scratch
WORKDIR /app
COPY --from=build-env /go/src/app/app .
ENTRYPOINT ["./app"]
This Dockerfile is a multi-stage build that keeps the final image as small as possible. It starts by building the binary in a golang:alpine image; the resulting binary is then copied into a second image built from scratch, which has no dependencies or libraries, only the binary used to start the application.
Service
Since we use Kubernetes as the platform for hosting the application, we need at least one service and one deployment. Our service.yml looks like this:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    role: app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 32000
  type: NodePort
There is nothing special about this definition except that the Service type is NodePort. It listens on port 32000 of any cluster node's IP address; incoming connections are relayed to port 8080 on the pods. For communication inside the cluster, the service listens on port 80.
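To recap how the three port numbers relate, here is the ports section of service.yml again with explanatory annotations added (the comments are ours, not part of the original file):

ports:
  - protocol: TCP
    port: 80         # the port other workloads inside the cluster use to reach the service
    targetPort: 8080 # the port the application container actually listens on
    nodePort: 32000  # the port exposed on every cluster node's external IP address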
Deployment
The application itself, once containerized, can be deployed to Kubernetes through a Deployment resource. The deployment.yml is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    role: app
spec:
  replicas: 2
  selector:
    matchLabels:
      role: app
  template:
    metadata:
      labels:
        role: app
    spec:
      containers:
        - name: app
          image: "{{ image_id }}"
          resources:
            requests:
              cpu: 10m
The most interesting part of this definition is the image field. Instead of hard-coding the image name and tag, we use a variable. Later, we will see how Ansible treats this file as a template and substitutes the image name (and, if needed, other deployment parameters) from the command line.
Playbook
In this lab, we use Ansible as the deployment tool. There are many other ways to deploy Kubernetes resources, including Helm charts, but I think Ansible is a relatively simple choice. Ansible uses playbooks to organize its operations. Our playbook.yml file looks as follows:
- hosts: localhost
  tasks:
    - name: Deploy the service
      k8s:
        state: present
        definition: "{{ lookup('template', 'service.yml') | from_yaml }}"
        validate_certs: no
        namespace: default
    - name: Deploy the application
      k8s:
        state: present
        validate_certs: no
        namespace: default
        definition: "{{ lookup('template', 'deployment.yml') | from_yaml }}"
Ansible includes the k8s module for communicating with the Kubernetes API server, so we don't need to install kubectl. We do, however, need a valid kubeconfig file to connect to the cluster (more on that later). Let's quickly discuss the important parts of this playbook:
This playbook deploys the Service and Deployment resources to the cluster.
Because we need to inject data into the definition files at run time, we treat them as templates, so that variables can be supplied from the outside.
For this, Ansible offers the lookup function, to which you can pass a valid YAML file as a template. Ansible supports several ways of injecting variables into templates; in this lab we use the command-line method, as sketched below.
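As a minimal, self-contained sketch of this pattern (demo.yml is a hypothetical file used only for illustration, not part of the lab), a variable supplied on the command line flows into whatever template the playbook looks up:

# demo.yml -- run with:
#   ansible-playbook demo.yml --extra-vars "image_id=magalixcorp/k8scicd:1"
- hosts: localhost
  tasks:
    - name: Show deployment.yml rendered with the command-line image_id
      debug:
        # lookup('template', ...) renders the file with all variables in scope,
        # so the "{{ image_id }}" placeholder inside deployment.yml is replaced
        # by the value passed via --extra-vars.
        msg: "{{ lookup('template', 'deployment.yml') | from_yaml }}"

This is exactly what the real playbook does with the k8s module: the rendered definition, image tag included, is what gets applied to the cluster.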
Step 2: install Jenkins, Ansible, and Docker
Let's start by installing Ansible, then use it to deploy a Jenkins server and the Docker runtime automatically. We also need the openshift Python module to connect Ansible to Kubernetes. Installing Ansible is very simple: install Python, then install Ansible using pip.
Log in to the Jenkins instance.
Install Python 3, Ansible, and the openshift module:
sudo apt update && sudo apt install -y python3 && sudo apt install -y python3-pip && sudo pip3 install ansible && sudo pip3 install openshift
By default, pip installs binaries into a hidden directory in the user's home folder. We need to add this path to the $PATH environment variable so that the installed commands can be invoked easily:
Echo "export PATH=$PATH:~/.local/bin" > > ~ / .bashrc &. ~ / .bashrc
Install the Ansible role needed to deploy a Jenkins instance:
ansible-galaxy install geerlingguy.jenkins
Install the Docker role:
ansible-galaxy install geerlingguy.docker
Create a playbook.yml and add the following code:
- hosts: localhost
  become: yes
  vars:
    jenkins_hostname: 35.238.224.64
    docker_users:
      - jenkins
  roles:
    - role: geerlingguy.jenkins
    - role: geerlingguy.docker
Run the playbook with the command ansible-playbook playbook.yml. Notice that we use the instance's public IP address as the Jenkins hostname; if you use DNS, substitute the instance's DNS name. Also note that port 8080 must be allowed through the firewall (if any) before the playbook runs.
In a few minutes, Jenkins should be installed, and you can access it through the machine's IP address (or DNS name) on port 8080:
Click the login link and sign in with "admin" as both the username and the password. These are the default credentials created by the Ansible role; you can (and should) change them when Jenkins is used in a production environment, by setting the role's variables. Refer to the role's official page for details.
The last thing to do is install the following plug-ins, which we will use in this experiment:
Git
Pipeline
CloudBees Docker Build and Publish
GitHub
Step 3: configure the Jenkins user to connect to the cluster
As mentioned earlier, this experiment assumes you already have a running Kubernetes cluster. For Jenkins to connect to this cluster, we need to add the appropriate kubeconfig file. In this particular experiment, we use a Kubernetes cluster hosted on Google Cloud, so we can use the gcloud command; the details vary from environment to environment. In any case, we must copy the kubeconfig file into the Jenkins user's home directory, as follows:
$ sudo cp ~/.kube/config ~jenkins/.kube/
$ sudo chown -R jenkins: ~jenkins/.kube/
Keep in mind that the account used in this kubeconfig must have the necessary permissions to create and manage Deployments and Services. If your cluster restricts access through RBAC, a sketch of one way to grant those rights follows.
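In this lab the gcloud-generated credentials are already privileged, but on an RBAC-enabled cluster you could grant just the needed rights with a Role and RoleBinding along these lines (a sketch; the names and the subject are placeholders for your own account):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer          # placeholder name
  namespace: default
rules:
  # Allow managing the Deployment and Service objects the playbook applies.
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding  # placeholder name
  namespace: default
subjects:
  - kind: User
    name: jenkins-ci         # placeholder: the identity in the Jenkins kubeconfig
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io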
Step 4: create a Jenkins pipeline task
Create a new Jenkins task and select the Pipeline task type. The task settings are shown in the following figure:
The configurations we modified are as follows:
We use Poll SCM as the build trigger; this option makes Jenkins check the Git repository periodically (every minute, as specified by the * * * * * schedule). If the repository has changed since the last poll, the task is triggered.
For the pipeline itself, we specify the repository URL and the credentials. The branch is master.
In this experiment, the code for all the stages lives in a Jenkinsfile, which is stored in the same repository as the application code. The Jenkinsfile is discussed later in this article.
Step 5: configure Jenkins credentials for GitHub and Docker Hub
Go to the /credentials/store/system/domain/_/newCredentials link and add the required credentials. Make sure you give each credential a meaningful ID and description, because you will reference them later.
Step 6: create a Jenkinsfile
A Jenkinsfile instructs Jenkins how to build, test, containerize, publish, and deliver our application. Ours looks like this:
pipeline {
    agent any
    environment {
        registry = "magalixcorp/k8scicd"
        GOCACHE = "/tmp"
    }
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'golang'
                }
            }
            steps {
                // Create our project directory.
                sh 'cd ${GOPATH}/src'
                sh 'mkdir -p ${GOPATH}/src/hello-world'
                // Copy all files in our Jenkins workspace to our project directory.
                sh 'cp -r ${WORKSPACE}/* ${GOPATH}/src/hello-world'
                // Build the app.
                sh 'go build'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'golang'
                }
            }
            steps {
                // Create our project directory.
                sh 'cd ${GOPATH}/src'
                sh 'mkdir -p ${GOPATH}/src/hello-world'
                // Copy all files in our Jenkins workspace to our project directory.
                sh 'cp -r ${WORKSPACE}/* ${GOPATH}/src/hello-world'
                // Remove cached test results.
                sh 'go clean -cache'
                // Run Unit Tests.
                sh 'go test ./... -v -short'
            }
        }
        stage('Publish') {
            environment {
                registryCredential = 'dockerhub'
            }
            steps {
                script {
                    def appimage = docker.build registry + ":$BUILD_NUMBER"
                    docker.withRegistry('', registryCredential) {
                        appimage.push()
                        appimage.push('latest')
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    def image_id = registry + ":$BUILD_NUMBER"
                    sh "ansible-playbook playbook.yml --extra-vars \"image_id=${image_id}\""
                }
            }
        }
    }
}
This file is much simpler than it looks. Basically, this pipeline consists of four stages:
Build, where the Go binary is built, ensuring that nothing goes wrong during the build process.
Test, where a simple UAT test ensures that the application works as expected.
Publish, where the Docker image is built and pushed to the registry. After that, it can be pulled by any environment.
Deploy, the last stage of the pipeline, where Ansible communicates with Kubernetes and applies the definition files.
Now, let's discuss the important part of this Jenkinsfile:
The first two stages are almost identical. Both use the golang Docker image to build/test the application. It is always good practice to run each stage in a container that has all the build and test tools ready. The alternative is to install those tools on the master or on one of the nodes, and problems then show up when you need to test against different versions of a tool. For example, you may need to build and test the code against Go 1.9 because the application is not yet ready to support the latest Golang version. When everything lives in an image, changing the version number, or even the image itself, is as easy as changing a string.
An environment variable is defined at the beginning of the Publish stage (starting at line 42) and is used in the later steps. This variable points to the Docker Hub credentials that we added to Jenkins in the previous step.
Line 48: we use the docker plug-in to build the image. By default, it uses the Dockerfile in our repository and adds the build number as the image tag. This matters later, when you need to know which Jenkins build is the source of the currently running container.
Lines 49-51: after the image is built successfully, we push it to Docker Hub with the build number as the tag. We also add a second tag, "latest", so that users can pull the image without specifying a build number.
Lines 56-60: in the deployment stage, we apply the deployment and service definition files to the cluster using the Ansible playbook discussed earlier. Remember that we pass image_id as a command-line variable; this value automatically replaces the image placeholder in the deployment file.
Test our CD pipeline
This is where we really put our work to the test. We will push the code to GitHub and make sure it travels through the pipeline until it reaches the cluster.
Add our files: git add *
Commit our changes: git commit -m "Initial commit"
Push to GitHub: git push
In Jenkins, we can wait for the task to be triggered automatically, or we can just click "build now".
If the task is successful, we can use the following command to verify our deployed application:
Get the IP address of the node:
$ kubectl get nodes -o wide
NAME                                          STATUS   ROLES    AGE   VERSION          INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gke-security-lab-default-pool-46f98c95-qsdj   Ready    <none>   7d    v1.13.11-gke.9   10.128.0.59   35.193.211.74   Container-Optimized OS from Google   4.14.145+        docker://18.9.7
Now let's make an HTTP request to the application:
$ curl 35.193.211.74:32000
{"message": "hello world"}
OK, the application is working properly. Now let's deliberately introduce an error in the code to make sure the pipeline does not ship broken code to the target environment:
Change the message that should be displayed to "Hello World!". Notice that we capitalized the first letter of each word and added an exclamation point at the end. Since the customer may not want the message displayed this way, the pipeline should stop at the Test stage.
First of all, let's make some changes. The main.go file now looks like this:
package main

import (
    "log"
    "net/http"
)

type Server struct{}

// ServeHTTP replies to every request with the (now modified) JSON greeting.
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    // Set headers before writing the status line so they take effect.
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusOK)
    w.Write([]byte(`{"message": "Hello World!"}`))
}

func main() {
    s := &Server{}
    http.Handle("/", s)
    log.Fatal(http.ListenAndServe(":8080", nil))
}
Next, let's commit and push our code:
$ git add main.go
$ git commit -m "Changes the greeting message"
[master 24a310e] Changes the greeting message
 1 file changed, 1 insertion(+), 1 deletion(-)
$ git push
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 319 bytes | 319.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
   7954e03..24a310e  master -> master
Going back to Jenkins, we can see that the last build failed:
Click on the failed task, and we can see the reason why the task failed:
In this way, our wrong code will never enter the target environment.
Content summary
CI/CD is part of any modern environment that follows agile methodology.
Through the pipeline, you can ensure a smooth transition from the version control system to the target environment (test / pre-production / production / etc.) while applying all necessary testing and quality control practices.
In this article, we went through a hands-on lab, building a continuous delivery pipeline that deploys a Golang application.
With Jenkins, we pull code from the repository, then build and test it using the related Docker image.
Next, we containerize and push the application that has passed our test to Docker Hub.
Finally, we use Ansible to deploy the application to the target environment running on Kubernetes.
Using Jenkins pipelines and Ansible, the workflow can be modified easily and flexibly. For example, we could add more tests to the Test stage, change the Go version used to build and test the code, or introduce more variables to adjust other parts of the deployment and service definitions.
The best part is that the Kubernetes Deployment gives us zero-downtime image changes: by default, a Deployment performs a rolling update, terminating and recreating containers one at a time, and an old container is not terminated until its replacement is started and healthy.
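For reference, the relevant Deployment fields can also be spelled out explicitly; here is a sketch of the strategy section (the values shown are the Kubernetes defaults, which our deployment.yml relies on implicitly):

# Excerpt of a Deployment spec with an explicit rolling-update strategy.
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # extra pods allowed above the replica count during a rollout
      maxUnavailable: 25%  # pods allowed to be unavailable during a rollout

With two replicas, these defaults mean at most one extra pod is created during a rollout and no old pod is taken down before its replacement is ready, which is exactly the zero-downtime behavior our pipeline gets for free.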