Deployment of jenkins and pipeline job verification testing


This article covers the deployment of Jenkins on Kubernetes and the verification testing of pipeline jobs. It is shared here as a practical reference.

I. Background introduction

Many enterprise applications are now containerized, and the number of releases and builds keeps growing, which is a real challenge for the single, traditional Jenkins setup used before.

The traditional one-master, multi-slave Jenkins setup has several pain points:

- A single point of failure on the Master makes the whole pipeline unavailable.
- Each Slave is configured with a different environment to compile and package different languages, and these differentiated configurations are inconvenient to manage and hard to maintain.
- Resources are allocated unevenly: jobs queue up on some Slaves while other Slaves sit idle.
- Resources are wasted: each Slave may be a physical machine or a VM, and its resources are not released while it is idle.

In a Kubernetes cluster, we can use the container platform to scale Jenkins automatically.

(Figure: Jenkins cluster architecture diagram)

As the figure shows, both Jenkins Master and Jenkins Slave run as Docker containers on the nodes of the Kubernetes cluster. The Master runs on one of the nodes and stores its configuration data on a Volume. Slaves run on the various nodes, but not permanently: they are created dynamically on demand and deleted automatically.

The workflow is roughly as follows: when Jenkins Master receives a build request, a Jenkins Slave running in a Docker container is created dynamically according to the configured Label and registered with the Master. After the job has run, the Slave is deregistered and its Docker container is deleted automatically, restoring the original state.

There are many benefits to this approach:

- High availability. When Jenkins Master fails, Kubernetes automatically creates a new Jenkins Master container and attaches the Volume to it, so no data is lost and the service stays highly available.
- Dynamic scaling and sensible resource usage. A Jenkins Slave is created automatically for each job run, deregisters itself afterwards, and its container is deleted, so resources are released automatically. Kubernetes also schedules Slaves onto idle nodes according to each node's resource usage, reducing the queueing that occurs when a single node is overloaded.
- Good scalability. When the Kubernetes cluster runs seriously short of resources and jobs queue up, it is easy to add a Kubernetes Node to the cluster to expand capacity.

II. Deploy jenkins

We deploy the master node to the k8s cluster. You can configure it by referring to the official GitHub documentation; I have simplified it a little here and use NFS as the persistent storage for the Jenkins data.

kubectl apply -f https://raw.githubusercontent.com/wangzan18/jenkins-agent-k8s-cicd/master/master/jenkins.yaml

A note on the Service: it exposes ports 8080 and 50000. 8080 is the default port for accessing the Jenkins Server web pages, and 50000 is the default port the created Jenkins Slaves use to connect to the Master; if it is not exposed, Slaves cannot connect to the Master. The ports are exposed via NodePort without specifying a port number, so Kubernetes assigns one automatically; of course you can also specify an unused port number in the range 30000 to 32767.
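If you want to see which NodePort Kubernetes assigned, you can query the Service. This is a minimal sketch assuming the Service is named jenkins in the default namespace (as implied by the http://jenkins.default:8080 URL used later); adjust the names to match the manifest you applied.

# List the Service and the node ports Kubernetes assigned for 8080 and 50000
kubectl -n default get svc jenkins

# Or print only the node port that maps to the web UI port 8080
kubectl -n default get svc jenkins -o jsonpath='{.spec.ports[?(@.port==8080)].nodePort}'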

2.1 Configuring the kubernetes plugin

I will not go over the initial configuration of Jenkins here; we will configure the kubernetes plugin directly.

Log in to the Jenkins Master page with the administrator account, click "Manage Jenkins" -> "Manage Plugins" -> "Available", check "Kubernetes plugin" and install it.

After installation, click "Manage Jenkins" -> "Configure System" -> "Add a new cloud" -> select "Kubernetes", and then fill in the Kubernetes and Jenkins configuration information.

Let me explain:

- Name defaults to kubernetes; you can change it to another name, but if you do, the cloud parameter of podTemplate() must be set to the same name when running a Job, otherwise the cloud will not be found. The default value of cloud is kubernetes.
- For Kubernetes URL I filled in https://kubernetes, the DNS record of the Kubernetes Service, which resolves to the Service's Cluster IP. Note: you can also fill in the full DNS record https://kubernetes.default.svc.cluster.local, which follows the <service>.<namespace>.svc.cluster.local naming scheme, or fill in the external address of the Kubernetes API server directly in the form https://<ip>:<port>.
- For Jenkins URL I filled in http://jenkins.default:8080. Similar to the above, this is the DNS record of the Jenkins Service, but port 8080 must be specified because that is the port we exposed. An http://<ip>:<port> address works as well.
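As a concrete illustration of the Name/cloud relationship described above, here is a minimal sketch. The cloud name my-k8s and the label demo-agent are made-up placeholders, not values from this article.

// Hypothetical example: the cloud in "Configure System" was named "my-k8s"
// instead of the default "kubernetes", so the pipeline must reference it.
podTemplate(cloud: 'my-k8s', label: 'demo-agent') {
    node('demo-agent') {
        stage('Check cloud wiring') {
            sh 'echo running on an agent provisioned by the my-k8s cloud'
        }
    }
}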

After configuring, click the "Test Connection" button to check whether Jenkins can connect to Kubernetes. If "Connection test successful" is displayed, the connection works and the configuration is fine.

Because our Jenkins runs as a pod inside the cluster, it can talk to the Kubernetes API directly, and we have already granted it the corresponding permissions. If the master is created outside the cluster, we need to create a service account for the Jenkins agents in advance and store its token in a "Secret text" credential.
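For the out-of-cluster case, the following is a minimal sketch of creating such a service account and obtaining its token. The name jenkins-agent and the cluster-admin binding are assumptions used only to keep the example short; in practice you would grant a narrower role, and the kubectl create token subcommand requires Kubernetes 1.24 or newer (older clusters expose the token via an auto-created secret instead).

# Create a dedicated service account for the Jenkins agents (hypothetical name)
kubectl -n default create serviceaccount jenkins-agent

# Grant it permission to manage agent pods; cluster-admin is overly broad and
# only used here for brevity
kubectl create clusterrolebinding jenkins-agent \
  --clusterrole=cluster-admin \
  --serviceaccount=default:jenkins-agent

# Retrieve a token to paste into the Jenkins "Secret text" credential
kubectl -n default create token jenkins-agent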

III. Pipeline job verification test

3.1 Pipeline support

Create a Pipeline type Job, name it jenkins-pipeline, and fill in a simple test script in the Pipeline script field:

podTemplate {
    node(POD_LABEL) {
        stage('Run shell') {
            sh 'echo hello world'
            sh 'sleep 60'
        }
    }
}

After creating the job and returning to it, click build; a job appears in the build queue waiting to be executed. Because the pipeline requests an agent node labelled POD_LABEL and no such agent exists, Jenkins asks Kubernetes to create the agent node.

After the jenkins agent node is created, it registers with the jenkins master, executes the job in the queue, then deregisters and destroys itself.

We can also go to console to view the build log.

You can also see the launched agent container on K8s.

wangzan:~/k8s $ kubectl get pod --show-labels
NAME                                   READY   STATUS    RESTARTS   AGE   LABELS
jenkins-5df4dff655-f4gk8               1/1     Running   0          25m   app=jenkins,pod-template-hash=5df4dff655
jenkins-pipeline-5-lbs5j-b2jl6-0mk2g   1/1     Running   0          7s    jenkins/label=jenkins-pipeline_5-lbs5j,jenkins=slave
myapp1                                 1/1     Running   0          21h   app=myapp1

podTemplate

The podTemplate is a template of a pod that will be used to create agents. It can be configured either via the user interface or via pipeline.

Either way it provides access to the following fields:

- cloud: The name of the cloud as defined in Jenkins settings. Defaults to kubernetes.
- name: The name of the pod.
- namespace: The namespace of the pod.
- label: The label of the pod. Can be set to a unique value to avoid conflicts across builds, or omitted, in which case POD_LABEL will be defined inside the step.
- yaml: yaml representation of the Pod, to allow setting any values not supported as fields.
- yamlMergeStrategy: merge() or override(). Controls whether the yaml definition overrides or is merged with the yaml definition inherited from pod templates declared with inheritFrom. Defaults to override().
- containers: The container templates that are used to create the containers of the pod (see below).
- serviceAccount: The service account of the pod.
- nodeSelector: The node selector of the pod.
- nodeUsageMode: Either 'NORMAL' or 'EXCLUSIVE'; controls whether Jenkins only schedules jobs with matching label expressions or uses the node as much as possible.
- volumes: Volumes that are defined for the pod and are mounted by ALL containers.
- envVars: Environment variables that are applied to ALL containers.
  - envVar: An environment variable whose value is defined inline.
  - secretEnvVar: An environment variable whose value is derived from a Kubernetes secret.
- imagePullSecrets: List of pull secret names, to pull images from a private Docker registry.
- annotations: Annotations to apply to the pod.
- inheritFrom: List of one or more pod templates to inherit from (more details below).
- slaveConnectTimeout: Timeout in seconds for an agent to be online (more details below).
- podRetention: Controls the behavior of keeping slave pods. Can be 'never()', 'onFailure()', 'always()', or 'default()'; if empty, defaults to deleting the pod after activeDeadlineSeconds has passed.
- activeDeadlineSeconds: If podRetention is set to 'never()' or 'onFailure()', the pod is deleted after this deadline has passed.
- idleMinutes: Allows the pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it.
- showRawYaml: Enable or disable the output of the raw yaml file. Defaults to true.
- runAsUser: The user ID to run all containers in the pod as.
- runAsGroup: The group ID to run all containers in the pod as.
- hostNetwork: Use the host's network.
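A short sketch of how a few of these fields look in pipeline code. Only parameters from the list above are used; the label, namespace, timings, and environment variable are made-up placeholders, not values from this article.

// Illustrative only: every value here is a placeholder.
podTemplate(
    cloud: 'kubernetes',          // name of the cloud configured in Jenkins
    label: 'demo-pod',            // explicit label, so POD_LABEL is not needed
    namespace: 'default',
    serviceAccount: 'default',
    idleMinutes: 5,               // keep the pod around briefly for reuse
    podRetention: never(),        // delete the pod once it is no longer needed
    envVars: [
        envVar(key: 'BUILD_ENV', value: 'test')
    ]
) {
    node('demo-pod') {
        stage('Show environment') {
            sh 'echo $BUILD_ENV'
        }
    }
}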

3.2 Container Group

The agent in the previous pipeline uses the default image jenkins/jnlp-slave:3.35-5-alpine, and we can also add some other images to the pod.

Create a Pipeline type Job, name it jenkins-pipeline-container, and fill in a simple test script in the Pipeline script field:

podTemplate(containers: [
    containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
    node(POD_LABEL) {
        stage('Get a Maven project') {
            git 'https://github.com/jenkinsci/kubernetes-plugin.git'
            container('maven') {
                stage('Build a Maven project') {
                    sh 'mvn -B clean install'
                }
            }
        }
        stage('Get a Golang project') {
            git url: 'https://github.com/hashicorp/terraform.git'
            container('golang') {
                stage('Build a Go project') {
                    sh """
                    mkdir -p /go/src/github.com/hashicorp
                    ln -s `pwd` /go/src/github.com/hashicorp/terraform
                    cd /go/src/github.com/hashicorp/terraform && make core-dev
                    """
                }
            }
        }
    }
}

You can also see from k8s that there are three containers in the pod.

wangzan:~/k8s $ kubectl get pod --show-labels
NAME                                             READY   STATUS    RESTARTS   AGE   LABELS
jenkins-5df4dff655-f4gk8                         1/1     Running   0          42m   app=jenkins,pod-template-hash=5df4dff655
jenkins-pipeline-container-1-6zf73-chltq-b0rjt   3/3     Running   0          70s   jenkins/label=jenkins-pipeline-container_1-6zf73,jenkins=slave
myapp1                                           1/1     Running   0          21h   app=myapp1

containerTemplate

The containerTemplate is a template of a container that will be added to the pod. Again, it is configurable via the user interface or via pipeline and allows you to set the following fields:

- name: The name of the container.
- image: The image of the container.
- envVars: Environment variables that are applied to the container (supplementing and overriding env vars that are set at pod level).
  - envVar: An environment variable whose value is defined inline.
  - secretEnvVar: An environment variable whose value is derived from a Kubernetes secret.
- command: The command the container will execute.
- args: The arguments passed to the command.
- ttyEnabled: Flag to mark that tty should be enabled.
- livenessProbe: Parameters to be added to an exec liveness probe in the container (does not support httpGet liveness probes).
- ports: Expose ports on the container.
- alwaysPullImage: The container will pull the image upon starting.
- runAsUser: The user ID to run the container as.
- runAsGroup: The group ID to run the container as.
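A brief sketch that exercises several of these fields on a single containerTemplate. The container name, environment variable, and its value are placeholders; only fields from the list above are used.

// Illustrative only: placeholder container name and env var value.
podTemplate(containers: [
    containerTemplate(
        name: 'builder',
        image: 'maven:3.3.9-jdk-8-alpine',
        ttyEnabled: true,
        command: 'cat',             // keep the container alive for exec'd steps
        alwaysPullImage: false,
        envVars: [
            envVar(key: 'MAVEN_OPTS', value: '-Xmx512m')
        ]
    )
]) {
    node(POD_LABEL) {
        stage('Build inside the builder container') {
            container('builder') {
                sh 'mvn -version'
            }
        }
    }
}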

3.3 Use SCM

There are many benefits to using SCM:

- Every time we modify the pipeline, we don't have to edit it in the console.
- Developers can easily customize the pipeline and choose the containers they need.
- If the Jenkins data is lost, the pipeline itself is not lost.

To use SCM, we put the pipeline code written above into a file named Jenkinsfile, which is the conventional name; of course, a custom name can also be used. Let's run the first case above using SCM; first, modify the job accordingly.

My jenkinsfile address is https://github.com/wangzan18/jenkins-agent-k8s-cicd/blob/master/jenkinsfile/jenkins-pipeline-podtemplate.jenkinsfile.

Then view the running log in the console.

Other parameters can be set according to your own situation.

IV. General job verification

In addition to running Pipeline jobs in Jenkins, we usually also use jobs of the normal (freestyle) type. If you want these to be built with the kubernetes plugin as well, you need to click "Manage Jenkins" -> "Configure System" -> "Cloud" -> "Kubernetes" -> "Add Pod Template" and fill in the "Kubernetes Pod Template" information.

Labels name: when configuring a non-pipeline type Job, it is used to specify the node on which the task runs.

Containers Name: note that if Name is set to jnlp, the Kubernetes plugin will replace the default jenkinsci/jnlp-slave image with the Docker image specified below. Otherwise, the plugin will still use the default jenkinsci/jnlp-slave image to establish the connection with the Jenkins Server, even if we specify another Docker image. Here I set it to jenkins-slave, meaning the plugin's default image is used to connect to the Jenkins server; when I selected jnlp, I found that the image could not establish a connection with the Jenkins server. I am not sure of the exact cause; it may be a problem with the image itself.

Create a new freestyle Job named jenkins-simple, check the "Restrict where this project can be run" box, and in "Label Expression" enter the Labels name specified in the Pod Template we created (jnlp-agent), meaning the Job will be scheduled only on Slaves carrying that label.

The effect is as we expected:

V. Custom jenkins-slave image

Earlier, I picked an image from https://hub.docker.com/r/jenkins/jnlp-slave more or less at random and found that it could not establish a connection with the Jenkins server, so we build an image ourselves.

The jenkinsci/jnlp-slave image that the kubernetes plugin provides by default can handle some basic operations. It is extended from the openjdk:8-jdk image, but for us its functionality is too limited: for example, we run into problems when we want to execute a Maven build or other commands. By building our own image we can pre-install some software, keeping the jenkins-slave functionality while also meeting our own personalized requirements, which is much better.

Creating an image completely from scratch is a bit more troublesome, but we can refer to the official jenkinsci/jnlp-slave and jenkinsci/docker-slave images. Note: the jenkinsci/jnlp-slave image is itself based on jenkinsci/docker-slave. Here I will demonstrate briefly: extend the jenkinsci/jnlp-slave:latest image, install Maven into it, and then run it to verify that this works. You can check out my image at https://hub.docker.com/r/wangzan18/jenkins-slave-maven.
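As a rough illustration of such an extension (not the author's actual Dockerfile, which is not shown in this article), a minimal build might look like the sketch below. It assumes the base image is Debian-based so apt-get is available, and the Maven version and image tag are placeholders.

# Hypothetical Dockerfile: extend the default agent image and pre-install Maven.
cat > Dockerfile <<'EOF'
FROM jenkins/jnlp-slave:latest
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends maven \
 && rm -rf /var/lib/apt/lists/*
USER jenkins
EOF

# Build and push under your own registry account (placeholder tag).
docker build -t wangzan18/jenkins-slave-maven:latest .
docker push wangzan18/jenkins-slave-maven:latest

With the image in place, the pipeline below uses it as the jnlp container.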

podTemplate(containers: [
    containerTemplate(name: 'jnlp', image: 'wangzan18/jenkins-agent:maven-3.6.3', alwaysPullImage: false, args: '${computer.jnlpmac} ${computer.name}')
]) {
    node(POD_LABEL) {
        stage('git pull') {
            echo "hello git"
        }
        stage('build') {
            sh 'mvn -version'
        }
        stage('test') {
            echo "hello test"
        }
        stage('deploy') {
            echo "hello deploy"
            sleep 10
        }
    }
}

Here, the name attribute of the containerTemplate must be jnlp for Kubernetes to replace the default jenkinsci/jnlp-slave image with our custom image. The args parameter passes the two parameters that jenkins-slave needs to run. Also, there is no need to wrap the steps in container('jnlp') {...}, because Kubernetes already designates this container as the one to execute in, so the stages can run directly.

You can see that we have achieved the desired effect, indeed using our custom jenkins-slave image.

This concludes the deployment of Jenkins and the verification testing of pipeline jobs. I hope the content above is helpful and lets you learn something new. If you think the article is good, feel free to share it so more people can see it.
