2025-01-28 Update From: SLTechnology News&Howtos
Building a DevOps Continuous Integration and Continuous Deployment Environment with GitLab + Jenkins + Harbor + Kubernetes
Architecture diagram of the whole environment:
I. Preparatory work
I installed GitLab and Harbor on a host outside the Kubernetes cluster.
1.1 Set the mirror source docker-ce.repo

[root@support harbor]# cat /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

1.2 Install dependency packages

[root@support yum.repos.d]# yum install -y docker-ce-18.09.7
[root@support yum.repos.d]# yum install -y docker-compose
[root@support yum.repos.d]# yum install -y git
[root@support yum.repos.d]# cat /etc/docker/daemon.json
{"registry-mirrors": ["http://f1361db2.m.daocloud.io"]}
[root@support yum.repos.d]# systemctl start docker

II. Harbor deployment

2.1 Installation package

[root@support yum.repos.d]# wget -b https://storage.googleapis.com/harbor-releases/release-1.9.0/harbor-offline-installer-v1.9.0.tgz
Continuing in background, pid 9771.
Output will be written to 'wget-log'.
[root@support ~]# tar zxf harbor-offline-installer-v1.9.0.tgz
[root@support ~]# cd harbor
[root@support harbor]# vi harbor.yml
hostname: 139.9.134.177
http:
  port: 8080

2.2 Deploy

[root@support harbor]# ./prepare
[root@support harbor]# ./install.sh
[root@support harbor]# docker-compose ps
       Name                     Command               State             Ports
------------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core              Up
harbor-db           /docker-entrypoint.sh            Up      5432/tcp
harbor-jobservice   /harbor/harbor_jobservice        Up
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up      127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up      8080/tcp
nginx               nginx -g daemon off;             Up      0.0.0.0:8080->8080/tcp
redis               redis-server /etc/redis.conf     Up      6379/tcp
registry            /entrypoint.sh /etc/regist ...   Up      5000/tcp
registryctl         /harbor/start.sh                 Up

III. GitLab deployment

3.1 Pull the image

[root@support yum.repos.d]# docker pull gitlab/gitlab-ce
Using default tag: latest
latest: Pulling from gitlab/gitlab-ce
16c48d79e9cc: Pull complete
3c654ad3ed7d: Pull complete
6276f4f9c29d: Pull complete
a4bd43ad48ce: Pull complete
075ff90164f7: Pull complete
8ed147de678c: Pull complete
c6b08aab9197: Pull complete
6c15d9b5013c: Pull complete
de3573fbdb09: Pull complete
4b6e8211dc80: Pull complete
Digest: sha256:eee5fc2589f9aa3cd4c1c1783d5b89667f74c4fc71c52df54660c12cc493011b
Status: Downloaded newer image for gitlab/gitlab-ce:latest
docker.io/gitlab/gitlab-ce:latest

3.2 Launch the container

[root@bogon /]# docker run --detach \
  --hostname 139.9.134.177 \
  --publish 10443:443 --publish 10080:80 --publish 10022:22 \
  --name gitlab \
  --restart always \
  --volume /opt/gitlab/config:/etc/gitlab \
  --volume /opt/gitlab/logs:/var/log/gitlab \
  --volume /opt/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest

Git repository initialization: git init --bare on the server side, git clone on the client side.
(For a standalone Jenkins install: yum install -y jenkins, check java -version, then tail -f /var/log/jenkins/jenkins.log; the log prints the initial password for the Jenkins web page.)

IV. Jenkins deployment
The manifest for deploying Jenkins on a Kubernetes cluster is available on GitHub:
https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/kubernetes/jenkins.yml
4.1 Dynamic provisioning of NFS PVs
NFS service preparation
# Install nfs-utils via yum
[root@support ~]# yum install -y nfs-utils
[root@support ~]# mkdir /ifs/kubernetes
[root@support ~]# cat /etc/exports
# Share the directory with hosts in the 10.0.0.0/24 network segment
/ifs/kubernetes 10.0.0.0/24(rw,no_root_squash)
[root@support ~]# systemctl start nfs
[root@support ~]# exportfs -arv
exporting 10.0.0.0/24:/ifs/kubernetes

nfs.yaml:

[root@master jenkins]# cat nfs.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "true"
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.123
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.123
            path: /ifs/kubernetes

# Create the PV dynamic provisioner
[root@master jenkins]# kubectl apply -f nfs.yaml

4.2 Deploy Jenkins on Kubernetes
Jenkins-master is scheduled onto the master node of the K8s cluster.
jenkins.yaml:

[root@master jenkins]# cat jenkins.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    name: jenkins
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
      nodePort: 30006
    - name: agent
      port: 50000
      protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      name: jenkins
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      # schedule onto the master node
      nodeSelector:
        labelName: master
      # tolerate the master node taint
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts-alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: "-Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        storageClassName: "managed-nfs-storage"
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

# Create the Jenkins pod
[root@master jenkins]# kubectl apply -f jenkins.yaml
# Open a browser and visit the Jenkins address http://139.9.139.49:30006/
# The startup page hangs for a long time; the update center it is stuck on:
[root@support default-jenkins-home-jenkins-0-pvc-ea84462f-241e-4d38-a408-e07a59d4bf0e]# cat hudson.model.UpdateCenter.xml
default
http://mirror.xmission.com/jenkins/updates/update-center.json

4.3 Plug-in installation
Install plug-ins in Jenkins via Manage Jenkins -> Manage Plugins.
4.3.1 Plug-ins to download
- Git Plugin (git)
- GitLab Plugin (gitlab)
- Kubernetes Plugin (dynamically creates agents)
- Pipeline (pipeline)
- Email Extension (mail extension)
Installing plug-ins this way is painfully slow, a few KB per second.
Here is an idea to solve this problem.
4.3.2 Tell Jenkins where to fetch plug-in updates
Use the Tsinghua University mirror address: https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json
1. Open Jenkins system management (Manage Jenkins).
2. Open plug-in management (Manage Plugins).
3. Advanced -> Update Site: replace the URL with the mirror address above.
4.3.3 The principle
The file https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json contains the download addresses of all the plug-ins. Tsinghua mirrors this file, but did not rewrite the plug-in download addresses inside it to point at the Tsinghua mirror, so plug-ins are still downloaded from the foreign host. Switching the update site therefore only makes fetching the update metadata fast; the actual plug-in downloads remain painfully slow.
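The fix rests on the observation that the official download URL and its Tsinghua mirror differ only by a host/path prefix. A minimal sketch of that mapping, using only a local string rewrite (no network access, URL taken from the example below):

```shell
# Rewrite the official plug-in download URL to its Tsinghua mirror equivalent.
official="http://mirrors.jenkins-ci.org/plugins/ApicaLoadtest/1.10/ApicaLoadtest.hpi"
mirror=$(echo "$official" | sed 's#^http://mirrors.jenkins-ci.org#https://mirrors.tuna.tsinghua.edu.cn/jenkins#')
echo "$mirror"
# → https://mirrors.tuna.tsinghua.edu.cn/jenkins/plugins/ApicaLoadtest/1.10/ApicaLoadtest.hpi
```

This is exactly the rewrite that the nginx reverse proxy configured later performs for every plug-in request.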
curl -vvvv http://updates.jenkins-ci.org/download/plugins/ApicaLoadtest/1.10/ApicaLoadtest.hpi
returns a 302 to http://mirrors.jenkins-ci.org/plugins/ApicaLoadtest/1.10/ApicaLoadtest.hpi, which then redirects again to a mirror (ftp) address. The Tsinghua address for the same plug-in is https://mirrors.tuna.tsinghua.edu.cn/jenkins/plugins/ApicaLoadtest/1.10/ApicaLoadtest.hpi, so we only need to proxy mirrors.jenkins-ci.org to mirrors.tuna.tsinghua.edu.cn/jenkins.

4.3.4 Trick Jenkins into downloading plug-ins from the Tsinghua mirror
Bind the mirrors.jenkins-ci.org domain name to the local host in /etc/hosts:
[root@support nginx]# cat /etc/hosts
127.0.0.1 mirrors.jenkins-ci.org
Nginx reverse proxy to Tsinghua's jenkins plug-in download address
[root@support ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name mirrors.jenkins-ci.org;
        root /usr/share/nginx/html;

        location / {
            proxy_redirect off;
            proxy_pass https://mirrors.tuna.tsinghua.edu.cn/jenkins/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Accept-Encoding "";
            proxy_set_header Accept-Language "zh-CN";
        }

        index index.html index.htm index.php;

        location ~ /\. {
            deny all;
        }
    }
}
Finally, check the nginx access log: all plug-in download requests from this machine have been forwarded to the Tsinghua mirror.
127.0.0.1 - - [14/Oct/2019:23:40:32 +0800] "GET /plugins/kubernetes-credentials/0.4.1/kubernetes-credentials.hpi HTTP/1.1" 200 17893 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:37 +0800] "GET /plugins/variant/1.3/variant.hpi HTTP/1.1" 200 10252 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:40 +0800] "GET /plugins/kubernetes-client-api/4.6.0-2/kubernetes-client-api.hpi HTTP/1.1" 200 11281634 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:42 +0800] "GET /plugins/kubernetes/1.20.0/kubernetes.hpi HTTP/1.1" 200 320645 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:45 +0800] "GET /plugins/git/3.12.1/git.hpi HTTP/1.1" 200 2320552 "-" "Java/1.8.0_222"
127.0.0.1 - - [14/Oct/2019:23:40:47 +0800] "GET /plugins/gitlab-plugin/1.5.13/gitlab-plugin.hpi HTTP/1.1" 200 8456411 "-" "Java/1.8.0_222"
With this setup, downloads are fast. Most online tutorials only cover the first step (changing the update site), which sometimes speeds things up and sometimes does not; this is the real, final solution.
Getting this step to work took all night. The Jenkins pod running inside K8s cannot use this proxy directly. After fruitless attempts, I resorted to a blunt approach: install the same version of Jenkins on the NFS server and copy the files from the pod's persisted directory /var/jenkins_home (its corresponding path on the NFS server) into /var/lib/jenkins; this new Jenkins then runs in exactly the same state as the one in the pod. After the new Jenkins has downloaded the plug-ins, copy its plug-in directory /var/lib/jenkins/plugins straight back into the pod's persistent volume.
4.4 GitLab triggers Jenkins
4.4.1 Generate a token in GitLab
Copy this token; it is displayed only once: vze6nS8tLAQ1dVpdaHYU
4.4.2 Configure Jenkins to connect to GitLab
Click Manage Jenkins -> Configure System and find the GitLab section.
For the credential type, select GitLab API token and fill in the token generated by GitLab.
4.4.3. Create a jenkins task
This address is used to set GitLab's webhook: http://139.9.139.49:30006/project/gitlab-citest-pipeline
Click to generate a token: 2daf58bf638f04ce9e201ef0df9bec0f
This token is also used to set the webhook of gitlab
4.4.4. Gitlab sets webhooks
4.4.5. Submit code to gitlab to trigger jenkins task
First clone the repository from GitLab to the local machine:
[root@support ~]# git clone http://139.9.134.177:10080/miao/citest.git
Cloning into 'citest'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
After modifying the code, push it to GitLab:
[root@support citest]# git commit -m "Testing gitlab and jenkins Connection #1"
[master 03264a7] Testing gitlab and jenkins Connection #1
 1 file changed, 3 insertions(+), 1 deletion(-)
[root@support citest]# git push origin master
Username for 'http://139.9.134.177:10080': miao
Password for 'http://miao@139.9.134.177:10080':
Counting objects: 5, done.
Writing objects: 100% (3/3), 294 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To http://139.9.134.177:10080/miao/citest.git
   25f05bb..03264a7  master -> master
The Jenkins task has started executing.
It shows the task was triggered by GitLab, and the first stage succeeded.
4.5 Jenkins creates dynamic agents in Kubernetes
Here we use Docker-in-Docker style builds: Jenkins is deployed in K8s, the Jenkins master dynamically creates slave pods, and those slave pods run the steps such as cloning code, building the project, and building images. When the build finishes, the slave pod is deleted. This lightens the Jenkins master's load and greatly improves resource utilization.
4.5.1 Configure the connection to Kubernetes
With the Kubernetes plug-in installed, go to Manage Jenkins -> Configure System in Jenkins and scroll to the bottom to find the Cloud section.
Add a new cloud -> Kubernetes.
Because Jenkins runs directly on K8s, the Kubernetes service name can be resolved through K8s DNS. Click Test Connection; it connects to K8s successfully.
Then click Save.
4.5.2. Build a Jenkins-Slave image
The official documentation for building the slave image is on GitHub:
https://github.com/jenkinsci/docker-jnlp-slave
To build a jenkins-slave image, we need to prepare four files
1. Enter the following address in the Jenkins address bar to download slave.jar:
http://119.3.226.210:30006/jnlpJars/slave.jar
2. jenkins-slave, the startup script for slave.jar
[root@support jenkins-slave]# cat jenkins-slave
#!/usr/bin/env sh

if [ $# -eq 1 ]; then
    # if `docker run` only has one argument, we assume user is running alternate command like `bash` to inspect the image
    exec "$@"
else
    # if -tunnel is not provided try env vars
    case "$@" in
        *"-tunnel "*) ;;
        *)
            if [ ! -z "$JENKINS_TUNNEL" ]; then
                TUNNEL="-tunnel $JENKINS_TUNNEL"
            fi ;;
    esac

    # if -workDir is not provided try env vars
    if [ ! -z "$JENKINS_AGENT_WORKDIR" ]; then
        case "$@" in
            *"-workDir"*) echo "Warning: Work directory is defined twice in command-line arguments and the environment variable" ;;
            *)
                WORKDIR="-workDir $JENKINS_AGENT_WORKDIR" ;;
        esac
    fi

    if [ -n "$JENKINS_URL" ]; then
        URL="-url $JENKINS_URL"
    fi

    if [ -n "$JENKINS_NAME" ]; then
        JENKINS_AGENT_NAME="$JENKINS_NAME"
    fi

    if [ -z "$JNLP_PROTOCOL_OPTS" ]; then
        echo "Warning: JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior"
        JNLP_PROTOCOL_OPTS="-Dorg.jenkinsci.remoting.engine.JnlpProtocol3.disabled=true"
    fi

    # If both required options are defined, do not pass the parameters
    OPT_JENKINS_SECRET=""
    if [ -n "$JENKINS_SECRET" ]; then
        case "$@" in
            *"${JENKINS_SECRET}"*) echo "Warning: SECRET is defined twice in command-line arguments and the environment variable" ;;
            *)
                OPT_JENKINS_SECRET="${JENKINS_SECRET}" ;;
        esac
    fi

    OPT_JENKINS_AGENT_NAME=""
    if [ -n "$JENKINS_AGENT_NAME" ]; then
        case "$@" in
            *"${JENKINS_AGENT_NAME}"*) echo "Warning: AGENT_NAME is defined twice in command-line arguments and the environment variable" ;;
            *)
                OPT_JENKINS_AGENT_NAME="${JENKINS_AGENT_NAME}" ;;
        esac
    fi

    # TODO: Handle the case when the command-line and environment variable contain different values.
    # It is fine it blows up for now since it should lead to an error anyway.

    exec java $JAVA_OPTS $JNLP_PROTOCOL_OPTS -cp /usr/share/jenkins/slave.jar hudson.remoting.jnlp.Main -headless $TUNNEL $URL $WORKDIR $OPT_JENKINS_SECRET $OPT_JENKINS_AGENT_NAME "$@"
fi
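The recurring pattern in the script above is "a command-line flag wins over the environment variable". That precedence rule can be illustrated with a tiny standalone sketch (the function name and values here are illustrative, not part of the real script):

```shell
# Mimic the script's check: only use $JENKINS_TUNNEL when -tunnel is absent from "$@"
pick_tunnel() {
    TUNNEL=""
    case "$@" in
        *"-tunnel"*) ;;   # flag given explicitly: ignore the env var
        *)
            if [ -n "$JENKINS_TUNNEL" ]; then
                TUNNEL="-tunnel $JENKINS_TUNNEL"
            fi ;;
    esac
    echo "$TUNNEL"
}

JENKINS_TUNNEL="jenkins:50000"
pick_tunnel -url http://jenkins/     # no -tunnel flag, so the env var applies
pick_tunnel -tunnel other:50000      # explicit flag, env var is ignored
```

The first call prints "-tunnel jenkins:50000"; the second prints nothing, because the caller already supplied the flag.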
3. The Maven configuration file (the XML tags were lost in extraction; the surviving values are a mirror entry pointing central to the Aliyun repository):

[root@support jenkins-slave]# cat settings.xml
<settings>
  <mirrors>
    <mirror>
      <id>central</id>
      <mirrorOf>central</mirrorOf>
      <name>aliyun maven</name>
      <url>https://maven.aliyun.com/repository/public</url>
    </mirror>
  </mirrors>
</settings>
4. Dockerfile
FROM centos:7
LABEL maintainer lizhenliang
# Give the image the ability to pull git repositories and compile Java code
RUN yum install -y java-1.8.0-openjdk maven curl git libtool-ltdl-devel && \
    yum clean all && \
    rm -rf /var/cache/yum/* && \
    mkdir -p /usr/share/jenkins
# Put the downloaded slave.jar into the image
COPY slave.jar /usr/share/jenkins/slave.jar
# The jenkins-slave startup script
COPY jenkins-slave /usr/bin/jenkins-slave
# The Aliyun mirror is configured in settings.xml
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave
ENTRYPOINT ["jenkins-slave"]
Put these four files in the same directory, and then we start to build the slave image.
Build the image and tag it:
[root@support jenkins-slave]# docker build . -t 139.9.134.177:8080/jenkinsci/jenkins-slave-jdk:1.8
[root@support jenkins-slave]# docker image ls
REPOSITORY                                       TAG   IMAGE ID       CREATED         SIZE
139.9.134.177:8080/jenkinsci/jenkins-slave-jdk   1.8   940e56848837   3 minutes ago   535MB
Start pushing image
Logging in over HTTP is rejected: Docker uses HTTPS by default, so daemon.json needs to be modified.
[root@support jenkins-slave]# docker login 139.9.134.177:8080
Username: admin
Password:
Error response from daemon: Get https://139.9.134.177:8080/v2/: http: server gave HTTP response to HTTPS client
# Add the registry as a trusted insecure (HTTP) registry
[root@support ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "insecure-registries": ["http://139.9.134.177:8080"]
}
# Login now succeeds
[root@support ~]# docker login 139.9.134.177:8080
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
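A malformed daemon.json silently prevents the Docker daemon from starting, so it is worth validating the JSON before restarting. A minimal local check, writing the same content to a scratch path (the path is an example, not where Docker reads it):

```shell
# Write the daemon.json from this article to a scratch path and check that it
# is well-formed JSON before restarting Docker.
cat > /tmp/daemon-test.json <<'EOF'
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "insecure-registries": ["http://139.9.134.177:8080"]
}
EOF
python3 -m json.tool /tmp/daemon-test.json > /dev/null && echo "daemon.json is valid JSON"
```

If the file has a trailing comma or missing quote, json.tool exits with an error instead of printing the confirmation.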
All K8s hosts also need this configuration in order to pull from Harbor; restart the docker service after changing it.
We set the trusted address to the intranet address to ensure sufficient speed.
4.5.3 Jenkins tasks are executed by K8s pods
Use the following pipeline script to create a pod dynamically
// Image repository address
def registry = "10.0.0.123:8080"

podTemplate(label: 'jenkins-agent', cloud: 'kubernetes', containers: [
    containerTemplate(name: 'jnlp', image: "${registry}/jenkinsci/jenkins-slave-jdk:1.8")
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
  ])
{
  node("jenkins-agent") {
    stage('pull code') {
      // for display purposes
      git 'http://139.9.134.177:10080/miao/citest.git'
      sh 'ls'
    }
    stage('code compilation') {
      echo 'ok'
    }
    stage('deploy') {
      echo 'ok'
    }
  }
}

4.6 Continuous integration using pipeline scripts
Each time code is pushed to GitLab, the pipeline script pulls it, compiles it, builds a docker image, and pushes the image to Harbor.
We first need to configure two credentials: both the GitLab code repository and the Harbor registry are private, and Jenkins can access them only with credentials configured.
Enter the GitLab username and password, generate a credential, copy the credential's id, and reference it in the pipeline.
Enter the Harbor username and password, generate a credential, copy the credential's id, and reference it in the pipeline.
// Image repository address
def registry = "10.0.0.123:8080"
// Image repository project
def project = "jenkinsci"
// Image name
def app_name = "citest"
// Full image name
def image_name = "${registry}/${project}/${app_name}:${BUILD_NUMBER}"
// Git repository address
def git_address = "http://139.9.134.177:10080/miao/citest.git"
// Credentials
def harbor_auth = "db4b7f06-7df6-4da7-b5b1-31e91b7a70e3"
def gitlab_auth = "53d88c8f-3063-4048-9205-19fc6222b887"

podTemplate(label: 'jenkins-agent', cloud: 'kubernetes', containers: [
    containerTemplate(name: 'jnlp', image: "${registry}/jenkinsci/jenkins-slave-jdk:1.8")
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
  ])
{
  node("jenkins-agent") {
    stage('pull code') {
      // for display purposes
      checkout([$class: 'GitSCM', branches: [[name: '${Branch}']],
                userRemoteConfigs: [[credentialsId: "${gitlab_auth}", url: "${git_address}"]]])
      sh "ls"
    }
    stage('code compilation') {
      sh "mvn clean package -Dmaven.test.skip=true"
      sh "ls"
    }
    stage('build image') {
      withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
        sh """
          echo '
            FROM tomcat
            LABEL maintainer miaocunfa
            RUN rm -rf /usr/local/tomcat/webapps/*
            ADD target/*.war /usr/local/tomcat/webapps/ROOT.war
          ' > Dockerfile
          docker build -t ${image_name} .
          docker login -u ${username} -p '${password}' ${registry}
          docker push ${image_name}
        """
      }
    }
  }
}
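The image naming convention used by the pipeline, registry/project/app:BUILD_NUMBER, can be illustrated in plain shell (the values below are example stand-ins for the Jenkins build variables, not anything the pipeline exports):

```shell
# Compose the full image name the same way the pipeline's image_name is built.
registry="10.0.0.123:8080"
project="jenkinsci"
app_name="citest"
BUILD_NUMBER=33                      # supplied by Jenkins at build time
image_name="${registry}/${project}/${app_name}:${BUILD_NUMBER}"
echo "$image_name"
# → 10.0.0.123:8080/jenkinsci/citest:33
```

Because the tag is the build number, every build pushes a distinct tag to Harbor, which is why build 33 later shows up in Harbor as an image tagged 33.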
Write a script that pushes to GitLab:
[root@support ~]# cat gitpush.sh
testdate=$(date)
cd /root/citest
echo $testdate >> pod-slave.log
git add -A
git commit -m "$testdate"
git push origin master
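The add/commit steps of gitpush.sh can be exercised against a throwaway local repository before pointing it at the real GitLab remote. A hedged sketch (scratch paths and a placeholder identity; the final push is left out because there is no remote):

```shell
# Run the same add/commit sequence as gitpush.sh inside a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"   # placeholder identity for the scratch repo
git config user.name "ci"
testdate=$(date)
echo "$testdate" >> pod-slave.log
git add -A
git commit -q -m "$testdate"
git log --oneline
```

The log shows a single commit whose message is the timestamp, which is exactly what each run of gitpush.sh adds on top of the citest history.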
The code push has triggered build number 33.
Logs during the Jenkins build:
After the build succeeds, an image tagged 33 is present in Harbor.
4.7 Continuous deployment from Jenkins to Kubernetes
After successfully building the image with Jenkins, the next step is to deploy it on the K8s platform. This requires the Kubernetes Continuous Deploy Plugin.
4.7.1 K8s authentication
Copy the contents of .kube/config into Jenkins to generate a credential.
Copy the credential's id and reference it in the pipeline script.
4.7.2 Add a Harbor registry secret in K8s

[root@master ~]# kubectl create secret docker-registry harbor-pull-secret --docker-server='http://10.0.0.123:8080' --docker-username='admin' --docker-password='Harbor12345'
secret/harbor-pull-secret created

4.7.3 Pipeline script

// Image repository address
def registry = "10.0.0.123:8080"
// Image repository project
def project = "jenkinsci"
// Image name
def app_name = "citest"
// Full image name
def image_name = "${registry}/${project}/${app_name}:${BUILD_NUMBER}"
// Git repository address
def git_address = "http://139.9.134.177:10080/miao/citest.git"
// Credentials
def harbor_auth = "db4b7f06-7df6-4da7-b5b1-31e91b7a70e3"
def gitlab_auth = "53d88c8f-3063-4048-9205-19fc6222b887"
// K8s credential
def k8s_auth = "586308fb-3f92-432d-a7f7-c6d6036350dd"
// Harbor registry secret name
def harbor_registry_secret = "harbor-pull-secret"
// NodePort exposed by k8s after deployment
def nodePort = "30666"

podTemplate(label: 'jenkins-agent', cloud: 'kubernetes', containers: [
    containerTemplate(name: 'jnlp', image: "${registry}/jenkinsci/jenkins-slave-jdk:1.8")
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    hostPathVolume(mountPath: '/usr/bin/docker', hostPath: '/usr/bin/docker')
  ])
{
  node("jenkins-agent") {
    stage('pull code') {
      // for display purposes
      checkout([$class: 'GitSCM', branches: [[name: '${Branch}']],
                userRemoteConfigs: [[credentialsId: "${gitlab_auth}", url: "${git_address}"]]])
      sh "ls"
    }
    stage('code compilation') {
      sh "mvn clean package -Dmaven.test.skip=true"
      sh "ls"
    }
    stage('build image') {
      withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
        sh """
          echo '
            FROM tomcat
            LABEL maintainer miaocunfa
            RUN rm -rf /usr/local/tomcat/webapps/*
            ADD target/*.war /usr/local/tomcat/webapps/ROOT.war
          ' > Dockerfile
          docker build -t ${image_name} .
          docker login -u ${username} -p '${password}' ${registry}
          docker push ${image_name}
        """
      }
    }
    stage('deploy to K8s') {
      sh """
        sed -i 's#\$IMAGE_NAME#${image_name}#' deploy.yml
        sed -i 's#\$SECRET_NAME#${harbor_registry_secret}#' deploy.yml
        sed -i 's#\$NODE_PORT#${nodePort}#' deploy.yml
      """
      kubernetesDeploy configs: 'deploy.yml', kubeconfigId: "${k8s_auth}"
    }
  }
}

deploy.yml
This file deploys the image as pods managed by a Deployment controller; it is pushed along with the code in the code repository.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: java-demo
  template:
    metadata:
      labels:
        app: java-demo
    spec:
      imagePullSecrets:
        - name: $SECRET_NAME
      containers:
        - name: tomcat
          image: $IMAGE_NAME
          ports:
            - containerPort: 8080
              name: web
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 20
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 20
            timeoutSeconds: 5
            failureThreshold: 3
---
kind: Service
apiVersion: v1
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: java-demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: $NODE_PORT

4.7.4 Push
Here is the complete CI/CD process
1. Git pushes the code to the gitlab code repository
2. Gitlab uses webhook to trigger jenkins tasks
The webhook in the lower left corner has been triggered and the jenkins task numbered 53 has started.
Jenkins task flow
3. Harbor Image Repository
The image with tag label 53 has also been pushed to harbor.
4. Use kubectl to monitor the changes of pods
In the task flow, Jenkins first creates the slave pod. After the image is deployed to Kubernetes, the slave pod is destroyed and the web pods are in the Running state.
5. Email notification
Send an email notification after the entire jenkins task is executed successfully
The configuration of the email will be posted in the 4.8 optimization section.
4.8 Optimization
4.8.1 Host the pipeline script together with the code
The advantage of keeping the Jenkinsfile in the code repository is that the Jenkinsfile is version-managed along with the code and stays consistent with the project's life cycle.
First save the pipeline script to the local git repository with the file name Jenkinsfile
The jenkins configuration is as follows
4.8.2. Add email notification after successful construction
1. Email notification requires the use of an installed plug-in Email Extension.
2. Configuration of Email Extension
3. Email template content, html template
4. Configure the system's default mail service; after configuration you can send a test mail.
5. Test the content of the email
Mail template:

${ENV, var="JOB_NAME"} - Build #${BUILD_NUMBER} log
This email is sent automatically by the system; please do not reply!
Hello colleagues, below is the build information for the ${PROJECT_NAME} project.

Build result: ${BUILD_STATUS}

Build information:
- Project name: ${PROJECT_NAME}
- Build number: ${BUILD_NUMBER}
- Build trigger: ${CAUSE}
- Build status: ${BUILD_STATUS}
- Build URL: ${BUILD_URL}
- Build log: ${BUILD_URL}console
- Build history: ${PROJECT_URL}
- Failed test cases: $FAILED_TESTS
- Recent commits (#$SVN_REVISION): ${CHANGES_SINCE_LAST_SUCCESS, reverse=true, format="%c", changesFormat="%d [%a] %m"}
- Commit details: ${PROJECT_URL}changes
I am still a beginner in continuous integration and look forward to your advice.