How do I install and use Jenkins on Kubernetes?


How to install Jenkins on Kubernetes

First, we need to install Helm, which is the package manager for Kubernetes:

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh -v v2.15.0

Next, we need to install Tiller so that Helm works properly:

$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/itspare/.helm.
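Before moving on, it is worth confirming that Tiller actually came up. A quick check like the following (our addition, not part of the original walkthrough) should show the tiller-deploy pod running and matching client/server versions:

$ kubectl -n kube-system get pods -l name=tiller
$ helm version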

After completing these steps, we need to inspect the chart to see the configuration values for the deployment:

$ helm inspect values stable/jenkins > values.yml
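To give a sense of what lives in this file, a trimmed-down values.yml with a few common overrides might look like the following (the keys follow the stable/jenkins chart layout; the values themselves are illustrative assumptions, not the article's configuration):

$ cat values.yml
master:
  adminUser: admin
  serviceType: ClusterIP
persistence:
  enabled: true
  size: 8Gi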

Check the configuration values carefully and change them as needed. Then install the chart:

$ helm install stable/jenkins \
    --tls \
    --name jenkins \
    --namespace jenkins
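If you edited values.yml in the previous step, pass it to the install so your overrides are applied. This is a small addition to the article's command; the -f flag is standard Helm:

$ helm install stable/jenkins --tls --name jenkins --namespace jenkins -f values.yml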

During installation, the chart prints some instructions on what to do next:

Note:

Run the following command to get the password for the "admin" user:

printf $(kubectl get secret --namespace default my-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo

Get the Jenkins URL to visit by running these commands in the same shell:

export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=my-jenkins" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:8080
kubectl --namespace default port-forward $POD_NAME 8080:8080

Follow these steps, and a proxy server will start at http://127.0.0.1:8080.
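Before opening the browser, you can sanity-check that the pod is ready and the tunnel answers; for example (our addition, not from the chart's notes):

$ kubectl get pods --namespace default -l "app.kubernetes.io/instance=my-jenkins"
$ curl -I http://127.0.0.1:8080/login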

Go there and enter your username and password, and you will have your own Jenkins server.

Keep in mind, however, that many configuration options have been left at their defaults.

By default, the server installs only the most basic plug-ins, such as Git and Kubernetes-Jenkins; we can install other plug-ins according to our needs.
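Rather than clicking through the UI, extra plug-ins can also be baked into the deployment through the chart's values. A minimal sketch, assuming the stable/jenkins chart's master.installPlugins list (the plug-in versions here are placeholders):

$ cat > plugins-values.yml <<'EOF'
master:
  installPlugins:
    - kubernetes:latest
    - workflow-aggregator:latest
    - git:latest
EOF
$ helm upgrade jenkins stable/jenkins --tls -f values.yml -f plugins-values.yml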

All in all, it's easy to install Jenkins using Helm.

Scaling the CI/CD Jenkins pipeline with Kubernetes

Now that we have an overview of how CI/CD runs on Kubernetes, let's walk through a sample use case: a highly scalable Jenkins deployment on Kubernetes. Teams often use it (with a few modifications) to handle the CI/CD of their infrastructure, so let's go!

Use a pre-packaged Jenkins distribution

Although the official Jenkins image is fine for getting started, it needs more configuration than we might expect. Many users therefore choose a pre-packaged distribution such as my-bloody-jenkins (https://github.com/odavid/my-bloody-jenkins), which ships with a much more complete set of pre-installed plug-ins and configuration options. Among the available plug-ins, we use saml, SonarQube Runner, Maven, and Gradle.

It can be installed via its Helm chart using the following commands:

$ helm repo add odavid https://odavid.github.io/k8s-helm-charts
$ helm install odavid/my-bloody-jenkins
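As with the stable chart, a release name, namespace, and value overrides can be passed at install time; for example (the flags are standard Helm 2, and my-values.yml is an illustrative file name):

$ helm install odavid/my-bloody-jenkins --name jenkins --namespace jenkins -f my-values.yml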

We chose to deploy a custom image built with the following Dockerfile:

FROM odavid/my-bloody-jenkins:2.190.2-161
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
USER root

The plugins.txt file lists the additional plug-ins we want to pre-install into the image:

build-monitor-plugin
xcode-plugin
rich-text-publisher-plugin
jacoco
scoverage
dependency-check-jenkins-plugin
greenballs
shiningpanda
pyenv-pipeline
s3
pipeline-aws
appcenter
multiple-scms
testng-plugin
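A quick way to verify that the plug-ins actually made it into the image is to build it locally and list the plugin directory that install-plugins.sh populates (the paths follow the standard Jenkins Docker layout; the image tag is illustrative):

$ docker build -t jenkins-master:test .
$ docker run --rm --entrypoint ls jenkins-master:test /usr/share/jenkins/ref/plugins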

Then, whenever the Dockerfile changes, we use this generic Jenkinsfile to build the master:

#!/usr/bin/env groovy
node('generic') {
    try {
        def dockerTag, jenkins_master
        stage('Checkout') {
            checkout([$class: 'GitSCM',
                branches: scm.branches,
                doGenerateSubmoduleConfigurations: scm.doGenerateSubmoduleConfigurations,
                extensions: [[$class: 'CloneOption', noTags: false, shallow: false, depth: 0, reference: '']],
                userRemoteConfigs: scm.userRemoteConfigs,
            ])
            def version = sh(returnStdout: true, script: "git describe --tags `git rev-list --tags --max-count=1`").trim()
            def tag = sh(returnStdout: true, script: "git rev-parse --short HEAD").trim()
            dockerTag = version + "-" + tag
            println("Tag: " + tag + " Version: " + version)
        }
        stage('Build Master') {
            jenkins_master = docker.build("jenkins-master", "--network=host .")
        }
        stage('Push images') {
            docker.withRegistry("https://$env.DOCKER_REGISTRY", 'ecr:eu-west-2:jenkins-aws-credentials') {
                jenkins_master.push("${dockerTag}")
            }
        }
        if (env.BRANCH_NAME == 'master') {
            stage('Push Latest images') {
                docker.withRegistry("https://$env.DOCKER_REGISTRY", 'ecr:eu-west-2:jenkins-aws-credentials') {
                    jenkins_master.push("latest")
                }
            }
            stage('Deploy to K8s cluster') {
                withKubeConfig([credentialsId: 'dev-tools-eks-jenkins-secret', serverUrl: env.TOOLS_EKS_URL]) {
                    sh "kubectl set image statefulset jenkins jenkins=$env.DOCKER_REGISTRY/jenkins-master:${dockerTag}"
                }
            }
        }
        currentBuild.result = 'SUCCESS'
    } catch (e) {
        currentBuild.result = 'FAILURE'
        throw e
    }
}

The dedicated cluster we use for Jenkins jobs consists of a number of large and medium-sized instances on AWS. Now let's move on to the next section.

Use dedicated Jenkins slaves and labels

To scale out some of our Jenkins slaves, we use pod templates and assign labels to specific agents, so that in our Jenkinsfiles we can reference them for jobs. For example, some of our agents need to build Android applications, so we reference the following label:

pipeline {
    agent { label "android" }
    ...
}

The Android-specific pod template is then used. For example, we use this Dockerfile:

FROM dkr.ecr.eu-west-2.amazonaws.com/jenkins-jnlp-slave:latest

RUN apt-get update && apt-get install -y -f --no-install-recommends xmlstarlet

ARG GULP_VERSION=4.0.0
ARG CORDOVA_VERSION=8.0.0

# SDK version and build-tools version should be different
ENV SDK_VERSION 25.2.3
ENV BUILD_TOOLS_VERSION 26.0.2
ENV SDK_CHECKSUM 1b35bcb94e9a686dff6460c8bca903aa0281c6696001067f34ec00093145b560
ENV ANDROID_HOME /opt/android-sdk
ENV SDK_UPDATE tools,platform-tools,build-tools-25.0.2,android-25,android-24,android-23,android-22,android-21,sys-img-armeabi-v7a-android-26,sys-img-x86-android-23
ENV LD_LIBRARY_PATH ${ANDROID_HOME}/tools/lib64/qt:${ANDROID_HOME}/tools/lib/libQt5:$LD_LIBRARY_PATH/
ENV PATH ${PATH}:${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools

RUN curl -SLO "https://dl.google.com/android/repository/tools_r${SDK_VERSION}-linux.zip" \
    && echo "${SDK_CHECKSUM} tools_r${SDK_VERSION}-linux.zip" | sha256sum -c - \
    && mkdir -p "${ANDROID_HOME}" \
    && unzip -qq "tools_r${SDK_VERSION}-linux.zip" -d "${ANDROID_HOME}" \
    && rm -Rf "tools_r${SDK_VERSION}-linux.zip" \
    && echo y | ${ANDROID_HOME}/tools/android update sdk --filter ${SDK_UPDATE} --all --no-ui --force \
    && mkdir -p ${ANDROID_HOME}/tools/keymaps \
    && touch ${ANDROID_HOME}/tools/keymaps/en-us \
    && yes | ${ANDROID_HOME}/tools/bin/sdkmanager --update

RUN chmod -R 777 ${ANDROID_HOME} && chown -R jenkins:jenkins ${ANDROID_HOME}

We also use a Jenkinsfile similar to the previous one to build the image; whenever we make changes to the Dockerfile, the agent image is rebuilt. This gives our CI/CD infrastructure great flexibility.

Use automatic scaling

Although we have assigned a certain number of nodes to the deployment, we can do more by enabling cluster autoscaling. When the workload increases and peaks, we can add extra nodes to handle the jobs; with a fixed number of nodes, we can only handle a fixed number of jobs. We can make a rough estimate based on the fact that each slave is usually allocated 500m of CPU and 256MB of memory and assume high concurrency (for example, a node with 2 vCPUs, i.e. 2000m, would fit at most four such agents on CPU alone), but such a static estimate is not realistic at all.

This can happen, for example, when a release is cut and a large number of microservices need to be deployed. Then a large number of jobs pile up in the pipeline, causing serious delays.

In this case, we can increase the number of nodes for that stage. For example, we can add extra VM instances and then remove them at the end of the process.
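On EKS, for example, bumping a managed node group up before a big release and back down afterwards can be done with eksctl (the cluster and node-group names below are placeholders, not from the article):

$ eksctl scale nodegroup --cluster=tools-cluster --name=jenkins-agents --nodes=10
# ...and at the end of the process:
$ eksctl scale nodegroup --cluster=tools-cluster --name=jenkins-agents --nodes=4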

We can configure "vertical" or "cluster" autoscaling through the autoscaler's command-line options. However, this approach requires careful planning and configuration, because the following sequence sometimes occurs:

More and more jobs arrive and hit a peak.

The autoscaler adds new nodes, but they take 10 minutes to be deployed and allocated.

The old jobs have finished, and new jobs fill the gap, reducing the need for the new nodes.

The new nodes are available but sit stable and unutilized for X minutes, where X is defined by the --scale-down-unneeded-time flag.

The same thing happens many times a day.

In this case, it is best either to tune the autoscaler to our specific needs, or simply to increase the number of nodes for the day and restore them at the end of the process (see the sketch below). All of this comes down to finding the best way to use all resources and minimize costs.
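As a starting point, the relevant knobs live on the cluster-autoscaler command line. A minimal sketch, assuming AWS and an auto-scaling group named jenkins-agents-asg (the flag names are real cluster-autoscaler options; the values and the ASG name are our assumptions):

cluster-autoscaler \
  --cloud-provider=aws \
  --nodes=2:10:jenkins-agents-asg \
  --scale-down-unneeded-time=10m \
  --scale-down-delay-after-add=5m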

In any case, we should now have a scalable and easy-to-use Jenkins cluster: for each job, a fresh pod is created to run a specific pipeline, and it is destroyed after completion.
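You can watch this lifecycle directly; while a pipeline runs, a command like the following (our addition) shows the agent pods appearing and terminating:

$ kubectl get pods --namespace jenkins --watch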
