This article explains how to run CI/CD on Kubernetes at scale. The editor finds it very practical and shares it with you here, in the hope that you get something out of it after reading.
In the cloud native domain, Kubernetes has accumulated a large number of use cases. It can deploy application containers in the cloud, schedule batch jobs, process workloads, and perform rolling upgrades. Kubernetes handles these operations with efficient orchestration algorithms, even in large clusters.
In addition, one of the main use cases of Kubernetes is to run continuous integration or continuous delivery (CI/CD) pipelines. That is, we deploy a unique instance of a CI/CD container that monitors the code version control system, so whenever we push to the repository, the container runs the pipeline steps. The ultimate goal is to reach a "true or false" state: true means the commit passed the various tests of the integration phase, false means it failed.
In addition to the CI pipeline described above, after the CI tests pass, another pipeline can take over the rest of the process and handle the CD part of the release. At this stage, the pipeline attempts to deliver the application container to production.
It is important to understand that these actions run on demand or are triggered automatically by various events (such as code check-ins, test triggers, the results of the previous step in the process, and so on). So we need a mechanism to add individual nodes to run those pipeline steps and phase them out when they are no longer needed. This approach to managing immutable infrastructure helps us save resources and reduce costs.
Of course, the most critical mechanism here is Kubernetes itself: its declarative structure and customizability allow you to efficiently schedule jobs, nodes, and pods in any scenario.
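As a minimal sketch of this declarative model (this example is not from the original article; the image and command are placeholders), a single pipeline step can be expressed as a Kubernetes Job that the scheduler places on any available node and that reports a clean succeeded/failed status:

# Illustrative one-off Job for a single pipeline step; image and command are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: ci-step-example
spec:
  backoffLimit: 1          # retry a failed step at most once
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: step
        image: alpine:3.18                       # placeholder build image
        command: ["sh", "-c", "echo running tests && exit 0"]

Kubernetes schedules the pod, retries it up to backoffLimit on failure, and records success or failure, which maps directly onto the "true or false" outcome a CI step needs.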
CI/CD platforms for Kubernetes
Kubernetes is an ideal platform for running CI/CD because it has many features that make this easier. So, which CI/CD platforms can run on Kubernetes? It is fair to say that Kubernetes can run any of them, as long as they can be packaged as containers. Here are some of the most popular CI/CD platforms:
Jenkins: Jenkins is the most popular and stable CI/CD platform. Thousands of enterprises around the world use it because of its strong ecosystem and extensibility. If you plan to use it on Kubernetes, it is highly recommended that you install its official plugin. Jenkins X is a version of Jenkins designed specifically for the cloud native domain; it is more tightly integrated with Kubernetes and provides features such as GitOps, automated CI/CD, and preview environments.
Spinnaker: Spinnaker is a scalable CD platform for multi-cloud deployments, backed by Netflix. You can install it using the relevant Helm Chart:
https://github.com/helm/charts/tree/master/stable/spinnaker
Drone: This is a versatile cloud native CD platform with many features. You can run it on Kubernetes using the associated Runner.
GoCD: Another CI/CD platform, from Thoughtworks, that provides a variety of workflows and capabilities for cloud native deployments. It can be run on Kubernetes as a Helm Chart.
In addition, there are cloud services that work closely with Kubernetes and provide CI/CD pipelines, such as CircleCI and Travis. These are also useful if you do not plan to self-host your CI/CD platform.
Now, let's look at how to install Jenkins on a Kubernetes cluster.
How to install Jenkins on Kubernetes
First, we need to install Helm, which is the package manager for Kubernetes:
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh -v v2.15.0
Next, we need to install Tiller for Helm to work properly:
$ kubectl -n kube-system create serviceaccount tiller
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller created
$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/itspare/.helm.
After completing these steps, we run the inspect command to see the configuration values of the deployment:
$ helm inspect values stable/jenkins > values.yml
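As a rough sketch of what the overrides might look like (this is not from the original article, and the exact keys differ between chart versions, so verify them against the inspected values):

# Illustrative values.yml overrides for the stable/jenkins chart; verify key names
# against the output of helm inspect, as they vary between chart versions.
master:
  adminUser: admin
  installPlugins:
    - kubernetes:1.21.1
    - workflow-aggregator:2.6
    - git:3.12.1
  resources:
    requests:
      cpu: "500m"
      memory: "512Mi"
persistence:
  enabled: true
  size: 8Gi

A file like this is typically passed to the install command with -f values.yml.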
Check the configuration values carefully and change them as needed. Then install the Chart:
$ helm install stable/jenkins --tls \
    --name jenkins \
    --namespace jenkins
During installation, the chart prints some notes on what to do next:
Note:
Run the following command to get the password for the "admin" user:
printf $(kubectl get secret --namespace default my-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo
Get the Jenkins URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=my-jenkins" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:8080
kubectl --namespace default port-forward $POD_NAME 8080:8080
Follow these steps, and a local proxy to the Jenkins server will be available at http://127.0.0.1:8080.
Go there and enter your username and password. You will then have your own Jenkins server.
Keep in mind, however, that there are many configuration options that have not been modified, and you can visit the chart documentation for more information:
https://github.com/helm/charts/tree/master/stable/jenkins
By default, the server installs only the most basic plugins, such as Git and the Kubernetes plugin for Jenkins; we can install other plugins according to our own needs.
All in all, it's easy to install Jenkins using Helm.
Scaling CI/CD Jenkins pipelines with K8S
Now that we have an overview of how CI/CD runs on Kubernetes, let's look at a sample use case: deploying a highly scalable Jenkins setup on Kubernetes. People usually use it (with a few modifications) to handle the CI/CD of their infrastructure, so let's go!
Use a prepackaged Jenkins distribution
Although the official Jenkins image is fine for getting started, it requires more configuration than we would expect. Many users therefore choose a prepackaged distribution, such as my-bloody-jenkins (https://github.com/odavid/my-bloody-jenkins), which provides a more complete set of pre-installed plugins and configuration options. Among the available plugins, we use the SAML plugin, SonarQube Runner, Maven, and Gradle.
It can be installed via Helm Chart using the following command:
$ helm repo add odavid https://odavid.github.io/k8s-helm-charts
$ helm install odavid/my-bloody-jenkins
We chose to build our own custom image using the following Dockerfile:
FROM odavid/my-bloody-jenkins:2.190.2-161
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
USER root
The plugins.txt file lists the additional plugins that we want to pre-install into the image:
build-monitor-plugin
xcode-plugin
rich-text-publisher-plugin
jacoco
scoverage
dependency-check-jenkins-plugin
greenballs
shiningpanda
pyenv-pipeline
s3
pipeline-aws
appcenter
multiple-scms
testng-plugin
Then, whenever the Dockerfile changes, we use this generic Jenkinsfile to build the master:
#!/usr/bin/env groovy

node('generic') {
    try {
        def dockerTag, jenkins_master

        stage('Checkout') {
            checkout([$class: 'GitSCM',
                branches: scm.branches,
                doGenerateSubmoduleConfigurations: scm.doGenerateSubmoduleConfigurations,
                extensions: [[$class: 'CloneOption', noTags: false, shallow: false, depth: 0, reference: '']],
                userRemoteConfigs: scm.userRemoteConfigs,
            ])

            def version = sh(returnStdout: true, script: "git describe --tags `git rev-list --tags --max-count=1`").trim()
            def tag = sh(returnStdout: true, script: "git rev-parse --short HEAD").trim()
            dockerTag = version + "-" + tag
            println("Tag: " + tag + " Version: " + version)
        }

        stage('Build Master') {
            jenkins_master = docker.build("jenkins-master", "--network=host .")
        }

        stage('Push images') {
            docker.withRegistry("https://$env.DOCKER_REGISTRY", 'ecr:eu-west-2:jenkins-aws-credentials') {
                jenkins_master.push("${dockerTag}")
            }
        }

        if (env.BRANCH_NAME == 'master') {
            stage('Push Latest images') {
                docker.withRegistry("https://$env.DOCKER_REGISTRY", 'ecr:eu-west-2:jenkins-aws-credentials') {
                    jenkins_master.push("latest")
                }
            }

            stage('Deploy to K8s cluster') {
                withKubeConfig([credentialsId: 'dev-tools-eks-jenkins-secret',
                    serverUrl: env.TOOLS_EKS_URL]) {
                    sh "kubectl set image statefulset jenkins jenkins=$env.DOCKER_REGISTRY/jenkins-master:${dockerTag}"
                }
            }
        }
        currentBuild.result = 'SUCCESS'
    } catch (e) {
        currentBuild.result = 'FAILURE'
        throw e
    }
}
The dedicated cluster we use consists of a number of large and medium-sized AWS instances that run the Jenkins jobs. Now let's move on to the next section.
Use dedicated Jenkins slaves and labels
To scale out some of our Jenkins slaves, we use pod templates and assign labels to specific agents. In our Jenkinsfiles, we can then reference those labels for jobs. For example, we have some agents that need to build Android applications, so we reference the following label:
pipeline {
    agent { label "android" }
    ...
The Android-specific pod template will then be used. For example, we use this Dockerfile for its image:
FROM dkr.ecr.eu-west-2.amazonaws.com/jenkins-jnlp-slave:latest

RUN apt-get update && apt-get install -y -f --no-install-recommends xmlstarlet

ARG GULP_VERSION=4.0.0
ARG CORDOVA_VERSION=8.0.0

# SDK version and build-tools version should be different
ENV SDK_VERSION 25.2.3
ENV BUILD_TOOLS_VERSION 26.0.2
ENV SDK_CHECKSUM 1b35bcb94e9a686dff6460c8bca903aa0281c6696001067f34ec00093145b560
ENV ANDROID_HOME /opt/android-sdk
ENV SDK_UPDATE tools,platform-tools,build-tools-25.0.2,android-25,android-24,android-23,android-22,android-21,sys-img-armeabi-v7a-android-26,sys-img-x86-android-23
ENV LD_LIBRARY_PATH ${ANDROID_HOME}/tools/lib64/qt:${ANDROID_HOME}/tools/lib/libQt5:$LD_LIBRARY_PATH/
ENV PATH ${PATH}:${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools

RUN curl -SLO "https://dl.google.com/android/repository/tools_r${SDK_VERSION}-linux.zip" \
    && echo "${SDK_CHECKSUM} tools_r${SDK_VERSION}-linux.zip" | sha256sum -c - \
    && mkdir -p "${ANDROID_HOME}" \
    && unzip -qq "tools_r${SDK_VERSION}-linux.zip" -d "${ANDROID_HOME}" \
    && rm -Rf "tools_r${SDK_VERSION}-linux.zip" \
    && echo y | ${ANDROID_HOME}/tools/android update sdk --filter ${SDK_UPDATE} --all --no-ui --force \
    && mkdir -p ${ANDROID_HOME}/tools/keymaps \
    && touch ${ANDROID_HOME}/tools/keymaps/en-us \
    && yes | ${ANDROID_HOME}/tools/bin/sdkmanager --update

RUN chmod -R 777 ${ANDROID_HOME} && chown -R jenkins:jenkins ${ANDROID_HOME}
We also use a Jenkinsfile similar to the previous one to build the master. Whenever we change the Dockerfile, the agent image is rebuilt. This gives our CI/CD infrastructure great flexibility.
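To make the wiring between the label and the custom image concrete, here is a sketch (not from the original article; the label key, image reference, and resource figures are illustrative) of roughly what the pod behind the "android" agent looks like once the Kubernetes plugin creates it:

# Illustrative pod resolved from the "android" pod template; values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins/label: android          # placeholder label key used for matching
spec:
  restartPolicy: Never
  containers:
  - name: jnlp                      # the agent container the Kubernetes plugin expects
    image: <registry>/jenkins-android-slave:latest   # image built from the Dockerfile above
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"

When a Jenkinsfile requests agent { label "android" }, a pod along these lines is started, the job runs inside it, and the pod is destroyed when the job finishes.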
Use automatic scaling
Although we have assigned a certain number of nodes to the deployment, we can do more by enabling cluster autoscaling. This means that when the workload increases and peaks, we can add extra nodes to handle the jobs. With a fixed number of nodes we can only handle a fixed number of jobs; we could make a rough estimate based on the fact that each slave is usually allocated 500m of CPU and 256MB of memory and set a high concurrency limit, but that is not realistic at all.
For example, this can happen when a new release is cut and a large number of microservices need to be deployed: a large number of jobs pile up in the pipeline, causing serious delays.
In this case, we can increase the number of nodes for that period. For example, we can add extra VM instances and then remove them at the end of the process.
We can configure "vertical" or "cluster" autoscaling options on the command line. However, this method requires careful planning and configuration, because the following sequence sometimes occurs:
More and more jobs arrive and reach a plateau.
The autoscaler adds new nodes, but they take 10 minutes to be deployed and assigned.
Old jobs finish, and new jobs fill the gap, reducing the need for the new nodes.
The new nodes become available, but sit idle and underutilized for X minutes, where X is defined by the --scale-down-unneeded-time flag.
The same thing happens many times a day.
In this case, it is best to configure autoscaling according to our specific needs, or simply increase the number of nodes for that day and restore them at the end of the process. All of this comes down to finding the best way to use all the resources and minimize costs.
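For reference, the scale-down behavior mentioned above is controlled by flags on the cluster-autoscaler itself. A minimal sketch, assuming a cluster-autoscaler deployed on AWS (the image tag, node-group name, and timing values are placeholders):

# Illustrative excerpt of a cluster-autoscaler Deployment; adapt all values to your cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
      - name: cluster-autoscaler
        image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.18.3   # placeholder tag
        command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --nodes=2:10:jenkins-agents            # min:max:node-group name (placeholder)
        - --scale-down-unneeded-time=10m         # how long a node may sit idle before removal
        - --scale-down-delay-after-add=10m       # grace period after a scale-up

Tuning the two scale-down flags against the roughly 10-minute provisioning delay described above is usually what keeps nodes from being added and removed several times a day.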
In any case, we now have a scalable and easy-to-use Jenkins cluster: for each job, a pod is created to run a specific pipeline and is destroyed when it completes.
Best practices for large-scale use of K8s for CI/CD
Now we know which CI/CD platforms are available for Kubernetes and how to install one on a cluster. Next, let's discuss some practices for running them at scale.
First of all, the choice of Kubernetes distribution is one of the most critical factors to consider. Find the most appropriate solution before proceeding to the next step.
Second, choosing the right Docker image registry is just as important as choosing the application package manager. We need secure and reliable image management from which images can be pulled quickly on demand. As for the package manager, Helm is a good choice because it can discover, share, and use software built for Kubernetes.
Third, using modern integration processes such as GitOps and ChatOps provides significant advantages in ease of use and predictability. Using Git as the single source of truth allows us to "operate by pull request", simplifying control of deployments to infrastructure and applications. Using team collaboration tools such as WeCom or DingTalk to trigger automated tasks in the CI/CD pipeline helps us eliminate duplicated effort and simplify integration.
Finally, if we want to go further, we can customize or develop our own K8s Operator to work more closely with the K8s API. There are many benefits to using custom operators, because they create a better automation experience.
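As a rough illustration of the idea (not from the original article; the group, kind, and fields are hypothetical), an operator usually starts from a CustomResourceDefinition that teaches the API server a new resource type, which the operator then watches and reconciles:

# Hypothetical CRD an operator might watch; group, kind, and fields are illustrative.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: pipelines.ci.example.com
spec:
  group: ci.example.com
  scope: Namespaced
  names:
    kind: Pipeline
    plural: pipelines
    singular: pipeline
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              repository:
                type: string
              agentLabel:
                type: string

A controller would then watch Pipeline objects and create the corresponding Jenkins jobs or Kubernetes Jobs for them.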
The above is how to run CI/CD on Kubernetes at scale. The editor believes these are knowledge points you may encounter or use in daily work, and hopes you can learn more from this article.