How to implement CI Workflow based on Jenkins and Kubernetes

2025-03-05 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report

This article explains how to implement a CI workflow based on Jenkins and Kubernetes. The approach described here is simple, fast, and practical, so interested readers may wish to follow along.

Abstract

Jenkins is the most popular continuous integration tool. How to combine it with container technology and a Kubernetes cluster to unlock new capabilities, and to provide a better CI approach for microservice-based applications, is worth continuous exploration by every developer.

Jenkins and Kubernetes

As the most popular continuous integration tool, Jenkins has a large user base, strong extensibility, and a rich plugin ecosystem, making it the CI tool developers reach for most often. Since Jenkins strengthened its Pipeline feature, all kinds of complex workflows can be built from its rich step library. Meanwhile, with the rise of Docker, Jenkins has added Docker support, so pipeline steps can be executed inside containers.

Kubernetes iterates faster and faster, its popularity in the container community keeps growing, and every release adds new features. As the current mainstream container management platform, its powerful capabilities need no introduction here.

Thoughts on Application Containerization and Microservice Design

Containerization does not imply microservices: a traditional monolithic application can also be containerized, although it will then find it hard to enjoy the benefits containerization brings. Nor do microservices require containers: after an application is split into microservices, the system can be built and deployed through traditional operations rather than containers. When microservices and containerization are combined, however, the strengths of both can be fully exploited, bringing elastic scaling, simplified deployment, easy extension, broad technology compatibility, and so on.

When splitting an application into microservices, we first consider the function points, the scope each service controls, the staffing of the development team, the product roadmap, and so on. For example, based on the size, allocation, and proficiency of the existing team, development work can be assigned per service module while keeping the number of modules under control. For function points and medium- to long-term product planning, related functions can be grouped into one service module, with product features delivered in later iterations by extending that module's capabilities; alternatively, part of the functionality can temporarily live in one module and be split out further as features grow or development iterates.

As for module development, because container technology is used, the choice of development language or framework can be left to each module's developers. Within the team we do not make this mandatory, only advisory, to avoid an excessive spread of technology stacks that would make later maintenance difficult. Our team concentrates on two back-end languages, with clear corresponding choices for frameworks and major libraries. For a module's API, we use REST and provide at least Level 2 of the API maturity model.

Compilation and Unit Testing in a Container Environment

Our entire CI workflow is driven by Jenkins, using Jenkins Pipeline. First, Pipeline combines the stages within a job well: the parts shared between modules can be reused, and stages can be added as development complexity grows to cover more requirements. Second, by keeping the Pipeline Groovy script in the source repository, the process can be controlled according to the code's own function points, and the script itself is version-managed.

Thanks to containers, each module's compilation and unit-test environments are also put into containers. Each module can prepare its own compilation, unit-test, and runtime environments according to its own characteristics and requirements. The corresponding Dockerfiles are therefore kept in the code repository, and different Dockerfiles are used to build the images for the different environments.

Meanwhile, Jenkins can now run steps inside containers through the Docker Pipeline plugin, so the actual compile-and-test process we built with it is as follows:

1. Build the compilation-environment image from its Dockerfile.

2. Start a container from the compilation-environment image and compile inside it; the intermediate build products are kept temporarily in the workspace.

3. Build the unit-test-environment image from the test environment's Dockerfile.

4. Start a container from the unit-test-environment image and run the unit tests inside it; the test scripts come from the code repository and use the intermediate products generated at compile time.

5. Build the actual release image from the release Dockerfile and upload it to the image registry.
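Tied together in a Jenkinsfile, the five steps above might look like the following sketch (the image names, registry, script paths, and Dockerfile names are placeholders for illustration, not our actual configuration):

```groovy
node {
    checkout scm
    def tag = env.BUILD_NUMBER

    stage('Build compile image') {
        // Step 1: build the compilation-environment image from its Dockerfile
        sh "docker build -t myapp-build:${tag} -f Dockerfile.build ."
    }
    stage('Compile') {
        // Step 2: compile inside the container; artifacts stay in the workspace
        docker.image("myapp-build:${tag}").inside {
            sh './script/build.sh'
        }
    }
    stage('Build test image') {
        // Step 3: build the unit-test-environment image
        sh "docker build -t myapp-test:${tag} -f Dockerfile.test ."
    }
    stage('Unit test') {
        // Step 4: run the unit tests against the compiled artifacts
        docker.image("myapp-test:${tag}").inside {
            sh './script/unittest.sh'
        }
    }
    stage('Release image') {
        // Step 5: build the release image and push it to the registry
        sh "docker build -t registry.example.com/myapp:${tag} ."
        sh "docker push registry.example.com/myapp:${tag}"
    }
}
```

The compile and test stages run through the Docker Pipeline plugin's `inside` step, so the environment preparation lives entirely in the images rather than on the Jenkins node.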

Because the compilation and unit-test environments do not change often, the two steps that prepare their images can also be extracted into a separate CI job and triggered manually when needed.

Service Deployment and Upgrade

For the CI process, after compilation and packaging, what remains is to start and test the service. We use Kubernetes Deployments and Services. After each CI run compiles and packages, we take the build number as the image tag to upload and archive the image; we then use that tag to modify the Deployment already created in Kubernetes, relying on the Deployment's rolling update to complete the upgrade.
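A deployment stage along these lines might be sketched as follows (the Deployment and container names are hypothetical; the real flow in this article rewrites the Yaml through a template engine rather than calling kubectl directly like this):

```groovy
stage('Deploy') {
    // Use the Jenkins build number as the image tag for this release.
    def tag = env.BUILD_NUMBER
    // Point the existing Deployment at the newly pushed image; Kubernetes
    // then performs a rolling update from the previous tag to this one.
    sh "kubectl set image deployment/myapp myapp=registry.example.com/myapp:${tag}"
}
```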

Extending Kubernetes Service Templates and Service Configuration

When actually deploying services via Kubernetes Deployment upgrades, we found quite a few inconveniences, such as name collisions within Kubernetes and the image-tag changes required by each Deployment upgrade, both of which interact with the CI process. For example, a Deployment whose name is already in use cannot be created, so to test a version by starting a new Deployment you must change its name, and after starting the renamed Deployment you must also start a corresponding Service to expose it. As for the image tag, each CI run generates a new one, so each time you must edit the tag in the corresponding Yaml file to the value produced by that run before using the upgrade feature to roll the service.

To solve these problems, we use a text template engine. The Yaml file used for deployment or upgrade is itself written as a template, with placeholders marking the places that may change or that the CI process must fill in. The concrete template data comes either from Jenkins's CI process, from a specific configuration file, or from explicit input parameters. During the actual deployment, the template data is also written into the cluster as a Kubernetes ConfigMap, so the same data can be used in environment variables or startup commands through the ConfigMap. The text template engine merges template and data, and the resulting Yaml file is used for the subsequent Kubernetes operations.
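As a minimal sketch of what such a template-and-configuration merge looks like (this is an illustrative stand-in, not our actual engine; all names, the `{{KEY}}` placeholder syntax, and the sample Yaml are assumptions for the example):

```python
import re

def render(template: str, config: dict) -> str:
    """Replace every {{KEY}} placeholder in the template with config[KEY]."""
    def substitute(match):
        key = match.group(1)
        if key not in config:
            raise KeyError(f"no value supplied for placeholder {key!r}")
        return str(config[key])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

# A deployment Yaml written as a template: name, replica count, and image
# tag are placeholders filled in by each CI run.
DEPLOYMENT_TEMPLATE = """\
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{NAME}}
spec:
  replicas: {{REPLICAS}}
  template:
    spec:
      containers:
      - name: {{NAME}}
        image: {{IMAGE}}:{{TAG}}
"""

if __name__ == "__main__":
    # In the real flow, TAG would come from the Jenkins build number.
    print(render(DEPLOYMENT_TEMPLATE, {
        "NAME": "user-service",
        "REPLICAS": 2,
        "IMAGE": "registry.example.com/user-service",
        "TAG": "42",
    }))
```

Because every fillable value in the Yaml goes through the same mechanism, the name, labels, and image can all be varied per deployment, which is what resolves the name-collision and tag-modification problems described above.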

In this way, the content to be deployed is separated into templates and configuration. Templates generally stay unchanged as long as there are no major changes to the service architecture, image name, startup method, or configuration parameters; by flexibly using different configurations, we can upgrade a service or pull up a new deployment, point it at different data stores, and modify each module's internal configuration. With this approach, the modifiable content extends from the environment variables and startup commands that a ConfigMap alone can cover to every fillable value in the Yaml file, such as the name, labels, and image, which resolves the problems encountered with Kubernetes: name collisions, image-tag modification, label additions and changes, and so on.

Automated testing

After Jenkins has driven compilation, packaging, and the upgrade deployment, automated testing can be pulled in as well. We chose Robot Framework as the testing framework. The test scripts obtain the service's exposed port through the Kubernetes Service and then run API tests against it. The scripts come partly from each module's code repository, where module developers submit API tests for their own modules, and partly from cross-module tests written and submitted by the testers. Our automated-testing coverage is not yet very high; we have only set up a basic operating framework that can be docked with the whole process.

Version Release

Because the product itself is composed of several images, a product release boils down to an image release. After the tests pass, the image-replication capability can simply copy the tested image versions between registries, from the internal registry used for development and testing to the external release registry, completing the release. At the same time, tag control during replication lets you release under a specified version number.

Summary

The above describes how we set up the CI process in our own development workflow. Establishing the whole flow required no special tricks, and we use the various tools in no remarkable way; we simply built a CI flow adapted to our own development needs. We present it here in the hope of starting a discussion and getting more chances to exchange ideas and learn from you.

Q&A

Q1: The content shared is still somewhat high-level. Are there concrete examples or intuitive illustrations?

A: Our Kubernetes-based CICD product will be released soon, together with a corresponding Demo platform. Please stay tuned. Thank you!

Q2: Can you share a few plugins for Jenkins and Docker integration?

A: The Docker Pipeline, Docker Plugin, and docker-build-step plugins.

Q3: What is the general idea of Rongyun's image replication? At present our testing, pre-release, and release stages share one repository, distinguished by auxiliary module tags.

A: Simply put, the image-replication feature clones images between different projects. In our image repository design, images at different stages (development, testing, and production-ready) are grouped into different projects. Based on the image-copy feature, you can quickly release the product at each stage, that is, release the images. This feature can be downloaded from AppHouse for trial.

Q4: Is your replication synchronous? Is your configuration managed through a configuration center or through environment variables between images?

A: It is not synchronized in real time, but done through the image-copy capability of our company's registry product. At present, the product's runtime configuration during development is handled by our own enhanced configuration-file capability; the configuration is used to modify the Yaml file with which the application is deployed, and is also generated as a ConfigMap.

Q5: Is there a simple sample to start with and practice on?

A: Our Kubernetes-based CICD product will be released soon, together with a corresponding Demo platform. Please stay tuned. Thank you!

Q6: Can you provide a step-by-step Demo for this content?

A: Our Kubernetes-based CICD product will be released soon, together with a corresponding Demo platform. Please stay tuned. Thank you!

Q7: Version 2 of my project only modified a few pages. Is the process the same as for the first release?

A: Yes. A process means repeatable and automated, without having to care about the specific changes.

Q8: Which version of K8s do you deploy? The K8s scheduler only decides which node a container runs on. If the RC has 2 replicas, how are build tasks scheduled across containers? Why do all of mine run in the same container?

A: We are currently using Kubernetes 1.5. The build tasks themselves do not run on Kubernetes; only the applications we build run on Kubernetes.

Q9: Is your platform similar to OpenShift? Or do developers need to deploy it themselves?

A: We have been describing our own product-development process. The related products will be similar to platform products and need to be self-deployed; they are not public-cloud products.

Q10: Are the two frameworks open source?

A: Yes, they are both open source.

Q11: How does the template for the K8s Yaml deployment file replace placeholders? And what happens to the old version of the containers?

A: We use a text template engine we developed ourselves, similar to the template engines in web frameworks: it merges template and configuration, replacing each key's placeholder in the template with that key's value from the configuration. As for the old version, because the rolling update of the K8s Deployment is used, the old containers/Pods are deleted by Kubernetes once the upgrade completes.

Q12: Would it be convenient to share the images for the development, testing, and deployment stages?

A: Those images are prepared specifically for our application modules and would be of no use to others. Many of the compilation-environment images are official images, such as golang:1.7 and python:2.7.

Q13: How can a traditional monolithic application be containerized, and can it be done in stages?

A: Each team can choose its own process for the transformation.

Q14: Can databases be containerized?

A: You can containerize a database by mounting a data volume into the container, but it is still rare in real projects.

Q15: At present we also use Jenkins to dynamically mount containers to run Android build jobs, but each build now takes more than ten minutes longer than on a physical machine whenever the container is generated from an image. Our images are built and downloaded automatically every day, but because code updates are so fast and concurrency is so high, compilation needs to monopolize a physical machine, which ties up a lot of machines. Do you have any suggestions for this problem? Thank you!

A: Containerized compilation encapsulates the compilation environment, so one physical machine can run multiple jobs concurrently. You could look at improving the utilization of the physical machines.

Q16: There are multiple images in the project. How is a coordinated release done?

A: We use Jenkins's Multi Pipeline. Besides, because our modules are split into microservices, the need to release images synchronously is not high.

Q17: There is a scenario where two services are dependent: service A depends on service B. How can the startup order of A and B be guaranteed?

A: A good design lets service A detect, discover, and invoke service B after startup, rather than depending strongly on startup order.

Q18: Regarding the Kubernetes service templates and configuration, where does the template come from? Is it arranged by the user, or prepared by you in advance? And how is the configuration data stored?

A: Because the current templates and configurations are only used to launch the applications we develop ourselves, the templates are prepared by us for our own applications. Configuration data is stored as files, but when the text engine merges template and configuration it can also accept parameters as configuration.

Q19: What are CI and CD?

A: CI leans toward actions such as compiling the application, code review, and unit testing, while CD leans toward deploying and running the application. After compilation and packaging, our development process actually runs the application for testing, which can also be regarded as a CD whose purpose is testing.

Q20: What are the main benefits of wrapping the Jenkins process in containers?

A: Different programs depend on different compilation environments. The original Jenkins approach prepares the environment on the Jenkins node; with containers, the environment preparation moves into the container, further reducing the dependence on the Jenkins node. At the same time, environment changes can be controlled by the developers themselves.

Q21: Do multiple compilation environments use different images? How does the pipeline handle the compilation environment?

A: Yes. Because each module of our product has its own language and framework, each module maintains its own compilation-environment image. When compiling with Pipeline, you use the compilation environment by running a container from that image and compiling inside it.

Q22: The output of a product release is a set of images. If there are many images, you will eventually put a unified tag on them. When is that tag applied?

A: We apply the tag during image replication at product-release time, when we copy from the development/testing registry to the release registry, using the image-replication capability of our own registry product.

Q23: Is Jenkins itself also deployed in Docker? If so, how does it use Docker to run CI from inside Docker?

A: Yes, we are also trying to run Jenkins itself in a container. In that case, the Jenkins container runs with root permission and mounts docker.sock and the Docker data directory into the container.

Q24: Before compiling with Pipeline, does building the compilation-environment image take a long time? Is there an optimization plan?

A: Since the compilation image does not change often, its build can usually use the cache directly. In addition, we extracted the step that packages the compilation-environment image into a separate job executed on its own, so the compilation environment need not be rebuilt during the actual compilation.

Q25: Between using Jenkins's Docker plugin directly and calling scripts with hand-written Docker commands, how do you trade off flexibility against convenience?

A: That is up to the individual or the development team, but it is best to be consistent within the team so the result can be maintained accordingly.

Q26: How are Jenkins and Kubernetes users managed? My expectation is that users can only see their own resources and have no permissions on others'.

A: In the development environment we only use these two tools internally, not externally, so there are no user-management problems. In the CICD product our company is developing, we have our own design for this.

Q27: How is Jenkins's continuous integration triggered? For example, commit triggers for different source repositories such as GitHub and GitLab, and how do you control the version number?

A: Jenkins CI processes can be triggered in many ways: on code commit, on a timer, or manually. There are also many ways to control version numbers, such as using the job number, the Git commit hash, or timestamps.

Q28: Does Jenkins use any plugin to call the K8s API?

A: We do not use Jenkins to call the Kubernetes API directly.

Q29: Jenkins Pipeline supports pipelined execution, which usually starts from the beginning. Can it be executed from an arbitrary step in the pipeline?

A: Because our team's Jenkinsfile is placed directly in the code repository, every execution runs the pipeline from the beginning.

Q30: After containerization, releases still go through Jenkins, and releasing with Docker does not feel more convenient than with Jenkins alone. Apart from the portability of containers, is there any other reason that makes project containerization worth promoting?

A: Application containerization is more about the capabilities gained once the application runs on a container management platform, such as horizontal scaling, service discovery, and rolling upgrades on Kubernetes.

Q31: A K8s rolling update needs a new image to roll the service. If only the ConfigMap is updated, is there a way to do a rolling update of the service?

A: After our CI process completes, each module's image tag changes. We use the newly generated tag to produce the configuration, and the deployment Yaml file is written as a template; the concrete image tag is merged into a fresh Yaml file according to the configuration generated by each CI run, and that merged Yaml, with the tag already updated to the latest version, is then used for the rolling upgrade of the application.

Q32: How is the Pipeline's build-in-image scheme implemented? Do you use a ready-made Jenkins plugin, or implement it with a Groovy script?

A: Jenkins's Docker plugin is used, and the corresponding commands are written in the Groovy script. For example:

    stage('Build') {
        docker.image('golang:1.7').inside {
            sh './script/build.sh'
        }
    }

At this point, I believe you have a deeper understanding of how to implement a CI workflow based on Jenkins and Kubernetes. You might as well try it out in practice! Follow us for more related content and continued learning!



