2025-04-05 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 05/31 Report --
This article introduces how to combine DevOps with OpenShift to achieve a "1 + 1 > 2" effect. The content is detailed; interested readers can use it for reference, and I hope you will find it helpful.
Foreword:
In recent years, the concept of DevOps has gradually gained widespread acceptance. As containers, Docker, Kubernetes, and OpenShift have come into view, more and more enterprises have begun to use these technologies in production. While these technologies and ideas continue to empower software development, some may wonder how technologies such as Kubernetes and OpenShift join the DevOps toolchain to further improve production efficiency and quality.
Today, let's take a look at how DevOps combines with OpenShift to achieve a 1 + 1 > 2 effect.
What is a container?
A container is a lightweight virtualization technology at the operating-system (kernel) layer.
The nature of a container, in a word, is a set of processes that are resource-constrained and isolated from each other.
Docker, one such container technology, is an open source application container engine.
What are the characteristics of containers?
1. Extremely lightweight: only the necessary binaries and libraries are packaged
2. Second-level deployment: depending on the image, a container starts in milliseconds to seconds (far faster than a virtual machine)
3. Easy to migrate: build once, deploy anywhere
4. Auto scaling: open source, convenient, and easy-to-use container management platforms such as Kubernetes, Swarm, and Mesos provide very strong elastic management capabilities
What are the benefits of using containerization technology?
In traditional development scenarios, the development/test team and the production operations team use different infrastructure, usually distinguishing environments with the same media but different configuration files. Even so, problems caused by team handoffs or environment dependencies still arise when moving between environments. The greatest convenience of containerization is that it simplifies collaboration between teams: a container image packages not only the application but also the entire runtime environment the application runs in, which shields the problems caused by environment differences. In this way, the environment and dependencies the application requires are highly consistent no matter where it runs.
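As a minimal, hypothetical sketch of this idea, a Dockerfile captures both the application and its runtime environment in a single image (the base image, artifact name, and port below are illustrative assumptions, not from the article):

```dockerfile
# Base image pins the runtime environment (OS layer plus Java runtime).
FROM openjdk:11-jre-slim

# Copy only the application artifact; the image already carries everything else it needs.
COPY target/demo-app.jar /opt/app/demo-app.jar

# The same image runs identically in development, test, and production.
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/app/demo-app.jar"]
```

Because the image is the unit of delivery, the dev/test and operations teams exchange one immutable artifact instead of a medium plus per-environment configuration files.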
When the number of containers reaches a certain order of magnitude, how can we maintain and manage them efficiently?
The answer is to use the Kubernetes container management tool.
What is Kubernetes?
Kubernetes is a container cluster management system that provides automatic deployment, automatic scaling, maintenance, and other functions to ensure the high availability of applications.
Through Kubernetes you can:
1. Deploy applications rapidly
2. Scale applications rapidly and expand their capacity
3. Roll out new application features seamlessly
4. Save resources and optimize the use of hardware
What are the characteristics of Kubernetes?
1. Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud
2. Extensible: modular, pluggable, mountable, composable
3. Automated: automatic deployment, automatic restart, automatic replication, automatic scaling/expansion
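As an illustrative transcript (the deployment name and image are hypothetical, and the commands assume access to a running cluster), the capabilities above map onto everyday kubectl commands:

```shell
# Deploy an application rapidly (hypothetical image name).
kubectl create deployment demo-app --image=registry.example.com/demo-app:1.0

# Scale it out manually, or let the cluster scale it automatically.
kubectl scale deployment demo-app --replicas=5
kubectl autoscale deployment demo-app --min=2 --max=10 --cpu-percent=80

# Roll out a new version seamlessly; Kubernetes replaces pods gradually.
kubectl set image deployment/demo-app demo-app=registry.example.com/demo-app:1.1
kubectl rollout status deployment/demo-app
```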
What is OpenShift?
OpenShift is a cloud platform based on the mainstream container technologies Docker and Kubernetes. Under the hood, OpenShift uses Docker as the container engine and Kubernetes as the container orchestration component. On top of that, OpenShift provides elements such as development languages, middleware, automated process tools, and interfaces, forming a complete container-based application cloud platform.
You can simply think of OpenShift as a "Kubernetes Pro Plus": if we can integrate with OpenShift, it is largely equivalent to integrating with Kubernetes.
Why do I need to manage OpenShift through DevOps?
To keep business applications behaving consistently across the test, pre-release, and production environments, we use a series of container-related tools that help reduce problems caused by inconsistent media and environments. As we use these tools more deeply, however, we also want to abstract further: whether our business applications are deployed on cloud hosts or on a container cloud, we want to deploy them in the same way. We also want to simplify operations further, achieve true one-click deployment, and effectively improve productivity and quality.
The core value of DevOps is to improve an enterprise's production efficiency and quality by connecting the various toolchains. We can say goodbye to the black-and-white Linux console, because a friendly visual interface improves the operations experience geometrically. We no longer worry about the deployment environment, because one DevOps setup can easily manage multiple environments. And instead of being distracted by piles of configuration files, we can customize configuration flexibly through a rich set of templates.
The figure above shows the system architecture at deployment time in a typical DevOps implementation. Deployment is usually divided into separate areas such as the development, test, and production environments, and the resources of these areas are isolated from each other so that they cannot affect one another.
In our development and test pipeline, a developer first commits code, which triggers the continuous integration pipeline: the build server automatically pulls the code, scans its quality, compiles the artifacts, and finally uploads the artifacts to the media repository. We support configurable synchronization strategies for media repositories, such as scheduled synchronization and scheduled push or pull between repositories in different environments, which ensures that the deployed media are consistent across environments. Images are also a type of deployment media: after an image is built and uploaded, it can likewise be synchronized to the image repositories of different environments. When the continuous deployment process is triggered, the deployment server deploys the media to the application host or to the container cloud environment. For an application host, the media comes from the media repository server; for the container cloud, the image comes from the image repository.
How do we design and land?
Our overall design for continuous integration and continuous deployment is to design in DevOps, execute through Jenkins, and deploy through OpenShift. In DevOps we define multiple atomic tasks in a pipeline, then turn the build definition or deployment architecture design into a Jenkins Pipeline Job configuration file; Jenkins then creates and executes the Pipeline Job according to this dynamically generated configuration file. After deployment, DevOps tracks execution progress and results by calling Jenkins's REST API, and obtains the status of application container instances and operates on them through OpenShift's REST API.
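A minimal sketch of what such a generated Pipeline Job might look like (the stage names, repository URL, and build steps are illustrative assumptions, not the actual generated output):

```groovy
// Hypothetical generated Jenkins Pipeline: each stage corresponds to one atomic task.
pipeline {
    agent any
    stages {
        stage('Pull Code') {
            steps { git url: 'https://git.example.com/demo-app.git', branch: 'master' }
        }
        stage('Code Scan') {
            steps { sh 'mvn sonar:sonar' }   // assumed quality-scan step
        }
        stage('Build') {
            steps { sh 'mvn clean package' }
        }
        stage('Upload Artifact') {
            steps { sh 'mvn deploy' }        // push the artifact to the media repository
        }
    }
}
```

Because DevOps generates this file from the atomic tasks, changing the pipeline design regenerates the Job rather than requiring hand-edits in Jenkins.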
Image construction and upload
Before deployment, we need to prepare the media. DevOps provides a series of tasks that help us easily build and upload images. For Kubernetes and OpenShift, the deployment media is the image, which means that as long as we can reach the network of the image repository, we are compatible with different types of container cloud, whether deploying to Kubernetes or to OpenShift.
DevOps provides a variety of image build tasks: an image can be built from a specified base image or from a Dockerfile, which is very flexible. Generally speaking, enterprises build private image repositories within their private network. After a build, we need to push the image to the trusted image repository.
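A hedged transcript of what such a build-and-push task does under the hood (the registry hostname, project path, and tags are hypothetical):

```shell
# Build the image from a Dockerfile, tag it for a (hypothetical) private registry,
# and push it to the trusted image repository.
docker build -t demo-app:1.0 .
docker tag demo-app:1.0 registry.example.com/devops/demo-app:1.0
docker push registry.example.com/devops/demo-app:1.0
```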
Component type expansion
We added a new component type for OpenShift. A component is the smallest unit of deployment and contains all kinds of information about the deployment media: you can trace back to the code, branch, and build number that produced the media, and track the application after deployment along with state changes such as upgrades and rollbacks. A component also contains its deployment information in each environment and details of its running instances, such as access address and running status.
Resource type expansion
We added a deployment resource of type OpenShift, which is completely independent of the media. It describes the environment (container cloud) to which our media (image) needs to be deployed and the namespace within that container cloud. The resource also records the other information needed to access the container cloud, such as the user name, password or API token, and the certificate of the container cloud.
Deployment atomic task expansion
DevOps has strong extension capabilities: OpenShift deployment can be extended by adding atomic tasks and their attribute parameters to the database, such as the name and version of the image, the YAML (or YAML template) used for deployment, the OpenShift resource used for deployment, the image repository from which to pull the image, and so on.
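A minimal sketch of such a parameterized deployment template (the placeholder syntax, names, and labels are illustrative assumptions about how the atomic task's parameters might be substituted):

```yaml
# Hypothetical template: ${IMAGE_NAME}, ${IMAGE_VERSION}, ${REGISTRY} are filled in
# from the atomic task's attribute parameters at deployment time.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: ${IMAGE_NAME}
spec:
  replicas: 2
  selector:
    app: ${IMAGE_NAME}
  template:
    metadata:
      labels:
        app: ${IMAGE_NAME}
    spec:
      containers:
        - name: ${IMAGE_NAME}
          image: ${REGISTRY}/${IMAGE_NAME}:${IMAGE_VERSION}
          ports:
            - containerPort: 8080
```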
What are the benefits of this approach?
The advantages of this DevOps pipeline design are obvious. CI/CD removes a great deal of repetitive work from development, testing, and deployment, reduces manual errors, and greatly increases the frequency of functional verification. Developers and testers get changes earlier, enter testing earlier, and find problems earlier, which shortens the development cycle and greatly reduces the cost of fixing problems. Developers can catch errors earlier and spend less effort resolving them, and if an error is found during deployment, they can roll back to the previous version, ensuring that a deployable version of the deliverable is always available.
The OpenShift Client plug-in: introduction and pitfalls
We use the OpenShift Client Plugin in Jenkins to operate on OpenShift. This plug-in is concise, comprehensive, and readable, and provides a fluent Jenkins Pipeline syntax for interacting with the OpenShift server. Below I describe some of the problems we encountered while using it.
The plug-in relies on the OpenShift command line tool (oc), which must be available on the node where the script is executed. Our Jenkins master and agent nodes therefore must have the oc command installed and its environment variables configured, and must be able to reach the network of the OpenShift cluster we want to manage.
The plug-in requires us to configure OpenShift's certificate and API token. The certificate can be copied directly from the OpenShift server at /etc/origin/master/ca.crt. As for obtaining the API token, I have summarized the following two ways:
1. Through the command line:
oc login -u <username> -p <password>
oc whoami -t
2. Through the REST API:
curl -u admin:abc123 -kv -H "X-CSRF-Token: xxx" 'https://<master>:8443/oauth/authorize?client_id=openshift-challenging-client&response_type=token'
The token is then extracted from the returned response header.
As for using the plug-in, its Groovy syntactic sugar closely matches the habits of the OpenShift command line and is very easy to learn, so operators who are familiar with the kubectl or oc commands can master it in a very short time. For example, the following three commands:
oc create -f template.yaml
oc describe deploymentconfig test-demo
oc scale deploymentconfig test-demo --replicas=5
can be written just as simply in the plug-in's Pipeline syntax; deployment, upgrade, stop, and scaling of application containers can all be performed with concise, clear statements.

What has DevOps done for application operation and maintenance on OpenShift? Successfully deploying the built image to OpenShift through DevOps is far from enough by itself: in a sense, we have not yet relieved the pressure on operations personnel. Over the long maintenance cycle after an application is deployed, operations staff would otherwise still face the black-and-white Linux console and type various complex commands to solve application problems. After an image is deployed to OpenShift, DevOps automatically creates the corresponding application, and we can obtain some basic information about the application from the callback data Jenkins returns to DevOps.
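As a hedged sketch of the plug-in's Pipeline syntax (the cluster name, project, and resource names are illustrative, and this reconstructs the idea rather than reproducing the article's original example), the three oc commands above might look like:

```groovy
// Illustrative OpenShift Client Plugin usage inside a Jenkins Pipeline.
// Assumes a cluster and its credentials are already configured in Jenkins.
openshift.withCluster('mycluster') {
    openshift.withProject('demo-project') {
        // oc create -f template.yaml
        openshift.raw('create', '-f', 'template.yaml')
        // oc describe deploymentconfig test-demo
        echo openshift.raw('describe', 'dc', 'test-demo').out
        // oc scale deploymentconfig test-demo --replicas=5
        openshift.raw('scale', 'dc/test-demo', '--replicas=5')
    }
}
```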
However, this basic information is not enough for application monitoring and operations, so we encapsulated the REST API provided by OpenShift and exposed several interfaces commonly used in OpenShift application operations. Through these interfaces we can obtain the application container's pods, events, and logs, the application's access address and port, and other details. In the early stage of software development, developers and testers can then quickly build and deploy applications through DevOps and easily obtain access addresses for functional verification, which greatly shortens the verification cycle and improves production efficiency. At the same time, DevOps provides basic application operation capabilities in its interface, such as scaling, starting, stopping, rolling back, and viewing pod logs. Operations staff can obtain detailed information about the current application through the interface and easily operate on the application, which greatly reduces operational pressure.

Summary and outlook
The combination of DevOps and OpenShift generates enormous momentum: it improves delivery quality while improving efficiency, and the increased degree of automation geometrically strengthens the ability to respond to changes in business requirements. With the development and empowerment of the Internet, competition in the industry is becoming increasingly fierce, and more and more enterprises are starting technological innovation and strategic transformation: the business model is shifting from "product-centered" to "user-centered", the marketing model from "extensive marketing" to "precision marketing", and the service model from "standardized service" to "personalized service".
Every enterprise has its own ideas and methods for using tools such as the container cloud, and the adoption of any new technology should serve the enterprise's ability to integrate, innovate, and adapt to the market, optimize business processes, and improve user experience. At present, however, the delivery of innovative business, from collecting and clarifying user requirements through coding and testing to final delivery to production, wastes time and cost and slows delivery. DevOps is the best solution for improving delivery speed and further optimizing the user experience.

About the author: Washbasin, Puyuan R&D engineer, specializing in microservices, containers, DevOps, and related fields, currently responsible for the design and development of the DevOps container cloud. Algorithms define software, data drives products, and technology changes the future.

About EAWorld: original technology sharing on microservices, DevOps, data governance, and mobile architecture.