2025-02-28 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 05/31 Report --
How do you build a next-generation DevOps platform on Kubernetes? This article analyzes that question in detail and presents a workable solution, in the hope of helping readers facing the same problem find a simpler, more practical approach.
Overview: What is the current state of the cloud-native DevOps ecosystem, and what are its challenges? The following explains how OAM solves many of the problems in cloud-native DevOps scenarios, and shares how to build a next-generation, unlimited-capability DevOps platform on top of OAM and Kubernetes.
What is DevOps? Why build on Kubernetes?
The first DevOpsDays conference was held in 2009, where the name DevOps was coined. By 2010 the concept was gaining popularity, and the article "What is DevOps" explained its concepts, methodology, and supporting tools. Put simply, development engineers collaborate closely with operations engineers, while a set of tools makes development smoother and brings developers closer to the production environment.
By 2013 Docker had emerged, and for the first time engineers could define a piece of software's production environment and deliver and distribute standalone software as a Docker image. DevOps then began to take hold in practice. From 2015 onward DevOps tooling multiplied, resource utilization became a concern, and the founding of the CNCF steered DevOps practice toward Kubernetes.
Why build a DevOps platform on Kubernetes? 1) Kubernetes offers an effectively unlimited resource pool and strong infrastructure capabilities
Large scale: a single cluster can reach 10,000 nodes and millions of Pods
High performance: second-level scaling, intelligent autoscaling, bare-metal (X-Dragon) instances plus secure containers
Extreme elasticity: public-cloud compute can be acquired at minute granularity, giving an effectively unlimited resource pool
2) The community has built a rich ecosystem of DevOps capabilities around Kubernetes.
Source code and container image hosting (Kubernetes is the de facto standard container platform): GitHub / DockerHub
CI/CD pipelines, task scheduling, and release strategies: Argo / Tekton / Spinnaker / Jenkins X / Flagger
Application packaging and distribution: Helm / CNAB
Rich workload support: Deployment (stateless) / StatefulSet (stateful) / OpenKruise (enhanced native stateful workloads)
Rich elasticity and scaling options: HPA / KEDA
New challenges of DevOps platform based on Kubernetes
The figure below shows a typical cloud-native DevOps pipeline. It starts with code development: code is hosted on GitHub and wired into a unit-testing tool such as Jenkins, at which point basic development work is done. Next comes image building, which involves configuration, orchestration, and so on; Helm can be used to package the cloud-native application. The packaged application is then deployed into various environments. But the whole process presents many challenges. First, different environments demand different operational capabilities.
(A typical cloud-native DevOps pipeline)
Second, if configuration requires a database on the cloud, you have to open another console to create it, and you also need to configure load balancing. After the application goes live, additional features such as logging, policy, and security still have to be configured. In other words, cloud resources are disconnected from the DevOps platform experience, and the process is full of detours through external consoles. This is very painful for beginners.
Challenge 1: development, operations, and infrastructure concerns are coupled together
The figure below shows a typical K8s YAML configuration file, which people often complain is complex.
Roughly speaking, the YAML file divides into three chunks. The first is what operators care about: instance counts, policies, and releases. The second is what developers care about: images, port numbers, and so on. The third is what infrastructure engineers understand, such as scheduling policy. All of this information is coupled together in a single K8s configuration file. That suits K8s engineers well, but application-side engineers are confronted with many settings they should not have to care about.
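As an illustration (a minimal sketch, not the exact file from the figure; names and values are made up), a small Deployment shows how the three concerns interleave in one file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                     # operations concern: instance count
  strategy:
    rollingUpdate:                # operations concern: release policy
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web                 # development concern: image and port
        image: example/web-app:1.0
        ports:
        - containerPort: 8080
      tolerations:                # infrastructure concern: scheduling policy
      - key: dedicated
        operator: Exists
```

A developer who only wants to change the image version must still navigate the release strategy and scheduling fields around it.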
The DevOps process lacks the concept of an "application".
A K8s YAML file is positioned for platform engineers, not for the end user.
Challenge 2: the platform's custom encapsulation is simple but insufficient
A DevOps platform typically encapsulates and abstracts K8s capabilities, leaving only five Deployment fields for developers to fill in. From the user's point of view this is very easy to use. But for even slightly complex applications, which involve application state management, health checks, and a series of other operations, five fields are nowhere near enough.
Challenge 3: CRDs are highly extensible, but a DevOps platform cannot reuse them directly
CRDs (Custom Resource Definitions) are extremely extensible; almost any software capability can be added through a CRD, including databases, storage, security, orchestration, dependency management, and releases. For a DevOps platform, however, these interfaces are not exposed to users, so the capabilities cannot be reused directly.
Challenge 4: new capabilities developed for the DevOps platform have a high barrier to use
Suppose the platform wants to add a capability: the native autoscaling does not fit, so it develops a scheduled (cron-based) scaling capability, expressed as YAML that users configure to suit their business. But now the barrier to using that YAML is very high, and users have no idea how to fill it in. And as more and more new capabilities are developed, conflicts arise between them, which are also very hard to manage.
How are operations engineers supposed to learn how to use such an extension? Read the CRD? Read the configuration file? Read the... documentation?
Conflicts between extension capabilities cause production failures. For example, CronHPA and the default HPA get installed on the same application at the same time. How do you effectively manage the conflict relationships between K8s extension capabilities, and how do you expose them to operations effectively?
Challenge 5: each DevOps platform requires a completely separate integration
A problem encountered in many cloud-native practices is the need to define a very complex YAML. That YAML may solve every problem inside the enterprise, but it is hard to connect to the wider ecosystem. For example, capabilities such as RDS and SLB are embedded directly in the YAML file, cannot be reused, and have almost no atomicity. It also cannot be shared: it cannot be offered to sibling departments or to the ecosystem, and is usable only in a closed internal context. And when different upstream systems integrate with different DevOps platforms, each requires YAML in a different format, which is equally painful.
Hard to understand, so it must be visualized through a UI
Cannot be reused, with almost no atomic capabilities
Cannot be shared; usable only internally
Technical principles of the OAM application model
The OAM application model emerged to solve the application-management problems above. Let's walk through its technical principles.
1. Component
The Component is a core concept in OAM: a unit of deployment defined entirely from the developer's perspective. The right side of the figure below shows a Component example in YAML, where the highlighted part can be customized flexibly. OAM defines a standard workload type, ContainerizedWorkload, which represents the workload portion and carries the concrete description of the unit to be deployed. This solves the separation-of-concerns problem: application-side engineers are freed from a mass of details and only need to care about things like the port number and the image.
To address the first challenge, cloud resources such as a database can also be expressed as OAM components, and a Workload can be defined to match your own needs, including VM-based applications or legacy Function applications. Components are the concern of application developers.
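For example, a Component in the OAM v1alpha2 spec looks roughly like this (a sketch; the name and image are illustrative, not the figure's exact content):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: frontend
spec:
  workload:                          # the unit to deploy, from the developer's view
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
      - name: web
        image: example/frontend:1.0  # developer concern: image
        ports:
        - name: http
          containerPort: 8080        # developer concern: port
```

Nothing about replicas, release policy, or scheduling appears here; those belong to Traits.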
2. Trait
Components alone can be combined into only simple applications. For operational concerns, OAM has the concept of a Trait: a feature attached to an existing component. Traits are operational capabilities such as manual scaling, external access, releases, load balancing, autoscaling, and traffic-based management. Through OAM Traits, such pluggable extension capabilities can be obtained flexibly, and different components can bind different traits.
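Traits are attached when components are assembled into an ApplicationConfiguration. A sketch using the spec's ManualScalerTrait (the component name is illustrative):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: frontend-app
spec:
  components:
  - componentName: frontend          # the Component the developer defined
    traits:
    - trait:                         # operations concern, added without touching the Component
        apiVersion: core.oam.dev/v1alpha2
        kind: ManualScalerTrait
        spec:
          replicaCount: 3            # scale the workload to 3 replicas
```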
3. Scope
Component, Trait, and the ApplicationConfiguration that assembles them are the three main concepts in OAM. But what about multiple components working together? OAM has the concept of a Scope: a boundary, a special kind of Trait, that groups multiple Components so they share a set of resources. Characteristics shared across components, such as CPU quotas, are expressed through Scopes.
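As a sketch (based on the v1alpha2 spec's HealthScope; the field names here are assumptions and may differ from the spec), a scope is declared once and then referenced by every component that belongs to it:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
metadata:
  name: app-health
spec:
  probe-timeout: 30        # seconds before a health probe is considered failed
  probe-interval: 60       # seconds between probes
---
# Inside an ApplicationConfiguration, each member component references the scope:
# - componentName: frontend
#   scopes:
#   - scopeRef:
#       apiVersion: core.oam.dev/v1alpha2
#       kind: HealthScope
#       name: app-health
```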
Next-generation DevOps technology powered by OAM
1. OAM: an application-centric hierarchical model
OAM is an application-centric hierarchical model. It requires an OAM interpreter running on the server side, which reads the YAML. OAM provides Traits and Components for users to fill in, which are assembled into an ApplicationConfiguration (AppConfig). Through the OAM interpreter, the AppConfig translates into Deployments, Ingresses, HPAs, or cloud resources. This approach layers development and operations on top of infrastructure: developers care about Components, operators care about Traits, and infrastructure provides capabilities through the OAM interpreter, integrating tightly with K8s and supplying the application concept it lacks.
Layered
Modular
Reusable
2. Quickly integrating Operator capabilities that already exist in the K8s ecosystem
OAM can rapidly absorb the Operator capabilities already present in the K8s ecosystem. In the figure below, the left side shows a CRD instance used as a Component and the right side a CRD instance used as a Trait. The Workload under a Component and the Trait each map to a K8s custom resource. To use an existing K8s capability, you only need to write the corresponding fields into a Trait.
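For instance, an existing operator's CRD (here a hypothetical CronHPA) could be registered as an OAM trait through a TraitDefinition, roughly as follows (the CRD group and name are illustrative):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: cronhpas.autoscaling.example.com
spec:
  definitionRef:
    name: cronhpas.autoscaling.example.com   # points at the existing CRD
  appliesToWorkloads:                        # workload kinds allowed to carry this trait
  - core.oam.dev/v1alpha2.ContainerizedWorkload
```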
3. The OAM framework resolves component dependencies and startup order
The OAM framework handles dependencies and startup order between components: the OAM runtime (the OAM interpreter) sequences them correctly. In the figure below, there are dependency relationships between Components, and relationships such as preComponent and postComponent between Traits and Components.
4. OAM Traits flexibly solve resource binding
Once startup order is settled, resource binding comes next: on one side is the database being used, on the other the web program, and the web program must bind the database connection string. In OAM you only need to write a Trait to solve resource binding. On the right of the figure, K8s carries the connection-string information in a Secret; a ServiceBinding Trait corresponds to a running Operator whose webhook fetches the Secret and injects it into the web application.
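Such a binding trait might be declared under a component in the ApplicationConfiguration roughly like this (a hypothetical ServiceBinding kind and fields, sketched for illustration; the operator/webhook behind it performs the actual injection):

```yaml
# Fragment of an ApplicationConfiguration's spec.components entry.
- componentName: web
  traits:
  - trait:
      apiVersion: example.oam.dev/v1alpha2   # hypothetical API group
      kind: ServiceBinding
      spec:
        from:
          secret: rds-connection     # Secret holding the DB connection string
        to:
          env: DB_CONNECTION         # env var injected into the web workload
```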
5. The interaction mechanism between Workload and Trait
You might worry that integrating with OAM is troublesome and requires code changes. OAM designed an interaction mechanism between Workloads and Traits that requires zero modification inside OAM itself; you only extend Workloads and Traits. First a Workload instance is created in a Component, then a Trait instance is created; the Trait only needs to read the Workload's Definition to configure the capabilities it requires.
(Zero modification to the OAM kernel; new plug-in capabilities integrate quickly)
Conflicts between newly developed capabilities are another headache. When a Trait is defined in the OAM framework, the framework can check which fields conflict and reject the creation of the offending application, guaranteeing compatibility between Traits and making operational problems discoverable and manageable.
(A discoverable and manageable Trait system)
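One way to express this is a `conflictsWith`-style declaration on the TraitDefinition, so the runtime can reject an AppConfig that attaches both capabilities (the exact field name and shape are assumptions; the CRD names are illustrative):

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: cronhpas.autoscaling.example.com
spec:
  definitionRef:
    name: cronhpas.autoscaling.example.com
  conflictsWith:                             # reject applications that attach both
  - horizontalpodautoscalers.autoscaling     # the default HPA
```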
6. OAM: a DevOps platform system with unlimited capabilities
The figure below shows the DevOps platform architecture. The lowest layer is the OAM runtime. One part of it is Workloads, corresponding to hosted runtimes such as Functions, containers, virtual machines, and Serverless services. The other part is Traits, corresponding to operational capabilities such as releases, elastic scaling, logging, and security. The layer above assembles these, per scenario, into different business-facing platforms (Application Profiles); different platforms can use different combinations of Workloads and Traits with different capabilities. Through the standardized OAM model, a DevOps platform with unlimited capabilities can be built to satisfy all kinds of scenarios.
On the user side, the development DevOps process supported by OAM is unified once image building completes. OAM provides the AppConfig, which contains the various Components, and each Component carries its operational Traits. It supports different environments, such as test and production; an OAM configuration is written once, fits different clouds, and runs directly in different clusters. On the K8s side, users only need to install plug-ins to gain a wealth of capabilities.
This concludes the answer to the question of how to build a next-generation DevOps platform on K8s. I hope the content above is of some help to you. If you still have questions, you can follow the industry information channel for more related knowledge.