Interpreting the 12 Factors of Microservices with Kubernetes


This article was translated and published by the WeChat official account EAWorld; please credit the source when reprinting.

Author: Michael D. Elder

Translator: Bai Xiaobai

Original title: Kubernetes & 12-factor apps

Original: http://t.cn/EoBns1o

For more background on the 12 factors, you can also read the companion piece "Interpreting Cloud Native Application Development: the 12 Factors one by one."

If you are building applications with containers, you have probably heard of the "12-factor" principles. The 12 factors provide a set of clear guidelines for developing microservices: follow them, and your applications and services become easier to run, scale, and deploy.

When discussing the 12 factors, I like to group them by common scenario. To me, the 12 factors are ultimately principles about how to code, deploy, and operate. These are the most common stages of the software delivery lifecycle, familiar to most developers and DevOps teams.

So how do you apply the 12 factors when building microservices on Kubernetes? In fact, the 12 factors have had a far-reaching influence on the design and evolution of Kubernetes. In what follows, I will analyze, group by group, how Kubernetes's container orchestration model directly supports each factor.

I. Factors related to coding

With regard to source management, the factors to consider are the codebase, the build and deployment lifecycle, and, ultimately, how to keep the production and development environments consistent.

A basic software delivery cycle diagram

Source control all the things

Factor 1: one codebase, many deploys

Declarative structures are ubiquitous in Kubernetes. Everything about an application is expressed as text, via YAML or JSON. The container itself is described in a source format, the Dockerfile, and a text Dockerfile can be turned into a container image repeatably by the build process. Because everything from image build to container deployment is captured as text, it is easy to put all of it under source control; the most commonly used tool is Git.
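As an illustration, a minimal Dockerfile might look like the sketch below. The base image, file names, and port are placeholders, not from the original article; the point is that this text file, checked into Git, deterministically produces an image.

```dockerfile
# Minimal illustrative Dockerfile (base image, files, and port are placeholders)
FROM node:18-alpine
WORKDIR /app
COPY package.json .
RUN npm install          # bake dependencies into the image at build time
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```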

In Kubernetes, almost everything can be described declaratively. Many complete examples are documented in the book "Kubernetes in the Enterprise", and the related code is on GitHub.

If an application must run across multiple environments such as development, user acceptance, and production, the most common approach is a GitOps delivery model with multiple branches to capture the differences between environments.

Factor 5: strictly separate build, release, and run

As the title says, to follow this principle the build, release, and run stages must be strictly separated. The most typical implementation is artifact management: once code is committed, a build starts, and a container image is produced and published to an image registry. If you use Helm (Kubernetes's package manager, similar to Ubuntu's apt-get), the Kubernetes application is also packaged and published to a Helm repository at the same time. Without rebuilding binaries or images, these release artifacts can be reused and deployed to different environments, with the guarantee that no unknown changes are introduced along the way.
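A build-once, deploy-many pipeline could be sketched as below. This is a hypothetical example in GitHub Actions syntax; the registry, chart path, and names are placeholders, and it assumes an OCI-capable Helm registry (Helm 3.8+).

```yaml
# Hypothetical CI sketch: build and publish immutable artifacts once,
# then reuse the same image and chart for every environment.
name: build-release
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build stage: turn the committed source into an immutable image.
      - run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - run: docker push registry.example.com/myapp:${{ github.sha }}
      # Release stage: package the Helm chart that references that image.
      - run: helm package chart/ --app-version ${{ github.sha }}
      - run: helm push myapp-*.tgz oci://registry.example.com/charts
```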

Factor 10: keep development and production equivalent

Following this parity principle lets you avoid the annoying conversation: "it works on my machine, but not on yours." Containers (and Kubernetes in particular) standardize how applications and their runtime dependencies are delivered, which means everything can be deployed anywhere in the same way. So if you use a highly available MySQL configuration in production, you can deploy the same architecture in the development cluster. By establishing a production-equivalent architecture earlier, in development, you can usually avoid unforeseen differences that may be vital to whether the application runs correctly (or at all).

II. Factors related to deployment

A build delivers value only once it is successfully deployed. A large share of the 12 factors describe best practices here: how microservices should be deployed, how dependencies should be handled, and how the details of other microservices should be resolved.

A microservice is only as reliable as its least reliable dependency.

How should we understand this sentence? The answer is in the Kubernetes architecture diagram below:

Pod and related Kubernetes objects

Factor 2: explicitly declare dependencies

To understand the sentence above, first consider how dependencies are constructed; the 12 factors describe this with reference to build management. I still prefer to put the dependency factor in the deployment group, because dependencies on other APIs or data stores have a broad impact on a microservice's reliability. Kubernetes introduces readiness and liveness probes, which perform runtime dependency checks. The readiness probe verifies whether, at a point in time or over a period of time, there is a healthy backend available to respond to requests. The liveness probe confirms the health of the service itself. Within a given time window, if either metric crosses its failure threshold, the Pod is restarted.

Translator's note:

The author's reading of liveness and readiness here is slightly off. The liveness probe marks the bottom line for the container to function at all; only when that bottom line is crossed is the container restarted. The readiness probe indicates whether the container is ready to work properly; a failing readiness probe does not restart the container but merely marks it as not ready. For example, a Tomcat application is live as soon as it has started, but it becomes ready only after Spring context initialization, database connections, and other startup work have completed.

For more information, see http://t.cn/EorvPAh
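A sketch of how the two probes might be declared on a container follows; the image name, endpoint paths, and timings are illustrative, not from the original article.

```yaml
# Illustrative probe configuration inside a Pod/Deployment container spec
containers:
  - name: app
    image: registry.example.com/myapp:1.0   # placeholder image
    livenessProbe:              # restart the container if this keeps failing
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:             # stop routing traffic while this fails
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```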

I recommend taking the time to read Release It!, appreciating the wisdom in the book, and using the architectural patterns it describes (circuit breakers, fail fast, timeouts, and so on) to improve the reliability of your applications.

Factor 3: store config in the environment

This factor requires that configuration be stored in the process's environment (ENV vars) rather than in the code. By separating config from code, a microservice becomes fully environment-independent and can be moved to another environment without any source-level change. Kubernetes provides the ConfigMap and Secret objects for managing configuration as source (in practice, Secrets should not go into source control without additional encryption). Containers pick up the configuration details at run time. Storing configuration as environment variables also helps the system scale to meet growing demand for the service.
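For example, a ConfigMap injected as environment variables might look like the sketch below; the names, keys, and image are placeholders.

```yaml
# Illustrative ConfigMap plus a container that reads it as env vars
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config            # placeholder name
data:
  LOG_LEVEL: "info"
  BACKEND_URL: "http://backend:8080"
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: app
      image: registry.example.com/myapp:1.0   # placeholder image
      envFrom:
        - configMapRef:         # inject every key as an environment variable
            name: myapp-config
```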

Factor 6: run the application as stateless processes

In Kubernetes, a container image runs as a process inside a Pod. As the 12 factors observe, the Linux kernel has achieved a great deal of optimization around resource sharing in the process model; Kubernetes and containers simply add an interface for better isolation so that container processes on the same host can run side by side. The process model makes system scaling and failure recovery easier to manage. In general, processes should be stateless so that workloads can scale out as replicas; that said, Kubernetes also supports stateful workloads such as databases and caches.

Persistent data stores should hold whatever state the application needs, and every instance of the application process should discover those stores through configuration. In a Kubernetes-based application, multiple Pod replicas run at the same time and a request may be routed to any of them, so a microservice cannot expect sticky sessions (that is, having all requests within a session always forwarded to the same specific instance).

Thanks to the process model, any service can easily be scaled by creating more process instances. Kubernetes provides many controllers for this work, such as ReplicaSet, Deployment, StatefulSet, and ReplicationController.

The original article illustrates this with a deployment file (watson-conversation.yaml, embedded from GitHub as a gist) whose spec declares the required number of replicas.
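The embedded gist is not reproduced in this copy; a minimal Deployment with a declared replica count might look like this (names and image are placeholders):

```yaml
# Illustrative Deployment declaring the desired number of replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                   # desired number of identical, stateless Pods
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: registry.example.com/myapp:1.0   # placeholder image
```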

Factor 4: treat backing services as attached resources

Dependencies reached over the network, such as databases, are usually called "backing services." The right approach is to treat the lifecycle of these upstream services as independent of the microservice's own lifecycle: attaching or detaching a backing service should not affect the microservice's ability to respond normally.

For example, if the application needs to interact with a database, the connection details should isolate the application from the database; this can be achieved with dynamic service discovery or with a Kubernetes ConfigMap or Secret. The next thing to consider is whether network requests implement fault tolerance, so that a backing service failing at run time does not trigger cascading failures in the microservice (described in more detail in Release It!). The backing services themselves should run in separate containers, or somewhere outside the cluster. The microservice should not concern itself with the interaction details; all interaction with the database happens through an API.
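Attaching a database purely through configuration could be sketched as below; the Secret name, key, and connection string are placeholders.

```yaml
# Illustrative Secret holding database connection details
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials          # placeholder name
type: Opaque
stringData:                     # stringData avoids manual base64 encoding
  DATABASE_URL: "mysql://user:password@mysql:3306/appdb"
---
# Consumed in the container spec, so swapping databases is a config change:
#   env:
#     - name: DATABASE_URL
#       valueFrom:
#         secretKeyRef:
#           name: db-credentials
#           key: DATABASE_URL
```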

Factor 7: export services via port binding

In a production environment, multiple microservices provide different functions, and they communicate over well-defined protocols. The Kubernetes Service object declares and resolves the network endpoints of a service, both inside and outside the cluster.

Before containers, deploying a new service or a new version of an existing service often meant spending a lot of time resolving port conflicts on the host. Container isolation and the Linux kernel's network namespaces make it possible to run many processes, or multiple versions of the same microservice, on the same port of a single host. The Kubernetes Service object can then expose the pool of microservice instances to all hosts and provide basic load balancing for inbound requests.

In Kubernetes, the Service object is declarative, and load balancing of requests routed to the Pods happens automatically.

The original article shows an example Service declaration that load-balances requests across Pods:

[Embedded gist: watson-conversation-service.yaml]
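The gist itself is missing from this copy; a representative Service declaration might look like this (names and ports are placeholders):

```yaml
# Illustrative Service exposing a pool of Pods behind one endpoint
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer            # expose the Pod pool outside the cluster
  selector:
    app: myapp                  # route to Pods carrying this label
  ports:
    - port: 80                  # port clients connect to
      targetPort: 8080          # port the container listens on
```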

III. Factors related to operation

Factor 8 (concurrency), factor 9 (disposability), factor 11 (logs), and factor 12 (admin processes) all relate to simplifying the operation of microservices.

Kubernetes focuses on creating and destroying simple deployment units, Pods, on demand; a single Pod has little value on its own.

Factor 8: scale out via the process model

The process model from factor 6 shines in the implementation of concurrency. As mentioned earlier, Kubernetes scales stateless applications at run time through its various lifecycle controllers. The required number of replicas is defined in a declarative model and can be changed at run time. Kubernetes defines a number of lifecycle controllers for managing concurrency, such as ReplicationController, ReplicaSet, Deployment, StatefulSet, Job, and DaemonSet.

The original article includes an animation of adding a replica: a ReplicaSet is used to add more Pods, and the newly created Pods automatically begin serving inbound requests from the Service object.

Based on threshold monitoring of CPU, memory, other compute resources, and external metrics, Kubernetes also provides automatic scaling. The Horizontal Pod Autoscaler (HPA) automatically scales the number of Pods in a Deployment or ReplicaSet.

HPA adds Pods based on metric observations
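An HPA bound to a Deployment could be sketched as below, using the autoscaling/v2 API; the target name, replica bounds, and CPU threshold are illustrative.

```yaml
# Illustrative HorizontalPodAutoscaler (autoscaling/v2 API)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp                 # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```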

When it comes to autoscaling, vertical Pod scaling and cluster scaling are also worth attention. Vertical Pod scaling suits stateful applications: Kubernetes uses CPU and memory as triggers to give a Pod more compute resources. Custom metrics, rather than CPU and memory alone, can also be used with HPA to trigger autoscaling.

The Vertical Pod Autoscaler increases the memory available to a container

Factor 9: fast startup and graceful shutdown

Disposability, in 12-factor terms, is about using the Linux kernel's signal mechanism to interact with running processes. A disposable microservice starts quickly and can die at any moment without harming the user experience. For stateless microservices, following the deployment-related factors goes a long way toward disposability. And through the livenessProbe and readinessProbe mechanisms mentioned earlier, Kubernetes destroys Pods that remain unhealthy past a given time window.
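On the shutdown side, graceful termination can be tuned in the Pod spec, as in the sketch below; the image, grace period, and preStop command are illustrative, not from the original article.

```yaml
# Illustrative graceful-shutdown settings in a Pod spec
spec:
  terminationGracePeriodSeconds: 30   # time allowed between SIGTERM and SIGKILL
  containers:
    - name: app
      image: registry.example.com/myapp:1.0   # placeholder image
      lifecycle:
        preStop:                # runs before SIGTERM reaches the process
          exec:
            command: ["sh", "-c", "sleep 5"]  # e.g. let in-flight requests drain
```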

Factor 11: treat logs as event streams

Containers usually write all logs to the stdout and stderr file descriptors. An important design principle here is that a container should not manage internal files for log output; collecting, analyzing, and archiving logs should be left to the container orchestration platform. In Kubernetes, log collection generally runs as a shared service; in my own experience, an Elasticsearch-Logstash-Kibana stack handles this smoothly.

Factor 12: run admin tasks as one-off processes

Administrative tasks such as database migration, backup, restore, and routine maintenance should be isolated and decoupled from the microservice's core runtime logic. In Kubernetes, the Job controller creates Pods that run once to completion, or Pods that perform activities on a schedule. Jobs can run business logic, and because Kubernetes loads API tokens into Pods, a Job can also interact with the Kubernetes orchestration system itself.
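A one-off Job and a scheduled CronJob might be declared as below; the names, images, commands, and schedule are placeholders.

```yaml
# Illustrative one-off Job and scheduled CronJob for admin tasks
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    spec:
      restartPolicy: Never      # admin tasks run to completion, not forever
      containers:
        - name: migrate
          image: registry.example.com/myapp-tools:1.0   # placeholder image
          command: ["./migrate.sh"]                     # placeholder command
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 2 * * *"         # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: backup
              image: registry.example.com/myapp-tools:1.0
              command: ["./backup.sh"]
```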

Isolating such administrative tasks keeps the microservice's future behavior simple and reduces potential failure modes.

IV. Conclusion

If you found this topic interesting, please like and share this article with friends. And if you are lucky enough to attend KubeCon Europe, you are welcome to join me and Brad Topol to discuss it in depth; we will share more demonstrations of Kubernetes's related capabilities.

Beyond the 12 factors for building microservices, my co-author Shikha Srivastava and I have also identified seven principles that are missing for production environments, which Shikha will write about soon. Stay tuned.

About EAWorld: original technical content on microservices, DevOps, data governance, and mobile architecture.
