The editor would like to share with you an example-based analysis of Kubernetes and container design patterns. Most readers may not know much about the topic, so this article is offered for your reference; I hope you will get a lot out of it, so let's dive in!
In the field of programming, object-oriented design and object-oriented languages are among the most familiar and powerful tools. Beyond its core language features, object orientation also comes with a set of design patterns distilled from practice, which can be used to solve complex problems that arise in real application design.
The environment in which cloud-native applications run is a complex, distributed one, and in such an environment good design patterns can play an important role. The container design patterns promoted by the K8s community are a series of reusable patterns, built on the multi-container microservice model of a K8s cluster, for solving typical distributed-system problems. At present they fall into three main categories:
1) Single-container management pattern
2) Single-node multi-container patterns
3) Multi-node combination patterns
1. Single-container management pattern
The most important feature of K8s is its support for multi-container microservice instances. Of course, the single-container model is also supported, even though it does not show off the distinctive power of K8s. Many people have the impression that K8s is powerful but hard to get started with; in fact, to start a single-container microservice instance, the K8s command line is as simple as the native Docker commands.
[root@demo-k8s] # kubectl run nginx --image=nginx
deployment "nginx" created
[root@demo-k8s] # kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     1         1         1            1           24s
[root@demo-k8s] # kubectl get rs
NAME               DESIRED   CURRENT   AGE
nginx-3137573019   1         1         1m
As the example above shows, K8s can start a microservice instance running the nginx image with a single command. At the same time, the power of K8s lies in keeping things this convenient for the user while still preserving the integrity and standardization of the K8s application model. In other words, although the user runs only one command, K8s automatically creates four kinds of API objects on the user's behalf: Deployment, ReplicaSet (RS), Pod and Container. Scaling the number of instances of the same service up or down is also very easy.
[root@demo-k8s ~] # kubectl scale deployment nginx --replicas=3
deployment "nginx" scaled
[root@demo-k8s ~] # kubectl get rs
NAME               DESIRED   CURRENT   AGE
nginx-3137573019   3         3         22m
By taking both ease of use and model consistency into account, this design philosophy makes K8s suitable for simple scenarios as well as complex ones.
2. Single-node multi-container patterns
It is the single-node multi-container patterns that really embody the design characteristics of K8s, namely a distributed application model built on multi-container microservices. In the K8s architecture, a Pod is a lightweight "node", and containers in the same Pod share the same storage space and the same network address space, which makes it possible to combine several containers to work together on the same node. Since shared storage and a shared network address are what characterize a Pod, the single-node multi-container patterns all take advantage of these two features.
2.1 Sidecar pattern
The first single-node multi-container pattern is the sidecar pattern. It mainly takes advantage of the fact that containers in the same Pod can share storage space.
A typical sidecar scenario is shown in the figure: a tool container writes files into a shared directory, and the main application container reads them from that directory. For example, we can use Nginx to build a simple code distribution repository by serving files from a local directory, and pair it with a container holding a Git client that keeps that directory synchronized with the upstream code repository. The advantage of this pattern is that the tool container's image, i.e. the one packaged with the Git client, can be reused instead of being bundled into the application image. For the same purpose, the main application container could just as well use Apache Httpd instead of Nginx and still be combined with the same tool container to form a microservice.
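A minimal sketch of such a Pod, assuming a placeholder tool image (example/git-sync) and an emptyDir volume as the shared directory:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-git-sidecar
spec:
  volumes:
  - name: www-data               # shared directory used by both containers
    emptyDir: {}
  containers:
  - name: nginx                  # main application container: serves the files
    image: nginx
    volumeMounts:
    - name: www-data
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: git-sync               # tool container: pulls the latest code into the shared directory
    image: example/git-sync      # placeholder image packaged with a Git client
    volumeMounts:
    - name: www-data
      mountPath: /data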
Another typical sidecar scenario is shown in the figure, with the roles reversed: the application container writes files and the tool container reads them. For example, an Nginx-based web application writes its logs to the file system, while a log-collecting container reads them from the shared directory and forwards them to the cluster's logging system. Again the advantage is that the tool container's image can be reused, and its executable does not have to be repackaged every time the application container is updated.
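A similarly hedged sketch of this variant; the log-collector image and mount paths below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar
spec:
  volumes:
  - name: logs                    # shared log directory
    emptyDir: {}
  containers:
  - name: web                     # application container: writes its logs here
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-collector           # tool container: reads the logs and ships them to the cluster logging system
    image: example/log-collector  # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /logs
      readOnly: true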
2.2 Ambassador pattern
The second single-node multi-container pattern is the ambassador pattern. It mainly takes advantage of the fact that containers in the same Pod share the same network address space. As shown in the figure, the application container is paired in one Pod with a tool container that acts as a proxy server. The tool container helps the application container reach external services, so the application container does not need the public network IP address of those services; it simply talks to them over localhost. In this pattern the proxy container is like an "ambassador" that the external service has stationed inside the Pod: the application container only has to deal with the ambassador at home and never has to "go abroad" itself, hence the name.
2.2.1 Redis access case based on the Ambassador pattern
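A hedged sketch of such a Pod: the application container connects to Redis at localhost:6379, while a proxy container forwards those connections to the real Redis servers outside the Pod. Both image names below are placeholders (the proxy could be, for example, a twemproxy-based image):

apiVersion: v1
kind: Pod
metadata:
  name: redis-client-with-ambassador
spec:
  containers:
  - name: app                         # application container: talks to Redis at localhost:6379
    image: example/redis-app          # placeholder application image
  - name: redis-ambassador            # proxy ("ambassador") container sharing the Pod's network namespace
    image: example/redis-ambassador   # placeholder proxy image that forwards to the external Redis servers
    ports:
    - containerPort: 6379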
2.3 Adapter pattern
The third single-node multi-container pattern is the adapter pattern. This pattern is particularly important for monitoring and managing distributed systems. An ideal design goal for a distributed system is "distributed execution and storage, unified monitoring and management". To achieve "unified monitoring and management", the interface through which applications interact with the monitoring system needs to be unified, implemented against the interface exposed by the unified monitoring service. This is very similar to the adapter pattern in object-oriented design.
A typical system that can adopt the adapter pattern is a distributed system that uses Prometheus as its monitoring service. Among the projects surrounding Prometheus there are many monitoring-data exporters (Exporter) for different application systems, each responsible for collecting the monitoring data of a specific application, so that the Prometheus server can scrape data from different applications in one unified format. Each Exporter is thus also an implementation of the adapter pattern.
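As a hedged illustration, a Pod can pair the application container with an exporter container that translates the application's own metrics into the unified Prometheus format; the exporter image below (example/nginx-exporter) and its port are placeholders for whichever Exporter fits the application:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-exporter
  labels:
    app: web
spec:
  containers:
  - name: web                       # application container with its own native status/metrics output
    image: nginx
  - name: exporter                  # adapter container: exposes the metrics in the unified Prometheus format
    image: example/nginx-exporter   # placeholder Exporter image
    ports:
    - containerPort: 9113           # placeholder port scraped by the Prometheus server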
3. Multi-node combination patterns
3.1 Multi-node election pattern
Multi-node election is an important pattern in distributed systems, especially for stateful services. Generally speaking, a stateless service can scale horizontally at will: it is enough to copy and run more instances of the business logic, which is exactly what ReplicationController and ReplicaSet do in K8s.
People want stateful services to scale horizontally too, but because each instance carries its own persistent state, and that state has to live on with the instance, horizontal scaling for stateful services means sharding the state, using essentially the same mechanism as database sharding. For a stateful service designed for a distributed system, the mapping between each instance and its shard of data then becomes global information of the service, and for any service this global information about its instances has to be kept somewhere.
A simple approach is to keep it on an external proxy server, which is what the Galera solution for MariaDB does: the proxy server holds this information on behalf of the back-end servers. The problem with this approach is that the system now depends on an external proxy server, and the high availability and horizontal scalability of that proxy server are themselves still unsolved.
Therefore, systems that want to solve high availability and horizontal scaling natively, such as Etcd and ElasticSearch, must have a built-in master node election mechanism, so that the distributed system does not rely on an external system to maintain its own state. For such a system, the most important global information is which nodes are in the cluster, which node is the Master, and which shard each node corresponds to; the master node's job is to store and distribute this information.
In a K8s cluster, a microservice instance, i.e. a Pod, can contain multiple containers, and this improves the reusability of the multi-node election mechanism: we can build a container image dedicated to the election and, in the actual deployment, combine the election container with the ordinary application container. The application container only needs to read the election result from the local election container, so it can concentrate on the code for its own business logic.
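A hedged sketch of such a Pod, modeled on the leader-elector sidecar from the early Kubernetes leader-election example; the application image is a placeholder, and the election image, its flags and the local port come from that example and may have changed since:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-elector
spec:
  containers:
  - name: app                                # application container: asks localhost:4040 who the leader is
    image: example/stateful-app              # placeholder application image
  - name: elector                            # reusable election container
    image: k8s.gcr.io/leader-elector:0.5     # election sidecar image from the Kubernetes example (assumption)
    args:
    - --election=example                     # name of the election this Pod takes part in
    - --http=localhost:4040                  # local endpoint where the current leader can be read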
3.2 Work queue pattern
An important role of a distributed system is to make full use of multiple physical computing resources, and in particular to mobilize them dynamically, on demand, to complete computing tasks. Imagine a large number of tasks arriving at random: the amount of computing capacity required is uncertain, so it is clearly unreasonable to provision compute nodes for either the largest or the smallest possible workload.
In this case, the tasks waiting to be processed can be placed in a queue, and compute nodes can be started as needed to read tasks from the queue and process them. Before container technology became widespread there were already many distributed processing systems that rely on queues to handle large numbers of computing tasks, such as the big-data systems Hadoop and Spark. One limitation of these systems is that their queue-processing implementations are mostly tied to specific programming models and languages, and building their infrastructure tends to be complex and time-consuming. The advantage of a work queue built on containers and Kubernetes orchestration is that the pattern can be implemented with very simple orchestration scripts, while using Pods as lightweight processing nodes makes it easy to schedule computing resources dynamically. The figure illustrates the logic of applying the work queue pattern in Kubernetes.
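A hedged sketch of this pattern as a Kubernetes Job: parallelism controls how many worker Pods run at the same time, and each worker (the example/queue-worker image and the QUEUE_ADDR setting are placeholders) keeps pulling tasks from the queue until it is empty and then exits:

apiVersion: batch/v1
kind: Job
metadata:
  name: queue-workers
spec:
  parallelism: 5                      # how many worker Pods process tasks concurrently
  template:
    metadata:
      name: queue-worker
    spec:
      containers:
      - name: worker
        image: example/queue-worker   # placeholder worker image: reads tasks from the queue until it is empty
        env:
        - name: QUEUE_ADDR            # hypothetical setting pointing the worker at the task queue service
          value: task-queue:6379
      restartPolicy: Never            # completed workers are not restarted in place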
That is all of "An example analysis of Kubernetes and container design patterns". Thank you for reading! I believe you now have some understanding of the topic, and I hope the content shared here is helpful to you.