
Kubernetes Advanced: Stateful Deployment with StatefulSet


K8s stateful application deployment

Contents: two building blocks

1. Headless Service
2. StatefulSet

Deploying stateful applications

StatefulSet gives each Pod an independent lifecycle while keeping Pod startup order and uniqueness. It provides:

- stable, unique network identifiers
- stable, ordered persistent storage
- ordered, graceful deployment and scaling
- ordered deletion and termination
- ordered rolling updates

Application scenario: take the databases mentioned earlier. Are MySQL, Redis, ZooKeeper and the like a good fit for deployment in K8s? Not an ideal one, in fact, but they can be deployed there. Deploying a single MySQL instance in K8s is easy: deploy it like any other application and access it through a Service IP. Deploying it as a cluster, however, is troublesome, for two reasons.

First: a MySQL master and slave form a master-slave topology, one master and one slave, each holding different data. To set up replication, each side must know the other's address: the slave needs the master's IP and connects to it to synchronize data.

Second: storage. The two instances store different data, so how do we keep each one's storage persistent? A cluster may span multiple nodes, and an application deployed to K8s must be able to recover its original state on any node: if one node dies and its Pod drifts to another node, can it still use its previous state? These are the issues to consider. K8s was designed not for single-instance deployments but for distributed, fault-resilient deployment, which is why we distinguish stateful from stateless applications.

Stateful application: a database, such as a MySQL master-slave pair, has to consider two things. Its storage must be remote storage, mountable on any node, so the original state can be restored anywhere; and it needs a unique, stable network identity for the master-slave connection. Pods are short-lived: a rebuilt Pod gets a new IP, so you must guarantee that the same address keeps working whether the Pod is rebuilt or migrated.

Stateless application: the typical case is a web system, which has no state to maintain. If three replicas are deployed, none needs to know anything about the other two; they are not directly related, and no data is persisted locally. Even if one replica dies and is brought back up on another node, it keeps serving traffic with no impact on the service. That is the premise, and web applications are the typical example.

K8s supports stateful deployment fairly well after version 1.10, but when it first lands in an organization the focus is stateless deployment, without considering stateful workloads or whether a database is suitable for K8s at all. Looking at K8s's characteristics, it suits applications with high traffic, fast iteration, or elastic scaling needs; all of these fit well in K8s. K8s supported stateless workloads earliest, and they have always been the recommendation. Stateful workloads are more complex, and the complex ones are stateful clusters, such as ZooKeeper.

A MySQL master-slave setup likewise has state of its own to maintain; deploying it means considering its network topology as well as its persistent storage state. A traditionally deployed database avoids these concerns: it generally runs on higher-spec machines, withstands heavy concurrency, scales out conveniently enough, and is backed by many mature deployment guides. Deploying these things in K8s is definitely a challenge, so evaluate K8s against each application's characteristics rather than moving everything onto it blindly; otherwise the loss outweighs the gain, the desired result never materializes, and your leadership will call you out for it. So when we talk about stateful deployment, we start from two pieces: a Headless Service maintains the network identity, and a StatefulSet maintains the storage state.

StatefulSet is a controller. It deploys applications on the same principle as Deployment, except that StatefulSet deploys stateful applications while Deployment deploys stateless ones.

A Headless Service is much like an ordinary Service. The only difference is that its clusterIP is set to None, so no cluster IP is allocated for it.

[root@k8s-master demo]# mkdir headless-service
[root@k8s-master demo]# cd headless-service/
[root@k8s-master headless-service]# vim headless-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376

[root@k8s-master headless-service]# kubectl create -f headless-svc.yaml
service/my-service created
[root@k8s-master headless-service]# kubectl get svc
kubernetes   ClusterIP   10.1.0.1      443/TCP        23h
my-service   ClusterIP   None          80/TCP         6s
service      NodePort    10.1.207.32   80:30963/TCP   87m
zhao         ClusterIP   10.1.75.232   80/TCP         9m1s
zhaocheng    ClusterIP   10.1.27.206   80/TCP         10m

So how do we access it? We give it an identifier.

Once created, that identifier is in place, and DNS resolves the name so the Pods can reach one another. With a headless Service you do not reach Pods through an IP; you must reach them by name.

[root@k8s-master headless-service]# vim statefulset.yaml

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: my-service
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

[root@k8s-master headless-service]# kubectl create -f statefulset.yaml
[root@k8s-master headless-service]# kubectl get pod
my-pod                   1/1   Running   0   3h60m
nfs-744d977b46-dh9xj     1/1   Running   0   22h
nfs-744d977b46-kcx6h     1/1   Running   0   22h
nfs-744d977b46-wqhc6     1/1   Running   0   22h
nfs-client-provisioner   1/1   Running   0   4h
nginx-797db8dc57-tdd5s   1/1   Running   0   100m
nginx-statefulset-0      1/1   Running   0   73s
nginx-statefulset-1      1/1   Running   0   46s
nginx-statefulset-2      1/1   Running   0   24s

The StatefulSet's serviceName: my-service field must reference the headless Service so that Pods can be reached by name. Note in the output above that the Pods came up in order: nginx-statefulset-0 first, then -1, then -2 (compare their ages).
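
The ordered behavior is easy to observe by scaling; an illustrative sketch using the StatefulSet above:

# scale up: nginx-statefulset-3 is created only after 0, 1 and 2 are Running and Ready
kubectl scale statefulset nginx-statefulset --replicas=4
# scale back down: Pods are removed in reverse ordinal order, highest first
kubectl scale statefulset nginx-statefulset --replicas=3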

You can test resolution with nslookup from a busybox test pod. Resolving my-service returns records for each of the backing Pods.
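
An illustrative sketch of that test (busybox:1.28 is used because its nslookup behaves correctly; names follow the example above):

# start a throwaway busybox pod for DNS testing
kubectl run -i --tty dns-test --image=busybox:1.28 --restart=Never --rm -- /bin/sh
# resolve the headless service: returns the addresses of all backing Pods
/ # nslookup my-service
# resolve one Pod's stable name directly
/ # nslookup nginx-statefulset-0.my-service.default.svc.cluster.local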

Each Pod is assigned a name here; the ordinal number and the my-service suffix are permanent, and the Pod can always be reached through this name, which takes the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, e.g. nginx-statefulset-0.my-service.default.svc.cluster.local.

For a MySQL master and slave, the slave can use this stable name to connect to the master for replication; this is the network-identity half of the problem.
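
A hypothetical sketch of that idea; the StatefulSet name mysql and headless Service name mysql-headless are illustrative, not part of the example above:

# container env for a MySQL replica, pointing at the master's stable DNS name;
# mysql-0.mysql-headless.default.svc.cluster.local survives Pod rebuilds and rescheduling
env:
- name: MASTER_HOST
  value: "mysql-0.mysql-headless.default.svc.cluster.local"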

Now let's look at the storage state.

Here is the example from the official documentation:

https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

It also deploys nginx: a headless Service (clusterIP set to None) maintains a unique network identity for each StatefulSet Pod, and volumeClaimTemplates maintains independent storage for each Pod.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "managed-nfs-storage"
      resources:
        requests:
          storage: 1Gi

In the storage section we fill in the name of our NFS StorageClass, managed-nfs-storage:

[root@k8s-master headless-service]# kubectl get sc
managed-nfs-storage   fuseim.pri/ifs   22h

A PV is then created automatically for each Pod and mounted on our NFS server:

[root@k8s-master headless-service]# kubectl get pod
my-pod                       1/1   Running   0   6h5m
nfs                          1/1   Running   0   24h
nfs                          1/1   Running   0   24h
nfs                          1/1   Running   0   24h
nfs-client-provisioner-fbc   1/1   Running   0   6h33m
nginx-797db8dc57-tdd5s       1/1   Running   0   3h64m
nginx-a1-6d5fd7b8dd-w647x    1/1   Running   0   3m28s
nginx-statefulset-0          1/1   Running   0   135m
nginx-statefulset-1          1/1   Running   0   135m
nginx-statefulset-2          1/1   Running   0   134m
web-0                        1/1   Running   0   3
web-1                        1/1   Running   0   85s
web-2                        1/1   Running   0   57s
[root@k8s-master headless-service]# kubectl get pv
pvc-06   1Gi   RWO   Delete   Bound   default/www-web-2   managed-nfs-storage   63s
pvc-4f   1Gi   RWO   Delete   Bound   default/www-web-0   managed-nfs-storage   6m3s
pvc-a2   5Gi   RWX   Delete   Bound   default/my-pvc      managed-nfs-storage   6h5m
pvc-bc   1Gi   RWO   Delete   Bound   default/www-web-1   managed-nfs-storage

The headless Service guarantees each Pod's network identity, and the StatefulSet's volume claim template guarantees each Pod its own unique storage; together they solve the two major pain points of stateful applications.
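
An illustrative check of the storage guarantee, assuming the web StatefulSet above is running: write a file into web-0's volume, delete the Pod, and confirm the recreated web-0 still sees the data through the same PVC (www-web-0).

# write a marker file into web-0's mounted NFS volume
kubectl exec web-0 -- sh -c 'echo hello-from-web-0 > /usr/share/nginx/html/index.html'
# delete the Pod; the StatefulSet recreates it with the same name and the same PVC
kubectl delete pod web-0
# once web-0 is Running again, the file written earlier is still there
kubectl exec web-0 -- cat /usr/share/nginx/html/index.html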
