
Volumes: persistent storage in K8s


Data persistence of K8s

Kubernetes storage volumes:

We know that data inside a container is not persistent by default: once the container is destroyed, the data is lost. Docker therefore provides a volume mechanism to persist data. Similarly, K8s provides an even more powerful volume mechanism and a rich set of plug-ins to solve both container data persistence and data sharing between containers.

Volume:

We often say that containers and Pods are temporary. The implication is that their life cycle may be short and that they are frequently destroyed and created. When a container is destroyed, the data saved in its internal file system is erased. To persist a container's data, you can use a K8s volume.

The life cycle of a Volume is independent of the container. Containers in a Pod may be destroyed and rebuilt, but the Volume is retained.

The volume types supported by K8s include emptyDir, hostPath, persistentVolumeClaim, gcePersistentDisk, awsElasticBlockStore, nfs, iscsi, gitRepo, secret and so on. A complete list and detailed documentation can be found at http://docs.kubernetes.org.cn/429.html.

The following volume types are mainly practiced in this article:

1) emptyDir (temporary storage):

emptyDir is the most basic Volume type. As its name suggests, an emptyDir Volume is an empty directory on the Host: no particular directory or file on the host is specified; the directory is created on the host and mapped into the pod (similar to a docker-managed volume mount in docker).

Let's practice emptydir with the following example:

[root@master yaml]# vim emptydir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: read-write
spec:
  containers:
  - name: write
    image: busybox
    volumeMounts:              # define data persistence
    - mountPath: /write        # the mount directory inside the pod
      name: share-volume
    args:
    - /bin/sh
    - -c
    - echo "hello volumes" > /write/hello; sleep 3000;
  - name: read                 # define a second container within the pod
    image: busybox
    volumeMounts:
    - mountPath: /read
      name: share-volume
    args:
    - /bin/sh
    - -c
    - cat /read/hello; sleep 300000;
  volumes:
  - name: share-volume
    emptyDir: {}               # define the data persistence type emptyDir

We simulate two containers running in one pod; the containers share one volume, one container is responsible for writing data and the other for reading it.

// Run the pod and check it:
[root@master yaml]# kubectl apply -f emptydir.yaml
pod/read-write created
[root@master yaml]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
read-write   2/2     Running   0          14s   10.244.2.2   node02   <none>           <none>

// View the mounted content in the two containers:
[root@master yaml]# kubectl exec -it read-write -c read cat /read/hello
hello volumes
[root@master yaml]# kubectl exec -it read-write -c write cat /write/hello
hello volumes

Parameter explanation:

-c: specifies a container; it is the short form of --container=, as can be seen with --help.
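A small usage example (equivalent to the -c form used above, with the same expected output):

[root@master yaml]# kubectl exec -it read-write --container=read cat /read/hello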

Because emptyDir is a directory in the Docker Host file system, the effect is equivalent to executing docker run -v /write and docker run -v /read. By checking the detailed configuration of the containers on node02 with docker inspect, we find that both containers mount the same host directory.
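A sketch of how this can be checked on node02 (the name filter is only illustrative, since kubelet generates the real container names, and <container-id> is a placeholder); the "Mounts" section of each container then shows the following:

[root@node02 ~]# docker ps --filter name=read-write    # find the two busybox containers of the pod
[root@node02 ~]# docker inspect <container-id>         # look at the "Mounts" section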

"Mounts": [{"Type": "bind", "Source": "/ var/lib/kubelet/pods/756b4f4a-917a-414d-a7ee-523eecf05465/volumes/kubernetes.io~empty-dir/share-volume", "Destination": "/ read", "Mode": "", "RW": true, "Propagation": "rprivate"} {"Type": "bind", "Source": "/ var/lib/kubelet/pods/756b4f4a-917a-414d-a7ee-523eecf05465/volumes/kubernetes.io~empty-dir/share-volume", "Destination": "/ write", "Mode": "", "RW": true, "Propagation": "rprivate"}

The "/ var/lib/kubelet/pods/756b4f4a-917a-414d-a7ee-523eecf05465/volumes/kubernetes.io~empty-dir/share-volume" here is the real path where emptydir is mounted to dockerhost.

So we can go to that path and check it out:

[root@node02 ~]# cd /var/lib/kubelet/pods/756b4f4a-917a-414d-a7ee-523eecf05465/volumes/kubernetes.io~empty-dir/share-volume/
[root@node02 share-volume]# cat hello
hello volumes

emptyDir summary:

Different containers in the same pod share the same persistent directory. When the pod is deleted from the node, the contents of the volume are deleted as well; but if only a container is destroyed while the pod still exists, the volume is not affected. In other words, the life cycle of emptyDir data is the same as that of the pod that uses it. It is generally used as temporary storage, as a scratch directory for checkpoints of intermediate stages of long-running tasks, and as a directory shared by several containers.
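To see this life cycle in action, you could delete the pod and look at the host directory again; a short sketch, reusing the pod UID shown above:

[root@master yaml]# kubectl delete pod read-write
[root@node02 ~]# ls /var/lib/kubelet/pods/756b4f4a-917a-414d-a7ee-523eecf05465/volumes/kubernetes.io~empty-dir/
# expected: the share-volume directory is gone, because emptyDir data is removed together with the pod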

2) hostPath:

1) Mounts a directory or file that already exists on the host machine into the container. 2) There are not many scenarios for this kind of persistence, because the core of virtualization is isolation from the host, and this method increases the coupling between the pod and the node. 3) It is generally used for data persistence of the K8s cluster itself and of docker itself.

For example, kube-apiserver and kube-controller-manager are such applications.

We use the "kubectl edit-n kube-system pod kube-apiserver-master" command to view the configuration of kube-apiserver Pod. Here is the relevant part of Volume:

volumeMounts:
- mountPath: /etc/ssl/certs
  name: ca-certs
  readOnly: true
- mountPath: /etc/pki
  name: etc-pki
  readOnly: true
- mountPath: /etc/kubernetes/pki
  name: k8s-certs
  readOnly: true
volumes:
- hostPath:
    path: /etc/ssl/certs
    type: DirectoryOrCreate
  name: ca-certs
- hostPath:
    path: /etc/pki
    type: DirectoryOrCreate
  name: etc-pki
- hostPath:
    path: /etc/kubernetes/pki
    type: DirectoryOrCreate
  name: k8s-certs

The three hostPath volumes defined here are k8s-certs, ca-certs and etc-pki, corresponding to the Host directories /etc/kubernetes/pki, /etc/ssl/certs and /etc/pki respectively.

If the Pod is destroyed, the directory corresponding to the hostPath will also be retained, so from this point of view, hostPath is more persistent than emptyDir. But once Host crashes, hostPath becomes inaccessible.
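For illustration only (this manifest is not part of the setup above; the pod name and host path are made up), a minimal hostPath pod could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-test            # hypothetical name, for illustration only
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 3600
    volumeMounts:
    - mountPath: /test           # directory inside the container
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /data/hostpath       # assumed directory on the host; created if missing
      type: DirectoryOrCreate

Whatever the container writes under /test ends up in /data/hostpath on the node the pod is scheduled to, which is exactly why this method couples the pod to that node.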

3Grampv & pvcPersistentVolume (pv): a unified data persistence directory refers to a section of space on a storage system provided by a cluster administrator. It abstracts the underlying shared storage and implements the "storage consumption" mechanism by taking shared storage as a resource that can be applied for by users. PersistentVolumeClaim (PVC): an application (Claim) for pv persistence space, declaration. Specify the required minimum capacity requirements and access mode, and then the user submits a list of persistent volume declarations to the kubernetes api server, and kubernetes finds a matching persistent volume and binds it to the persistent volume declaration.

NFS PersistentVolume

Practice PV and PVC through NFS.

1) We deploy the nfs service on the master node:

[root@master ~]# yum -y install nfs-utils
[root@master ~]# mkdir /nfsdata
[root@master ~]# vim /etc/exports          # write the nfs configuration file
/nfsdata 172.16.1.0/24(rw,sync,no_root_squash)
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl enable nfs-server
[root@master ~]# systemctl start nfs-server
[root@master ~]# showmount -e              # check whether the export was published successfully
Export list for master:
/nfsdata 172.16.1.0/24
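One assumption worth stating explicitly: every node that may mount the share (node01 and node02 here) also needs the nfs client tools, otherwise kubelet cannot mount the volume when a pod is scheduled there; a short sketch:

[root@node01 ~]# yum -y install nfs-utils
[root@node02 ~]# yum -y install nfs-utils
[root@node01 ~]# showmount -e 172.16.1.30    # optional check that the export is reachable from the node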

2) create a pv:

[root@master yaml]# vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata           # specify the nfs shared directory
    server: 172.16.1.30      # specify the ip address of the nfs server

// Create the pv with the following command:
[root@master yaml]# kubectl apply -f nfs-pv.yaml
persistentvolume/nfs-pv created

Field explanation:

capacity: specifies the capacity of the pv. Currently capacity only supports a size setting; specifying IOPS and throughput may become possible in the future.

accessModes: the access mode. The following modes exist:
ReadWriteOnce: mounted read-write by a single node, abbreviated RWO on the command line.
ReadOnlyMany: mounted read-only by many nodes, abbreviated ROX on the command line.
ReadWriteMany: mounted read-write by many nodes, abbreviated RWX on the command line.

persistentVolumeReclaimPolicy: the reclaim policy applied when the pv's space is released. There are several options:
Recycle: clear the data in the pv, then automatically make it available again. (Automatic recycling is covered by the pvc protection mechanism: as long as the pvc still exists, the data remains even if the pv is deleted.)
Retain: keep the data; it has to be reclaimed manually by the administrator.
Delete: delete the cloud storage resource; this is only supported by some cloud storage backends, such as AWS EBS, GCE PD, Azure Disk and Cinder.
Note: the reclaim policy here refers to whether the stored source files are deleted after the pv is deleted.

storageClassName: the basis on which a pv and a pvc are matched.

// Verify that the pv is available:
[root@master yaml]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Recycle          Available           nfs                     18m
(capacity 1Gi, RWO = read-write-once, Recycle = automatic recycling, Available = make sure the status is Available, storage class nfs, age 18m)
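As an aside, the reclaim policy of an existing pv can also be changed afterwards with kubectl patch; a small sketch (Retain is only an example value, and the rest of this article keeps the Recycle policy):

[root@master yaml]# kubectl patch pv nfs-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
persistentvolume/nfs-pv patched
[root@master yaml]# kubectl get pv nfs-pv      # the RECLAIM POLICY column should now show Retain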

3) create a pvc:

[root@master yaml]# vim nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteOnce            # the pvc must use the same access mode as the pv
  resources:                 # the requested resources are defined in the requests subfield of this field
    requests:
      storage: 1Gi
  storageClassName: nfs

// Run the pvc:
[root@master yaml]# kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/nfs-pvc created

// Check that the pvc is available:
[root@master yaml]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   1Gi        RWO            nfs            3m53s
[root@master yaml]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Recycle          Bound    default/nfs-pvc   nfs                     29m

Make sure that the status of both pv and pvc is Bound, which means the binding is successful.

Using the pv space

Next, let's practice using the pv with mysql:

1) create a pod for mysql:

[root@master yaml]# vim mysql-pod.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:                            # define a variable that sets the mysql root password in the container
        - name: MYSQL_ROOT_PASSWORD
          value: 123.com                # the password is 123.com
        ports:
        - containerPort: 3306
        volumeMounts:                   # define data persistence
        - name: mysql-pv-storage
          mountPath: /var/lib/mysql     # this directory is the default mysql data directory
      volumes:                          # the volumes field is explained above
      - name: mysql-pv-storage          # note that the name must match the volumeMounts name above
        persistentVolumeClaim:          # specify the pvc; the claimName must match the pvc created earlier
          claimName: nfs-pvc
---
apiVersion: v1                          # create a service resource object
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30000
  selector:
    app: mysql

// Run the pod with the following command:
[root@master yaml]# kubectl apply -f mysql-pod.yaml
deployment.extensions/mysql created
service/mysql created

// Check whether the pod is working properly:
[root@master yaml]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
mysql-68d65b9dd9-hf2bf   1/1     Running   0          9m34s   10.244.1.3   node01   <none>           <none>
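Since the Service is of type NodePort and exposes port 30000, the database can also be reached from outside the cluster. A minimal sketch, assuming a mysql client is installed on the machine you test from; 172.16.1.30 is the master address used above, but any node IP works for a NodePort:

[root@master yaml]# mysql -h 172.16.1.30 -P 30000 -u root -p123.com    # connect through the NodePort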

2) Log in to the mysql database and write data:

[root@master yaml]# kubectl exec -it mysql-68d65b9dd9-hf2bf -- mysql -u root -p123.com
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database volumes_db;                  # create a database
Query OK, 1 row affected (0.01 sec)
mysql> use volumes_db;                              # enter the database
Database changed
mysql> create table my_id(                          # create a table
    -> id int primary key,
    -> name varchar(25)
    -> );
Query OK, 0 rows affected (0.04 sec)
mysql> insert into my_id values(1,'zhangsan');      # write data into the table
Query OK, 1 row affected (0.01 sec)
mysql> select * from my_id;                         # view the data
+----+----------+
| id | name     |
+----+----------+
|  1 | zhangsan |
+----+----------+
1 row in set (0.00 sec)

3) verify:

(1) manually delete pod to verify whether the data in the database still exists.

[root@master ~]# kubectl delete pod mysql-68d65b9dd9-hf2bf
pod "mysql-68d65b9dd9-hf2bf" deleted
[root@master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
mysql-68d65b9dd9-bf9v8   1/1     Running   0          26s   10.244.1.4   node01   <none>           <none>

After the pod is deleted, kubernetes generates a new one. We log in to mysql again to check whether the data still exists.

[root@master ~]# kubectl exec -it mysql-68d65b9dd9-bf9v8 -- mysql -u root -p123.com
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select * from volumes_db.my_id;
+----+----------+
| id | name     |
+----+----------+
|  1 | zhangsan |
+----+----------+
1 row in set (0.01 sec)

You can see that the data still exists.

(2) simulate a failure of the node where the pod is running, and check whether the data is back to normal in the newly generated pod.

From the pod information above, we know that pod is running on node01, so we shut down the node01 host in the cluster.

[root@node01 ~]# systemctl poweroff

After a period of time, kubernetes will migrate the pod to the node02 host in the cluster:

[root@master ~]# kubectl get nodes        # node01 is now down
NAME     STATUS     ROLES    AGE   VERSION
master   Ready      master   39d   v1.15.0
node01   NotReady   <none>   39d   v1.15.0
node02   Ready      <none>   39d   v1.15.0

[root@master ~]# kubectl get pod -o wide
NAME                     READY   STATUS        RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
mysql-68d65b9dd9-bf9v8   1/1     Terminating   0          15m   10.244.1.4   node01   <none>           <none>
mysql-68d65b9dd9-mvxdg   1/1     Running       0          83s   10.244.2.3   node02   <none>           <none>

You can see that pod has been migrated to node02.

Finally, we log in to mysql to verify that the data is restored:

[root@master ~]# kubectl exec -it mysql-68d65b9dd9-mvxdg -- mysql -u root -p123.com
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select * from volumes_db.my_id;
+----+----------+
| id | name     |
+----+----------+
|  1 | zhangsan |
+----+----------+
1 row in set (0.09 sec)

This shows that after the pod migration the mysql service still runs normally and no data is lost.

PV and PVC make the mysql data persistent and separate the responsibilities of the administrator from those of ordinary users, which makes them better suited to production environments.
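If you want to tear the experiment down afterwards, a hedged cleanup sketch (with the Recycle policy the recycler is expected to scrub /nfsdata and return the pv to Available, but verify this on your own cluster):

[root@master yaml]# kubectl delete -f mysql-pod.yaml     # remove the deployment and the service
[root@master yaml]# kubectl delete pvc nfs-pvc           # release the claim
[root@master yaml]# kubectl get pv                       # nfs-pv should eventually show Available again
[root@master yaml]# ls /nfsdata                          # the recycled share is expected to be empty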
