
Introduction and Application of K8s Storage Modes (Persistence, MySQL Data Persistence)


K8s storage (persistence)

Docker containers have a lifecycle.

Volume

1. A storage class is one of the k8s resource types. It is a logical grouping created by administrators to manage PVs more conveniently. PVs can be grouped by the performance of the storage system, overall quality of service, backup policy, and so on. But k8s itself does not know what each category means; the class name is used only as a description.

2. One advantage of storage classes is that they support dynamic creation of PVs. When users need persistent storage, they do not have to create a PV in advance; they create a PVC directly, which is very convenient.

3. The name of a storage class object is very important. Besides the name, there are three key fields:

Provisioner (supplier): the storage system that provides the storage resources. K8s has multiple built-in provisioners, whose names are prefixed with "kubernetes.io"; custom provisioners are also possible.

Parameters: the storage class uses parameters to describe the storage volumes to be associated with it. Note that the parameters vary from provisioner to provisioner.

ReclaimPolicy: the PV's reclaim policy. Available values are Delete (the default) and Retain.
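To make these three fields concrete, here is a minimal sketch of a StorageClass manifest, assuming an AWS environment; the provisioner and its parameters are environment-specific and chosen here only for illustration:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard                      # the name users reference from a PVC
provisioner: kubernetes.io/aws-ebs    # built-in provisioner, "kubernetes.io" prefix
parameters:
  type: gp2                           # provisioner-specific parameter
reclaimPolicy: Retain                 # Delete is the default

A PVC that sets storageClassName: standard would then have a PV provisioned for it automatically.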

Brief introduction

1. Because containers themselves are non-persistent, some problems must be solved when running applications in containers. First, when a container crashes, kubelet restarts it, but files written inside the container are lost: the container restarts from the initial state of the image. Second, containers running together in a Pod usually need to share files. Kubernetes solves both problems with storage volumes.

2. Docker also has a concept of storage volumes, but a Docker storage volume is only a directory on disk or in another container, and its lifecycle is not managed. Kubernetes storage volumes have their own lifecycle, consistent with the lifecycle of the Pod that uses them. A storage volume therefore outlives any individual container running in the Pod, and data is retained across container restarts. Of course, when the Pod ceases to exist, the storage volume ceases to exist as well. Kubernetes supports many types of volumes, and a Pod can use any number of storage volumes of any type at the same time. To use a storage volume in a Pod, specify the following fields (a minimal sketch follows the field list):

spec.volumes: declare the storage volumes through this field

spec.containers.volumeMounts: attach the storage volumes to containers through this field
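A minimal sketch of how the two fields fit together (names here are illustrative; a complete, runnable example appears in the emptyDir test below):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo              # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:                # spec.containers.volumeMounts: attach the volume
    - name: data
      mountPath: /data
  volumes:                       # spec.volumes: declare the volume
  - name: data
    emptyDir: {}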

Environment introduction:

host      IP address      service
master    192.168.1.21    k8s
node01    192.168.1.22    k8s
node02    192.168.1.23    k8s

1. emptyDir (empty directory): similar to Docker data persistence: docker managed volume

Usage scenario: different containers in the same Pod sharing a data volume.

If a container is deleted, the data still exists; if the Pod is deleted, the data is deleted as well.

Introduction

An emptyDir is first created when a Pod is assigned to a specific node, and it exists for the life of the Pod. As its name suggests, it is initialized as an empty directory that the containers in the Pod can read and write. The directory can be mounted at the same or different paths in each container. When the Pod is removed for any reason, the data is permanently deleted. Note: a container crash does NOT cause data loss, because a container crash does not remove the Pod.

Usage scenarios of emptyDir: a blank initial space, for example a merge/sort algorithm that temporarily stores data on disk; storing the checkpoints (intermediate results) of a long computation, so that if the container crashes it can continue from the last checkpoint instead of starting from scratch; shared storage between two containers, where a content-management container stores the generated data and a webserver container serves those pages to the outside. By default, emptyDir data volumes are stored on the node's storage medium (mechanical hard disk, SSD, or network storage). The roles of an emptyDir disk:

(1) Ordinary space, disk-based data storage.

(2) A checkpoint from which to recover after a crash.

(3) Storing data that needs to be kept while the service runs, such as data in web services.

By default, emptyDir disks are stored on whatever medium the host uses, which may be SSD or network disks, depending on your environment. We can also set emptyDir.medium to Memory to tell Kubernetes to mount a memory-based directory (tmpfs): tmpfs is faster than disk, but all its data is lost when the host reboots.
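A minimal sketch of a memory-backed emptyDir (the pod name is illustrative); everything under /cache disappears if the host reboots:

apiVersion: v1
kind: Pod
metadata:
  name: memory-cache-demo        # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory             # mount as tmpfs instead of node disk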

Test and write a yaml file

[root@master yaml]# vim emptyDir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: producer-consumer
spec:
  containers:
  - image: busybox
    name: producer
    volumeMounts:
    - mountPath: /producer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - echo "hello k8s" > /producer_dir/hello; sleep 30000
  - image: busybox
    name: consumer
    volumeMounts:
    - mountPath: /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello; sleep 30000
  volumes:
  - name: shared-volume
    emptyDir: {}

Execute it.

[root@master yaml]# kubectl apply -f emptyDir.yaml

Check it out

[root@master yaml]# kubectl get pod

View the log

[root@master yaml]# kubectl logs producer-consumer producer
[root@master yaml]# kubectl logs producer-consumer consumer

View the mounted directory

On the node, look up the container name, then find the mounted directory through the container name.

[root@node01 shared-volume]# docker ps

[root@node01 shared-volume]# docker inspect k8s_consumer_producer-consumer_default_9ec83f9e-e58b-4bf8-8e16-85b0f83febf9_0

Go to the mount directory and check it.

2. hostPath Volume: similar to Docker data persistence: bind mount

hostPath mounts a directory from the file system of the host where the Pod runs into the Pod, across the container namespace. When the Pod is deleted, the stored data is not lost.

Action

If the Pod is deleted, the data is retained; in this respect it is better than emptyDir. But once the host crashes, the hostPath becomes inaccessible.

Docker and the K8s cluster itself store some of their own data this way (hostPath).

Applicable scenarios are as follows:

A container needs access to Docker: use hostPath to mount the host node's /var/lib/docker.

Run cAdvisor in a container: use hostPath to mount the host node's /sys.
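A minimal sketch along the lines of the first scenario, mounting the host's /var/lib/docker into a pod read-only (pod and volume names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo            # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - name: docker-dir
      mountPath: /var/lib/docker
      readOnly: true             # avoid accidental writes to host state
  volumes:
  - name: docker-dir
    hostPath:
      path: /var/lib/docker      # directory on the host node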

3. Persistent Volume | PV (persistent volume): a directory, made in advance, where data is persisted. Persistent Volume Claim | PVC (persistent volume claim | application): a user's request for persistent storage.

A PersistentVolume (PV) is a piece of network storage in the cluster that has been configured by an administrator. It is a resource in the cluster, just as a node is a cluster resource. A PV is a volume plugin, like Volume, but it has a lifecycle independent of any individual Pod that uses it. This API object captures the implementation details of the storage, be it NFS, iSCSI, or a cloud-provider-specific storage system.

The concept of PVC and PV

We mentioned earlier that Kubernetes provides many storage interfaces. Every Node in the cluster can manage storage, but the various storage parameters are something only professional storage engineers understand, which makes Kubernetes administration more complex. Kubernetes therefore introduced the concepts of PV and PVC, so that developers and users do not have to care what the back-end storage is or which parameters to use.


A PersistentVolumeClaim (PVC) is a user's request for storage. The logic of using a PVC: define a storage volume in the Pod (with type PVC) and specify its size directly in the definition. The PVC must establish a relationship with a matching PV: the PVC applies for a PV according to its definition, and the PV is created from actual storage space. PV and PVC are storage resource abstractions provided by Kubernetes.

Although PersistentVolumeClaims allow users to consume abstract storage resources, a common requirement is PVs with different properties for different scenarios. The cluster administrator then needs to offer PVs that differ in more ways than just size and access mode, while users remain ignorant of how those volumes are implemented. For this requirement there is the StorageClass resource, already mentioned above.

PV is a resource in the cluster. PVC is a request for those resources and also acts as a claim check on them. The interaction between PV and PVC follows this lifecycle:

Provisioning --> Binding --> Using --> Releasing --> Recycling

(1) PV based on the NFS service

NFS allows us to mount an existing share into a Pod. Unlike emptyDir, which is removed when its Pod is deleted, an NFS share is not deleted when the Pod goes away; it is merely unmounted. This means data can be prepared on NFS in advance and passed between Pods. An NFS share can also be mounted by multiple Pods at the same time for reading and writing.

Note: before mounting NFS, we must first make sure the NFS server is running properly.

Download the installation packages required for NFS

[root@node02 ~]# yum -y install nfs-utils rpcbind

Create a shared directory

[root@master ~]# mkdir /nfsdata

Configure the shared directory's permissions

[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)

Enable nfs and rpcbind

[root@master ~]# systemctl start nfs-server.service
[root@master ~]# systemctl start rpcbind

Test it

[root@master ~]# showmount -e

Create a yaml file for nfs-pv

[root@master yaml]# cd yaml/
[root@master yaml]# vim nfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:                    # pv capacity size
    storage: 1Gi
  accessModes:                 # access modes supported by the pv
  - ReadWriteOnce              # can be mounted read-write by a single node
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.21

accessModes (access modes supported by PV):
- ReadWriteOnce: can be mounted read-write by a single node.
- ReadWriteMany: can be mounted read-write by multiple nodes.
- ReadOnlyMany: can be mounted read-only by multiple nodes.

persistentVolumeReclaimPolicy (reclaim policy for the PV's storage space):
- Recycle: automatically clears the data.
- Retain: requires manual reclamation by the administrator.
- Delete: dedicated to cloud storage.

Execute it:

[root@master yaml]# kubectl apply -f nfs-pv.yaml

Check:

[root@master yaml]# kubectl get pv

Create a yaml file for nfs-pvc


[root@master yaml]# vim nfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

Execute:

[root@master yaml]# kubectl apply -f nfs-pvc.yaml

Check:

[root@master yaml]# kubectl get pvc

[root@master yaml]# kubectl get pv

(2) Create a pod resource

[root@master yaml]# vim pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: pod1
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: "/mydata"
      name: mydata
  volumes:
  - name: mydata
    persistentVolumeClaim:
      claimName: test-pvc

Execute:

[root@master yaml]# kubectl apply -f pod.yaml

Check:

[root@master yaml]# kubectl get pod -o wide

You can see that it has not started successfully yet.

Check the test-pod information to see where the problem is:

[root@master yaml]# kubectl describe pod test-pod

That's because the PV's local mount directory has not been created yet; the directory name must match the path in nfs-pv.yaml:

[root@master yaml]# mkdir /nfsdata/pv1

Recreate the pod:

[root@master yaml]# kubectl delete -f pod.yaml
[root@master yaml]# kubectl apply -f pod.yaml
[root@master yaml]# kubectl get pod -o wide

(3) In test-pod, create the file hello and add content

[root@master yaml]# kubectl exec test-pod touch /mydata/hello

Enter the container

[root@master yaml]# kubectl exec -it test-pod /bin/sh
/ # echo 123 > /mydata/hello
/ # exit

Check the mounted directory:

[root@master yaml]# cat /nfsdata/pv1/hello

Same as just now.

(4) Test the recycling policy: delete the pod, pvc, and pv

[root@master yaml]# kubectl delete pod test-pod
[root@master yaml]# kubectl delete pvc test-pvc
[root@master yaml]# kubectl delete pv test-pv

Check:

[root@master yaml]# kubectl get pv

[root@master yaml]# cat /nfsdata/pv1/hello

The file has been recycled

(5) Modify the pv's recycling policy to manual (Retain)

[root@master yaml]# vim nfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain    # modified
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.21

Execute:

[root@master yaml]# kubectl apply -f nfs-pv.yaml

Create the pod:

[root@master yaml]# kubectl apply -f pod.yaml

[root@master yaml]# kubectl describe pod test-pod

Create the pvc:

[root@master yaml]# kubectl apply -f nfs-pvc.yaml

View the pod:

[root@master yaml]# kubectl get pod

(6) In test-pod, create a file and add content

[root@master yaml]# kubectl exec test-pod touch /mydata/k8s

Check the mount directory:

[root@master yaml]# ls /nfsdata/pv1/

Delete the pod, pvc, and pv, then check the mount directory again:

[root@master yaml]# kubectl delete pod test-pod
[root@master yaml]# kubectl delete pvc test-pvc
[root@master yaml]# kubectl delete pv test-pv
[root@master yaml]# ls /nfsdata/pv1/

The content is still there.

4. Applying data persistence to MySQL

The following shows how to provide persistent storage for a MySQL database:

Create PV and PVC. Deploy MySQL. Add data to MySQL. Simulate node downtime, and watch Kubernetes automatically migrate MySQL to another node. Verify data consistency.

(1) Create the pv and pvc from the earlier yaml files

[root@master yaml]# kubectl apply -f nfs-pv.yaml
[root@master yaml]# kubectl apply -f nfs-pvc.yaml

Check:

[root@master yaml]# kubectl get pv

[root@master yaml]# kubectl get pvc

(2) Write a mysql yaml file

[root@master yaml]# vim mysql.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-mysql
spec:
  selector:
    matchLabels:               # supports equality-based labels
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 123.com
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: test-pvc

Execute:

[root@master yaml]# kubectl apply -f mysql.yaml

Check:

[root@master yaml]# kubectl get pod

(3) enter the mysql container

① Create and switch to the database yun33.

② Create the table my_id.

③ Insert a piece of data.

④ Confirm that the data has been written.

Then shut down the node running MySQL to simulate node downtime.

[root@master yaml]# kubectl exec -it test-mysql-569f8df4db-rkpwm -- mysql -u root -p123.com

Create a database:
mysql> create database yun33;

Switch database:
mysql> use yun33;

Create a table:
mysql> create table my_id(id int(4));

Insert data into the table:
mysql> insert my_id values(9527);

View the table:
mysql> select * from my_id;

(4) Check the local mount directory

[root@master yaml]# ls /nfsdata/pv1/

Watch the pods:

[root@master yaml]# kubectl get pod -o wide -w

Suspend node01

(5) Check whether the data on node02 is the same as before (verify data consistency). Enter the database:

[root@master yaml]# kubectl exec -it test-mysql-569f8df4db-nsdnz -- mysql -u root -p123.com

View the databases:
mysql> show databases;

View the tables:
mysql> show tables;

mysql> select * from my_id;

You can see that the data is still there.

5. Troubleshooting methods

kubectl describe
// view the details and identify the problem

kubectl logs
// check the logs to find the problem

/var/log/messages
// view the kubelet log of this node
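Applied to this walkthrough, a typical session might look like this (pod and container names taken from the steps above):

[root@master yaml]# kubectl describe pod test-pod
[root@master yaml]# kubectl logs test-pod pod1
[root@node01 ~]# tail /var/log/messages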

6. Summary

In this chapter, we discussed how Kubernetes manages storage resources.

Volumes of the emptyDir and hostPath types are convenient but not durable; Kubernetes also supports Volumes backed by a variety of external storage systems.

PV and PVC separate the responsibilities of administrators and ordinary users and are better suited to production environments. We also learned how StorageClass enables more efficient dynamic provisioning.

Finally, we demonstrated how to use PersistentVolume to achieve data persistence in MySQL.

1. Access control types of PV

accessModes (access modes supported by PV):

ReadWriteOnce: can be mounted read-write by a single node.
ReadWriteMany: can be mounted read-write by multiple nodes.
ReadOnlyMany: can be mounted read-only by multiple nodes.

2. Space reclaim policy of PV

persistentVolumeReclaimPolicy (reclaim policy for the PV's storage space):

Recycle: automatically clears the data.

Retain: requires manual reclamation by the administrator.

Delete: dedicated to cloud storage.

3. How PV and PVC are associated

They are matched through the accessModes and storageClassName fields, as in the sketch below.
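A minimal sketch of the match, mirroring the nfs-pv.yaml and nfs-pvc.yaml used earlier: the claim binds to the volume because both declare storageClassName: nfs, the access modes are compatible, and the volume's capacity satisfies the request.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi             # must satisfy the claim's request
  accessModes:
  - ReadWriteOnce            # must be compatible with the claim
  storageClassName: nfs      # must match the claim
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.21
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # must not exceed the PV's capacity
  storageClassName: nfs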
