Data persistence is the general term for converting an in-memory data model into a storage model and a storage model back into an in-memory data model. The data model can be any data structure or object model, and the storage model can be a relational model, XML, a binary stream, and so on.
Docker containers have a lifecycle, so data volumes are needed to persist their data.
The main problems solved by data volumes are:
- Data persistence: data written inside a container is temporary. When the container crashes, the host kills it and re-creates it from the image, and the data written in the old container is lost.
- Data sharing: containers running in the same Pod often need to share files.
A storage class (StorageClass) is one of the resource types of k8s. It is a logical grouping created by administrators to manage PVs more conveniently; PVs can be grouped by the performance of the underlying storage system, overall quality of service, backup policy, and so on. k8s itself does not know what a category represents; the class is used only as a description.
One advantage of storage classes is that they support dynamic provisioning of PVs. When users need persistent storage, they do not have to create a PV in advance; they simply create a PVC, which is very convenient.
The name of a storage class object is very important. In addition to the name, there are three key fields (a manifest sketch follows this list):
Provisioner (supplier): the storage system that provides the storage resources. k8s has multiple built-in provisioners, whose names are prefixed with "kubernetes.io"; custom provisioners are also supported.
Parameters: the storage class uses parameters to describe the storage volumes it provisions. Note that the parameters vary from provisioner to provisioner.
ReclaimPolicy: the reclaim policy of the PV. Available values are Delete (the default) and Retain.
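A minimal sketch of a StorageClass and a PVC that requests it, to illustrate the three fields and dynamic provisioning. The class name, the RBD provisioner, and the monitors value are illustrative assumptions, not part of the environment used later in this article:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd                      # hypothetical class name
provisioner: kubernetes.io/rbd        # a built-in provisioner, prefixed with "kubernetes.io"
parameters:                           # provisioner-specific parameters
  monitors: 192.168.1.100:6789        # assumed Ceph monitor address, adjust for your cluster
reclaimPolicy: Retain                 # Delete is the default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim                    # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast-rbd          # a matching PV is provisioned dynamically for this claim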
Volume:
EmptyDir (empty directory): rarely used, generally only for temporary purposes; similar to Docker's "docker manager volume" form of data persistence. When the volume is first allocated it is an empty directory. Containers in the same Pod can read and write this directory and share data through it.
Usage scenario: different containers in the same Pod sharing a data volume.
If a container is deleted, the data still exists; if the Pod is deleted, the data is deleted along with it.
Examples of use:
[root@master ~]# vim emptyDir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: producer-consumer
spec:
  containers:
  - image: busybox
    name: producer
    volumeMounts:
    - mountPath: /producer_dir        # path inside the container
      name: shared-volume             # name of the volume to mount
    args:
    - /bin/sh
    - -c
    - echo "hello k8s" > /producer_dir/hello; sleep 30000
  - image: busybox
    name: consumer
    volumeMounts:
    - mountPath: /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello; sleep 30000
  volumes:
  - name: shared-volume               # must match the volumeMounts name above
    emptyDir: {}                      # data persistence type: empty directory

[root@master ~]# kubectl apply -f emptyDir.yaml
[root@master ~]# kubectl get pod
NAME                READY   STATUS    RESTARTS   AGE
producer-consumer   2/2     Running   0          14s
[root@master ~]# kubectl logs producer-consumer consumer
hello k8s
Use docker inspect to check where the volume is mounted on the host (look at the Mounts field):
[root@master ~]# kubectl get pod -o wide
NAME                READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
producer-consumer   2/2     Running   0          69s   10.244.1.2   node01
# The Pod is running on node01; find its containers on node01 and inspect them
[root@node01 ~]# docker ps
CONTAINER ID   IMAGE
f117beb235cf   busybox
13c7a18109a1   busybox
[root@node01 ~]# docker inspect 13c7a18109a1
"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume",
        "Destination": "/producer_dir",       # mount directory inside the container
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
# Inspect the other container
[root@node01 ~]# docker inspect f117beb235cf
"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume",
        "Destination": "/consumer_dir",       # mount directory inside the container
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
# Both containers use the same mount directory on the host
[root@node01 ~]# cd /var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume
[root@node01 shared-volume]# ls
hello
[root@node01 shared-volume]# cat hello
hello k8s
Delete a container and verify that the directory still exists:
[root@node01 ~]# docker rm -f 13c7a18109a1
13c7a18109a1
[root@node01 ~]# docker ps
CONTAINER ID   IMAGE
a809717b1aa5   busybox
f117beb235cf   busybox
# kubelet regenerates a new container to restore the desired state, so the directory still exists
Delete the Pod:
[root@master ~]# kubectl delete pod producer-consumer
[root@master ~]# ls /var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume
ls: cannot access /var/lib/kubelet/pods/5225f542-0859-4a6a-8d99-1f23b9781807/volumes/kubernetes.io~empty-dir/shared-volume: No such file or directory
# Once the Pod is deleted, the data is deleted with it
HostPath volume (also relatively rarely used): similar to Docker's bind-mount form of data persistence.
It mounts a file or directory from the node's filesystem into the container.
If the Pod is deleted the data is retained, which is better than emptyDir; but once the host crashes, the hostPath data can no longer be accessed.
hostPath is mostly used by Docker or the k8s cluster itself for its own storage.
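For reference, a minimal hostPath Pod sketch. The Pod name, the container path, and the host directory are illustrative assumptions, not taken from the examples in this article:
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo                 # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: /test-data           # path inside the container
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /data/hostpath            # directory on the node's filesystem
      type: DirectoryOrCreate         # create the directory if it does not exist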
A k8s cluster has a large number of Pods, and managing them is very inconvenient if they all use hostPath volumes, so PV is used instead.
Persistent Volume | PV (persistent volume): a data storage directory prepared in advance, used for data persistence.
It is a piece of storage space in the cluster, managed by the cluster administrator or automatically by a storage class (StorageClass). Like pod, deployment, and Service, a PV is a resource object.
A PersistentVolume (PV) is a piece of network storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a node is a cluster resource. A PV is a volume plug-in like Volumes, but it has a lifecycle independent of any individual pod that uses it. The API object captures the implementation details of the storage, be it NFS, iSCSI, or a cloud-provider-specific storage system.
Persistent Volume Claim | PVC (persistent volume claim / application)
A PVC represents a user's request to use storage: an application for and declaration of PV persistent space. A k8s cluster may have multiple PVs, and new PVs constantly need to be created for different applications.
PVCs are similar to pods: pods consume node resources, while PVCs consume PV (storage) resources. Pods can request specific amounts of resources (CPU and memory); claims can request a specific size and access mode.
The official document has more details: https://www.kubernetes.org.cn/pvpvcstorageclass
PV based on NFS service
[root@master ~]# yum -y install nfs-utils        # nfs-utils must be installed on all nodes, otherwise mounting fails with a wrong fs type error
[root@master ~]# yum -y install rpcbind
[root@master ~]# mkdir /nfsdata
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
1. Create the PV (and the actual storage directory)
2. Create the PVC
3. Create the Pod
Create a PV resource object:
[root@master ~]# vim nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:                                # capacity of the PV
    storage: 1Gi
  accessModes:                             # access modes supported by the PV
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle   # policy for reclaiming the PV's storage space
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.70

[root@master ~]# kubectl apply -f nfs-pv.yaml
[root@master ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Recycle          Available           nfs                     9m30s
accessModes: (access modes supported by the PV)
- ReadWriteOnce: can be mounted by a single node in read-write mode
- ReadWriteMany: can be mounted by multiple nodes in read-write mode
- ReadOnlyMany: can be mounted by multiple nodes in read-only mode
persistentVolumeReclaimPolicy: (reclaim policy for the PV's storage space)
- Recycle: automatically clears the data
- Retain: requires manual reclamation by the administrator
- Delete: for cloud storage only; deletes the data directly
PV and PVC are matched (bound) through storageClassName and accessModes, as shown in the excerpt below.
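To make the matching concrete, these are the fields that must line up, excerpted in simplified form from the nfs-pv.yaml and nfs-pvc.yaml manifests used in this article:
# In the PV (nfs-pv.yaml):
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs
# In the PVC (nfs-pvc.yaml):
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs
# The PVC binds to a PV whose storageClassName matches, whose access modes include
# the requested mode, and whose capacity is at least the requested size.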
Create PVC
[root@master ~]# vim nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:                      # access mode
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                  # requested capacity
  storageClassName: nfs             # which class of PV to apply for

[root@master ~]# kubectl apply -f nfs-pvc.yaml
[root@master ~]# kubectl get pvc
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-pv   1Gi        RWO            nfs            14s
# The PVC has applied for and is bound to the PV
Create a Pod resource:
[root@master ~]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: pod1
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: "/mydata"
      name: mydata
  volumes:
  - name: mydata
    persistentVolumeClaim:
      claimName: test-pvc

[root@master ~]# kubectl apply -f pod.yaml
The mount path specified when creating the PV was /nfsdata/pv1, but we never created the pv1 directory, so the pod does not run successfully.
Here is how to troubleshoot:
- kubectl describe
- kubectl logs
- /var/log/messages (the kubelet log on the node)
# Using kubectl describe:
[root@master ~]# kubectl describe pod test-pod
mount.nfs: mounting 192.168.1.70:/nfsdata/pv1 failed, reason given by server: No such file or directory
# The error says there is no such file or directory
Create a directory and check the pod status:
[root@master ~]# mkdir /nfsdata/pv1
[root@master ~]# kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
test-pod   1/1     Running   0          12m   10.244.1.3   node01
Verify that the application was successful:
[root@master ~]# kubectl exec test-pod touch /mydata/hello
[root@master ~]# ls /nfsdata/pv1/
hello
[root@master ~]# echo 123 > /nfsdata/pv1/hello
[root@master ~]# kubectl exec test-pod cat /mydata/hello
123
Delete the Pod and verify the recycling policy (Recycle):
[root@master ~]# kubectl delete pod test-pod
[root@master ~]# kubectl delete pvc test-pvc
[root@master ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Recycle          Available           nfs                     42h
[root@master ~]# ls /nfsdata/pv1/
[root@master ~]#
# The data has been reclaimed (cleared)
Usually the policy is not set to clear data automatically; otherwise the PV would be not much different from emptyDir.
Delete the PV and modify the reclaim policy:
Previously we created PV ---> PVC ---> Pod; now we change the order and create PV ---> Pod ---> PVC.
[root@master ~]# vim nfs-pv.yaml
  persistentVolumeReclaimPolicy: Retain
[root@master ~]# kubectl apply -f nfs-pv.yaml
[root@master ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Retain           Available           nfs                     7s
[root@master ~]# kubectl apply -f pod.yaml
[root@master ~]# kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
test-pod   0/1     Pending   0          5s
# Pending: the Pod is waiting to be scheduled
[root@master ~]# kubectl describe pod test-pod
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  41s (x2 over 41s)  default-scheduler  persistentvolumeclaim "test-pvc" not found
# The corresponding PVC was not found, so create the PVC
[root@master ~]# kubectl apply -f nfs-pvc.yaml
[root@master ~]# kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
test-pod   1/1     Running   0          114s
Verify the Retain reclaim policy (manual reclamation by the administrator):
[root@master ~]# kubectl exec test-pod touch /mydata/k8s
[root@master ~]# ls /nfsdata/pv1/
k8s
[root@master ~]# kubectl delete pod test-pod
[root@master ~]# kubectl delete pvc test-pvc
[root@master ~]# ls /nfsdata/pv1/
k8s
# The data has not been reclaimed
[root@master ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   1Gi        RWO            Retain           Available           nfs                     6s
MySQL data persistence example:
No new PV is created here; we reuse the previous PV and simply re-create the PVC.
[root@master ~]# kubectl apply -f nfs-pvc.yaml
[root@master ~]# kubectl get pvc
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-pv   1Gi        RWO            nfs            7s
Create a Deployment resource object running a MySQL container:
[root@master ~]# vim mysql.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-mysql
spec:
  selector:
    matchLabels:                    # equality-based label selector
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 123.com
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: test-pvc

[root@master ~]# kubectl apply -f mysql.yaml
[root@master ~]# kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
test-mysql   1/1     1            1           61s
Enter the container and create some data to verify that the PV is being used:
[root@master ~]# kubectl get pod -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
test-mysql-569f8df4db-fnnxc   1/1     Running   0          32m   10.244.1.5   node01
[root@master ~]# kubectl exec -it test-mysql-569f8df4db-fnnxc -- mysql -u root -p123.com
mysql> create database yun33;               # create a database
mysql> use yun33;                           # switch to the database
Database changed
mysql> create table my_id (id int(4));      # create a table
mysql> insert my_id values (9527);          # insert a row into the table
mysql> select * from my_id;                 # view all data in the table
+------+
| id   |
+------+
| 9527 |
+------+
1 row in set (0.00 sec)
[root@master ~]# ls /nfsdata/pv1/
auto.cnf  ibdata1  ib_logfile0  ib_logfile1  k8s  mysql  performance_schema  yun33
Shut down node01 to simulate node downtime:
[root@master ~]# kubectl get pod -o wide -w
NAME                          READY   STATUS              RESTARTS   AGE   IP           NODE
test-mysql-569f8df4db-fnnxc   1/1     Running             0          36m   10.244.1.5   node01
test-mysql-569f8df4db-fnnxc   1/1     Terminating         0          38m   10.244.1.5   node01
test-mysql-569f8df4db-2m5rd   0/1     Pending             0          0s
test-mysql-569f8df4db-2m5rd   0/1     Pending             0          0s                 node02
test-mysql-569f8df4db-2m5rd   0/1     ContainerCreating   0          0s                 node02
test-mysql-569f8df4db-2m5rd   1/1     Running             0          2s    10.244.2.4   node02
[root@master ~]# kubectl get pod -o wide
NAME                          READY   STATUS        RESTARTS   AGE   IP           NODE
test-mysql-569f8df4db-2m5rd   1/1     Running       0          20s   10.244.2.4   node02
test-mysql-569f8df4db-fnnxc   1/1     Terminating   0          38m   10.244.1.5   node01
Verify that the new pod generated on node02 contains the data we created:
[root@master ~]# kubectl exec -it test-mysql-569f8df4db-2m5rd -- mysql -u root -p123.com
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| yun33              |
+--------------------+
4 rows in set (0.01 sec)
mysql> use yun33;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+-----------------+
| Tables_in_yun33 |
+-----------------+
| my_id           |
+-----------------+
1 row in set (0.01 sec)
mysql> select * from my_id;
+------+
| id   |
+------+
| 9527 |
+------+
1 row in set (0.00 sec)
[root@master ~]# ls /nfsdata/pv1/
auto.cnf  ibdata1  ib_logfile0  ib_logfile1  k8s  mysql  performance_schema  yun33