Rebinding a Kubernetes PV in the Retain/Released state to a new PVC to recover data
1. Purpose and environment of the experiment
Background: after running helm upgrade on stable/sonatype-nexus to move from chart version 1.6 to 1.13, the existing PVC was deleted and a new one was created. Fortunately, the original PV used the Retain reclaim policy, so this article studies how to recover data from a Retained PV.
Experimental purpose: after a PVC is deleted, the PV that used the Retain policy is left in the Released state; rebind that PV to a new PVC and mount it into a Pod, thereby recovering the data.
Environment description:
Kubernetes: 1.12.1
StorageClass: ceph-rbd
OS: CentOS 7

2. Experimental process
Prepare the YAML files:
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: ceph-rbd-volume
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: ceph-rbd-volume
          persistentVolumeClaim:
            claimName: pvc-test
Create the PVC and Deployment, write some data, then delete the PVC:
[root@lab1 test]# ll
total 8
-rw-r--r-- 1 root root 533 Oct 24 17:54 nginx.yaml
-rw-r--r-- 1 root root 187 Oct 24 17:55 pvc.yaml
[root@lab1 test]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc-test created
[root@lab1 test]# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test   Bound    pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            ceph-rbd       7s
[root@lab1 test]# kubectl apply -f nginx.yaml
deployment.extensions/nginx-rbd created
[root@lab1 test]# kubectl get pod | grep nginx-rbd
nginx-rbd-7c6449886-thv25   1/1     Running   0          33s
[root@lab1 test]# kubectl exec -it nginx-rbd-7c6449886-thv25 -- /bin/bash -c 'echo ygqygq2 > /usr/share/nginx/html/ygqygq2.html'
[root@lab1 test]# kubectl exec -it nginx-rbd-7c6449886-thv25 -- cat /usr/share/nginx/html/ygqygq2.html
ygqygq2
[root@lab1 test]# kubectl delete -f nginx.yaml
deployment.extensions "nginx-rbd" deleted
[root@lab1 test]# kubectl get pvc pvc-test
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test   Bound    pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            ceph-rbd       4m10s
[root@lab1 test]# kubectl delete pvc pvc-test   # Delete the PVC
persistentvolumeclaim "pvc-test" deleted
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM              STORAGECLASS   REASON   AGE
pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            Retain           Released   default/pvc-test   ceph-rbd                4m33s
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938 -o yaml > /tmp/pvc-069c4486-d773-11e8-bd12-000c2931d938.yaml   # Keep a copy for later
As you can see above, after the PVC is deleted, the PV becomes Released.
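Note that the PV only survives PVC deletion because its reclaim policy is Retain; with the Delete policy the backing volume would have been removed as well. As a minimal sketch (using the PV name from this experiment), the policy can be checked and, if necessary, patched to Retain before deleting a PVC:

# Check the RECLAIM POLICY column of the PV
kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938

# Change the reclaim policy to Retain so the volume survives PVC deletion
kubectl patch pv pvc-069c4486-d773-11e8-bd12-000c2931d938 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'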
Create a PVC with the same name again and check whether the original PV is reused:
[root@lab1 test]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc-test created
[root@lab1 test]# kubectl get pvc   # View the newly created PVC
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test   Bound    pvc-f2df48ea-d773-11e8-b6c8-000c29ea3e30   1Gi        RWO            ceph-rbd       19s
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938   # View the original PV
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM              STORAGECLASS   REASON   AGE
pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            Retain           Released   default/pvc-test   ceph-rbd                7m18s
[root@lab1 test]#
As you can see above, the new PVC is bound to a brand-new PV, because the original PV is not in the Available state.
So how do we change the PV status back to Available? Let's look at the copy of the previous PV we saved earlier:
[root@lab1 test]# cat /tmp/pvc-069c4486-d773-11e8-bd12-000c2931d938.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: ceph.com/rbd
    rbdProvisionerIdentity: ceph.com/rbd
  creationTimestamp: 2018-10-24T09:56:06Z
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-069c4486-d773-11e8-bd12-000c2931d938
  resourceVersion: "11752758"
  selfLink: /api/v1/persistentvolumes/pvc-069c4486-d773-11e8-bd12-000c2931d938
  uid: 06b57ef7-d773-11e8-bd12-000c2931d938
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-test
    namespace: default
    resourceVersion: "11751559"
    uid: 069c4486-d773-11e8-bd12-000c2931d938
  persistentVolumeReclaimPolicy: Retain
  rbd:
    fsType: ext4
    image: kubernetes-dynamic-pvc-06a25bd3-d773-11e8-8c3e-0a580af400d5
    keyring: /etc/ceph/keyring
    monitors:
    - 192.168.105.92:6789
    - 192.168.105.93:6789
    pool: kube
    secretRef:
      name: ceph-secret
      namespace: kube-system
    user: kube
  storageClassName: ceph-rbd
status:
  phase: Released
As you can see from the above, the spec.claimRef section still retains the binding information of the previous PVC.
We boldly delete the spec.claimRef section with kubectl edit, then check the PV again:
kubectl edit pv pvc-069c4486-d773-11e8-bd12-000c2931d938
[root@lab1 test]# kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            Retain           Available           ceph-rbd                10m
As you can see from above, the previous PV pvc-069c4486-d773-11e8-bd12-000c2931d938 has become Available.
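Deleting the section with kubectl edit is interactive; the same result can be achieved non-interactively with a JSON patch. This is only a sketch of an equivalent command, using the PV name from this experiment:

# Remove spec.claimRef in one step (equivalent to deleting the section in kubectl edit)
kubectl patch pv pvc-069c4486-d773-11e8-bd12-000c2931d938 \
  --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'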
Create a new PVC and Deployment again, and check the data:
new_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-new
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi
new_nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-rbd
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: ceph-rbd-volume
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: ceph-rbd-volume
          persistentVolumeClaim:
            claimName: pvc-test-new
Operation procedure:
[root@lab1 test]# kubectl apply -f new_pvc.yaml
persistentvolumeclaim/pvc-test-new created
[root@lab1 test]# kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-test       Bound    pvc-f2df48ea-d773-11e8-b6c8-000c29ea3e30   1Gi        RWO            ceph-rbd       31m
pvc-test-new   Bound    pvc-069c4486-d773-11e8-bd12-000c2931d938   1Gi        RWO            ceph-rbd       27m
[root@lab1 test]# kubectl apply -f new_nginx.yaml
[root@lab1 test]# kubectl get pod | grep nginx-rbd
nginx-rbd-79bb766b6c-mv2h8   1/1     Running   0          20m
[root@lab1 test]# kubectl exec -it nginx-rbd-79bb766b6c-mv2h8 -- ls /usr/share/nginx/html
lost+found  ygqygq2.html
[root@lab1 test]# kubectl exec -it nginx-rbd-79bb766b6c-mv2h8 -- cat /usr/share/nginx/html/ygqygq2.html
ygqygq2
As you can see from above, the new PVC is assigned to the original PV pvc-069c4486-d773-11e8-bd12-000c2931d938, and the data is still there.
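The binding can also be confirmed from the PV side: after the new PVC binds, the CLAIM column of the original PV should show the new claim (a quick check; output abbreviated):

kubectl get pv pvc-069c4486-d773-11e8-bd12-000c2931d938
# CLAIM should now read default/pvc-test-new and STATUS should be Bound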
3. Summary
In the current Kubernetes version, storage size is the only resource a PVC can set or request. Since we did not change the requested size, a PVC asking for the same size and storage class is bound successfully once the PV is back in the Available state.
The key to turning a Released PV back into Available is its spec.claimRef field, which records the binding information of the original PVC. Once that binding information is removed, the PV is released again and becomes Available for a new claim.
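If matching only by size and storage class feels too loose, the new PVC can also be pinned to the recovered PV explicitly through spec.volumeName. A minimal sketch, assuming the PV name from this experiment; the claim name pvc-test-pinned is hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-pinned   # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  volumeName: pvc-069c4486-d773-11e8-bd12-000c2931d938   # bind to this specific PV
  resources:
    requests:
      storage: 1Gi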