Advanced K8s: Dynamic Provisioning of PersistentVolumeClaims
First, let's take a brief look at the overall process, and then walk through it in detail.
Recall from the previous post that static provisioning requires us to create PVs manually. If no existing PV matches a claim, the pod sits in a Pending state, waiting for a suitable match it cannot find. Dynamic provisioning solves both problems: it creates PVs automatically, so when you create a PVC requesting a certain capacity, Kubernetes creates a PV of exactly that capacity for you. That work is delegated to a StorageClass, which knows how to talk to a specific storage backend and calls its API directly to provision the volume, so there is no need to create PVs by hand. Consider also that as nodes and workloads grow, manually creating PVs becomes a large and hard-to-maintain burden. The core object behind dynamic provisioning is the StorageClass: it declares which storage backend you use, connects to it, and then creates PVs for you automatically.
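To make that concrete, here is a minimal, generic StorageClass sketch (the name and provisioner are illustrative, not from this walkthrough); a PVC opts in to dynamic provisioning simply by referencing the class by name:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storage          # PVCs reference this name via storageClassName
provisioner: example.com/nfs     # which volume plugin provisions the PVs
reclaimPolicy: Delete            # what happens to the PV when the PVC is deleted
parameters: {}                   # backend-specific settings passed to the provisioner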
A concrete example makes this easier to understand, so without dwelling on the diagram above:
In fact, it is a PV provisioner implemented on top of NFS. Roughly, it works like this: we create a StatefulSet (a stateful application) that needs storage, managed through an NFS StorageClass. Because Kubernetes does not currently support automatic PV creation for NFS, we use a plugin implemented by the community to do it; that is the StorageClass piece. Once the PV is created, the pod references it.
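As a hedged sketch of what that looks like from the application side (names and sizes are illustrative; the managed-nfs-storage class is created later in this walkthrough), a StatefulSet can request one PVC per replica through volumeClaimTemplates:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:            # one PVC (and thus one auto-created PV) per replica
  - metadata:
      name: www
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi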
Here is the list of dynamic provisioning storage plugins supported by Kubernetes:
https://kubernetes.io/docs/concepts/storage/storage-classes/
That page tells you which storage backends have a built-in provisioner and which do not: the ones with a check mark are supported internally, the ones without are not. If yours is supported, you do not need a community StorageClass provisioner; if it is not, you need to get one from the community.
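For example, for a backend with a built-in provisioner such as AWS EBS, a StorageClass alone is enough and there is no extra component to deploy (a hedged sketch; the class name and parameter value are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs   # in-tree provisioner, no community plugin needed
parameters:
  type: gp2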
And here is the plugin repository the community provides; let's take a look:
https://github.com/kubernetes-incubator/external-storage
NFS is not supported by Kubernetes out of the box, so we use the YAML under nfs-client, a component developed by the community, which can create PVs for us automatically. In its deploy directory there are three files. class.yaml declares the StorageClass, that is, which provisioner supplies the storage. deployment.yaml deploys the provisioner itself as an ordinary application: it runs an image (downloadable directly) that does the actual work of creating PVs, and it is where the NFS server address and the data volume source are defined. rbac.yaml is needed because the provisioner talks to the Kubernetes API, so we must define an RBAC authorization policy for it. If Kubernetes supported NFS natively it could provision directly, but since it does not, all of this has to be implemented by the community component.
Now let's demonstrate:
Note that the YAML files used here can be downloaded from the community repository linked above.
[root@k8s-master demo]# mkdir nfs-client
[root@k8s-master demo]# cd nfs-client/
[root@k8s-master nfs-client]# rz -E
rz waiting to receive.
[root@k8s-master nfs-client]# ls
class.yaml  deployment.yaml  rbac.yaml
Create the RBAC objects first; there is nothing to customize here, just create them.
[root@k8s-master nfs-client]# kubectl create -f rbac.yaml
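For context, the ClusterRole in the community rbac.yaml grants the provisioner roughly these permissions (an abridged sketch; see the repository for the full file, which also includes the ServiceAccount, a Role, and the bindings):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]        # it creates and deletes PVs
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]   # it watches PVCs in order to serve them
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]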
In deployment.yaml we need to change the NFS server address and the NFS export path to match our own server.
[root@k8s-master nfs-client]# vim deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: zhaocheng172/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.30.27
        - name: NFS_PATH
          value: /opt/k8s
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.30.27
          path: /opt/k8s
[root@k8s-master nfs-client]# kubectl create -f class.yaml
[root@k8s-master nfs-client]# kubectl create -f deployment.yaml
The provisioner here is fuseim.pri/ifs, and as you can see it also appears in our class.yaml: the StorageClass must declare which provisioner supplies the storage, and a consumer deploying an application onto Kubernetes must name that StorageClass in its YAML if it wants automatic provisioning. You can configure multiple StorageClasses, one for NFS, another for Ceph, another for cloud storage; whichever class an application specifies at creation time determines which backend its PV is automatically created on.
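For reference, the class.yaml from the community repository looks roughly like this (a sketch; archiveOnDelete controls whether the provisioner keeps an archived copy of the data when the PVC is deleted):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs        # must match PROVISIONER_NAME in deployment.yaml
parameters:
  archiveOnDelete: "false"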
[root@k8s-master nfs-client]# kubectl get storageclass
managed-nfs-storage   fuseim.pri/ifs   51s
[root@k8s-master nfs-client]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
my-pod                                   1/1     Running   0          18h
nfs-744d977b46-dh9xj                     1/1     Running   0          18h
nfs-744d977b46-kcx6h                     1/1     Running   0          18h
nfs-744d977b46-wqhc6                     1/1     Running   0          18h
nfs-client-provisioner-fbc77b9d4-kkkll   1/1     Running   0          27s
Now we can use dynamic PV provisioning. Let's test it: I'll take my earlier static pod definition and add a storageClassName to the PVC, specifying our storage class. To test automatic provisioning cleanly, first delete the old statically provisioned PVs.
[root@k8s-master nfs-client]# kubectl get pv,pvc
persistentvolume/zhaocheng        5Gi    RWX   Retain   Released    default/my-pvc   18h
persistentvolume/zhaochengcheng   10Gi   RWX   Retain   Available                    18h
[root@k8s-master nfs-client]# kubectl delete persistentvolume/zhaocheng
persistentvolume "zhaocheng" deleted
[root@k8s-master nfs-client]# kubectl delete persistentvolume/zhaochengcheng
persistentvolume "zhaochengcheng" deleted
[root@k8s-master nfs-client]# kubectl get pv,pvc
No resources found.
Here is the YAML for dynamic provisioning; in the PVC we specify our storage class with storageClassName: "managed-nfs-storage".
[root@k8s-master nfs-client]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    persistentVolumeClaim:
      claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
[root@k8s-master nfs-client]# kubectl create -f pod.yaml
Check that our PV and PVC have been created for us automatically; we no longer have to create the PV by hand.
[root@k8s-master nfs-client]# kubectl get pod
my-pod                                   1/1   Running   0   110s
nfs-744d977b46-dh9xj                     1/1   Running   0   18h
nfs-744d977b46-kcx6h                     1/1   Running   0   18h
nfs-744d977b46-wqhc6                     1/1   Running   0   18h
nfs-client-provisioner-fbc77b9d4-kkkll   1/1   Running   0   20m
[root@k8s-master nfs-client]# kubectl get pv,pvc
persistentvolume/pvc-a24d4a5e-8f9d-4478-bfe5-b86e2360ae5a   5Gi   RWX   Delete   Bound   default/my-pvc   managed-nfs-storage   67s
NAME                           STATUS   VOLUME
persistentvolumeclaim/my-pvc   Bound    pvc-a24d4a5e-8f9d-4478-bfe5-b86e2360ae5a   5Gi   RWX   managed-nfs-storage   67s
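Note that the dynamically created PV has reclaim policy Delete, unlike the Retain policy on our hand-made PVs: a dynamically provisioned PV inherits the reclaimPolicy of its StorageClass, which defaults to Delete. If you want the volume kept after the PVC is removed, the class can override it (a hedged sketch based on the class.yaml above):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain              # keep the PV (and its data) when the PVC is deleted
parameters:
  archiveOnDelete: "false"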
We can also see the PVC's directory on our NFS server, and it is ready to use.
[root@localhost k8s]# ls
default-my-pvc-pvc-a24d4a5e-8f9d-4478-bfe5-b86e2360ae5a  wwwroot  zhaocheng  zhaochengcheng
[root@localhost k8s]# cd default-my-pvc-pvc-a24d4a5e-8f9d-4478-bfe5-b86e2360ae5a/
[root@localhost default-my-pvc-pvc-a24d4a5e-8f9d-4478-bfe5-b86e2360ae5a]# ls
[root@localhost default-my-pvc-pvc-a24d4a5e-8f9d-4478-bfe5-b86e2360ae5a]# echo "hello persistentvolumeclaim" > index.html
Check inside our pod's container:
[root@k8s-master nfs-client]# kubectl exec -it my-pod bash
root@my-pod:/# cat /usr/share/nginx/html/index.html
hello persistentvolumeclaim
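As an optional extra check (the pod IP is illustrative; use the one your cluster reports), you can verify that nginx itself serves the file from the NFS-backed volume:

[root@k8s-master nfs-client]# kubectl get pod my-pod -o wide   # note the pod IP
[root@k8s-master nfs-client]# curl http://<pod-ip>/            # should print: hello persistentvolumeclaim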