2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
In the previous two articles, the K8s data-persistence workflow was: build the underlying NFS storage -> create a PV -> create a PVC -> create a pod. In the end, the container in the pod achieves data persistence.
That process works, but think about how binding happens: when a PVC requests storage from a PV, it selects a PV by the specified capacity and access mode. Suppose a PV has a capacity of 20Gi and an access mode of RWO (ReadWriteOnce: mountable read-write by a single node only), while the PVC requests 10Gi. Once the PVC binds to that PV, the remaining 10Gi of the PV is wasted, because the volume may only be mounted by a single node. And even setting that aside, creating every PV by hand is tedious; we want a tool that creates PVs for us automatically.
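For contrast, this is roughly what one of those hand-written PVs looks like. The name is hypothetical; the capacity, access mode, server IP, and share path match the scenario in this article. Every field has to be maintained per volume, which is exactly the chore automatic provisioning removes.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv            # hypothetical name, chosen per volume by hand
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce        # RWO: read-write, single node only
  nfs:
    server: 192.168.20.6   # the NFS server used in this example
    path: /nfsdata
```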
That tool is nfs-client-provisioner, an open-source image published on Alibaba Cloud's registry. It mounts the remote NFS server to a local directory through the NFS driver built into K8s, and then exposes that as storage.
Of course, a PVC cannot request storage from nfs-client-provisioner directly; the request goes through the resource object SC (StorageClass). The fundamental job of the SC is to create a PV automatically, according to the values defined in the PVC.
The following example demonstrates Nginx data persistence based on automatically created PVs.
1. Set up the NFS service
For convenience, I run the NFS service directly on master here.
[root@master ~]# yum -y install nfs-utils
[root@master ~]# systemctl enable rpcbind
[root@master ~]# mkdir -p /nfsdata
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable nfs-server
[root@master ~]# showmount -e
Export list for master:
/nfsdata *

2. Create RBAC authorization
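The single export line in /etc/exports packs several options; annotated here (these are standard NFS export options):

```
/nfsdata *(rw,sync,no_root_squash)
#   *               any client host may mount the share
#   rw              read-write access
#   sync            writes are committed to disk before the server replies
#   no_root_squash  remote root keeps root privileges on the share
```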
This approach to automatic PV creation requires RBAC authorization.
[root@master ~]# vim rbac-rolebind.yaml    # create the authorized RBAC user; the file below must specify a namespace, even if it is default
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master ~]# kubectl apply -f rbac-rolebind.yaml    # apply the yaml file

3. Create the nfs-client-provisioner container

[root@master ~]# vim nfs-deployment.yaml    # write the yaml file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1                          # one replica
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner  # the ServiceAccount authorized above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes    # mount directory inside the container
          env:
            - name: PROVISIONER_NAME   # built-in variable: the provisioner's name
              value: ljz-test
            - name: NFS_SERVER         # built-in variable: IP of the NFS service
              value: 192.168.20.6
            - name: NFS_PATH           # built-in variable: directory of the NFS share
              value: /nfsdata
      volumes:                         # mount the NFS path above into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.20.6
            path: /nfsdata
[root@master ~]# kubectl apply -f nfs-deployment.yaml    # apply the yaml file

4. Create the SC (StorageClass)

[root@master ~]# vim test-storageclass.yaml    # write the yaml file
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: statefu-nfs
  namespace: default
provisioner: ljz-test        # must match the PROVISIONER_NAME value in the env of nfs-client-provisioner in step 3
reclaimPolicy: Retain        # reclaim policy: Retain (the default is Delete)
[root@master ~]# kubectl apply -f test-storageclass.yaml    # apply the yaml file

5. Create the PVC

[root@master ~]# vim test-pvc.yaml    # write the yaml file
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: default
spec:
  storageClassName: statefu-nfs    # the storage class name; must match the name of the SC
  accessModes:
    - ReadWriteMany                # access mode RWX
  resources:
    requests:
      storage: 500Mi
[root@master ~]# kubectl apply -f test-pvc.yaml    # apply the yaml file

# Check whether the PV was created automatically and is bound
[root@master ~]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pvc-355593f0-2dfd-4b48-a3c6-c58d4843bcf4   500Mi      RWX            Delete           Bound    default/test-claim   statefu-nfs             2m53s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-claim   Bound    pvc-355593f0-2dfd-4b48-a3c6-c58d4843bcf4   500Mi      RWX            statefu-nfs    2m53s
At this point we have achieved automatic PV creation sized to whatever storage the PVC requests (a directory with a long name, built from the PV and PVC names, has appeared under the local NFS shared directory). It does not matter which pod ends up using the space requested by this PVC.
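In the version of nfs-client-provisioner I have seen, that long directory name follows a fixed pattern: namespace, PVC name, then PV name, joined with dashes. A sketch reconstructing it from the example output above (verify the pattern against your own cluster):

```shell
# Reconstruct the directory nfs-client-provisioner creates under the export:
# <namespace>-<pvc-name>-<pv-name>
namespace=default
pvc_name=test-claim
pv_name=pvc-355593f0-2dfd-4b48-a3c6-c58d4843bcf4
dir="/nfsdata/${namespace}-${pvc_name}-${pv_name}"
echo "$dir"
```

On the NFS server, `ls /nfsdata` should show a directory with this name once the PVC is bound.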
6. Create a pod based on the Nginx image

[root@master ~]# vim nginx-pod.yaml    # write the yaml file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myweb
  namespace: default
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: myweb
          image: nginx:latest
          volumeMounts:
            - name: myweb-persistent-storage
              mountPath: /usr/share/nginx/html/
      volumes:
        - name: myweb-persistent-storage
          persistentVolumeClaim:
            claimName: test-claim    # must match the name of the PVC
[root@master ~]# kubectl apply -f nginx-pod.yaml    # apply the yaml file
Once the yaml file above is applied, the web root inside each nginx container is backed by the local NFS shared directory.
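The effect is easy to simulate: a file written under the provisioned directory on the NFS server is the same file every replica serves. This sketch uses a temporary directory in place of the real export; the paths and content are illustrative only:

```shell
# Stand-in for the provisioned directory under /nfsdata on the NFS server
webroot=$(mktemp -d)

# Writing on the "server" side...
echo "hello from nfs" > "$webroot/index.html"

# ...is what nginx sees at /usr/share/nginx/html inside each replica,
# because all three pods mount the same NFS-backed PVC.
cat "$webroot/index.html"
# hello from nfs
```

On a real cluster, `curl` against any of the three pod IPs would return the same page, since they share one volume.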