Storage Class resources
1. Why use Storage Class?
The conventional manual mounting shown earlier seems fine, but think about it: when a PVC requests storage space from a PV, it chooses a PV based on the specified PV name, access mode, and capacity. Suppose a PV has a capacity of 20Gi and its access mode is RWO (ReadWriteOnce, meaning it may be mounted read-write by a single node only), while the PVC requests only 10Gi. Once the PVC takes its space from that PV, the remaining 10Gi of the PV is wasted, because the volume can only be mounted by that single node. Beyond that, creating a PV by hand every time is tedious, so we need a tool that creates PVs for us automatically. That tool is the open-source nfs-client-provisioner (the image used here is hosted on Alibaba Cloud's registry), which mounts the remote NFS server to a local directory through the NFS driver built into K8s and then acts as the storage backend.
2. What is the role of Storage Class in the cluster?
A PVC cannot request storage space from nfs-client-provisioner directly; the request has to go through an SC (Storage Class) resource object. The fundamental job of the SC is to create PVs dynamically according to the PVC's definition, which not only saves the administrator's time but also lets different types of storage be encapsulated for PVCs to choose from.
Each SC contains three important fields that are used when the SC needs to allocate a PV dynamically:
provisioner: the storage system that provides the storage resource.
reclaimPolicy: the PV's reclaim policy; the available values are Delete (the default) and Retain.
parameters: the storage class uses parameters to describe how to associate to the storage volume.
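For concreteness, here is a minimal sketch of a StorageClass that sets all three fields. The provisioner name and the archiveOnDelete parameter are illustrative assumptions, not taken from this article; parameters are interpreted by each provisioner, and the SC we actually build later uses no parameters at all:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs             # illustrative name, not used later in this article
provisioner: example/nfs        # must match the name the provisioner registers
reclaimPolicy: Retain           # Delete is the default
parameters:                     # provisioner-specific; shown only as an assumption
  archiveOnDelete: "false"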
3. Let's practice storage class based on an NFS service
1) build the NFS service (I use the master node as the nfs server):
[root@master ~]# yum -y install nfs-utils
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# mkdir /nfsdata                # create the shared directory
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable nfs-server
[root@master ~]# showmount -e                  # make sure the export is visible
Export list for master:
/nfsdata *
2) create rbac permissions:
RBAC (role-based access control) associates users with permissions through roles.
It is a mechanism of authentication -> authorization -> access.
[root@master sc]# vim rbac-rolebind.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
// Execute the yaml file:
[root@master sc]# kubectl apply -f rbac-rolebind.yaml
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
Above, we created a new ServiceAccount named nfs-provisioner and bound it to a ClusterRole named nfs-provisioner-runner. The ClusterRole declares a set of permissions, including create, delete, update, get, list, and watch on PVs, so this ServiceAccount can be used to create PVs automatically.
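As an optional sanity check (these kubectl queries are my addition, not part of the original walkthrough), you can confirm that the three objects exist:

[root@master sc]# kubectl get serviceaccount nfs-provisioner
[root@master sc]# kubectl get clusterrole nfs-provisioner-runner
[root@master sc]# kubectl get clusterrolebinding run-nfs-provisioner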
3) create a Deployment for nfs
Replace the corresponding parameters with our own nfs configuration.
[root@master sc]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-deploy            # name of the provisioner (custom)
            - name: NFS_SERVER
              value: 172.16.1.30           # IP address of the nfs server
            - name: NFS_PATH
              value: /nfsdata              # nfs shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.1.30
            path: /nfsdata
// Import the nfs-client-provisioner image (it needs to be imported on every node in the cluster, including master):
[root@master sc]# docker load --input nfs-client-provisioner.tar
5bef08742407: Loading layer  4.221MB/4.221MB
c21787dcfbf0: Loading layer  2.064MB/2.064MB
00376105a0f3: Loading layer  41.08MB/41.08MB
Loaded image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest
// Execute the yaml file:
[root@master sc]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created
// Make sure the pod is running normally:
[root@master sc]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5457694c8b-k8t4m   1/1     Running   0          23s
The role of the nfs-client-provisioner tool: it mounts the remote NFS server to a local directory through the NFS driver built into K8s, and then associates itself with the SC, acting as a storage provider.
4) create a storage class:
[root@master sc]# vim test-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: statefu-nfs
provisioner: nfs-deploy
reclaimPolicy: Retain
Here we declare an SC object named statefu-nfs. Note that the value of the provisioner field must be identical to the value of the PROVISIONER_NAME environment variable in the nfs Deployment above.
// Create the resource object:
[root@master sc]# kubectl apply -f test-sc.yaml
storageclass.storage.k8s.io/statefu-nfs created
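The original article verifies the result with a screenshot; roughly, `kubectl get sc` should now show the new class (the AGE column will of course differ):

[root@master sc]# kubectl get sc
NAME          PROVISIONER   AGE
statefu-nfs   nfs-deploy    5s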
5) after the SC resource object is created successfully, let's test whether PVs can be created dynamically.
// First, let's create a pvc object:
[root@master sc]# vim test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: default
spec:
  storageClassName: statefu-nfs    # must point to the SC name created above
  accessModes:
    - ReadWriteMany                # use the ReadWriteMany access mode
  resources:
    requests:
      storage: 50Mi                # request 50Mi of space
// Execute the yaml file to create the pvc:
[root@master sc]# kubectl apply -f test-pvc.yaml
persistentvolumeclaim/test-claim created
We can see that the PVC was created successfully, its status is already Bound, and a corresponding volume object has been produced. The most important column is STORAGECLASS, whose value is now "statefu-nfs", the name of the SC object we just created.
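(The original shows this as a screenshot; roughly, `kubectl get pvc` should print something like the following, where the generated VOLUME name will differ on your cluster:)

[root@master sc]# kubectl get pvc
NAME         STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-xxxxxxxx   50Mi       RWX            statefu-nfs    15s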
// Next, let's take a look at the PV to verify that it was created dynamically:
We can see that an associated PV object has been generated automatically: the access mode is RWX (the ReadWriteMany mode we requested in the PVC), the reclaim policy follows the SC's reclaimPolicy, and the status is also Bound. This PV was created dynamically through the SC, not manually.
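(Again a screenshot in the original; a rough sketch of what `kubectl get pv` might show, with a generated name and the reclaim policy taken from the SC defined above:)

[root@master sc]# kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   AGE
pvc-xxxxxxxx   50Mi       RWX            Retain           Bound    default/test-claim   statefu-nfs    20s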
4. Deploy nginx to exercise the PV and PVC
We test the PVC object (data persistence) declared through the storage class above by deploying an nginx service.
[root@master sc]# vim nginx-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod
  namespace: default
spec:
  containers:
    - name: nginx-pod
      image: nginx
      volumeMounts:                  # define data persistence
        - name: nfs-pvc
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:         # claimName must point to the PVC defined above
        claimName: test-claim

// Run nginx and check that the pod works properly:
[root@master sc]# kubectl apply -f nginx-pod.yaml
pod/nginx-pod created
// Let's enter the pod and create a test web page file:
[root@master ~]# kubectl exec -it nginx-pod /bin/bash
root@nginx-pod:/# cd /usr/share/nginx/html/
root@nginx-pod:/usr/share/nginx/html# echo "welcome to Storage Class web" > index.html
root@nginx-pod:/usr/share/nginx/html# cat index.html
welcome to Storage Class web
root@nginx-pod:/usr/share/nginx/html# exit
// Let's go back to the shared data directory on the nfs server to see whether the file was synchronized:
In the /nfsdata directory we can see a folder with a very long name; the nfs-client provisioner names it after the namespace, PVC name, and PV name (${namespace}-${pvcName}-${pvName}).
Entering this directory, we can see that the data written in nginx has been synchronized and persisted (no screenshot here; you can verify it yourself). See the sketch below.
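A rough sketch of what this looks like on the NFS server (the generated directory name will differ on your cluster):

[root@master ~]# ls /nfsdata/
default-test-claim-pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
[root@master ~]# cat /nfsdata/default-test-claim-pvc-*/index.html
welcome to Storage Class web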
Finally, test whether the web page we wrote can be accessed properly through nginx:
// Create a service resource object, associate it with the pod above, and map the port.
The complete yaml file is as follows:
kind: Pod
apiVersion: v1
metadata:
  name: nginx-pod
  namespace: default
  labels:
    app: web
spec:
  containers:
    - name: nginx-pod
      image: nginx
      volumeMounts:
        - name: nfs-pvc
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  type: NodePort
  selector:
    app: web          # must match the pod's label (the original had "Web", which would not match)
  ports:
    - name: nginx
      port: 80
      targetPort: 80
      nodePort: 32134
// Re-apply the yaml and visit the web page:
[root@master sc]# kubectl apply -f nginx-pod.yaml
pod/nginx-pod configured
service/nginx-svc created
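A rough final check (assuming the master's IP 172.16.1.30 from earlier; with a NodePort service, any node's IP works):

[root@master sc]# curl 172.16.1.30:32134
welcome to Storage Class web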