2025-01-17 Update From: SLTechnology News&Howtos
How do you use a StorageClass to implement dynamic PV provisioning? To answer that question, this article walks through a hands-on StorageClass deployment, hoping to give readers facing this problem a simple, workable approach.
Before deploying nfs-client-provisioner, we need to prepare the NFS storage server and install the NFS client packages on all node machines.

NFS server: 192.168.248.139
Shared storage directory: /data/nfs
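As a sketch of the server-side preparation, the export on 192.168.248.139 might look like the following in /etc/exports (the client subnet 192.168.248.0/24 and the export options are assumptions, not taken from the original environment):

```
/data/nfs 192.168.248.0/24(rw,sync,no_root_squash)
```

After editing /etc/exports, the export is typically reloaded with exportfs -r, and each Kubernetes node needs the NFS client utilities (e.g. the nfs-utils package on CentOS) so kubelet can mount the share.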
nfs-client-provisioner deployment file
vim nfs-client-provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.azk8s.cn/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: timezone
          mountPath: /etc/localtime
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.248.139
        - name: NFS_PATH
          value: /data/nfs
      volumes:
      - name: timezone
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: nfs-client-root
        nfs:
          server: 192.168.248.139
          path: /data/nfs
StorageClass deployment file
vim nfs-client-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # set it as the default storage backend
provisioner: fuseim.pri/ifs  # or choose another name; must match the deployment's PROVISIONER_NAME env value
parameters:
  archiveOnDelete: "false"  # when the PVC is deleted, the data on the backend storage is also deleted
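If you would rather keep deleted PVC data on the NFS share instead of removing it, the parameter can be flipped. A sketch of the alternative (to my understanding, the provisioner then renames the backing directory with an archived- prefix rather than deleting it; verify against the provisioner version you run):

```
parameters:
  archiveOnDelete: "true"  # archive the directory on PVC deletion instead of deleting it
```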
RBAC authorization file
vim nfs-client-rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
After preparing the above three files, run kubectl apply to complete the deployment of nfs-client-provisioner.
[root@k8s-master-01 Dynamic-pv]# kubectl apply -f .
storageclass.storage.k8s.io/managed-nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
View the pod running status and the StorageClass
[root@k8s-master-01 Dynamic-pv]# kubectl get pod,sc
NAME                                        READY   STATUS    RESTARTS   AGE
pod/nfs-client-provisioner-c676947d-pfpms   1/1     Running   0          107s

NAME                                                        PROVISIONER      AGE
storageclass.storage.k8s.io/managed-nfs-storage (default)   fuseim.pri/ifs   108s
You can see that nfs-client-provisioner is running properly and the StorageClass has been created successfully. Next, let's create a few PVCs to test it.
vim mysql-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-01-pvc
#  annotations:
#    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-02-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-03-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 3Gi

[root@k8s-master-01 Dynamic-pv]# kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/mysql-01-pvc created
persistentvolumeclaim/mysql-02-pvc created
persistentvolumeclaim/mysql-03-pvc created

[root@k8s-master-01 Dynamic-pv]# kubectl get pvc,pv
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
persistentvolumeclaim/mysql-01-pvc   Bound    pvc-eef853e1-f8d8-4ab9-bfd3-05c2a58fd9dc   10Gi       RWX            managed-nfs-storage   2m54s
persistentvolumeclaim/mysql-02-pvc   Bound    pvc-fc0b8228-81c0-4d91-83b0-6bb20ab37cc3   5Gi        RWX            managed-nfs-storage   2m54s
persistentvolumeclaim/mysql-03-pvc   Bound    pvc-c6739d7d-4930-49bd-975f-04bffc05dfd6   3Gi        RWX            managed-nfs-storage   2m54s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS          REASON   AGE
persistentvolume/pvc-c6739d7d-4930-49bd-975f-04bffc05dfd6   3Gi        RWX            Delete           Bound    default/mysql-03-pvc   managed-nfs-storage            2m54s
persistentvolume/pvc-eef853e1-f8d8-4ab9-bfd3-05c2a58fd9dc   10Gi       RWX            Delete           Bound    default/mysql-01-pvc   managed-nfs-storage            2m54s
persistentvolume/pvc-fc0b8228-81c0-4d91-83b0-6bb20ab37cc3   5Gi        RWX            Delete           Bound    default/mysql-02-pvc   managed-nfs-storage            2m54s
You can see that the PVCs have been created successfully and an associated PV resource object was automatically created for each one. Let's check whether directories with the corresponding naming format were generated in the backend storage directory.
[root@localhost nfs]# pwd
/data/nfs
[root@localhost nfs]# ll
total 12
drwxrwxrwx 2 root root 4096 Feb 20 10:05 default-mysql-01-pvc-pvc-eef853e1-f8d8-4ab9-bfd3-05c2a58fd9dc
drwxrwxrwx 2 root root 4096 Feb 20 10:05 default-mysql-02-pvc-pvc-fc0b8228-81c0-4d91-83b0-6bb20ab37cc3
drwxrwxrwx 2 root root 4096 Feb 20 10:05 default-mysql-03-pvc-pvc-c6739d7d-4930-49bd-975f-04bffc05dfd6
You can see a long-named folder for each PVC. These folders follow the naming rule ${namespace}-${pvcName}-${pvName}, which matches our expectations.
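The naming rule can be verified directly against the listing above by composing the three parts for the first claim:

```shell
# Reconstruct the backend directory name from its parts:
# ${namespace}-${pvcName}-${pvName}
ns=default
pvc=mysql-01-pvc
pv=pvc-eef853e1-f8d8-4ab9-bfd3-05c2a58fd9dc
echo "${ns}-${pvc}-${pv}"
# -> default-mysql-01-pvc-pvc-eef853e1-f8d8-4ab9-bfd3-05c2a58fd9dc
```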
Next, let's deploy a MySQL application to test consuming a PVC provisioned by the StorageClass.
cat mysql-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  custom.cnf: |
    [mysqld]
    default_storage_engine=innodb
    skip_external_locking
    skip_host_cache
    skip_name_resolve
    default_authentication_plugin=mysql_native_password

cat mysql-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-user-pwd
data:
  mysql-root-pwd: cGFzc3dvcmQ=

cat mysql-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
  - port: 3306
    nodePort: 30006
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-user-pwd
              key: mysql-root-pwd
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-config
          mountPath: /etc/mysql/conf.d/
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
        - name: timezone
          mountPath: /etc/localtime
      volumes:
      - name: mysql-config
        configMap:
          name: mysql-config
      - name: timezone
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-01-pvc

[root@k8s-master-01 yaml]# kubectl apply -f .
configmap/mysql-config created
service/mysql created
deployment.apps/mysql created
secret/mysql-user-pwd created

[root@k8s-master-01 yaml]# kubectl get pod,svc
NAME                                        READY   STATUS    RESTARTS   AGE
pod/mysql-7c5b5df54c-vrnr8                  1/1     Running   0          83s
pod/nfs-client-provisioner-c676947d-pfpms   1/1     Running   0          30m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          93d
service/mysql        NodePort    10.0.0.19    <none>        3306:30006/TCP   83s
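Incidentally, the mysql-root-pwd value in the Secret above is plain base64, not encryption. Assuming the plaintext is literally "password", it can be encoded and decoded like this:

```shell
# Encode a plaintext password for a Secret's data field
echo -n 'password' | base64
# -> cGFzc3dvcmQ=

# Decode to verify what the Secret holds
echo -n 'cGFzc3dvcmQ=' | base64 -d
# -> password
```

Note the -n flag: without it, echo appends a newline that gets encoded into the Secret as well.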
You can see that the MySQL application is running normally. Let's test connecting to the MySQL database through the IP of any node and port 30006.
[root@localhost ~]# mysql -uroot -h192.168.248.134 -P30006 -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 8.0.19 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

MySQL [(none)]>
You can see that the MySQL database connection works. Looking at the NFS storage, the MySQL data has been persisted to the /data/nfs/default-mysql-01-pvc-pvc-eef853e1-f8d8-4ab9-bfd3-05c2a58fd9dc directory on the NFS server.
[root@localhost nfs]# du -sh *
177M    default-mysql-01-pvc-pvc-eef853e1-f8d8-4ab9-bfd3-05c2a58fd9dc
4.0K    default-mysql-02-pvc-pvc-fc0b8228-81c0-4d91-83b0-6bb20ab37cc3
4.0K    default-mysql-03-pvc-pvc-c6739d7d-4930-49bd-975f-04bffc05dfd6
[root@localhost nfs]# cd default-mysql-01-pvc-pvc-eef853e1-f8d8-4ab9-bfd3-05c2a58fd9dc/
[root@localhost default-mysql-01-pvc-pvc-eef853e1-f8d8-4ab9-bfd3-05c2a58fd9dc]# ls
auto.cnf       binlog.index  client-cert.pem  ibdata1      ibtmp1        mysql.ibd           public_key.pem   sys
binlog.000001  ca-key.pem    client-key.pem   ib_logfile0  #innodb_temp  performance_schema  server-cert.pem  undo_001
binlog.000002  ca.pem        ib_buffer_pool   ib_logfile1  mysql         private_key.pem     server-key.pem   undo_002
Note that we created the PVC objects manually here. In practice, a StorageClass is more often consumed by a StatefulSet workload: a StatefulSet can use the StorageClass directly through its volumeClaimTemplates field, as follows.
vim web.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 8
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
Create the above objects directly
[root@k8s-master-01 Dynamic-pv]# kubectl apply -f web.yaml
service/nginx created
statefulset.apps/web created
[root@k8s-master-01 Dynamic-pv]# kubectl get pod -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
nfs-client-provisioner-c676947d-wzwhh   1/1     Running   0          41m   10.244.0.176   k8s-node-01   <none>           <none>
web-0                                   1/1     Running   0          32s   10.244.1.167   k8s-node-02   <none>           <none>
web-1                                   1/1     Running   0          31s   10.244.0.188   k8s-node-01   <none>           <none>
web-2                                   1/1     Running   0          29s   10.244.1.168   k8s-node-02   <none>           <none>
web-3                                   1/1     Running   0          27s   10.244.0.189   k8s-node-01   <none>           <none>
web-4                                   1/1     Running   0          24s   10.244.1.169   k8s-node-02   <none>           <none>
web-5                                   1/1     Running   0          22s   10.244.0.190   k8s-node-01   <none>           <none>
web-6                                   1/1     Running   0          21s   10.244.1.170   k8s-node-02   <none>           <none>
web-7                                   1/1     Running   0          19s   10.244.0.191   k8s-node-01   <none>           <none>
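With replicas: 8 and a volumeClaimTemplate named www, the StatefulSet controller creates one PVC per pod ordinal, named ${claimTemplateName}-${podName}. A quick sketch of the expected PVC names:

```shell
# Each StatefulSet replica gets its own PVC: <volumeClaimTemplate name>-<pod name>
for i in $(seq 0 7); do
  echo "www-web-${i}"
done
# -> www-web-0 through www-web-7, eight PVCs in total
```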
View the data directories on the storage server
You can see that NFS storage volumes are allocated automatically and dynamically. That concludes this StorageClass practice for Kubernetes persistent storage.

These are the concrete steps for implementing dynamic PV provisioning with a StorageClass. The approach is fairly comprehensive, and you are likely to encounter or use these tools in daily work; hopefully this article gives you something to take away.