K8S is a mature container cluster management solution, but its storage handling is still complex and tedious, to say nothing of managing resource access and authentication. Although Google accumulated years of practical experience before open-sourcing it, judging from the current official documentation the learning cost is still on the high side. Let's walk through that learning process:
1. When first learning K8S, you find that it lists a whole series of storage solutions, which amount to broad support for a range of local and cloud storage:
Seeing so many storage volume types is genuinely intimidating, but there is no way around it. Since the storage world has not been unified (and cannot be), K8S supports them all as a necessary compromise to maintain its strong ecosystem compatibility.
2. Fortunately, we do not need to learn every storage type, only the volume type we actually use. Then, to decouple Pods from storage resource management, K8S introduces the concepts of PV and PVC, so that cluster administrators and developers can each mind their own duties. But this brings extra resource management work: creating and deleting PVs and PVCs, binding and reclaiming storage volumes, and controlling read and write access.
3. When the cluster grows and the management overhead of PVs and PVCs becomes staggering, K8S introduces dynamic volume provisioning through StorageClass, plus CSI built on top of it. This is currently the definitive scheme for flexible allocation of K8S storage.
4. And just before we fully digest StorageClass and CSI, the related Clone and Snapshot features appear on the official website.
This illustrates one thing: business demand scenarios keep escalating, requirements keep growing, and the solutions keep being updated and replaced. Storage alone involves this many concepts, which can genuinely leave people at a loss. Of the whole river we will take just one ladle: today's experiment targets NFS shared storage.
Why experiment around NFS? For enterprises, NFS shares are very easy to obtain inside a private cloud, and the cost is low. An ordinary four-bay enterprise NAS server ships from the factory with a stable, reliable NFS service; there is no need to deploy one yourself or integrate with the enterprise file sharing server, and no extra hardware or deployment cost.
First, we verify that the NFS service of the NAS server is in place:
1. Install the NFS client:
Before mounting, make sure that nfs-utils or nfs-common is installed on the system as follows:
CentOS:
yum install nfs-utils -y
Note: this step is necessary. The reason is simple: K8S has no magical powers of its own; at the bottom layer it still mounts the file system through the NFS driver. If the NFS suite is not even installed, then no matter how correct all the K8S configuration is, creating the Pod will fail with an error saying the NFS service path cannot be connected. To experience this error, you can deliberately skip the installation first (assuming your NFS service itself is normal) and install it later, and you will see the error disappear.
Ubuntu or Debian:
apt-get install nfs-common
Create a mount path:
mkdir /nfs
Confirm the NFS service and sharing path that exists on the remote NAS server:
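A quick way to do this, assuming the NFS client suite installed above provides showmount (it does in nfs-utils), is:

# list the paths exported by the remote NFS server
showmount -e [NAS server IP]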
Mount the NFS share path:
mount -t nfs [NAS server IP]:/NAS-NFS /nfs
We can go into the mount path and try to create a folder, to confirm that the mount succeeded and reads and writes work normally.
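A minimal read-write check from the client side might look like this:

cd /nfs
mkdir rwtest && ls -ld rwtest   # succeeds only if the share is writable
rmdir rwtest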
A common problem is that the mount succeeds but folders cannot be created: the file system reports read-only, even though ls shows 777 permissions. This is because the NAS server's NFS service has not opened read-write access for anonymous clients:
Different NAS devices may be configured differently. Please enable read and write according to the configuration of your NFS server.
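On a stock Linux NFS server, for example, read-write anonymous access is typically granted with an /etc/exports entry along these lines (a sketch only; the options shown are a common choice, and NAS firmwares expose this differently):

# /etc/exports: allow read-write access to the share for all clients
/NAS-NFS *(rw,sync,no_root_squash)

# reload the export table
exportfs -ra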
After all the above succeeds, we further test whether the NFS shared storage works properly with PV and PVC:
2. Create static PV and PVC for the test
pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /NAS-NFS
    server: [NAS server IP or domain name]
pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Start creating the resource objects (to simplify the experiment, all resource objects live in the default namespace):
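Assuming the two files above are saved as pv.yml and pvc.yml, they can be created and checked like this:

kubectl create -f pv.yml
kubectl create -f pvc.yml
# confirm that the PVC has bound to the PV
kubectl get pv,pvc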
In fact, for dynamic storage the namespace does not matter, because a dynamically created PV is cluster-scoped and can be claimed and bound by a PVC from any namespace.
And the dashboard shows that the binding is successful:
The above experiment fully proves that the NFS system is working properly. Delete the test objects above and proceed to the next step.
Note:
If you do not delete the PVC first to release its claim on the resources, you cannot delete the PV directly; doing so would violate K8S's persistence guarantees:
This shows up in three ways:
1. The kubectl delete pv command stays stuck in the executing state and never prints a result.
2. The Dashboard keeps displaying the resource object normally, as if the deletion were ignored.
3. Force-quitting the delete command on the command line does not cancel the deletion; it stays pending until the PVC is manually removed, at which point the system immediately releases the PV.
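In other words, deletion has to follow claim order. For the test objects above, the correct sequence is:

# release the claim first, then the volume
kubectl delete pvc mypvc1
kubectl delete pv mypv1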
If you want PVs generated dynamically, you need to run an NFS provisioner service that is fed the parameters of the configured NFS system and creates PVs on users' behalf.
Officially it is recommended to run it as a Deployment with a single replica, but other controllers such as a DaemonSet also work; examples of both are provided in the official documentation.
Create the service account, roles, and role bindings that grant the provisioner the permissions its background API calls require. Write the rbac.yaml file as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io

Next, create the provisioner Deployment for the NFS service. The result is a Pod that responds to the storage resource requests submitted by clients. Write the deployment.yaml file as follows:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          # the official image quay.io/external_storage/nfs-client-provisioner:latest
          # cannot be downloaded; use this Ali Cloud mirror instead
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs
            - name: NFS_SERVER
              value: [IP address of NAS server]
            - name: NFS_PATH
              value: /NAS-NFS    # shared path of the NFS system
      volumes:
        - name: nfs-client-root
          nfs:
            server: [IP address of NAS server]
            path: /NAS-NFS    # shared path of the NFS system
After creating the above files with the kubectl command, let's view the deployment results of K8S:
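Assuming the file names used above, creation and verification might look like:

kubectl create -f rbac.yaml
kubectl create -f deployment.yaml
# the Deployment should show one nfs-provisioner Pod running
kubectl get deployment,pod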
We get a ReplicaSet, and this ReplicaSet starts one Pod that acts as the nfs-client-provisioner. It responds to the various storage resource requests, but it cannot be used directly; it must be triggered through a StorageClass.
Create a StorageClass
Write and create the storageclass.yaml as follows:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: example.com/nfs
Next, you will create a PVC for the test to test whether the StorageClass is working properly:
Write and create the test-claim.yaml as follows, and note that the storageClassName should be consistent with the StorageClass name created above.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs    # must be consistent with the SC name created above
After creation, view it with kubectl get pvc: you can see that the newly created PVC binds a PV automatically.
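For example:

# the STATUS column should read Bound shortly after creation
kubectl get pvc test-claim1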
Create a test Pod to use this PVC, and write the test-pod.yaml file as follows:
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim1
Check whether the Pod status changes to Completed. If so, you should see a SUCCESS file in the shared path of the NFS system.
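A possible way to check both, assuming the share is still mounted locally at /nfs as in the earlier step:

kubectl get pod test-pod    # expect STATUS Completed
ls /nfs/*/SUCCESS           # the file lands in the PV's auto-created subdirectory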
In this way, StorageClass's ability to create PVs dynamically has been successfully demonstrated.
Ordinary Pods obtain dynamic volumes by declaring, in their manifests, a PVC that was created manually in advance to request the StorageClass. Because the PVC still exists as a manual step, this kind of automation is only semi-automatic and still not flexible enough, so the official recommendation is to use a StatefulSet.
Write the statefulset.yaml file as follows:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx1"
  replicas: 2
  volumeClaimTemplates:
    - metadata:
        name: test
        annotations:
          volume.beta.kubernetes.io/storage-class: "nfs"    # consistent with the previously defined StorageClass name
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      serviceAccount: nfs-provisioner    # consistent with the previously created resource access role
      containers:
        - name: nginx1
          image: nginx
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/my_dir"    # this mount path can be arbitrarily specified
              name: test
View the execution result:
View the results of storage resource creation:
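For instance:

kubectl get statefulset web    # expect 2/2 replicas ready
kubectl get pv,pvc             # one PV/PVC pair per replica, created from the template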
Open the file system directory of the NFS share, and you will find that resource folders corresponding to the PVs are generated automatically:
The folders are named automatically by concatenating the resource names associated with the Pod (namespace, PVC name, and PV name).
Going further, you can use kubectl exec -it to enter one of the Pods and verify that K8S is no magician: it simply generates a subdirectory under the NFS path and mounts it, via mount, into the container at the path specified in the manifest:
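A sketch of that check, using the first replica (which K8S's ordinal naming convention makes web-0 for the StatefulSet named web):

# show the NFS mount backing /my_dir inside the container
kubectl exec -it web-0 -- df -h /my_dir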
At this point, the experiment is basically completed.
GitHub has ready-made source code officially uploaded, which is basically the same as the above; this tutorial just demonstrates the process in more detail:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client