
Using NFS for Persistent Storage in a K8s Cluster


Prepare the NFS server (192.168.1.244)

$ yum -y install nfs-utils rpcbind
$ systemctl start nfs-server rpcbind
$ systemctl enable nfs-server rpcbind
$ mkdir -p /data/k8s
$ cd /data/k8s
$ echo 11111111 > index.html
$ vim /etc/exports
/data/k8s *(rw,async,no_root_squash)
$ systemctl restart nfs-server
$ exportfs -arv

Client test. The NFS client needs to be installed on all k8s nodes:

$ yum -y install nfs-utils rpcbind
$ systemctl start nfs rpcbind
$ systemctl enable nfs rpcbind
$ showmount -e 192.168.1.244
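As an optional sanity check (not part of the original steps), you can mount the export on a node by hand and unmount it afterwards:

$ mount -t nfs 192.168.1.244:/data/k8s /mnt
$ cat /mnt/index.html
$ umount /mnt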

Create a pod that mounts the NFS share directly

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: podxx
spec:
  volumes:
  - name: nfs
    nfs:
      server: 192.168.1.244
      path: /data/k8s
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nfs
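Assuming the manifest above is saved as nginx-nfs-pod.yaml (the file name is illustrative), create the pod and check that it starts:

$ kubectl apply -f nginx-nfs-pod.yaml
$ kubectl get pod podxx -o wide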

$ kubectl exec -it podxx -- bash

PV stands for PersistentVolume (persistent volume), an abstraction over the underlying shared storage. PVs are created and configured by the administrator and are tied to the implementation of the underlying shared storage technology: Ceph, GlusterFS, NFS, and so on all integrate with shared storage through a plug-in mechanism.

PVC stands for PersistentVolumeClaim (persistent volume claim), a user's request for storage. A PVC is analogous to a Pod: Pods consume node resources, PVCs consume PV resources; Pods can request CPU and memory, PVCs can request a specific storage size and access mode. Users of the storage do not need to care about the underlying implementation details; they simply use the PVC.

Deployment/Pod ---> PVC ---> PV ---> shared storage

AccessModes (access modes)

ReadWriteOnce (RWO): read-write, but can be mounted by only a single node

ReadOnlyMany (ROX): read-only, can be mounted by multiple nodes

ReadWriteMany (RWX): read-write, can be mounted by multiple nodes

PersistentVolumeReclaimPolicy (reclaim policy)

Retain: keep the data; the administrator must clean it up manually

Recycle: clear the data in the PV, equivalent to running rm -rf /thevolume/*

Delete: the backend storage behind the PV deletes the volume; common with cloud providers' storage services, such as AWS EBS

Currently only NFS and HostPath support the Recycle policy. Generally speaking, setting the policy to Retain is the safer choice.
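The reclaim policy of an existing PV can also be changed afterwards with a standard kubectl patch; the sketch below targets the pv2 volume created later in this article, so adjust the name to your own PV:

$ kubectl patch pv pv2 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'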

During its lifecycle, a PV can be in one of four phases:

Available: the volume is available and not yet bound to any PVC

Bound: the volume has been bound to a PVC

Released: the PVC has been deleted, but the resource has not yet been reclaimed by the cluster

Failed: automatic reclamation of the PV failed
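The current phase appears in the STATUS column of kubectl get pv, or can be read from the object directly (pv2 here refers to the PV created in the steps below):

$ kubectl get pv
$ kubectl get pv pv2 -o jsonpath='{.status.phase}'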

Manage pv and pvc manually

Creation order: backend storage ---> pv ---> pvc ---> pod

Deletion order: pod ---> pvc ---> pv
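For example, tearing down the resources created in the steps below, in that order and using the names they introduce (nfs-pvc, pvc2-nfs, pv2), would look like this:

$ kubectl delete deployment nfs-pvc
$ kubectl delete pvc pvc2-nfs
$ kubectl delete pv pv2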

1. Create a pv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
  labels:
    app: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/k8s
    server: 192.168.1.244
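Assuming the manifest is saved as pv2.yaml (illustrative name), create the PV and confirm it shows up as Available:

$ kubectl apply -f pv2.yaml
$ kubectl get pv pv2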

2. Create pvc

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc2-nfs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      app: nfs

The above pvc automatically binds to a pv with access mode ReadWriteOnce, storage of at least 1Gi, and the label app: nfs.
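Assuming the manifest is saved as pvc2-nfs.yaml (illustrative name), create the PVC and confirm that both it and pv2 switch to Bound:

$ kubectl apply -f pvc2-nfs.yaml
$ kubectl get pvc pvc2-nfs
$ kubectl get pv pv2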

3. Create a pod using the pvc

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-pvc
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nfs-pvc
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          subPath: nginx1          # manually specify a subdirectory on the nfs server; the directory is created automatically
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc2-nfs      # the pvc created in step 2

The result is that /data/k8s/nginx1 on the shared storage is mounted at /usr/share/nginx/html inside the pods above.
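A quick way to verify this (the pod name below is a placeholder, pick one from kubectl get pod): write a file into the subdirectory on the NFS server and read it back from inside a pod:

# on the NFS server
$ echo nginx1-test > /data/k8s/nginx1/index.html
# on a node with kubectl access
$ kubectl exec <nfs-pvc-pod-name> -- cat /usr/share/nginx/html/index.html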

Use StorageClass to manage pv and pvc

In general, only the pv is generated dynamically (the exception is a StatefulSet's pvc template, which also generates the pvc automatically); everything else still has to be created manually.

Dynamic generation of pv requires a StorageClass and the nfs-client-provisioner working together.

1. Create a nfs-client-provisioner

The provisioner uses the NFS server directly.

$ docker pull quay.io/external_storage/nfs-client-provisioner:latest

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.1.244
        - name: NFS_PATH
          value: /data/k8s
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.244
          path: /data/k8s
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
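Assuming the manifests above are saved together as nfs-client-provisioner.yaml (illustrative name), deploy them and check that the provisioner pod comes up:

$ kubectl apply -f nfs-client-provisioner.yaml
$ kubectl get pod -l app=nfs-client-provisioner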

2. Create StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
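Apply the StorageClass (the file name below is illustrative) and, optionally, mark it as the cluster default so that PVCs without an explicit class use it; the default-class annotation is standard Kubernetes, not specific to this provisioner:

$ kubectl apply -f storageclass.yaml
$ kubectl get storageclass
$ kubectl patch storageclass course-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'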

3. Create pvc and generate pv dynamically

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
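Assuming the manifest is saved as test-pvc.yaml (illustrative name), create the claim and check its status:

$ kubectl apply -f test-pvc.yaml
$ kubectl get pvc test-pvc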

$ kubectl get pv    # the pv is generated automatically and bound to the pvc

4. Use the above pvc to create a pod

Unlike with manually managed pv and pvc, when pv and pvc are managed through a StorageClass you do not need to specify a subdirectory by hand when creating the pod; a random subdirectory is generated automatically under the storage server's root directory /data/k8s.
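On the NFS server the automatically created subdirectory is visible under /data/k8s; with nfs-client-provisioner its name is typically built from the namespace, the PVC name, and the PV name, though the exact format depends on the provisioner version:

$ ls /data/k8s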

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-pvc
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nfs-pvc
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: test-pvc

Testing shows that whether you create a single pod manually or several pods through a Deployment, only one random directory is generated; in other words, each pvc/pv pair corresponds to one persistent storage directory.

5. When creating a StatefulSet, use a StorageClass in its pvc template (volumeClaimTemplates) to generate the pv and pvc automatically

$ kubectl explain StatefulSet.spec

volumeClaimTemplates

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nfs-web
spec:
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nfs-web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: course-nfs-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

Because replicas is 2, the manifest above automatically generates two pvc, two pv, two pods, and two random subdirectories under the shared storage root /data/k8s.
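The PVCs follow the StatefulSet naming convention <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>, so you should see claims named www-nfs-web-0 and www-nfs-web-1:

$ kubectl get pvc
$ kubectl get pv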

Unlike a Deployment, a stateful application needs one persistent storage directory per pod, rather than one directory shared by all pods.
