In a previous article on deploying a Zabbix monitoring system on K8s, we used hostPath to persist MySQL's data and made simple use of emptyDir. For a production-grade application on K8s, hostPath and emptyDir are clearly not suitable for persisting data: we need more reliable storage for the application's persistent data, so that a container can still use its previous data after being rebuilt. This article introduces in detail two very important resource objects in K8s, PV and PVC, and how to use them for storage management.
Kubernetes version: 1.16.0
Concept
PV, short for PersistentVolume (persistent volume), is an abstraction of the underlying shared storage. It is tied to the implementation of a specific shared-storage technology, such as Ceph, GlusterFS, or NFS.
PVC, short for PersistentVolumeClaim (persistent volume claim), is a user's request for storage. A PVC consumes PV resources, so users who actually need storage do not have to care about the underlying storage implementation details; they simply use a PVC directly.
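Both objects are ordinary Kubernetes API resources. If you want to confirm that your cluster exposes them and browse their fields before writing any YAML, a small kubectl sketch looks like this (nothing here is specific to this article's environment):

# list the PersistentVolume / PersistentVolumeClaim resource kinds known to the cluster
kubectl api-resources | grep -iE 'persistentvolume'
# browse the documented fields of each object (pv and pvc are the built-in short names)
kubectl explain pv.spec
kubectl explain pvc.spec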
NFS
K8s supports many types of PV back ends, including Ceph, GlusterFS, NFS, and hostPath, but hostPath is only suitable for single-node testing. For convenience, we will demonstrate with an NFS storage resource.
Next, we install the NFS service on the node 192.168.248.139, with /data/nfs/ as the shared data directory.
1. Install and configure nfs
yum install nfs-utils rpcbind -y
mkdir -p /data/nfs && chmod 755 /data/nfs/
cat /etc/exports
/data/nfs *(rw,sync,no_root_squash)
systemctl start rpcbind nfs && systemctl enable rpcbind nfs
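To verify that the share is actually exported, a quick check can be run from the NFS server or any host with nfs-utils installed; a minimal sketch using the 192.168.248.139 server from this example:

showmount -e 192.168.248.139
# this should list the export, e.g.:
# Export list for 192.168.248.139:
# /data/nfs *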
Before configuring PV and PVC on K8s, we need to install the NFS client on all nodes. The nodes in my environment are as follows:
[root@k8s-master-01 ~]# kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-node-01   Ready    <none>   90d   v1.16.0   192.168.248.134   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.6
k8s-node-02   Ready    <none>   90d   v1.16.0   192.168.248.135   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.6
The NFS client must be installed on every node, otherwise PV mounts may fail.
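On CentOS 7 nodes like the ones above, installing the client side typically looks like the following; a minimal sketch, to be run on every node (k8s-node-01 and k8s-node-02 in this environment):

# install the NFS client utilities and make sure rpcbind is running
yum install nfs-utils -y
systemctl start rpcbind && systemctl enable rpcbind
# optional: confirm the node can see the share exported by 192.168.248.139
showmount -e 192.168.248.139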
Create pv
After deploying the NFS storage, we can create PV and PVC resources by writing YAML files. Let's create a PV resource object that uses NFS as back-end storage, with 2Gi of capacity, the ReadWriteOnce access mode, and the Retain reclaim policy.
vim pv1.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-01
  labels:
    name: pv-01
spec:
  nfs:
    path: /data/nfs
    server: 192.168.248.139
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 2Gi
Some of the parameters in the above YAML file are explained below:
nfs: indicates that the back-end storage used is NFS
path: the directory on the back-end storage where the shared data lives
accessModes: sets the access mode for the PV; the following modes are supported:
ReadWriteOnce (RWO): read-write, can be mounted by a single node only
ReadOnlyMany (ROX): read-only, can be mounted by multiple nodes
ReadWriteMany (RWX): read-write, can be mounted by multiple nodes
persistentVolumeReclaimPolicy: the reclaim policy for the PV. The default is Retain; three policies are currently supported:
Retain (reserved)
Recycle (Recycling)
Delete (Delete)
Then use the kubectl command to create it.
[root@k8s-master-01 pv]# kubectl apply -f pv1.yaml
persistentvolume/pv-01 created
[root@k8s-master-01 pv]# kubectl get pv
NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON   AGE
mysql-pv-volume   20Gi       RWO            Retain           Bound       default/mysql-pv-claim   manual                  2d2h
pv-01             2Gi        RWO            Retain           Available                                                    6s
As shown above, pv-01 has been created successfully and its status is Available, which means it is ready to be bound by a PVC.
A PV can be in one of four different phases during its lifecycle:
Available (available)
Bound (bound)
Released (released)
Failed (failed)
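To see which phase a PV is currently in, you can read its status directly; a small sketch using the pv-01 name from this article:

# print just the phase of pv-01 (Available, Bound, Released or Failed)
kubectl get pv pv-01 -o jsonpath='{.status.phase}'
# or watch the STATUS column of all PVs as it changes
kubectl get pv -w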
Create pvc
Next, we create a PVC that requests 2Gi of storage with the ReadWriteOnce access mode. The corresponding YAML file is as follows:
vim pvc-nfs.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
Then use the kubectl command to create it.
[root@k8s-master-01 pv]# kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/pvc-nfs created
[root@k8s-master-01 pv]# kubectl get pv,pvc
NAME                               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
persistentvolume/mysql-pv-volume   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   manual                  2d3h
persistentvolume/pv-01             2Gi        RWO            Retain           Bound    default/pvc-nfs                                  19s

NAME                                   STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mysql-pv-claim   Bound    mysql-pv-volume   20Gi       RWO            manual         2d3h
persistentvolumeclaim/pvc-nfs          Bound    pv-01             2Gi        RWO                           7s
As you can see, pvc-nfs has been created successfully and is in the Bound state, and the PV is now Bound as well.
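To double-check which PV the claim was bound to, you can read the claim's volumeName field or describe it; a small sketch using the names from this article:

# the bound PV's name is recorded on the claim
kubectl get pvc pvc-nfs -o jsonpath='{.spec.volumeName}'   # expected: pv-01
# a fuller view, including capacity, access modes and events
kubectl describe pvc pvc-nfs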
Use pvc
Now that the PV and PVC have been created and bound, let's create a Deployment that uses the PVC above.
vim deployment-pvc.yaml

apiVersion: v1
kind: Service
metadata:
  name: svc-nginx-demo
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31080
  selector:
    app: liheng
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-pvc-demo
  labels:
    app: deployment-pvc-demo
  annotations:
    liheng86876/created-by: "LIHENG"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: liheng
  template:
    metadata:
      labels:
        app: liheng
    spec:
      containers:
      - name: web-test
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: pvc-nfs
Then create it with kubectl and check the result:
[root@k8s-master-01 pv]# kubectl get deploy,pod,svc
NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-pvc-demo   3/3     3            3           2m6s

NAME                                       READY   STATUS    RESTARTS   AGE
pod/deployment-pvc-demo-77859488fc-bmlhd   1/1     Running   0          2m5s
pod/deployment-pvc-demo-77859488fc-c8xkn   1/1     Running   0          2m5s
pod/deployment-pvc-demo-77859488fc-pz6g9   1/1     Running   0          2m5s

NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP        90d
service/svc-nginx-demo   NodePort    10.0.0.194   <none>        80:31080/TCP   2m6s
You can see that the pods are all in the Running state. We can view the details of the Deployment and the Service with the following commands:
[root@k8s-master-01 pv]# kubectl describe deploy deployment-pvc-demo
Name:                   deployment-pvc-demo
Namespace:              default
CreationTimestamp:      Mon, 17 Feb 2020 18:04:26 +0800
Labels:                 app=deployment-pvc-demo
Annotations:            deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{"liheng86876/created-by":"LIHENG"},"labels":{"app":"deployment-pvc-...
                        liheng86876/created-by: LIHENG
Selector:               app=liheng
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=liheng
  Containers:
   web-test:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html/ from html (rw)
  Volumes:
   html:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-nfs
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   deployment-pvc-demo-77859488fc (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  4m48s  deployment-controller  Scaled up replica set deployment-pvc-demo-77859488fc to 3

[root@k8s-master-01 pv]# kubectl describe svc svc-nginx-demo
Name:                     svc-nginx-demo
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"svc-nginx-demo","namespace":"default"},"spec":{"ports":[{"nodePor...
Selector:                 app=liheng
Type:                     NodePort
IP:                       10.0.0.194
Port:                     80/TCP
TargetPort:               80/TCP
NodePort:                 31080/TCP
Endpoints:                10.244.0.134:80,10.244.0.135:80,10.244.1.129:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Access test
We can now access our nginx service through port 31080 on any node's IP.
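For example, from the command line (using node IP 192.168.248.134 from this environment; any node IP works the same way):

curl -I http://192.168.248.134:31080
# at this point nginx answers with:
# HTTP/1.1 403 Forbidden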
Why do we get a 403? Because there are no files in our NFS shared directory yet.
[root@localhost nfs]# pwd
/data/nfs
[root@localhost nfs]# ls
[root@localhost nfs]#

We create an index.html and then access the test page again:

[root@localhost nfs]# echo "Welcome K8s" > index.html
Then refresh the page:
We can see that the page has been accessed normally.
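The same check can be done from the command line; a small sketch reusing the node IP and nodePort from above:

curl http://192.168.248.134:31080
# expected output now that index.html exists:
# Welcome K8s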
Data persistence test
Test method:
1. Delete the nginx application we created, recreate it, and then access the test page again.
Result: the data on the back-end NFS storage is not lost, and the page is still accessible.
2. Delete the nginx application, the PV, and the PVC, recreate them all, and then access the test page again (a command sketch for both tests follows below).
Result: the data on the back-end NFS storage is not lost, and the page is still accessible.
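A command-level sketch of both tests, assuming the pv1.yaml, pvc-nfs.yaml and deployment-pvc.yaml files created earlier are still in the current directory:

# test 1: delete and recreate only the nginx application
kubectl delete -f deployment-pvc.yaml
kubectl apply -f deployment-pvc.yaml
curl http://192.168.248.134:31080   # still returns "Welcome K8s"

# test 2: delete the application, the PVC and the PV, then recreate everything
# (the PVC/PV deletion may wait briefly until the old pods are gone)
kubectl delete -f deployment-pvc.yaml -f pvc-nfs.yaml -f pv1.yaml
kubectl apply -f pv1.yaml -f pvc-nfs.yaml -f deployment-pvc.yaml
curl http://192.168.248.134:31080   # the files on the NFS share are untouched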
You can also create PVs with persistentVolumeReclaimPolicy set to Recycle or Delete and repeat the tests above. We will not walk through each case here; interested readers can explore them in more depth on their own.
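If you want to experiment with the other policies, one option is to patch an existing PV in place; a hedged sketch using the pv-01 name from this article (note that the Recycle policy is deprecated in newer Kubernetes releases):

# switch pv-01 to the Delete reclaim policy
kubectl patch pv pv-01 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
# confirm the change in the RECLAIM POLICY column
kubectl get pv pv-01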