2025-04-04 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report--
How Kubernetes manages storage resources
First we will learn about Volume and how Kubernetes provides storage to containers in the cluster through it; then we will practice several commonly used Volume types and understand their respective application scenarios; finally, we will discuss how Kubernetes separates the responsibilities of cluster administrators and cluster users through Persistent Volume and Persistent Volume Claim, and practice static and dynamic provisioning of Volumes.
Volume
In this section, we discuss Kubernetes's storage model, Volume, and learn how to map various persistent stores to containers.
We often say that containers and Pods are ephemeral.
The implication is that their life cycles may be short and that they are frequently destroyed and created. When a container is destroyed, the data stored in its internal file system is erased along with it.
To persist the container's data, you can use Kubernetes Volume.
The life cycle of Volume is independent of containers. Containers in Pod may be destroyed and rebuilt, but Volume will be retained.
In essence, a Kubernetes Volume is a directory, similar to a Docker volume. When a Volume is mounted into the containers of a Pod, those containers can access it. Kubernetes Volume also supports a variety of backend types, including emptyDir, hostPath, GCE Persistent Disk, AWS Elastic Block Store, NFS, Ceph, etc. For a complete list, please see https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes
Volume provides an abstraction over these various backends. When using a Volume to read and write data, a container does not need to care whether the data is stored in the local node's file system or on a cloud disk; to the container, every type of Volume is just a directory.
We will start with the simplest emptyDir to learn Kubernetes Volume.
EmptyDir
emptyDir is the most basic Volume type. As its name suggests, an emptyDir Volume is an empty directory on the Host.
An emptyDir Volume is persistent for containers, but not for the Pod. When a Pod is deleted from a node, the contents of the Volume are deleted with it. However, if only a container is destroyed and the Pod remains, the Volume is not affected.
In other words: the life cycle of an emptyDir Volume is the same as that of its Pod.
All containers in a Pod can share the Volume, and each can specify its own mount path. Here is an example to practice emptyDir. The configuration file is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: producer-consumer
spec:
  containers:
  - image: busybox
    name: producer
    volumeMounts:
    - mountPath: /producer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - echo "hello world" > /producer_dir/hello; sleep 30000
  - image: busybox
    name: consumer
    volumeMounts:
    - mountPath: /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello; sleep 30000
  volumes:
  - name: shared-volume
    emptyDir: {}
Here we simulate a producer-consumer scenario. Pod has two containers, producer and consumer, which share a Volume. Producer is responsible for writing data to Volume, while consumer reads data from Volume.
① The volumes section at the bottom of the file defines a Volume named shared-volume of type emptyDir.
② The producer container mounts shared-volume to its /producer_dir directory.
③ producer writes data to the file hello through echo.
④ The consumer container mounts shared-volume to its /consumer_dir directory.
⑤ consumer reads the data from the file hello through cat.
Create the Pod by executing the following command:
# kubectl apply -f emptyDir.yaml
pod/producer-consumer created
# kubectl get pod
NAME                READY   STATUS    RESTARTS   AGE
producer-consumer   2/2     Running   0          87s
# kubectl logs producer-consumer consumer
hello world
kubectl logs shows that the consumer container successfully read the data written by producer, verifying that the two containers share the emptyDir Volume.
Because emptyDir is a directory in the Docker Host file system, the effect is equivalent to executing docker run -v /producer_dir and docker run -v /consumer_dir. Looking at the containers' detailed configuration with docker inspect, we find that both containers mount the same Host directory:
"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/188767a3-cf18-4bf9-a89b-b0c4cf4124bb/volumes/kubernetes.io~empty-dir/shared-volume",
        "Destination": "/producer_dir",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],

"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/188767a3-cf18-4bf9-a89b-b0c4cf4124bb/volumes/kubernetes.io~empty-dir/shared-volume",
        "Destination": "/consumer_dir",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],
Here /var/lib/kubelet/pods/188767a3-cf18-4bf9-a89b-b0c4cf4124bb/volumes/kubernetes.io~empty-dir/shared-volume is the real path of the emptyDir on the Host.
EmptyDir is a temporary directory created on Host, and its advantage is that it can easily provide shared storage for containers in Pod without additional configuration. But it doesn't have persistence, and if Pod doesn't exist, emptyDir will be gone. Based on this feature, emptyDir is particularly suitable for scenarios where containers in Pod need temporary shared storage space, such as the previous producer-consumer use case.
hostPath Volume
The role of a hostPath Volume is to mount a directory that already exists in the Docker Host file system into a Pod's container. Most applications will not use hostPath Volume, because it increases the coupling between the Pod and the node and limits where the Pod can run. However, applications that need access to Kubernetes or Docker internal data (configuration files and binaries) do need hostPath.
For example, kube-apiserver and kube-controller-manager are such applications. We can view the configuration of the kube-apiserver Pod through:

kubectl edit --namespace=kube-system pod kube-apiserver-k8s-master

Here is the Volume-related part:
volumeMounts:
- mountPath: /etc/ssl/certs
  name: ca-certs
  readOnly: true
- mountPath: /etc/pki
  name: etc-pki
  readOnly: true
- mountPath: /etc/kubernetes/pki
  name: k8s-certs
  readOnly: true
volumes:
- hostPath:
    path: /etc/ssl/certs
    type: DirectoryOrCreate
  name: ca-certs
- hostPath:
    path: /etc/pki
    type: DirectoryOrCreate
  name: etc-pki
- hostPath:
    path: /etc/kubernetes/pki
    type: DirectoryOrCreate
  name: k8s-certs
Three hostPath Volumes are defined here: ca-certs, etc-pki, and k8s-certs, corresponding to the Host directories /etc/ssl/certs, /etc/pki, and /etc/kubernetes/pki, respectively.
If a Pod is destroyed, the directory corresponding to hostPath is retained. In this respect, hostPath is more persistent than emptyDir. But once the Host crashes, the hostPath becomes inaccessible too. For a truly persistent Volume, we need external storage.
External Storage Provider
If Kubernetes is deployed on public clouds such as AWS, GCE, Azure, etc., you can directly use cloud disk as Volume. Here is an example of AWS Elastic Block Store:
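A minimal sketch of a Pod spec using an awsElasticBlockStore Volume might look like the following; the volumeID is hypothetical and must refer to an EBS volume that already exists in AWS:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ebs-pod
spec:
  containers:
  - image: busybox
    name: test-ebs
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: /test-ebs
      name: ebs-volume
  volumes:
  - name: ebs-volume
    awsElasticBlockStore:
      volumeID: vol-0123456789abcdef0   # hypothetical; the EBS volume must be created in AWS first
      fsType: ext4
```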
To use an EBS Volume in a Pod, you must first create the volume in AWS and then reference it through its volume-id. For how to use other cloud disks, please refer to the documentation of each public cloud vendor.
Kubernetes Volume can also use mainstream distributed storage, such as Ceph, GlusterFS, etc. Here is an example of Ceph:
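A sketch of such a Pod spec, assuming a cephfs Volume; the monitor address and secret file location are placeholders that depend on your Ceph deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-pod
spec:
  containers:
  - image: busybox
    name: test-ceph
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: /test-ceph
      name: ceph-volume
  volumes:
  - name: ceph-volume
    cephfs:
      monitors:
      - 10.16.154.78:6789                  # placeholder Ceph monitor address
      path: /some/path/in/side/cephfs
      user: admin
      secretFile: /etc/ceph/admin.secret   # placeholder secret location
```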
Here, the /some/path/in/side/cephfs directory of the Ceph file system is mounted to the container path /test-ceph.
Compared with emptyDir and hostPath, the most important feature of these Volume types is that they do not rely on Kubernetes: the underlying infrastructure of the Volume is managed by an independent storage system, separate from the Kubernetes cluster. Once data is persisted there, it remains safe even if the entire Kubernetes cluster crashes.
Of course, operating and maintaining such a storage system is usually not a simple task, especially when there are high requirements for reliability, availability, and scalability.
Volume provides a very good data persistence solution, but there are still deficiencies in manageability. In the next section we will learn about a more manageable storage scheme: PersistentVolume & PersistentVolumeClaim.
PV & PVC
Volume provides a very good data persistence solution, but there are still deficiencies in manageability.
Taking the previous AWS EBS example: to use the Volume, the Pod must know the following information in advance:
The Volume comes from AWS EBS.
The EBS volume has been created in advance, and its exact volume-id is known.
A Pod is usually maintained by the application developer, while the Volume is usually maintained by the storage system administrator. To get the above information, the developer must either ask the administrator or be the administrator himself.
This creates a management problem: the responsibilities of the application developer and the system administrator are coupled. This is acceptable if the system is small or in a development environment. But as the cluster grows larger, especially in a production environment, efficiency and security make this a problem that must be solved.
The solutions given by Kubernetes are PersistentVolume and PersistentVolumeClaim.
PersistentVolume (PV) is a piece of storage space in an external storage system that is created and maintained by an administrator. Like Volume, PV is persistent and its life cycle is independent of Pod.
PersistentVolumeClaim (PVC) is a claim for a PV. A PVC is usually created and maintained by an ordinary user. When storage resources need to be allocated to a Pod, the user creates a PVC stating the required capacity, access mode (such as read-only), and other information, and Kubernetes finds and provides a PV that meets the conditions.
With PersistentVolumeClaim, users only need to tell Kubernetes what kind of storage resources they need, regardless of the underlying details such as where the real space is allocated and how to access it. The underlying information of the Storage Provider is left to the administrator, and only the administrator should care about the details of creating the PersistentVolume.
Kubernetes supports many types of PersistentVolume, such as AWS EBS, Ceph, NFS, etc. For a complete list, please see https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes
NFS PersistentVolume
Practice PV and PVC through NFS.
As preparation, we have set up an NFS server on the k8s-master node, exporting the directory /nfsdata:
[root@k8s-master ~]# showmount -e
Export list for k8s-master:
/nfsdata 192.168.168.0/24
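For reference, such an export would typically come from an /etc/exports entry on the NFS server similar to the following; the exact options are an assumption:

```
/nfsdata 192.168.168.0/24(rw,sync,no_root_squash)
```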
Create a PV mypv1 below, and the configuration file nfs-pv1.yml is as follows:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.168.10
① capacity specifies that the capacity of the PV is 1Gi.
② accessModes specifies ReadWriteOnce as the access mode, and the supported access modes are:
ReadWriteOnce - the PV can be mounted by a single node in read-write mode.
ReadOnlyMany - the PV can be mounted by multiple nodes in read-only mode.
ReadWriteMany - the PV can be mounted by multiple nodes in read-write mode.
③ persistentVolumeReclaimPolicy specifies Recycle as the reclaim policy of the PV. The supported policies are:
Retain - requires manual reclamation by the administrator.
Recycle - clears the data in the PV, equivalent to executing rm -rf /thevolume/*.
Delete - deletes the corresponding storage resource on the Storage Provider, such as AWS EBS, GCE PD, Azure Disk, OpenStack Cinder Volume, and so on.
④ storageClassName specifies that the class of the PV is nfs. This is equivalent to categorizing the PV; a PVC can specify a class to claim a PV of that class.
⑤ nfs specifies the directory on the NFS server that the PV corresponds to.
Create a mypv1:
[root@k8s-master ~]# kubectl apply -f nfs-pv1.yml
persistentvolume/mypv1 created
[root@k8s-master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv1   1Gi        RWO            Recycle          Available           nfs                     7s
STATUS is Available, which means that mypv1 is ready and can be applied for by PVC.
Next, create PVC mypvc1, and the configuration file nfs-pvc1.yml is as follows:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

The PVC is very simple: it only needs to specify the requested capacity, access mode, and class.
Create a mypvc1:
[root@k8s-master ~]# kubectl apply -f nfs-pvc1.yml
persistentvolumeclaim/mypvc1 created
[root@k8s-master ~]# kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc1   Bound    mypv1    1Gi        RWO            nfs            6s
[root@k8s-master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
mypv1   1Gi        RWO            Recycle          Bound    default/mypvc1   nfs                     2m18s
From the output of kubectl get pvc and kubectl get pv, you can see that mypvc1 has been Bound to mypv1, and the application is successful.
Next, you can use storage in Pod, and the Pod configuration file pod1.yml is as follows:
kind: Pod
apiVersion: v1
metadata:
  name: mypod1
spec:
  containers:
  - name: mypod1
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: "/mydata"
      name: mydata
  volumes:
  - name: mydata
    persistentVolumeClaim:
      claimName: mypvc1

The format is similar to using a normal Volume: in volumes, the storage claimed by mypvc1 is specified through persistentVolumeClaim.
Create a mypod1:
[root@k8s-master ~]# kubectl apply -f pod1.yml
pod/mypod1 created
[root@k8s-master ~]# kubectl get pod -o wide
NAME     READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
mypod1   1/1     Running   0          3m19s   10.244.1.2   k8s-node1   <none>           <none>
Verify that PV is available:
[root@k8s-master ~]# kubectl exec mypod1 touch /mydata/hello
[root@k8s-master ~]# ll /nfsdata/pv1/
total 0
-rw-r--r-- 1 root root 0 Oct 12 16:36 hello
As you can see, the file / mydata/hello created in Pod has indeed been saved to the NFS server directory / nfsdata/pv1.
If you no longer need to use PV, you can delete PVC to recycle PV.
How does MySQL use PV and PVC?
This section demonstrates how to provide persistent storage for a MySQL database by:
Create PV and PVC.
Deploy MySQL.
Add data to MySQL.
Simulate node failure; Kubernetes automatically migrates MySQL to another node.
Verify data consistency.
First, create PV and PVC with the following configuration:
mysql-pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql-pv
    server: 192.168.77.10
mysql-pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
Create mysql-pv and mysql-pvc by applying the two files with kubectl apply -f.
Next, deploy MySQL with the following configuration file:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
The PV mysql-pv bound to PVC mysql-pvc will be mounted to MySQL's data directory /var/lib/mysql.
MySQL is deployed to k8s-node2, and the client accesses Service mysql as follows:
# kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
Update the database:
[root@k8s-master ~]# kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> create table my_id (id int(4));
Query OK, 0 rows affected (0.09 sec)

mysql> insert into my_id values (111);
Query OK, 1 row affected (0.00 sec)

mysql> select * from my_id;
+------+
| id   |
+------+
|  111 |
+------+
1 row in set (0.00 sec)
① switches to database mysql.
② creates the database table my_id.
③ inserts a piece of data.
④ confirms that the data has been written.
Shut down k8s-node2 to simulate node downtime.
# systemctl poweroff
After a while, Kubernetes migrates MySQL to k8s-node1.
[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS        RESTARTS   AGE
mysql-84bdf65dd5-bjz8b   1/1     Terminating   0          22m
mysql-84bdf65dd5-ddlhc   1/1     Running       0          34s
Verify the consistency of the data:
[root@k8s-master ~]# kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
If you don't see a command prompt, try pressing enter.
mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from my_id;
+------+
| id   |
+------+
|  111 |
+------+
1 row in set (0.01 sec)

mysql> quit
The MySQL service is restored and the data is intact.
Volumes of type emptyDir and hostPath are convenient to use but not durable. Kubernetes also supports Volumes backed by a variety of external storage systems.
PV and PVC separate the responsibilities of administrators and ordinary users and are more suitable for production environments.