2025-04-06 Update From: SLTechnology News & Howtos (Shulou.com) > Servers
Shulou (Shulou.com) 06/02 Report
Containers are ephemeral in both Docker and Kubernetes, so data volumes are used to persist data beyond a container's lifecycle.
The main problems solved by data volumes are:
1. Data persistence: files written inside a container are temporary. When the container crashes, it is killed and recreated from the image, and the data written to the container's filesystem is lost.
2. Data sharing: containers running in the same Pod often need to share files.
Type of data volume:
1.emptyDir
The emptyDir volume is similar to Docker's managed volumes (docker manager volume). When first allocated it is an empty directory; containers in the same Pod can read and write it and share data through it.
Usage pattern: different containers in the same Pod share the volume.
If a container is deleted, the data still exists; if the Pod is deleted, the data is deleted with it.
Test:
**vim emptyDir.yaml**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: producer-consumer
spec:
  containers:
  - image: busybox
    name: producer
    volumeMounts:
    - mountPath: /producer_dir   # path inside the container
      name: shared-volume        # references the volume name defined below
    args:                        # what the container runs after it starts
    - /bin/sh
    - -c
    - echo "hello k8s" > /producer_dir/hello; sleep 30000
  - image: busybox
    name: consumer
    volumeMounts:
    - mountPath: /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello; sleep 30000
  volumes:
  - name: shared-volume          # must match the volumeMounts name above
    emptyDir: {}                 # the persistence type: an empty directory
```
kubectl apply -f emptyDir.yaml   # apply the manifest
docker inspect (view container details): the Mounts section shows the mount point.
You can enter that directory on the host to view the data.
kubectl get pod -o wide (-w): shows detailed Pod information.
Adding -w watches the Pods in real time and shows which node each container is running on.
kubectl logs {pod name} consumer prints the consumer container's output, i.e. the contents of the shared file.
To verify, check whether the container on the node and the mount directory hold the same data; if they do, the environment is correct. You can delete the container to confirm the data survives, then delete the Pod from the master to confirm the data is removed with it.
As the test shows, emptyDir persistence is only suitable for temporary storage.
2.hostPath Volume
1) Mounts a file or directory from the filesystem of the node where the Pod runs into the container.
2) Similar to Docker's bind mount. If the Pod is deleted, the data is retained, which makes it better than emptyDir; but once the host crashes, the hostPath also becomes inaccessible.
3) This kind of persistence is rarely used, because it increases the coupling between the Pod and the node.
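The description above can be sketched as a minimal manifest. This example is not from the original article; the Pod name, mount path, and host directory (`test-hostpath`, `/test_dir`, `/data/hostpath`) are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-hostpath          # hypothetical Pod name
spec:
  containers:
  - image: busybox
    name: test
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - mountPath: /test_dir     # path inside the container
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /data/hostpath     # directory on the node's filesystem
      type: DirectoryOrCreate  # create the directory on the node if it does not exist
```

Because the data lives on one specific node, the Pod only sees it again if it is scheduled back onto that same node, which is the coupling mentioned above.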
3.PersistentVolume (PV): a pre-provisioned directory for persistent data storage. PersistentVolumeClaim (PVC): a user's request for storage. The relationship is similar to Pods: a Pod consumes node resources, and a PVC consumes storage resources. Pods can request specific resource levels (CPU, memory); PVCs can request a specific size and access mode.
To do PV based on NFS service:
Install the NFS and rpcbind services (note: NFS must be installed on all three nodes):

```shell
[root@master yaml]# yum install -y nfs-utils rpcbind
[root@master yaml]# vim /etc/exports      # add the line: /nfsdata *(rw,sync,no_root_squash)
[root@master yaml]# mkdir /nfsdata
[root@master yaml]# systemctl start rpcbind
[root@master yaml]# systemctl start nfs-server.service
[root@master yaml]# showmount -e
Export list for master:
/nfsdata *
```
Create a PV resource object:
**vim nfs-pv.yaml**

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test
spec:
  capacity:
    storage: 1Gi                 # capacity provided by the PV
  accessModes:
  - ReadWriteOnce                # can be mounted read-write by a single node only
  persistentVolumeReclaimPolicy: Recycle   # the PV's reclaim policy
  storageClassName: nfs          # storage class name; the PVC must use the same name
  nfs:
    path: /nfsdata/pv1           # directory on the NFS server
    server: 192.168.1.           # IP of the NFS server
```

Apply the nfs-pv.yaml file and check the PV:

```shell
kubectl apply -f nfs-pv.yaml
persistentvolume/test created

kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test   1Gi        RWO            Recycle          Available           nfs                     7s
```

**Note: the STATUS must be Available before the PV can be used.**
Access modes supported by PV:
ReadWriteOnce: the volume can be mounted read-write by a single node.
ReadWriteMany: the volume can be mounted read-write by many nodes.
ReadOnlyMany: the volume can be mounted read-only by many nodes.
Reclaim policies for PV storage:
Recycle: automatically scrubs the data.
Retain: the administrator must reclaim the volume manually.
Delete: used with cloud storage backends; the underlying volume is deleted.
PV and PVC are matched with each other through storageClassName and accessModes.
Create the PVC: **vim nfs-pvc.yaml**

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
  - ReadWriteOnce          # access mode; must match the PV definition
  resources:
    requests:
      storage: 1Gi         # requested capacity
  storageClassName: nfs    # storage class name; must match the PV definition
```
Run it and check the PVC and PV:
kubectl apply -f nfs-pvc.yaml
After the PVC binds to the PV, both show the STATUS Bound.
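Once bound, the PVC can be mounted into a Pod like any other volume. A minimal sketch, not from the original article (the Pod name `test-pod` and the mount path are assumptions; `claimName: test` matches the PVC created above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod             # hypothetical Pod name
spec:
  containers:
  - image: busybox
    name: test
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - mountPath: /test_dir   # the NFS-backed storage appears here inside the container
      name: pv-storage
  volumes:
  - name: pv-storage
    persistentVolumeClaim:
      claimName: test        # name of the PVC created above
```

The Pod references only the PVC, not the PV or the NFS server, which is the point of the abstraction: storage details stay in the PV, and the Pod stays portable.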