Environment description:
hostname   OS version        ip             docker version   kubelet version   config   remarks
master     Centos 7.6.1810   172.27.9.131   Docker 18.09.6   V1.14.2           2C2G     master host
node01     Centos 7.6.1810   172.27.9.135   Docker 18.09.6   V1.14.2           2C2G     node
node02     Centos 7.6.1810   172.27.9.136   Docker 18.09.6   V1.14.2           2C2G     node
centos7    Centos 7.3.1611   172.27.9.181   ×                ×                 1C1G     nfs server
For k8s cluster deployment, see Centos7.6 deployment k8s (v1.14.2) cluster.
For k8s learning materials, see: basic concepts, kubectl commands and data sharing.
For k8s high-availability cluster deployment, see Centos7.6 deployment k8s v1.16.4 high availability cluster (active / standby mode).
1. Volume

1. Concept
Kubernetes volumes are an integral part of a pod and are defined in the pod specification, like containers. They are not standalone Kubernetes objects and cannot be created or deleted on their own. A volume is available to all containers in the pod, but each container that needs to access it must mount it explicitly, and may mount it anywhere in its own filesystem.
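To make that structure concrete, here is a minimal sketch (the names are illustrative, not from this article's tests): the volume is declared once under the pod's spec and then mounted by each container that needs it.

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo            # hypothetical pod name, for illustration only
spec:
  containers:
  - name: app
    image: nginx:alpine
    volumeMounts:              # each container chooses its own mount point
    - name: shared-data        # must reference a volume declared below
      mountPath: /usr/share/nginx/html
  volumes:                     # volumes live in the pod spec, not as standalone objects
  - name: shared-data
    emptyDir: {}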
2. Why Volume is needed
Files on a container's disk are ephemeral, which causes problems for important applications running in containers. First, when a container crashes, kubelet restarts it, but the files in the container are lost: the container restarts in a clean state (the image's original state). Second, when several containers run together in a Pod, they often need to share files. The Volume abstraction in Kubernetes solves both problems.
3. Volume types

Kubernetes currently supports many Volume types, including emptyDir, hostPath, nfs, persistentVolumeClaim, configMap, and secret.

This article tests emptyDir, hostPath, NFS shared storage, and PV with PVC.
2. emptyDir

1. Concept
emptyDir is the most basic Volume type: a simple empty directory for storing transient data. If a Pod defines an emptyDir volume, the directory is created when the Pod is assigned to a Node. The emptyDir exists as long as the Pod runs on that Node (a container crash does not lose its data), but when the Pod is removed from the Node (deleted or migrated), the emptyDir is deleted as well and its data is permanently lost.
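As an aside not covered by this article's tests, the Kubernetes API also lets an emptyDir be backed by memory instead of node disk; a hedged sketch:

  volumes:
  - name: cache
    emptyDir:
      medium: Memory    # tmpfs-backed; contents count against the container's memory usage
      sizeLimit: 64Mi   # optional cap on the volume's size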
The following example uses an emptyDir volume to share files between two containers in the same pod.
2. Create pod emptydir-fortune

[root@master ~]# more emptyDir-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: prod                  # pod label
  name: emptydir-fortune
spec:
  containers:
  - image: loong576/fortune
    name: html-generator
    volumeMounts:              # mount the volume named html at /var/htdocs in this container
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    volumeMounts:              # mount the same volume at /usr/share/nginx/html, read-only
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    ports:
    - containerPort: 80
      protocol: TCP
  volumes:
  - name: html                 # volume named html; this emptyDir volume is mounted into both containers above
    emptyDir: {}
[root@master ~]# kubectl apply -f emptyDir-pod.yaml
pod/emptydir-fortune created
[root@master ~]# kubectl get po -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
emptydir-fortune   2/2     Running   0          9s    10.244.2.140   node02   <none>           <none>
The pod emptydir-fortune has two containers that mount the same emptyDir volume. The container html-generator writes random content to the volume; we verify that the file is shared by accessing it through the web-server container.
2.1 The loong576/fortune image
[root@master ~]# more fortune/Dockerfile
FROM ubuntu:latest
RUN apt-get update ; apt-get -y install fortune
ADD fortuneloop.sh /bin/fortuneloop.sh
ENTRYPOINT /bin/fortuneloop.sh
The base image is ubuntu, and the fortuneloop.sh script runs when the container starts.
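The build commands are not shown in the original; presumably the image was produced and published roughly like this (illustrative, assuming a Docker Hub account named loong576):

[root@master ~]# cd fortune
[root@master fortune]# docker build -t loong576/fortune .   # build from the Dockerfile above
[root@master fortune]# docker push loong576/fortune         # push so every node can pull the image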
The fortuneloop.sh script:

[root@master ~]# more fortuneloop.sh
#!/bin/bash
trap "exit" SIGINT
mkdir /var/htdocs
while :
do
  echo $(date) Writing fortune to /var/htdocs/index.html
  /usr/games/fortune > /var/htdocs/index.html
  sleep 10
done
The script writes a random phrase to /var/htdocs/index.html every 10 seconds.
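To spot-check that the loop is writing (an illustrative command, not part of the original test):

[root@master ~]# kubectl exec emptydir-fortune -c html-generator -- cat /var/htdocs/index.html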
3. Access nginx

3.1 Create the service

[root@master ~]# more service-fortune.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service             # service name
spec:
  type: NodePort
  selector:
    app: prod                  # pod label; traffic is routed to pod emptydir-fortune
  ports:
  - protocol: TCP
    nodePort: 30002            # static port exposed on every node for external access
    port: 8881                 # port the ClusterIP listens on
    targetPort: 80             # container port
  sessionAffinity: ClientIP    # all requests from the same client go to the same backend Pod
[root@master ~]# kubectl apply -f service-fortune.yaml
service/my-service created
[root@master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          3d17h
my-service   NodePort    10.102.191.57   <none>        8881:30002/TCP   9s

3.2 Access nginx

[root@master ~]# curl 10.102.191.57:8881
Writing is easy; all you do is sit staring at the blank sheet of paper until
drops of blood form on your forehead.
                -- Gene Fowler
[root@master ~]# curl 172.27.9.135:30002
Don't Worry, Be Happy.
                -- Meher Baba
Conclusion:
The nginx container successfully reads the content written by the fortune container: the emptyDir volume enables file sharing between containers.
The lifecycle of an emptyDir volume is tied to the lifecycle of the pod, so the volume's contents are lost when the pod is deleted.

3. hostPath

1. Concept
hostPath mounts files or directories from the Node's filesystem into a Pod. Use hostPath when a Pod needs files that live on its Node. Pods running on the same node that use the same path in a hostPath volume see the same files.
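A hostPath volume can also declare a type that validates or creates the path; a hedged sketch (the example below omits it):

  volumes:
  - name: nginx-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate   # create /data on the node if it does not already exist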
2. Create pod hostpath-nginx

2.1 Create the mount directory
Create the mount directory on the nodes; run the following on master and every node:
[root@master ~]# mkdir /data && cd /data && echo `hostname` > index.html

2.2 Create the pod

[root@master ~]# more hostPath-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: prod
  name: hostpath-nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html   # mount point inside the container
      name: nginx-volume                 # mounts the volume nginx-volume
  volumes:
  - name: nginx-volume                   # volume name
    hostPath:
      path: /data                        # path on the node's filesystem to mount
[root@master ~]# kubectl apply -f hostPath-pod.yaml
pod/hostpath-nginx created
[root@master ~]# kubectl get po -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
emptydir-fortune   2/2     Running   0          40m   10.244.2.140   node02   <none>           <none>
hostpath-nginx     1/1     Running   0          16s   10.244.1.140   node01   <none>           <none>

3. Access pod hostpath-nginx

[root@master ~]# curl 10.244.1.140
node01
Conclusion:
The pod runs on node01 and returns 'node01', the content of index.html in the mounted filesystem /data: the container successfully reads files from the node's filesystem.
Use hostPath only when you need to read or write system files on a node; do not use it to persist data across pods.
hostPath can persist data, but the data is lost if the node fails.

4. NFS shared storage

1. Concept
NFS stands for Network File System. In Kubernetes, an NFS share can be mounted into a Pod with a simple configuration; data on NFS persists permanently, and NFS supports concurrent writes.
emptyDir shares files between containers but cannot persist them. hostPath shares and persists files, but is tied to a single node and cannot be shared across nodes. That calls for network storage: storage that is convenient for containers and reachable from any node in the cluster. This article uses NFS as the example.
2. Set up and configure nfs
For details on setting up nfs, see NFS server building and client connection configuration under Centos7.
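For orientation only, the server-side share presumably looks something like this in /etc/exports on 172.27.9.181 (the options are illustrative assumptions, not taken from that guide):

# /etc/exports on the nfs server
/backup 172.27.9.0/24(rw,sync,no_root_squash)

# reload the export table after editing
[root@centos7 ~]# exportfs -r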
After setting up the nfs server and installing the nfs client packages, check on master and each node that the nfs service is reachable:
[root@master ~]# showmount -e 172.27.9.181
Export list for 172.27.9.181:
/backup 172.27.9.0/24
Run the showmount command on master, node01, and node02 to verify the nfs service; /backup is the directory shared by the nfs server.
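Optionally, a manual mount from any node (illustrative commands) confirms the share is actually writable:

[root@node01 ~]# mount -t nfs 172.27.9.181:/backup /mnt   # requires the nfs client utilities
[root@node01 ~]# touch /mnt/test && rm -f /mnt/test && echo ok
ok
[root@node01 ~]# umount /mnt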
The NFS tests in this article: first the NFS share is mounted directly in a pod, then it is consumed through PV and PVC.
3. Create pod mongodb-nfs

[root@master ~]# more mongodb-pod-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongodb-nfs
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: nfs-data          # name of the mounted volume, consistent with the volume below
      mountPath: /data/db     # MongoDB data storage path
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: nfs-data            # volume name
    nfs:
      server: 172.27.9.181    # nfs server ip
      path: /backup           # shared directory on the nfs server
[root@master ~]# kubectl apply -f mongodb-pod-nfs.yaml
pod/mongodb-nfs created
[root@master ~]# kubectl get po -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
mongodb-nfs   1/1     Running   0          23s   10.244.2.142   node02   <none>           <none>
Note that the pod's ip is 10.244.2.142.
4. nfs shared storage test

4.1 Write data to MongoDB

[root@master ~]# kubectl exec -it mongodb-nfs mongo
> use loong
switched to db loong
> db.foo.insert({name:'loong576'})
WriteResult({ "nInserted" : 1 })
Switch to db loong and insert the document {name:'loong576'}.
4.2 View the written data

> db.foo.find()
{ "_id" : ObjectId("5d6e17b018651a21e0063641"), "name" : "loong576" }
4.3 Delete the pod and rebuild it

[root@master ~]# kubectl delete pod mongodb-nfs
pod "mongodb-nfs" deleted
[root@master ~]# kubectl apply -f mongodb-pod-nfs.yaml
pod/mongodb-nfs created
[root@master ~]# kubectl get po -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
mongodb-nfs   1/1     Running   0          22s   10.244.2.143   node02   <none>           <none>
Delete the pod mongodb-nfs and recreate it; the pod ip changes to 10.244.2.143. Access MongoDB again to verify that the previously written document still exists.
4.4 Read the shared storage data from the new pod

[root@master ~]# kubectl exec -it mongodb-nfs mongo
> use loong
switched to db loong
> db.foo.find()
{ "_id" : ObjectId("5d6e17b018651a21e0063641"), "name" : "loong576" }
Even if the pod is deleted and rebuilt, the shared data can still be accessed.
Conclusion:
NFS shared storage persists data.
NFS shared storage provides data sharing across nodes.

5. PV and PVC

1. Concept
PersistentVolume (PV) and PersistentVolumeClaim (PVC) give a K8s cluster a logical abstraction of storage: Pod configuration no longer needs to care about the actual backing storage technology, which is handled in the PV configuration by the cluster administrator. The relationship between PV and PVC mirrors that between Node and Pod: PV and Node are resource providers, varying with the cluster's infrastructure and configured by the K8s cluster administrator; PVC and Pod are resource consumers, varying with the needs of the business services and configured by the cluster's users, i.e. the service administrators.
When a cluster user needs persistent storage in a pod, they first write a PVC manifest specifying the minimum capacity and the access mode required, then submit it to the Kubernetes API server; Kubernetes finds a matching PV and binds it to the PVC. The PVC can then be used as a volume in a pod, and other users cannot use the same PV until it is released by deleting the binding PVC.
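One consequence worth noting: if no PV satisfies a claim, the PVC simply stays Pending until a matching PV appears. Hypothetical output for an unmatched claim:

[root@master ~]# kubectl get pvc
NAME    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Pending                                      nfs            5s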
2. Create PVs

2.1 nfs configuration
The nfs server's shared directory configuration:

[root@centos7 ~]# exportfs
/backup/v1      172.27.9.0/24
/backup/v2      172.27.9.0/24
/backup/v3      172.27.9.0/24
Check the nfs configuration on master and each node:
[root@master ~]# showmount -e 172.27.9.181
Export list for 172.27.9.181:
/backup/v3 172.27.9.0/24
/backup/v2 172.27.9.0/24
/backup/v1 172.27.9.0/24

2.2 Create the PVs

[root@master ~]# more pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
spec:
  capacity:
    storage: 2Gi                          # PV capacity is 2G
  volumeMode: Filesystem                  # volume mode; defaults to Filesystem, can be set to 'Block' for raw block devices
  accessModes:
  - ReadWriteOnce                         # access mode: the volume can be mounted read/write by a single node
  persistentVolumeReclaimPolicy: Retain   # reclaim policy; Retain means manual reclamation
  storageClassName: nfs                   # class name; a PV of a given class can only bind to a PVC requesting that class
  nfs:                                    # NFS shared directory and server IP
    path: /backup/v1
    server: 172.27.9.181
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
spec:
  capacity:
    storage: 2Gi                          # PV capacity is 2G
  volumeMode: Filesystem
  accessModes:
  - ReadOnlyMany                          # access mode: the volume can be mounted read-only by many nodes
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /backup/v2
    server: 172.27.9.181
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
spec:
  capacity:
    storage: 1Gi                          # PV capacity is 1G
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /backup/v3
    server: 172.27.9.181
[root@master ~]# kubectl apply -f pv-nfs.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
[root@master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   2Gi        RWO            Retain           Available           nfs                     26s
pv002   2Gi        ROX            Retain           Available           nfs                     26s
pv003   1Gi        RWO            Retain           Available           nfs                     26s
Create pv001, pv002, and pv003, backed by the nfs shared directories /backup/v1, /backup/v2, and /backup/v3 respectively.
A volume can be in one of the following states:
Available: a free resource, not yet bound to any claim
Bound: the volume is bound to a claim
Released: the claim has been deleted, but the resource has not yet been reclaimed by the cluster
Failed: automatic reclamation of the volume failed
There are three access modes for PV:
ReadWriteOnce (RWO): the most basic mode; read/write, but mountable by only a single node.
ReadOnlyMany (ROX): mountable read-only by many nodes.
ReadWriteMany (RWX): mountable read/write by many nodes, so the storage can be shared.
Not every storage backend supports all three modes; ReadWriteMany in particular is rarely supported, and NFS is the most common backend that offers it.
A PV does not belong to any namespace; like a node, it is a cluster-level resource, unlike pods and PVCs.
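This can be checked directly; kubectl api-resources reports whether a resource is namespaced (output abridged and illustrative):

[root@master ~]# kubectl api-resources --namespaced=false | grep persistentvolumes
persistentvolumes        pv    false   PersistentVolume
[root@master ~]# kubectl api-resources --namespaced=true | grep persistentvolumeclaims
persistentvolumeclaims   pvc   true    PersistentVolumeClaim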
3. Create PVC

3.1 Create the PVC

[root@master ~]# more pvc-nfs.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc                  # name of the claim; used later as the pod's volume
spec:
  accessModes:
  - ReadWriteOnce              # requested access mode; one of the criteria for filtering PVs
  volumeMode: Filesystem       # volume mode, consistent with the PV; the volume is used as a filesystem or a block device
  resources:                   # a claim can request a specific amount of storage; another PV filter criterion
    requests:
      storage: 2Gi
  storageClassName: nfs        # requested class; must match the PV, otherwise it cannot bind
[root@master ~]# kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/mypvc created
[root@master ~]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv001    2Gi        RWO            nfs            22s
Create the PVC mypvc with access mode ReadWriteOnce and a requested size of 2G. RWO, ROX, and RWX indicate the number of worker nodes that can use the volume simultaneously, not the number of pods.
3.2 View the selected PV
PVC filter criteria:
PV      accessModes   storage
pv001   √             √
pv002   ×             √
pv003   √             ×
View the PVs:

[root@master ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   2Gi        RWO            Retain           Bound       default/mypvc   nfs                     12m
pv002   2Gi        ROX            Retain           Available                   nfs                     12m
pv003   1Gi        RWO            Retain           Available                   nfs                     12m
pv001 is selected because it matches the PVC's requirements; pv002's access mode does not match, and pv003's capacity is too small.
4. Use the PVC in a pod

[root@master ~]# more mongodb-pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongodb-pvc
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: pvc-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: pvc-data
    persistentVolumeClaim:
      claimName: mypvc        # consistent with the name of the PVC declared above
[root@master ~]# kubectl apply -f mongodb-pod-pvc.yaml
pod/mongodb-pvc created
[root@master ~]# kubectl get po -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
mongodb-pvc   1/1     Running   0          16s   10.244.2.144   node02   <none>           <none>
Create the pod mongodb-pvc, which uses the PVC mypvc. The shared storage test is the same as the nfs test in section 4 and is not repeated here.
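One last behavior worth showing, since all three PVs use the Retain reclaim policy: deleting the PVC does not free the PV for reuse. A hedged sketch of the cleanup (output is hypothetical):

[root@master ~]# kubectl delete po mongodb-pvc
pod "mongodb-pvc" deleted
[root@master ~]# kubectl delete pvc mypvc
persistentvolumeclaim "mypvc" deleted
[root@master ~]# kubectl get pv pv001
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM           STORAGECLASS   REASON   AGE
pv001   2Gi        RWO            Retain           Released   default/mypvc   nfs                     30m

With Retain, an administrator must reclaim the PV manually (and clean up the data in /backup/v1) before it can be bound again.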
All scripts and configuration files in this article have been uploaded: K8s practice (7): storage volume and data persistence (Volumes and Persistent Storage)