This article shows how to use emptyDir, hostPath, nfs, pv, and pvc for storage in Kubernetes (K8s). The walkthrough is concise and easy to follow; I hope you get something out of it.
Four common volume types are covered here: emptyDir, gitRepo, hostPath, and nfs.
emptyDir: one pod runs two containers, one serving requests and the other writing files into a shared volume; when the pod is deleted, the volume is deleted with it.
gitRepo: an emptyDir pre-populated by cloning a Git repository (deprecated; see the sketch after this list).
hostPath: a path on the node's filesystem; the data survives pod deletion (the directory must exist, or be created, on every node the pod may run on).
nfs: shared network storage; multiple pods can mount directories exported by one NFS server.
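gitRepo is listed above but not demonstrated below; it has been deprecated since Kubernetes 1.11 in favor of cloning into an emptyDir from an initContainer. A minimal sketch of the old form, with a hypothetical repository URL:
volumes:
- name: site-source
  gitRepo:
    repository: "https://github.com/example/site.git"   # hypothetical repository
    revision: "master"                                   # branch or commit to check out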
Help:
[root@k8s1 ~]# kubectl explain pods.spec.volumes.persistentVolumeClaim    # help for pvc
[root@k8s1 ~]# kubectl explain pods.spec.volumes                          # help for volumes
[root@k8s1 ~]# kubectl explain pv                                         # help for pv
1. Using emptyDir for storage (one pod with two containers: one writes the data, the other serves it)
[root@k8s1 ~]# vim 11.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo                        # define a pod
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp                         # define a container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html  # mount the html volume at nginx's default web root
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html                        # the busybox container mounts the same html volume at /data
      mountPath: /data/
    command: ["/bin/sh", "-c", "while true; do echo $(date) >> /data/index.html; sleep 2; done"]
  volumes:                              # define the html volume
  - name: html
    emptyDir: {}
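emptyDir: {} takes all defaults; the emptyDir source also accepts optional fields. A small sketch (the values are illustrative, not part of the example above):
volumes:
- name: html
  emptyDir:
    medium: Memory      # back the volume with tmpfs (RAM) instead of node disk
    sizeLimit: 128Mi    # the pod is evicted if the volume grows beyond this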
[root@k8s1 ~]# kubectl apply -f 11.yaml
pod/pod-demo created
[root@k8s1 ~]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE    IP            NODE   NOMINATED NODE   READINESS GATES
pod-demo   2/2     Running   0          103s   10.244.1.13   k8s2
[root@k8s1 ~]# kubectl exec -it pod-demo -c busybox -- /bin/sh
/ # cat /data/index.html
Fri Feb 22 09:39:53 UTC 2019
Fri Feb 22 09:39:55 UTC 2019
Fri Feb 22 09:39:57 UTC 2019
Fri Feb 22 09:39:59 UTC 2019
[root@k8s1 ~]# curl http://10.244.1.13
Fri Feb 22 09:39:53 UTC 2019
Fri Feb 22 09:39:55 UTC 2019
Fri Feb 22 09:39:57 UTC 2019
Fri Feb 22 09:39:59 UTC 2019
Fri Feb 22 09:40:01 UTC 2019
Fri Feb 22 09:40:03 UTC 2019
Fri Feb 22 09:40:05 UTC 2019
[root@k8s1 ~]#
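To confirm the lifecycle described at the top (an emptyDir dies with its pod), one could delete and recreate the pod; the recreated pod starts with an empty html volume. A sketch, output omitted:
[root@k8s1 ~]# kubectl delete -f 11.yaml    # the emptyDir and its index.html are destroyed along with the pod
[root@k8s1 ~]# kubectl apply -f 11.yaml     # the new pod begins accumulating a fresh index.html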
2. Using hostPath for storage (if a node goes down, the data pods wrote on that node becomes unavailable)
Node1:
[root@k8s2 ~]# mkdir -p /data/pod
[root@k8s2 ~]# cat /data/pod/index.html    # give each node different content so they can be told apart
Node1
[root@k8s2 ~]#
Node2:
[root@k8s3 ~]# mkdir -p /data/pod
[root@k8s3 ~]# cat /data/pod/index.html
Node2
[root@k8s3 ~]#
Master node:
[root@k8s1 ~]# vim 12.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html                        # mount the html volume
      mountPath: /usr/share/nginx/html  # nginx web root
  volumes:
  - name: html
    hostPath:
      path: /data/pod/                  # host path backing the html volume (created above on each node)
      type: DirectoryOrCreate           # create the directory if it does not exist
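The manifest would be applied before the listing below (output omitted):
[root@k8s1 ~]# kubectl apply -f 12.yaml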
[root@k8s1 ~]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE   NOMINATED NODE   READINESS GATES
pod-demo           2/2     Running   0          64m   10.244.1.13   k8s2
pod-vol-hostpath   1/1     Running   0          4s    10.244.2.22   k8s3
[root@k8s1 ~]# curl http://10.244.2.22    # the pod landed on node2 (k8s3), so we get node2's page; on node1 we would get node1's content
Node2
[root@k8s1 ~]#
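Because the served content depends on which node the scheduler picks, a hostPath pod is often pinned to a specific node. A minimal sketch, using one of this cluster's node names:
spec:
  nodeName: k8s3    # bypass the scheduler and place the pod on k8s3, so the hostPath content is predictable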
3. Using NFS shared storage
NFS server:
[root@liutie1 ~]# mkdir /data/v6
[root@liutie1 ~]# vim /etc/exports
/data/v6 172.16.8.0/24(rw,no_root_squash)
[root@liutie1 ~]# systemctl restart nfs
[root@liutie1 ~]# exportfs -arv
exporting 172.16.8.0/24:/data/v6
[root@liutie1 ~]# showmount -e
Export list for liutie1:
/data/v6 172.16.8.0/24
[root@liutie1 ~]#
K8s node:
[root@k8s1 ~]# mkdir /data/v6                              # create a local mount point
[root@k8s1 ~]# mount.nfs 172.16.8.108:/data/v6 /data/v6    # test mounting manually
[root@k8s1 ~]# umount /data/v6
[root@k8s1 ~]# vim nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: pod-nfs
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html1
    nfs:
      path: /data/v6
      server: 172.16.8.108
[root@k8s1 ~]# kubectl apply -f nfs.yaml
pod/pod-vol-nfs created
[root@k8s1 ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP            NODE   NOMINATED NODE   READINESS GATES
pod-vol-nfs   1/1     Running   0          2m21s   10.244.1.78   k8s2
[root@k8s1 ~]#
Create a file in the NFS export:
[root@liutie1 ~]# cd /data/v6/
[root@liutie1 v6]# echo "nfs store" > index.html
[root@liutie1 v6]# cat index.html
nfs store
[root@liutie1 v6]#
Fetch the page from the k8s node:
[root@k8s1 ~]# curl 10.244.1.78    # the pod's IP address
nfs store
[root@k8s1 ~]#
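Unlike emptyDir, the data lives on the NFS server rather than in the pod, so it survives pod deletion. A quick check (sketch, output omitted):
[root@k8s1 ~]# kubectl delete -f nfs.yaml
[root@k8s1 ~]# kubectl apply -f nfs.yaml    # wherever the new pod lands, it still serves /data/v6/index.html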
4. Using pv and pvc on top of NFS shared storage (fixed-size volumes)
NFS server:
[root@liutie1 ~]# mkdir /data/v{1,2,3,4,5}    # create the export directories on the storage server
[root@liutie1 ~]# yum install nfs* -y         # install nfs
[root@liutie1 ~]# vim /etc/exports            # share the directories
/data/v1 172.16.8.0/24(rw,no_root_squash)
/data/v2 172.16.8.0/24(rw,no_root_squash)
/data/v3 172.16.8.0/24(rw,no_root_squash)
/data/v4 172.16.8.0/24(rw,no_root_squash)
/data/v5 172.16.8.0/24(rw,no_root_squash)
[root@liutie1 ~]# exportfs -arv
exporting 172.16.8.0/24:/data/v5
exporting 172.16.8.0/24:/data/v4
exporting 172.16.8.0/24:/data/v3
exporting 172.16.8.0/24:/data/v2
exporting 172.16.8.0/24:/data/v1
[root@liutie1 ~]# showmount -e
Export list for liutie1:
/data/v5 172.16.8.0/24
/data/v4 172.16.8.0/24
/data/v3 172.16.8.0/24
/data/v2 172.16.8.0/24
/data/v1 172.16.8.0/24
[root@liutie1 ~]#
Worker nodes:
[root@k8s2 ~]# yum install nfs-utils -y    # every node must have nfs-utils installed, otherwise NFS mounts will fail
Master node:
[root@k8s1 ~]# yum install -y nfs-utils
[root@k8s1 ~]# kubectl explain PersistentVolume    # help information
[root@k8s1 ~]# vim pv.yaml    # expose the remote NFS directories as PVs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/v1
    server: 172.16.8.108
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/v2
    server: 172.16.8.108
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 15Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/v3
    server: 172.16.8.108
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/v4
    server: 172.16.8.108
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/v5
    server: 172.16.8.108
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 13Gi
[root@k8s1 ~]# kubectl apply -f pv.yaml    # create the PVs
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@k8s1 ~]# kubectl get pv    # view the PVs
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Available                                   2m40s
pv002   15Gi       RWO,RWX        Retain           Available                                   2m40s
pv003   1Gi        RWO,RWX        Retain           Available                                   2m40s
pv004   20Gi       RWO,RWX        Retain           Available                                   2m40s
pv005   13Gi       RWO,RWX        Retain           Available                                   2m40s
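The RECLAIM POLICY column shows Retain, the default for manually created PVs: the data is kept after the claim is released. To set it explicitly, each spec could include (a sketch):
spec:
  persistentVolumeReclaimPolicy: Retain    # the alternatives are Recycle (deprecated) and Delete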
[root@k8s1 ~]# vim pvc.yaml    # create a PVC requesting 6Gi, plus a pod that uses it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc            # define a PVC named mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Pod                # define a pod that uses the PVC
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html         # storage provided through mypvc
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc   # reference the mypvc defined above
[root@k8s1 ~]# kubectl apply -f pvc.yaml
persistentvolumeclaim/mypvc created
pod/pod-vol-pvc created
[root@k8s1 ~]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   5Gi        RWO,RWX        Retain           Available                                           8m31s
pv002   15Gi       RWO,RWX        Retain           Available                                           8m31s
pv003   1Gi        RWO,RWX        Retain           Available                                           8m31s
pv004   20Gi       RWO,RWX        Retain           Available                                           8m31s
pv005   13Gi       RWO,RWX        Retain           Bound       default/mypvc                           8m31s    # Bound: this PV is now claimed by default/mypvc
[root@k8s1 ~]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv005    13Gi       RWO,RWX                       2m31s    # mypvc bound pv005, the smallest Available PV satisfying both the ReadWriteMany mode and the 6Gi request
[root@k8s1 ~]# kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
pod-demo           2/2     Running   0          141m
pod-vol-hostpath   1/1     Running   0          77m
pod-vol-pvc        1/1     Running   0          4s
[root@k8s1 ~]# kubectl describe pods pod-vol-pvc    # view details
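To verify end to end, one could mirror section 3: write a page into pv005's export on the NFS server, then curl the pod (a sketch; replace <pod-ip> with the address shown by kubectl get pods -o wide):
[root@liutie1 ~]# echo "pv/pvc test" > /data/v5/index.html    # pv005 is the PV backing mypvc
[root@k8s1 ~]# curl http://<pod-ip>    # should return: pv/pvc test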
The above shows how to use emptyDir, hostPath, nfs, pv, and pvc for storage in Kubernetes. I hope you picked up some useful knowledge or skills along the way.