Today I will talk about the common storage types in Kubernetes, a topic that many people do not understand well. To help, I have summarized the following; I hope you get something out of this article. Several common storage types are introduced below.
By default, files mounted on disk by a Pod share the Pod's life cycle: if the Pod crashes, kubelet restarts it from the original image state, and the files in the Pod are lost. In practice, developers often need to retain data in a container. For example, if MySQL is deployed in Kubernetes, its data must not be lost just because the MySQL container crashed and restarted. Second, when multiple containers run in the same Pod, files may need to be shared among those containers. Finally, developers sometimes need to preset a configuration file so that it takes effect inside the container; for example, a custom mysql.cnf file must be loaded when MySQL starts. These are the problems this article solves hands-on.
Secret
The Secret object lets you store and manage sensitive information such as passwords, OAuth tokens, and ssh keys. Putting such information into a Secret gives you better control over its use and reduces the risk of accidental exposure.
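Besides the docker-registry Secret shown below, a generic Secret can be created from literals and consumed as environment variables. A minimal sketch, where the Secret name db-credentials and its keys are made up for illustration:

$ kubectl create secret generic db-credentials \
    --from-literal=username=admin \
    --from-literal=password=s3cr3t

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "3600"]
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials   # the Secret created above
          key: password          # which key of the Secret to expose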
▌ Usage scenario

Mounting an authentication configuration file.
▌ Usage example

An image built and pushed in CI can store the docker authentication config.json file in a Secret object and mount it into the CI Pod, which then authenticates with that credential.
First, create a Secret

$ kubectl create secret docker-registry docker-config \
    --docker-server=https://hub.docker.com \
    --docker-username=username \
    --docker-password=password
secret/docker-config created
Create a new docker-pod.yaml file and paste the following information:
apiVersion: v1
kind: Pod
metadata:
  name: docker
spec:
  containers:
  - name: docker
    image: docker
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: config
      mountPath: /root/.docker/
  volumes:
  - name: config
    secret:
      secretName: docker-config
      items:
      - key: .dockerconfigjson
        path: config.json
        mode: 0644
Mount the Secret in the docker Pod

$ kubectl apply -f docker-pod.yaml
pod/docker created
Check the mount effect
$ kubectl exec docker -- cat /root/.docker/config.json
{"auths":{"https://hub.docker.com":{"username":"username","password":"password","auth":"dXNlcm5hbWU6cGFzc3dvcmQ="}}}
Clean up the environment
$ kubectl delete pod docker
$ kubectl delete secret docker-config

ConfigMap
Many applications read configuration information from configuration files, command-line arguments, or environment variables, and this configuration needs to be decoupled from the docker image. The ConfigMap API provides a mechanism to inject configuration information into containers; a ConfigMap can hold a single property or an entire configuration file.
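Besides the file mount shown below, a ConfigMap can also be injected as environment variables. A minimal sketch, with a made-up ConfigMap name and key:

$ kubectl create configmap app-config --from-literal=LOG_LEVEL=debug

apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "3600"]
    envFrom:
    - configMapRef:
        name: app-config   # every key in the ConfigMap becomes an env variable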
▌ Usage scenario

Mounting configuration files.

▌ Usage example

Use data from a ConfigMap to configure a Redis cache.
Create an example-redis-config.yaml file and paste the following information:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-redis-config
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru
Create ConfigMap
$ kubectl apply -f example-redis-config.yaml
configmap/example-redis-config created
Create an example-redis.yaml file and paste the following information:
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: kubernetes/redis:v1
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "0.1"
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
  - name: data
    emptyDir: {}
  - name: config
    configMap:
      name: example-redis-config
      items:
      - key: redis-config
        path: redis.conf
Create the Redis Pod and test the ConfigMap mount

$ kubectl apply -f example-redis.yaml
pod/redis created
Check the mount effect
$ kubectl exec -it redis redis-cli
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "2097152"
127.0.0.1:6379> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"
Clean up the environment
$ kubectl delete pod redis
$ kubectl delete configmap example-redis-config

EmptyDir
When a Pod with an emptyDir volume is created on a node, a new empty directory is created on that node and exists for as long as the Pod runs there. All containers in the Pod can mount this directory at different mount points, and all of them can read and write the files in the emptyDir. When the Pod is deleted for whatever reason, the emptyDir's data is deleted forever (a container crash does not remove the Pod from the node, so data is not lost when a container crashes). By default, emptyDir is backed by whatever medium backs the node: disk, SSD, or network storage. You can also set emptyDir.medium to Memory to make Kubernetes mount a tmpfs (RAM-backed filesystem); because it is RAM-backed, tmpfs is usually very fast, but its data is cleared when the node reboots.
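A minimal sketch of the tmpfs variant just mentioned (the Pod and volume names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /cache
      name: ram-cache
  volumes:
  - name: ram-cache
    emptyDir:
      medium: Memory    # RAM-backed tmpfs: fast, but cleared on node reboot
      sizeLimit: 128Mi  # optional cap so the cache cannot exhaust node memory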
▌ Usage scenario

All containers in the same Pod share storage.

▌ Usage example

Generate a hello file in container-a and output the file's contents from container-b.
Create a test-emptydir.yaml file and paste the following information:
apiVersion: v1
kind: Pod
metadata:
  name: test-emptydir
spec:
  containers:
  - image: alpine
    name: container-a
    command:
    - /bin/sh
    args:
    - -c
    - echo 'I am container-a' >> /cache-a/hello && sleep 3600
    volumeMounts:
    - mountPath: /cache-a
      name: cache-volume
  - image: alpine
    name: container-b
    command:
    - sleep
    - "3600"
    volumeMounts:
    - mountPath: /cache-b
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
Create Pod
$ kubectl apply -f test-emptydir.yaml
pod/test-emptydir created

Test

$ kubectl exec test-emptydir -c container-b -- cat /cache-b/hello
I am container-a
Clean up the environment
$ kubectl delete pod test-emptydir

HostPath
A hostPath volume mounts a directory from the host's filesystem directly into containers running on that node. When using this type of volume, pay attention to the following:

Pods created from the same template may behave differently on different nodes, because the directory contents differ from node to node.

When Kubernetes adds resource-aware scheduling, it will not be able to account for the resources used by a hostPath volume.

Directories created on the host are writable only by root, so the container either needs to run as root to write to them, or the directory permissions on the host must be changed (see the sketch after this list).
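One way to soften that last caveat is the hostPath type field, which makes the kubelet check, or in this case create, the host path before mounting. A minimal sketch, with an illustrative path:

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /host-data
      name: host-dir
  volumes:
  - name: host-dir
    hostPath:
      path: /var/local/demo     # illustrative host directory
      type: DirectoryOrCreate   # created with permission 0755 if missing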
▌ Usage scenario

A running container needs access to host information, for example Docker internals under the /var/lib/docker directory, or cAdvisor running in a container needing access to /dev/cgroups.

▌ Usage example

Bind-mount the Docker socket to list the host's images.
Create a test-hostpath.yaml file and paste the following information:
apiVersion: v1
kind: Pod
metadata:
  name: test-hostpath
spec:
  containers:
  - image: docker
    name: test-hostpath
    command:
    - sleep
    - "3600"
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
Create test-hostpath Pod
$ kubectl apply -f test-hostpath.yaml
pod/test-hostpath created

Verify that it worked

$ kubectl exec test-hostpath docker images
REPOSITORY   IMAGE ID       CREATED       SIZE
docker       639de9917ae1   13 days ago   171MB
...

NFS storage volume
An nfs volume allows an existing NFS (Network File System) share to be mounted into your containers. Unlike emptyDir, when a Pod is deleted the contents of an nfs volume are preserved; the volume is merely unmounted. This means an nfs volume can be pre-populated with data, and that data can be shared between Pods. NFS can also be mounted by multiple writers at the same time.

Important: you must have your own NFS server running before you can use it.
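Before the multi-node Deployment example below, here is a minimal single-Pod sketch of an nfs volume; the server address and export path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo
spec:
  containers:
  - name: app
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: nfs-share
  volumes:
  - name: nfs-share
    nfs:
      server: nfs.server.com   # placeholder NFS server
      path: /exports           # placeholder export path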
▌ Usage scenario

Pods on different nodes share a directory through the same NFS server.

▌ Usage example
Create a test-nfs.yaml file and paste the following information:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nfs
spec:
  selector:
    matchLabels:
      app: store
  replicas: 2
  template:
    metadata:
      labels:
        app: store
    spec:
      volumes:
      - name: data
        nfs:
          server: nfs.server.com
          path: /
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: alpine
        image: alpine
        command:
        - sleep
        - "3600"
        volumeMounts:
        - mountPath: /data
          name: data
Create a test deployment
$ kubectl apply -f test-nfs.yaml
deployment/test-nfs created

Check that the Pods are running

$ kubectl get po -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE
test-nfs-859ccfdf55-kkgxj   1/1     Running   0          1m    10.233.68.245   uat05
test-nfs-859ccfdf55-aewf8   1/1     Running   0          1m    10.233.67.209   uat06
Enter Pod for testing
# enter the pod on node uat05
$ kubectl exec -it test-nfs-859ccfdf55-kkgxj sh
# create the file
$ echo "uat05" > /data/uat05
# exit the pod on node uat05
$ exit
# enter the pod on node uat06
$ kubectl exec -it test-nfs-859ccfdf55-aewf8 sh
# view the file contents
$ cat /data/uat05
uat05
Clean up the environment
$ kubectl delete deployment test-nfs

PersistentVolumeClaim
In all the examples above, we mounted storage directly into the Pod. So how does Kubernetes manage these storage resources? That is what PersistentVolumes and PersistentVolumeClaims provide.
● The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided. To do this, two API resources are introduced: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of storage provisioned by an administrator and is part of the cluster. Just as a node is a cluster resource, a PV is a cluster resource. PVs are volume plugins like Volumes, but they have a life cycle independent of any Pod that uses them. This API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-vendor-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access mode (for example, mounted read/write once or read-only many times). While PersistentVolumeClaims let users consume abstract storage resources, users often need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators must be able to offer a variety of PersistentVolumes that differ in size and access mode, without exposing the details of how those volumes are implemented. For these needs there is the StorageClass resource.
● In practice, the person who creates a PV and the person who uses it are usually not the same. A typical scenario: the administrator creates a pool of PVs; the developer defines the storage size and access mode the Pod needs in the Pod and PVC; the PVC then automatically matches the most suitable PV from the pool for the Pod to use.
▌ Usage example

Create a PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.0
  nfs:
    path: /tmp
    server: 172.17.0.2
Create PersistentVolumeClaim
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  volumeName: mypv
Create a Pod that binds the PVC

kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
Check the Pod's status to verify the binding result
$ kubectl get po -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE
mypod   1/1     Running   0          1m    10.233.68.249   uat05

$ kubectl exec -it mypod sh
$ ls /var/www/html
Clean up the environment
$ kubectl delete pv mypv
$ kubectl delete pvc myclaim
$ kubectl delete po mypod
We used a ConfigMap to configure the Redis cache, so that even if the redis container crashes and restarts, the configuration in the ConfigMap still takes effect. We used emptyDir to share a directory between multiple containers in the same Pod; in practice, developers often use initContainers to preprocess files and pass them to the main containers through an emptyDir, as sketched below. And we used hostPath to access host resources; when network I/O cannot meet an application's file read/write requirements, you can pin the application to a single node and use hostPath to get the required file I/O speed.
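A minimal sketch of that initContainers pattern, with illustrative names: the init container writes a file into an emptyDir, and the main container reads it after startup.

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: prepare-config
    image: alpine
    command: ["/bin/sh", "-c", "echo 'generated at startup' > /work/app.conf"]
    volumeMounts:
    - mountPath: /work
      name: workdir
  containers:
  - name: app
    image: alpine
    command: ["/bin/sh", "-c", "cat /work/app.conf && sleep 3600"]
    volumeMounts:
    - mountPath: /work
      name: workdir
  volumes:
  - name: workdir
    emptyDir: {}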
The NFS and PersistentVolumeClaim examples are essentially both test containers mounting a shared directory exported by an NFS server. Such resources are generally only in the hands of administrators, so it is inconvenient for developers to obtain them on their own; dynamic storage classes (StorageClass) solve this kind of problem.
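A sketch of what dynamic provisioning looks like; the provisioner value is a placeholder, since the real value depends on the storage backend installed in your cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: example.com/external-nfs   # placeholder; depends on your backend
reclaimPolicy: Delete
---
# A PVC that requests a volume from that class; a PV is created on demand.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 5Gi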
After reading the above, do you have a better understanding of the common storage types in Kubernetes?