
Example Analysis of k8s Storage Volumes in Docker


This article analyzes Kubernetes (k8s) storage volumes through examples. The editor finds it very practical and shares it with you as a reference; follow along for a look.

Because a pod has a life cycle, the data inside it is gone as soon as the pod is deleted. So we need persistent data storage.

In k8s, a storage volume belongs to the pod, not to an individual container. This means containers in the same pod can share a storage volume.

The storage volume can be a directory on the host or an external device mounted on the host.

Storage Volume Types

emptyDir storage volume: the volume is deleted as soon as the pod is deleted. Generally used as temporary space or a cache (see the sketch below this list).

hostPath storage volume: a directory on the host is used as the storage volume; this does not truly achieve data persistence.

SAN (iSCSI) or NAS (NFS, CIFS): network storage devices.

Distributed storage: ceph, glusterfs, cephfs, rbd.

Cloud storage (Amazon EBS, Azure Disk, Aliyun): typically used when the k8s cluster itself is deployed in the cloud.

Key data must be backed up off-site; otherwise, once the data is deleted, no number of replicas will help.
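For reference, here is a minimal sketch of the emptyDir type mentioned above, with two containers in one pod sharing the same volume. The pod name, container names, and the busybox image are illustrative, not from the original:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo            # hypothetical name, for illustration only
spec:
  containers:
  - name: producer                   # writes into the shared volume
    image: busybox
    command: ["/bin/sh", "-c", "echo hello > /cache/index.html; sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  - name: consumer                   # sees the same files at its own mount path
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                     # created with the pod, deleted with the pod

Once the pod is deleted, the emptyDir and everything in it are gone, which is why it is only suitable for scratch space or caches.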

[root@master ingress]# kubectl explain pods.spec.volumes

hostPath storage volume

Function: uses a host directory as the storage volume; this does not truly achieve data persistence.

[root@master ~]# kubectl explain pods.spec.volumes.hostPath.type
KIND:     Pod
VERSION:  v1
FIELD:    type
DESCRIPTION:
     Type for HostPath Volume Defaults to "" More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

View help: https://kubernetes.io/docs/concepts/storage/volumes#hostpath

Type description of hostPath.type:

DirectoryOrCreate: the path to mount is a directory on the host; if it does not exist, it will be created.

Directory: the directory must already exist on the host; if it does not, an error is reported.

FileOrCreate: a file is mounted; if it does not exist, an empty file is created. Files can also be mounted as storage.

File: the file to mount must already exist; otherwise an error is reported.

Socket: indicates that it must be a file of type Socket.

CharDevice: represents a device file of a character type.

BlockDevice: represents a device file of block type.

Example:

[root@master volumes]# cat pod-hostpath-vol.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html                          # the name of the storage volume is html
      mountPath: /usr/share/nginx/html/   # mount path
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: DirectoryOrCreate
[root@master volumes]# kubectl apply -f pod-hostpath-vol.yaml
pod/pod-vol-hostpath created

Then on the node1 node, you can see that the / data/pod/volume1 directory has been created.

[root@master volumes]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP             NODE
client             0/1     Error     0          15d   10.244.2.4     node2
pod-vol-hostpath   1/1     Running   0          4m    10.244.1.105   node1

When the node1 node goes down, the pod is rescheduled to node2 and uses the /data/pod/volume1 directory on node2. This is a problem: the directory on node2 does not contain the data from the directory on node1, so the data becomes inconsistent.

The solution to this problem is to use shared storage such as NFS, so that both nodes share one backing store.

Using NFS for shared storage

For convenience, I use the master node as the NFS server.

[root@master ~]# yum -y install nfs-utils
[root@master ~]# mkdir /data/volumes
[root@master ~]# cat /etc/exports
# no_root_squash: if a user logging in to the NFS host to use the shared directory is root, they keep root permission on the shared directory. This is "extremely unsafe" and not recommended!
# root_squash: if the user logging in to the NFS host is root, their permissions are squashed to those of an anonymous user; usually the UID and GID become those of the nobody system account.
/data/volumes 172.16.0.0/16(rw,no_root_squash)
[root@master ~]# systemctl start nfs

Install the nfs-utils package on node1 and node2 as well:

[root@node1 ~]# yum -y install nfs-utils

Mount on node1 and node2:

[root@node1]# mount -t nfs 172.16.1.100:/data/volumes /mnt

On master:

[root@master ~]# kubectl explain pods.spec.volumes.nfs
[root@master volumes]# cat pod-vol-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html                          # the name of the storage volume is html
      mountPath: /usr/share/nginx/html/   # mount path inside the myapp container
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: 172.16.1.100                # nfs server ip
[root@master volumes]# kubectl apply -f pod-vol-nfs.yaml
[root@master volumes]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE
pod-vol-nfs   1/1     Running   0          1m    10.244.1.106   node1
[root@master volumes]# cat /data/volumes/index.html
hello world
[root@master volumes]# curl 10.244.1.106    # container ip
hello world

As you can see, the container uses the shared storage provided by NFS.

However, NFS itself has no redundancy, so if the NFS server goes down, the data is lost. Therefore, we generally use distributed storage such as glusterfs or cephfs.

PVC and PV

Users only need to mount a pvc into the container, without caring which technology implements the storage volume underneath. The relationship between pvc and pv is similar to that between pod and node: the former consumes the resources of the latter. A pvc can request a storage resource of a specified size from a pv and set the access mode.

When defining a pod, we only need to state how large a storage volume we want. A pod can only bind a pvc in its own namespace, the pvc must establish a binding relationship with a pv, and the pv is real space on some storage device.

[root@master volumes]# kubectl explain pods.spec.volumes.persistentVolumeClaim
[root@master volumes]# kubectl explain pvc

There is a one-to-one correspondence between pvc and pv: once a pv is bound by a pvc, that pv cannot be bound by another pvc.

A pvc can be used by multiple pods.

Create the following directories on the storage machine (here I use the master node for storage; in production a separate machine can be used):

[root@master volumes]# mkdir v{1,2,3,4,5}
[root@master volumes]# cat /etc/exports
# no_root_squash: if a user logging in to the NFS host to use the shared directory is root, they keep root permission on the shared directory. This is "extremely unsafe" and not recommended!
# root_squash: if the user logging in to the NFS host is root, their permissions are squashed to those of an anonymous user; usually the UID and GID become those of the nobody system account.
/data/volumes/v1 172.16.0.0/16(rw,no_root_squash)
/data/volumes/v2 172.16.0.0/16(rw,no_root_squash)
/data/volumes/v3 172.16.0.0/16(rw,no_root_squash)
/data/volumes/v4 172.16.0.0/16(rw,no_root_squash)
/data/volumes/v5 172.16.0.0/16(rw,no_root_squash)
[root@master volumes]# exportfs -arv    # no need to restart the nfs service; the configuration file takes effect
exporting 172.16.0.0/16:/data/volumes/v5
exporting 172.16.0.0/16:/data/volumes/v4
exporting 172.16.0.0/16:/data/volumes/v3
exporting 172.16.0.0/16:/data/volumes/v2
exporting 172.16.0.0/16:/data/volumes/v1
[root@master volumes]# showmount -e
Export list for master:
/data/volumes/v5 172.16.0.0/16
/data/volumes/v4 172.16.0.0/16
/data/volumes/v3 172.16.0.0/16
/data/volumes/v2 172.16.0.0/16
/data/volumes/v1 172.16.0.0/16
[root@master volumes]# kubectl explain pv.spec
[root@master volumes]# kubectl explain pv.spec.nfs
[root@master ~]# kubectl explain pv.spec.accessModes
   accessModes
     AccessModes contains all ways the volume can be mounted. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes

Visit https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes for help.

The accessModes are:

ReadWriteOnce: mounted read-write by a single node; abbreviated RWO

ReadOnlyMany: mounted read-only by many nodes; abbreviated ROX

ReadWriteMany: mounted read-write by many nodes; abbreviated RWX

Different types of storage volumes support different accessModes; for example, NFS supports all three, while hostPath supports only ReadWriteOnce.

[root@master volumes]# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001      # note: do not add a namespace when defining a pv; a pv belongs to the entire cluster, not to a namespace, whereas a pvc belongs to a namespace
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: 172.16.1.100
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:        # allocate disk space
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: 172.16.1.100
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: 172.16.1.100
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: 172.16.1.100
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: 172.16.1.100
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 1Gi
[root@master volumes]# kubectl apply -f pv-demo.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                   2m
pv002   2Gi        RWO            Retain           Available                                   2m
pv003   1Gi        RWO,RWX        Retain           Available                                   2m
pv004   1Gi        RWO,RWX        Retain           Available                                   2m
pv005   1Gi        RWO,RWX        Retain           Available                                   2m

Reclaim policy: if a pvc that has stored data in a pv is deleted, what happens to the data in the pv? There are several policies (a sketch of how to set the policy follows this list):

Retain: the pvc is deleted, but the data in the pv is left untouched and is still there.

Recycle: if pvc is deleted, then delete the data in pv as well.

Delete: if you delete pvc, then delete pv as well.
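As a minimal sketch (the pv name here is made up for illustration), the policy is set through the spec.persistentVolumeReclaimPolicy field of a pv; the pvs created above default to Retain:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-reclaim-demo                     # hypothetical name
spec:
  persistentVolumeReclaimPolicy: Recycle    # Retain | Recycle | Delete
  nfs:
    path: /data/volumes/v1
    server: 172.16.1.100
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 1Gi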

Next, let's create the manifest file for the pvc.

[root@master ~]# kubectl explain pvc.spec
[root@master ~]# kubectl explain pods.spec.volumes.persistentVolumeClaim
[root@master volumes]# cat pod-vol-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim      # abbreviated as pvc
metadata:
  name: mypvc
  namespace: default             # the pvc and the pod are in the same namespace
spec:
  accessModes: ["ReadWriteMany"]   # must be a subset of the pv's access modes
  resources:
    requests:
      storage: 1Gi               # means: I want a pvc of 1Gi of space
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html                          # the name of the storage volume is html
      mountPath: /usr/share/nginx/html/   # mount path
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc           # indicates which pvc I want to use
[root@master volumes]# kubectl apply -f pod-vol-pvc.yaml
persistentvolumeclaim/mypvc created
pod/pod-vol-pvc created
[root@master volumes]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                           7h
pv002   2Gi        RWO            Retain           Available                                           7h
pv003   1Gi        RWO,RWX        Retain           Available                                           7h
pv004   1Gi        RWO,RWX        Retain           Bound       default/mypvc                           7h
pv005   1Gi        RWO,RWX        Retain           Available                                           7h

You can see above that pv004 is bound by the mypvc of the default namespace.

[root@master volumes]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv004    1Gi        RWO,RWX                       33m
[root@master volumes]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
client        0/1     Error     0          16d
pod-vol-pvc   1/1     Running   0          35m

In production, a pv does not belong to any node; it is independent of the nodes, so even if a node breaks, the data in the pv is still there. A pod, by contrast, belongs to a node.

Since k8s 1.10, a pv that is still in use can no longer be deleted out from under its pvc, which makes manual pv management safer.

StorageClass (storage class)

By providing different storage classes, Kubernetes cluster administrators can meet the storage needs of users with different quality-of-service levels, backup policies, and other policies. Dynamic storage volume provisioning is implemented using StorageClass, which allows storage volumes to be created on demand. Without dynamic provisioning, cluster administrators have to create new storage volumes manually. With dynamic provisioning, Kubernetes can automatically create the storage its users need.

The backend of a StorageClass can be a storage cluster such as glusterfs or cephfs; a rough sketch follows.
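Here is a rough sketch of what a StorageClass and a pvc that uses it could look like. The class name, pvc name, and heketi REST URL below are assumptions for illustration, not taken from this cluster:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-sc                    # hypothetical name
provisioner: kubernetes.io/glusterfs    # in-tree glusterfs provisioner
parameters:
  resturl: "http://172.16.1.101:8081"   # heketi REST endpoint; an assumption
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-dynamic                   # hypothetical name
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: glusterfs-sc        # ask this class to provision a pv on demand
  resources:
    requests:
      storage: 1Gi

When such a pvc is created, the provisioner creates a matching pv automatically, instead of the administrator pre-creating pv001 through pv005 by hand as above.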

Configmap

Configmap and secret are two special storage volumes. They do not provide storage space for the pod; rather, they give administrators and users a way to inject configuration information from outside into the pod.

Configmap: the configuration file is kept in a configuration center, and multiple pods read their configuration from it. However, the configuration in a configmap is stored in clear text, so it is not secure.

Secret: the function is the same as configmap, except that the configuration file stored in the configuration center is not in clear text.

Configmap and secret are also specific to a namespace.

[root@master ~]# kubectl explain configmap
[root@master ~]# kubectl explain cm     # abbreviation
[root@master ~]# kubectl create configmap --help

Simply, we can use the command line to create a configmap.

[root@master]# kubectl create configmap nginx-config --from-literal=nginx_port=80 --from-literal=server_name=myapp.zhixin.com
configmap/nginx-config created
[root@master ~]# kubectl get cm
NAME           DATA   AGE
nginx-config   2      3m
[root@master ~]# kubectl describe cm nginx-config
Name:         nginx-config
Namespace:    default
Labels:
Annotations:

Data
====
nginx_port:
----
80
server_name:
----
myapp.zhixin.com

Let's create the configmap by configuring the manifest:

[root@master configmap]# cat www.conf
server {
    server_name myapp.zhixin.com;
    listen 80;
    root /data/web/html;
}
[root@master configmap]# kubectl create configmap nginx-www --from-file=www.conf
configmap/nginx-www created
[root@master configmap]# kubectl get cm
NAME           DATA   AGE
nginx-config   2      3m
nginx-www      1      7s
[root@master configmap]# kubectl describe cm nginx-www
Name:         nginx-www
Namespace:    default
Labels:
Annotations:

Data
====
www.conf:
----
server {
    server_name myapp.zhixin.com;
    listen 80;
    root /data/web/html;
}

The configmap we created can be injected into a Pod via environment variables or by mounting it as a volume.

First, let's inject the configmap into a pod via env.

[root@master configmap]# cat pod-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-1
  namespace: default
  labels:            # kv format; you can also use curly braces to express the hierarchy
    app: myapp
    tier: frontend   # defines the tier this pod belongs to
  annotations:
    chenzx.com/created-by: "cluster-admin"   # this is an annotation key-value pair
spec:
  containers:        # the leading - indicates a list; square brackets also work
  - name: myapp
    image: tomcat
    ports:
    - name: http
      containerPort: 80
    env:             # this is a container attribute
    - name: NGINX_SERVER_PORT
      valueFrom:             # see kubectl explain pods.spec.containers.env.valueFrom
        configMapKeyRef:     # indicates that we reference a configmap to get the data
          name: nginx-config   # the configmap name, as shown by kubectl get cm
          key: nginx_port      # the key, as shown by kubectl describe cm nginx-config
    - name: NGINX_SERVER_NAME  # the second environment variable
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: server_name
[root@master configmap]# kubectl apply -f pod-configmap.yaml
pod/pod-cm-1 created

In this way, we have created a pod named pod-cm-1 whose configuration comes from the configmap.

[root@master configmap]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
pod-cm-1   1/1     Running   0          15m
[root@master configmap]# kubectl exec -it pod-cm-1 -- /bin/sh
# printenv
NGINX_SERVER_PORT=80
NGINX_SERVER_NAME=myapp.zhixin.com
[root@master configmap]# kubectl edit cm nginx-config
configmap/nginx-config edited

A configmap edited with kubectl edit does not take effect immediately in the Pod when injected via env; you need to restart the pod for the change to take effect.

[root@master configmap]# kubectl delete -f pod-configmap.yaml
pod "pod-cm-1" deleted

Next, let's inject the configmap into a pod by mounting it as a storage volume.

[root@master configmap]# cat pod-configmap2.ymal
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    chenzx.com/created-by: "cluster-admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/conf.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-config
[root@master configmap]# kubectl apply -f pod-configmap2.ymal
pod/pod-cm-2 created
[root@master configmap]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
pod-cm-2   1/1     Running   0          1m
[root@master configmap]# kubectl exec -it pod-cm-2 -- /bin/sh
/ # cd /etc/nginx/conf.d/
/etc/nginx/conf.d # ls
nginx_port   server_name
/etc/nginx/conf.d # ls -l
total 0
lrwxrwxrwx   1 root  root  17 Sep 27 05:07 nginx_port -> ..data/nginx_port
lrwxrwxrwx   1 root  root  18 Sep 27 05:07 server_name -> ..data/server_name
/etc/nginx/conf.d # cat nginx_port
8080

Let's inject the www.conf file we created earlier into pod:

[root@master configmap]# cat pod-configmap3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-3
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    chenzx.com/created-by: "cluster-admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/conf.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-www
[root@master configmap]# kubectl apply -f pod-configmap3.yaml
pod/pod-cm-3 created
[root@master configmap]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
client     0/1     Error     0          16d
pod-cm-3   1/1     Running   0          1m
[root@master configmap]# kubectl exec -it pod-cm-3 -- /bin/sh
/ # cd /etc/nginx/conf.d/
/etc/nginx/conf.d # ls
www.conf
/etc/nginx/conf.d # cat www.conf
server {
    server_name myapp.zhixin.com;
    listen 80;
    root /data/web/html;
}

From the above example, you can see that we have injected the contents of www.conf into pod myapp.

[root@master configmap]# kubectl edit cm nginx-www

Change the port, then go back into the pod; after waiting a little while, you will see that the modification has taken effect in the pod.

[root@master configmap]# kubectl exec -it pod-cm-3 -- /bin/sh
/ # cd /etc/nginx/conf.d/
/etc/nginx/conf.d # cat www.conf
server {
    server_name myapp.zhixin.com;
    listen 8081;
    root /data/web/html;
}
/etc/nginx/conf.d # /etc/init.d/nginx reload    # reload nginx to make port 8081 take effect

What if we expect only some, not all, to be injected?

[root@master configmap]# kubectl explain pods.spec.volumes.configMap.items
[root@master configmap]# kubectl create secret generic --help

Injecting only part of the keys via items is not demonstrated here; a minimal sketch follows, and the rest is left to the reader.
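As a sketch of such an items stanza (the target file name port.conf is hypothetical), the volumes section of pod-configmap2.ymal could look roughly like this:

  volumes:
  - name: nginxconf
    configMap:
      name: nginx-config
      items:                    # inject only the listed keys
      - key: nginx_port         # the key in the configmap
        path: port.conf         # hypothetical file name it gets inside the mount

Only nginx_port would then appear under /etc/nginx/conf.d/, as port.conf, instead of all keys.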

Secret

The function is the same as configmap, except that the configuration files stored in the secret configuration center are not in clear text.

[root@master configmap]# kubectl create secret --help
# generic:          the type used to save passwords
# tls:              the type used to save certificates
# docker-registry:  the type used to store docker authentication information, for example when pulling images from a private docker repository. Note: in k8s it is kubelet that pulls the image.
[root@master configmap]# kubectl explain pods.spec.imagePullSecrets

If you pull images from a private repository, use imagePullSecrets to store the login credentials; a sketch follows.
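As a rough sketch (the registry address, user name, password, and names below are placeholders, not from this cluster), creating such a secret and referencing it from a pod could look like this:

[root@master configmap]# kubectl create secret docker-registry myregistrykey \
    --docker-server=registry.example.com \
    --docker-username=admin \
    --docker-password=123456

apiVersion: v1
kind: Pod
metadata:
  name: pod-private-image                    # hypothetical name
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:v1     # hypothetical private image
  imagePullSecrets:
  - name: myregistrykey                      # the secret created above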

Example:

[root@master configmap]# kubectl create secret generic mysql-root-password --from-literal=password=123456
secret/mysql-root-password created
[root@master configmap]# kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-5r85r     kubernetes.io/service-account-token   3      19d
mysql-root-password     Opaque                                1      40s
tomcat-ingress-secret   kubernetes.io/tls                     2      2d
[root@master configmap]# kubectl describe secret mysql-root-password
Name:         mysql-root-password
Namespace:    default
Labels:
Annotations:

Type:  Opaque

Data
====
password:  6 bytes

Looking at the content of password, you can see it is base64-encoded.

[root@master configmap]# kubectl get secret mysql-root-password -o yaml
apiVersion: v1
data:
  password: MTIzNDU2
kind: Secret
metadata:
  creationTimestamp: 2018-09-27T06:01:24Z
  name: mysql-root-password
  namespace: default
  resourceVersion: "2482795"
  selfLink: /api/v1/namespaces/default/secrets/mysql-root-password
  uid: c3d3e8ec-c21a-11e8-bb35-005056a24ecb
type: Opaque

You can decode it back to plaintext with the base64 command:

[root@master configmap]# echo MTIzNDU2 | base64 -d
123456

As you can see, a secret guards against the casual observer, not a determined one: it is pseudo-encryption.
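Anyone can produce the "encrypted" value with the same base64 command (-n keeps the trailing newline out of the encoding):

[root@master configmap]# echo -n 123456 | base64
MTIzNDU2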

Let's inject secret into pod through env.

[root@master configmap]# cat pod-secret-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    chenzx.com/created-by: "cluster-admin"
spec:
  containers:
  - name: myapp
    image: tomcat
    ports:
    - name: http
      containerPort: 80
    env:                       # this is a container attribute
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:               # see kubectl explain pods.spec.containers.env.valueFrom
        secretKeyRef:          # indicates that we reference a secret to get the data
          name: mysql-root-password   # the secret name, as shown by kubectl get secret
          key: password               # the key, as shown by kubectl describe secret mysql-root-password
    - name: NGINX_SERVER_NAME        # the second environment variable
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: server_name
[root@master configmap]# kubectl apply -f pod-secret-1.yaml
pod/pod-secret-1 created
[root@master configmap]# kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
pod-secret-1   1/1     Running   0          1m
[root@master configmap]# kubectl exec -it pod-secret-1 -- /bin/sh
# printenv
MYSQL_ROOT_PASSWORD=123456

Thank you for reading! This concludes "Example Analysis of k8s Storage Volumes in Docker". I hope the above content has been of some help and that you have learned something new. If you think the article is good, share it for more people to see!
