Storage volumes for Kubernetes


Preface

Kubernetes, K8s for short (the 8 stands in for the eight letters "ubernete"), is an open-source system for managing containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

K8s supports a similar storage volume feature, but its volumes are bound to the pod resource rather than to a container. Simply put, a storage volume is a shared directory defined at the pod level that every container in the pod can mount. It can be associated with storage space on an external storage device and is therefore independent of the containers' own filesystems. Whether the data is persistent depends on whether the volume type itself supports a persistence mechanism.

1. emptyDir storage volume

apiVersion: v1
kind: Pod
metadata:
  name: cunchujuan
spec:
  containers:
  - name: myapp              # first container, serves the contents of the index.html file
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    volumeMounts:            # mount the storage volume
    - name: html             # use the volume named "html" defined below
      mountPath: /usr/share/nginx/html/   # mount path inside the container
  - name: busybox            # second container, generates the index.html content
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ['/bin/sh', '-c', 'while true; do echo $(date) >> /data/index.html; sleep 2; done']
    # this command keeps appending the current time to index.html on the volume
  volumes:                   # define the storage volume
  - name: html               # volume name
    emptyDir: {}             # volume type

Above, we define two containers; the second container writes the date into index.html on the storage volume. Because both containers mount the same volume, index.html is shared between them, and with curl you can see index.html growing continuously.
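To see the sharing in action (a sketch; the manifest file name and pod IP are placeholders):

kubectl apply -f emptydir.yaml     # the manifest above, saved under an illustrative name
kubectl get pods -o wide           # note the pod IP of cunchujuan
curl <pod-ip>                      # repeated requests show an ever-growing list of timestamps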

2. hostPath storage volume

apiVersion: v1
kind: Pod
metadata:
  name: cs-hostpath
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:                    # volume type is hostPath
      path: /data/hostpath       # path on the actual node
      type: DirectoryOrCreate

First, check which node the pod was scheduled to. Here it was scheduled to node cs27.
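A sketch:

kubectl get pods -o wide      # the NODE column shows which node the pod was scheduled to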

mkdir /data/hostpath/ -pv    # create the hostPath volume directory on node cs27

echo "hostpath storage volume test" > /data/hostpath/index.html    # generate a home page file in the volume

Accessing the pod with curl now returns the file content generated above.

Note that if the container modifies the data, the change is synchronized back to the hostPath volume, just like an ordinary mount.
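A quick way to see this (a sketch; the pod name and node come from the example above):

kubectl exec -it cs-hostpath -- sh -c 'echo modified > /usr/share/nginx/html/index.html'
# then, on node cs27:
cat /data/hostpath/index.html      # shows "modified"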

3. NFS shared storage volume

Find another host to act as nfs server

mkdir /data    # create the directory to be exported over nfs

Echo "nfs Test" > index.html # create test html file

yum install -y nfs-utils    # install the nfs software

vim /etc/exports    # edit the nfs configuration file

/data/ 192.168.0.0/24(rw,no_root_squash)

# exported path followed by the network segment that is allowed to mount it
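The original does not show it, but the NFS service also has to be running and the exports (re)loaded; a sketch for a CentOS-style host:

systemctl enable --now nfs-server     # start the NFS server and enable it at boot
exportfs -arv                         # re-export all entries in /etc/exports and print what is exported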

nfs-utils must also be installed on every node (yum install -y nfs-utils), otherwise the nfs volume cannot be mounted.

# run showmount -e 50.1.1.111 on each node to check that it has permission to mount the export
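The nfs.yaml applied in the next step is not shown in the original; a minimal sketch, assuming an illustrative pod name and that the NFS server 50.1.1.111 exports /data as configured above:

apiVersion: v1
kind: Pod
metadata:
  name: cs-nfs                # pod name is illustrative
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:                      # volume type is nfs
      path: /data/            # the exported directory on the NFS server
      server: 50.1.1.111      # NFS server address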

kubectl apply -f nfs.yaml    # load the pod resource

Check the pod IP and curl it to confirm that it serves the test html file created on the NFS server.

4. NFS with PV and PVC

4.1 How PV and PVC are defined

[root@k8s-master ~]# kubectl explain pv        # view how a PV is defined
FIELDS:
  apiVersion
  kind
  metadata
  spec

[root@k8s-master ~]# kubectl explain pv.spec   # view the fields of the PV spec
spec:
  nfs             (storage type)
    path          (path of the volume to mount)
    server        (server address)
  accessModes     (access modes; given as a list, so several modes can be defined)
    ReadWriteOnce (RWO)  single-node read-write
    ReadOnlyMany  (ROX)  multi-node read-only
    ReadWriteMany (RWX)  multi-node read-write
  capacity        (size of the PV)
    storage       (the size to allocate)

[root@k8s-master volumes]# kubectl explain pvc  # view how a PVC is defined
KIND: PersistentVolumeClaim
VERSION: v1
FIELDS:
  apiVersion
  kind
  metadata
  spec

[root@k8s-master volumes]# kubectl explain pvc.spec
spec:
  accessModes     (access modes; must be a subset of the PV's access modes)
  resources       (size of the requested resource)
    requests:
      storage:

4.2 Configure nfs storage

mkdir v{1,2,3}    # create the v1, v2 and v3 directories (under /data)

vim /etc/exports

/data/v1 192.168.0.0/24(rw,no_root_squash)
/data/v2 192.168.0.0/24(rw,no_root_squash)
/data/v3 192.168.0.0/24(rw,no_root_squash)

4.3 Define 3 PVs

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/v1
    server: 50.1.1.111
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/v2
    server: 50.1.1.111
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/v3
    server: 50.1.1.111
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 5Gi

4.4 Create the PVC and a pod that uses it

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc                     # name used by the pod below to refer to this claim
spec:
  accessModes: ["ReadWriteMany"]  # must be a subset of the access modes of the PV it binds to
  resources:
    requests:
      storage: 3Gi                # request 3Gi; only a PV of at least 3Gi with a matching mode can bind
---
apiVersion: v1
kind: Pod
metadata:
  name: cs-hostpath
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:        # volume type is a PVC
      claimName: mypvc            # use the PVC named "mypvc"
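After applying the manifests, the binding can be checked like this (a sketch; the file name is illustrative). Because mypvc requests 3Gi with ReadWriteMany, only pv003 (5Gi) is large enough, so that is the PV it should bind to:

kubectl apply -f pv-pvc.yaml      # the combined manifest above
kubectl get pv                    # pv001 and pv002 stay Available, pv003 shows STATUS Bound and CLAIM mypvc
kubectl get pvc                   # mypvc shows STATUS Bound and VOLUME pv003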

Configuring container applications: Secret and ConfigMap

Secret: used to pass sensitive information to a Pod, such as passwords, private keys and certificate files. Keeping such information inside the container makes it easy to leak; a Secret lets the user store it in a Secret object and mount it into the Pod, decoupling the sensitive data from the application.

ConfigMap: mainly used to inject non-sensitive data into a Pod. The user stores the data in a ConfigMap object, and the Pod references it through a configMap volume, providing centralized definition and management of container configuration files.

Create via --from-literal

kubectl create secret generic mysecret --from-literal=username=admin --from-literal=password=123456
# create a Secret named mysecret

Create via --from-file

Each file content corresponds to an information entry.

[root@cs25 ~]# echo "admin" > username
[root@cs25 ~]# echo "123456" > password
kubectl create secret generic mysecret2 --from-file=username --from-file=password

Create via --from-env-file

Each line of Key=Value in the file env.txt corresponds to an information entry.

[root@cs25 ~]# cat << EOF > env.txt
> username=admin
> password=123456
> EOF
kubectl create secret generic mysecret3 --from-env-file=env.txt

Create via a YAML configuration file:

First use the base64 command to encode confidential data such as the password.
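For example, the values used in secret.yaml below can be produced like this (echo appends a newline, which is why the encodings end in K and Cg==):

echo admin | base64      # YWRtaW4K
echo 123456 | base64     # MTIzNDU2Cg==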

vim secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mysecretyaml
data:
  username: YWRtaW4K        # base64-encoded values
  password: MTIzNDU2Cg==

View the Secret and decode the values back:
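The exact commands are not shown in the original; a minimal sketch:

kubectl apply -f secret.yaml                   # create the Secret
kubectl get secret mysecretyaml -o yaml        # view the stored (base64-encoded) data
echo MTIzNDU2Cg== | base64 --decode            # decodes back to 123456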

Use secret

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret
spec:
  containers:
  - name: pod-secret
    image: nginx
    volumeMounts:
    - name: foo              # mount the volume named "foo"
      mountPath: "/etc/foo"  # mount path inside the pod
  volumes:
  - name: foo                # define the volume; this name is what the container references
    secret:                  # volume type
      secretName: mysecret   # which Secret to use

After exec-ing into the container and changing to /etc/foo, you can see the keys and values of the mysecret we created.
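A quick check (a sketch):

kubectl exec -it pod-secret -- ls /etc/foo              # username  password
kubectl exec -it pod-secret -- cat /etc/foo/username    # admin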

Redirect data

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret
spec:
  containers:
  - name: pod-secret
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      items:                  # customize the storage path and file name per key
      - key: username
        path: 233/name        # redirected path and file name; note the path must be relative
      - key: password
        path: 233/pass

At this point, the data is redirected to the /etc/foo/233 directory.

In this way, Secret also supports dynamic updates: after Secret updates, the data in the container is also updated.

vim secret.yaml

kubectl apply -f secret.yaml

Log in and check that the value of name has changed.
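A sketch of this check, assuming the username was re-encoded to a new value (e.g. echo admin233 | base64) in secret.yaml before applying:

kubectl exec -it pod-secret -- cat /etc/foo/233/name   # after a short kubelet sync delay, shows the new value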

Passing keys via environment variables

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret
spec:
  containers:
  - name: pod-secret
    image: nginx
    env:
    - name: SECRET_USERNAME        # environment variable passed into the container
      valueFrom:
        secretKeyRef:
          name: mysecret           # which Secret to use
          key: username            # which key to use
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password

Exec into the container and echo the environment variables:
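A sketch:

kubectl exec -it pod-secret -- sh -c 'echo $SECRET_USERNAME $SECRET_PASSWORD'
# admin 123456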

The data of Secret is successfully read through the environment variables SECRET_USERNAME and SECRET_PASSWORD.

Note that reading a Secret through environment variables is convenient, but it does not pick up dynamic updates to the Secret.

Secret provides a Pod with sensitive data such as passwords, tokens and private keys, while ConfigMap is used for non-sensitive data such as application configuration information.

ConfigMap supports all of the creation methods shown above for Secret, so they are not repeated here.

Here is a ConfigMap example.

vim www.conf    # create an nginx configuration file

server {
    server_name www.233.com;
    listen 8860;
    root /data/web/html;
}

kubectl create configmap configmap-cs --from-file=www.conf

# create a configmap resource named "configmap-cs" whose content is the www.conf created above
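To confirm what was stored (a sketch):

kubectl get configmap configmap-cs -o yaml    # the data section contains the contents of www.conf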

vim nginx-configmap.yaml    # create the pod manifest

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret
spec:
  containers:
  - name: pod-secret
    image: nginx
    volumeMounts:
    - name: nginxconf              # mount the volume named "nginxconf"
      mountPath: /etc/nginx/conf.d/
  volumes:
  - name: nginxconf                # define the volume
    configMap:                     # volume type is configMap
      name: configmap-cs           # use the ConfigMap resource "configmap-cs"

kubectl apply -f nginx-configmap.yaml    # load and start the pod

Check the pod IP and access the port just defined (8860); a normal response indicates that the configuration file has taken effect.
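A sketch of the check (the pod IP will differ in your cluster):

kubectl get pods -o wide        # note the pod IP
curl <pod-ip>:8860              # any response on 8860 shows the config was loaded; the default nginx config only listens on 80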

Log in to the container to view the file.
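For example:

kubectl exec -it pod-secret -- cat /etc/nginx/conf.d/www.conf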

kubectl edit configmaps configmap-cs

# edit the resource in place; the editor works the same way as vim

Pod mount process

The Pod first mounts the volume named "nginxconf". The content of that volume comes from the ConfigMap resource "configmap-cs", which in turn was generated by reading the www.conf file.
