Kubernetes Advanced PersistentVolume static supply to realize NFS Network Storage

2025-01-16 Update From: SLTechnology News&Howtos > Servers


Network storage

NFS is a long-established technology, and single-server storage is still mainstream. Its main drawback is that there is no clustered version: clustering NFS is difficult and the file system itself cannot be clustered, so large-scale deployments should choose a distributed storage system instead. NFS is a network file storage server: after installing NFS you share a directory, other servers mount that directory locally, and files written to the local mount are synchronized to the remote server, providing shared storage. This is typically used to share data among, for example, multiple web servers. If you need those web servers to serve consistent data, you use this shared storage: mount the same NFS export under the site root on each web server and place the website program on the NFS server. Every web server then reads the same directory and sees the same data, so multiple nodes are guaranteed to serve a consistent program.

Use a separate server as the NFS server. First, set up the NFS server that will store our web root directory.

[root@nfs ~]# yum install nfs-utils -y

Export the directory so that other servers can mount it:

[root@nfs ~]# mkdir /opt/k8s
[root@nfs ~]# vim /etc/exports
/opt/k8s 192.168.30.0/24(rw,no_root_squash)

This grants the 192.168.30.0/24 network segment read-write access to the export.
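For reference, a few commonly used export option combinations (a sketch; the `/data/*` paths are illustrative, adjust the network and paths to your environment):

```
# /etc/exports — illustrative option combinations
/opt/k8s  192.168.30.0/24(rw,no_root_squash)    # read-write; remote root keeps root privileges
/data/ro  192.168.30.0/24(ro,sync)              # read-only; writes flushed synchronously on the server
/data/rw  192.168.30.0/24(rw,sync,root_squash)  # read-write; remote root mapped to an unprivileged user
```

`no_root_squash` is convenient for Kubernetes nodes that need to chown files inside the export, but it weakens security; `root_squash` is the safer default.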

[root@nfs ~]# systemctl start nfs

Pick a node to mount and test. Any server that wants to mount this shared directory must have the NFS client installed.

[root@k8s-node2 ~]# yum install nfs-utils -y
[root@k8s-node2 ~]# mount -t nfs 192.168.30.27:/opt/k8s /mnt
[root@k8s-node2 ~]# cd /mnt
[root@k8s-node2 mnt]# df -h
192.168.30.27:/opt/k8s   36G  5.8G   30G  17%  /mnt
[root@k8s-node2 mnt]# touch a.txt
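A manual `mount` like the one above does not survive a reboot. One way to make it persistent (a sketch, assuming the same server address and mount point as in this article) is an /etc/fstab entry:

```
# /etc/fstab — persistent NFS mount (illustrative)
192.168.30.27:/opt/k8s  /mnt  nfs  defaults,_netdev  0 0
```

The `_netdev` option tells the system to wait for the network before attempting the mount. Note that pods mounting the volume through Kubernetes, as shown later, do not need this: the kubelet performs the mount itself.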

Go to the server to check that the data has been shared.

[root@nfs ~]# cd /opt/k8s/
[root@nfs k8s]# ls
a.txt

Likewise, deleting a file on the NFS server removes it from the clients as well.

How do we use this with Kubernetes next?

We put all the web pages under this directory.

[root@nfs k8s]# mkdir wwwroot
[root@k8s-master demo]# vim nfs.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nfs
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 192.168.30.27
          path: /opt/k8s/wwwroot

[root@k8s-master demo]# kubectl create -f nfs.yaml

[root@k8s-master demo]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
mypod                               1/1     Running   0          6h7m
mypod2                              1/1     Running   0          6h
nginx-5ddcc6cb74-lplxl              1/1     Running   0          6h53m
nginx-deployment-744d977b46-8q97k   1/1     Running   0          48s
nginx-deployment-744d977b46-ftjfk   1/1     Running   0          48s
nginx-deployment-744d977b46-nksph   1/1     Running   0          48s
web-67fcf9bf8-mrlhd                 1/1     Running   0          103m

Enter the container and check the mount to make sure it is mounted.

[root@k8s-master demo]# kubectl exec -it nginx-deployment-744d977b46-8q97k bash
root@nginx-deployment-744d977b46-8q97k:/# df -h
Filesystem                       Size  Used  Avail  Use%  Mounted on
overlay                           17G  5.6G   12G   33%   /
tmpfs                             64M     0   64M    0%   /dev
tmpfs                            2.0G     0  2.0G    0%   /sys/fs/cgroup
/dev/mapper/centos-root           17G  5.6G   12G   33%   /etc/hosts
shm                               64M     0   64M    0%   /dev/shm
192.168.30.27:/opt/k8s/wwwroot    36G  5.8G   30G   17%   /usr/share/nginx/html
tmpfs                            2.0G   12K  2.0G    1%   /run/secrets/kubernetes.io/serviceaccount
tmpfs                            2.0G     0  2.0G    0%   /proc/acpi
tmpfs                            2.0G     0  2.0G    0%   /proc/scsi
tmpfs                            2.0G     0  2.0G    0%   /sys/firmware

Write data into the web directory inside the pod, then check that it also appears in the NFS server's directory.

root@nginx-deployment-744d977b46-8q97k:/# cd /usr/share/nginx/html/
root@nginx-deployment-744d977b46-8q97k:/usr/share/nginx/html# ls
root@nginx-deployment-744d977b46-8q97k:/usr/share/nginx/html# echo "hello world" > index.html
root@nginx-deployment-744d977b46-8q97k:/usr/share/nginx/html# cat index.html
hello world

Verify on the NFS server:

[root@nfs k8s]# cd wwwroot/
[root@nfs wwwroot]# ls
index.html
[root@nfs wwwroot]# cat index.html
hello world

K8s storage orchestration

The data persistence volume objects, PersistentVolume and PersistentVolumeClaim (PV/PVC for short), are mainly used for container storage orchestration.

PersistentVolume (PV): an abstraction over the creation and use of storage resources, so that storage is managed as a resource in the cluster.

PVs are the concern of operations staff and are used to manage external storage.

Static: PVs are created in advance. For example, an administrator creates a 100G PV and a 200G PV for consumers to use. A PVC then binds to a matching PV, which means the administrator must know how many PVs exist, how large they are, and what they are called; binding follows certain matching rules.

Dynamic: PVs are provisioned automatically on demand, so they do not need to be created in advance.
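As a contrast to the static approach used in this article, dynamic provisioning is driven by a StorageClass. A minimal sketch (the name `nfs-storage` and the provisioner `example.com/nfs` are placeholders; for NFS you would typically deploy an external provisioner such as nfs-subdir-external-provisioner, which supplies the real provisioner name):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage          # hypothetical name
provisioner: example.com/nfs # placeholder; supplied by an external NFS provisioner
reclaimPolicy: Retain
```

A PVC that sets `storageClassName: nfs-storage` would then have a PV created for it automatically instead of binding to a pre-created one.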

PersistentVolumeClaim (PVC): lets users request storage without caring about the underlying Volume implementation details.

A PVC only declares how much storage is needed. For example, a developer deploying a service that needs 10 gigabytes simply defines a PVC resource object requesting 10 gigabytes and does not worry about the rest.
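The 10-gigabyte example above would look like this as a PVC (a sketch; the claim name `app-data` is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data        # hypothetical name
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi     # the developer only states the size needed
```

Which PV (or StorageClass) satisfies the claim is decided by the cluster, not by the developer.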

PersistentVolume static supply

First create a container application

[root@k8s-master ~]# cd demo/
[root@k8s-master demo]# mkdir storage
[root@k8s-master demo]# cd storage/
[root@k8s-master storage]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    persistentVolumeClaim:
      claimName: my-pvc

The pod's volume references a PVC, so a PVC yaml is also required, and the claimName here must match the PVC's name. The two files are usually kept together.

[root@k8s-master storage]# vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Next, the operations staff create the PV in advance.

[root@k8s-master storage]# vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zhaocheng
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/k8s/zhaocheng
    server: 192.168.30.27

Create the PV (the export directory should be created on the NFS server in advance):

[root@k8s-master storage]# kubectl create -f pv.yaml
persistentvolume/zhaocheng created
[root@k8s-master storage]# kubectl get pv
zhaocheng   5Gi   RWX   Retain   Available   5s

Create another PV: make the directory on the NFS server in advance and use a different name.

[root@localhost ~]# cd /opt/k8s/
[root@localhost k8s]# mkdir zhaocheng
[root@k8s-master storage]# vim pv2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zhaochengcheng
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/k8s/zhaochengcheng
    server: 192.168.30.27
[root@k8s-master storage]# kubectl get pv
zhaocheng        5Gi    RWX   Retain   Available   13s
zhaochengcheng   10Gi   RWX   Retain   Available   4s

Now create the pod and the PVC, which I have written in the same file.

[root@k8s-master storage]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    persistentVolumeClaim:
      claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
[root@k8s-master storage]# kubectl create -f pod.yaml

The claim is now matched against our static PVs according to the pod's needs. We created a 5Gi PV and a 10Gi PV, and the claim binds to whichever fits its requested size.

[root@k8s-master storage]# kubectl get pod,pvc
pod/my-pod                 1/1   Running   0   13s
pod/nfs-744d977b46-dh9xj   1/1   Running   0   12m
pod/nfs-744d977b46-kcx6h   1/1   Running   0   12m
pod/nfs-744d977b46-wqhc6   1/1   Running   0   12m
persistentvolumeclaim/my-pvc   Bound   zhaocheng   5Gi   RWX   13s

Enter the container and check the storage mounted for it.

[root@k8s-master storage]# kubectl exec -it pod/my-pod bash
root@my-pod:/# df -Th
Filesystem                        Type     Size  Used  Avail  Use%  Mounted on
overlay                           overlay   17G  4.9G   13G   29%   /
tmpfs                             tmpfs     64M     0   64M    0%   /dev
tmpfs                             tmpfs    2.0G     0  2.0G    0%   /sys/fs/cgroup
/dev/mapper/centos-root           xfs       17G  4.9G   13G   29%   /etc/hosts
shm                               tmpfs     64M     0   64M    0%   /dev/shm
192.168.30.27:/opt/k8s/zhaocheng  nfs4      36G  5.8G   30G   17%   /usr/share/nginx/html
tmpfs                             tmpfs    2.0G   12K  2.0G    1%   /run/secrets/kubernetes.io/serviceaccount
tmpfs                             tmpfs    2.0G     0  2.0G    0%   /proc/acpi
tmpfs                             tmpfs    2.0G     0  2.0G    0%   /proc/scsi
tmpfs                             tmpfs    2.0G     0  2.0G    0%   /sys/firmware

Create a test web page:

root@my-pod:/# cd /usr/share/nginx/html/
root@my-pod:/usr/share/nginx/html# ls
root@my-pod:/usr/share/nginx/html# echo "5G ready" > index.html
root@my-pod:/usr/share/nginx/html# cat index.html
5G ready

Check our nfs server.

[root@localhost ~]# cd /opt/k8s/
[root@localhost k8s]# ls
wwwroot  zhaocheng  zhaochengcheng
[root@localhost k8s]# cd zhaocheng
[root@localhost zhaocheng]# cat index.html
5G ready

The claim is bound to our 5Gi PV.

[root@k8s-master storage]# kubectl get pv
zhaocheng        5Gi    RWX   Retain   Bound       default/my-pvc   8m52s
zhaochengcheng   10Gi   RWX   Retain   Available                    7m51s
