
How to apply GlusterFS in Kubernetes


This article explains how to apply GlusterFS in Kubernetes. Many people run into questions about this in day-to-day operation, so I have pulled the material together into a simple, easy-to-follow walkthrough. I hope it clears up your doubts; follow along and try it yourself!

Background Introduction

First of all, you need a working Kubernetes cluster as the experimental environment.

If you want to use GlusterFS as the storage backend in a production environment, this article will get you most of the way there.

[Glusterfs Experimental Environment Preparation]

In this case the GlusterFS nodes share the same host addresses as the k8s cluster; in an actual production deployment, run them on separate machines. Each of the three virtual machines has one new 10g hard disk.

host IP          hostname                    disk       mount point
192.168.56.11    linux-node1.example.com     /dev/sdb   /gluster_brick1
192.168.56.12    linux-node2.example.com     /dev/sdb   /gluster_brick1
192.168.56.13    linux-node3.example.com     /dev/sdb   /gluster_brick1

[Part 1: GlusterFS in Practice]

1. Disk type modification

In actual production the disks are fairly large, and disks over 2 TB cannot use an MBR partition table, so first convert the disks to GPT. Here is a script used in production.

#!/bin/bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
export PATH

function main(){
    i=3
    while [ $i -lt 10 ]
    do
        # map 3..9 to drive letters d..j
        j=`echo $i | awk '{printf "%c", 97+$1}'`
        # label each new disk (/dev/sdd .. /dev/sdj) with a GPT partition table
        parted -s /dev/sd$j mklabel gpt
        i=$((i+1))
    done
}
main
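The formatting step below works on /dev/sdb1, which assumes a partition already exists on the new disk. A minimal sketch for creating one (a single full-size partition is my assumption; the article jumps straight to /dev/sdb1):

# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart primary xfs 0% 100%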

2. Format and mount the bricks

So without further ado, we continue the experiment by formatting /dev/sdb1 as shown below. All three machines need the same steps. If you do not want to use labels here, use the UUID instead; mounting by device name is asking for trouble.

# mkfs.xfs -L brick1 -f -i size=512 /dev/sdb1
# mkdir /gluster_brick1
# echo "LABEL=brick1 /gluster_brick1 xfs defaults 0 0" >> /etc/fstab
# mount -a
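To confirm the brick is mounted before moving on, a quick check; the output will look roughly like this for the 10g disk in this lab:

# df -h /gluster_brick1
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        10G   33M   10G   1% /gluster_brick1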

3. Install glusterfs

Installing GlusterFS is simple and the commands are not complicated; see the official website for details. When I installed it years ago it was at version 3, and I was surprised to find it is now at version 5. Things change fast, so check the official site if you are interested. All three machines act as GlusterFS servers and need the following installed:

# yum install centos-release-gluster
# yum install glusterfs-server
(Note: a machine that is only a GlusterFS client needs just the glusterfs-client package.)
# systemctl enable glusterd
# systemctl start glusterd

Of course, you can also use the previously configured Salt to automate the installation.

# salt-ssh '*' cmd.run 'systemctl enable glusterd'
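The package installation can be pushed out the same way; a sketch, assuming the Salt roster from the earlier setup covers all three nodes:

# salt-ssh '*' cmd.run 'yum install -y centos-release-gluster'
# salt-ssh '*' cmd.run 'yum install -y glusterfs-server'
# salt-ssh '*' cmd.run 'systemctl start glusterd'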

4. Gluster Peer Management

Once glusterd is up and running, go to any one machine and add the other two to the trusted storage pool. Operating on linux-node1, execute the following two commands.

# gluster peer probe linux-node2
# gluster peer probe linux-node3

Verify that the peers were added successfully:

[root@linux-node1 ~]# gluster peer status
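A healthy pool reported from linux-node1 looks roughly like this (the UUIDs below are placeholders; yours will differ):

Number of Peers: 2

Hostname: linux-node2
Uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
State: Peer in Cluster (Connected)

Hostname: linux-node3
Uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
State: Peer in Cluster (Connected)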

5. Create volume devops

Note that production setups generally use three replicas. Here there are three bricks, one per machine, and the data stored on them is identical:

[root@linux-node1 ~]# gluster volume create devops replica 3 linux-node1:/gluster_brick1/b1 linux-node2:/gluster_brick1/b1 linux-node3:/gluster_brick1/b1
volume create: devops: success: please start the volume to access data
[root@linux-node1 ~]# gluster volume start devops
volume start: devops: success
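You can confirm the replica layout with gluster volume info; trimmed to the relevant fields, the output looks like this:

[root@linux-node1 ~]# gluster volume info devops
Volume Name: devops
Type: Replicate
Status: Started
Number of Bricks: 1 x 3 = 3
Brick1: linux-node1:/gluster_brick1/b1
Brick2: linux-node2:/gluster_brick1/b1
Brick3: linux-node3:/gluster_brick1/b1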

6. Test glusterfs storage

[root@linux-node1 ~]# mkdir /test
[root@linux-node1 ~]# mount -t glusterfs linux-node1:/devops /test
[root@linux-node1 ~]# for i in `seq -w 1 100`; do echo "test" >> /test/copy-test-$i; done
[root@linux-node1 ~]# ls -lA /test | wc -l
101

This step creates 100 files; wc -l reports 101 because ls -l also prints a "total" line. Now look at the bricks:

Since the volume keeps three replicas, each machine's brick already holds all 100 files. That wraps up the GlusterFS experiment.
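You can check this directly on any node's brick; the count should match on all three:

[root@linux-node1 ~]# ls /gluster_brick1/b1 | wc -l
100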

[Part 2: Using GlusterFS in the Kubernetes Cluster]

1. Use GlusterFS as PV and PVC in the cluster

Three files are required:

glusterfs-endpoints.yaml

pv-demo.yaml

pvc-demo.yaml

[root@linux-node1 glusterfs]# cat glusterfs-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-volume
subsets:
- addresses:
  - ip: 192.168.56.11
  ports:
  - port: 20
- addresses:
  - ip: 192.168.56.12
  ports:
  - port: 20
- addresses:
  - ip: 192.168.56.13
  ports:
  - port: 20
[root@linux-node1 glusterfs]# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gluster
spec:
  capacity:
    storage: 5G
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  glusterfs:
    endpoints: "glusterfs-volume"
    path: "devops"
    readOnly: false
[root@linux-node1 glusterfs]# cat pvc-demo.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-glusterfs
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1G

Create the resources directly with kubectl.
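A minimal sketch, assuming the three files sit in the current directory:

# kubectl apply -f glusterfs-endpoints.yaml -f pv-demo.yaml -f pvc-demo.yaml
# kubectl get pv,pvc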

2. Verify with a Deployment

[root@linux-node1 glusterfs]# cat nginx-ingress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.12
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          subPath: nginxpvc-gluster
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-glusterfs
[root@linux-node1 glusterfs]# kubectl apply -f nginx-ingress-deployment.yaml
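To verify the mount end to end, write a file through one pod and read it back from a brick; a sketch (nginx-deployment-xxxxx is a placeholder, substitute a real pod name from the first command):

# kubectl get pods -l app=nginx
# kubectl exec nginx-deployment-xxxxx -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
# ls /gluster_brick1/b1/nginxpvc-gluster/
index.html

Because of the subPath, the pod's html directory maps to the nginxpvc-gluster subdirectory of the devops volume, which is replicated to every brick.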

3. Expanding PVC capacity

1. As long as the GlusterFS disk space is large enough, the PV is not actually limited to the size we requested; the effective upper limit is the size of the GlusterFS volume. If GlusterFS storage runs short, just expand the GlusterFS volume.

2. What happens when the storage administrator enables quotas on the volume? The commands look like this:

# gluster volume quota devops enable
# gluster volume quota devops limit-usage / 1gb

After expanding the underlying storage, we simply raise the quota value. The pods in k8s can then use the additional space without restarting; the PV and PVC are not adjusted at all.
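Raising the limit is a single command; 5gb here is an arbitrary example of a new, larger quota:

# gluster volume quota devops limit-usage / 5gb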

At this point, the study of "how to apply GlusterFS in Kubernetes" is over. Theory sticks best when matched with practice, so go and try it yourself!
