How to manage GlusterFS through Heketi to provide persistent storage for a K8S cluster


This article introduces how to manage GlusterFS through Heketi to provide persistent storage for a K8S cluster. It walks through deploying Heketi, building the GlusterFS cluster from a topology file, and dynamically provisioning volumes through a StorageClass, so that the setup can be reproduced step by step.

Reference documentation:

Github project: https://github.com/heketi/heketi

MANAGING VOLUMES USING HEKETI: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/ch05s02

StorageClass: https://kubernetes.io/docs/concepts/storage/storage-classes/

StorageClass (English): https://k8smeetup.github.io/docs/concepts/storage/storage-classes/

Dynamic Volume Provisioning: https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/

nfs-provisioner: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs

I. Introduction to Heketi

1. Brief introduction

Heketi is a framework that exposes a RESTful API for managing GlusterFS volumes, making it easy for administrators to operate GlusterFS:

Manages the full lifecycle of GlusterFS volumes (see the sketch after this list)

Dynamically provisions storage for cloud platforms such as OpenStack, Kubernetes and OpenShift (it automatically selects bricks across the GlusterFS cluster to build a volume)

Supports management of multiple GlusterFS clusters
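As a taste of what that lifecycle management looks like, here is a minimal, hedged sketch of heketi-cli calls; it assumes a cluster that has already been loaded into Heketi (as done later in this article), and exact flags may differ between heketi-cli versions:

[root@heketi ~]# heketi-cli volume create --size=10 --replica=3                  # heketi picks bricks across the cluster and builds a 10GiB replica-3 volume
[root@heketi ~]# heketi-cli volume list                                          # list the volumes heketi knows about
[root@heketi ~]# heketi-cli volume expand --volume=<volume-id> --expand-size=5   # grow an existing volume by 5GiB
[root@heketi ~]# heketi-cli volume delete <volume-id>                            # tear the volume down again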

2. Framework

Heketi supports GlusterFS multi-cluster management

Within a cluster, failure domains are distinguished by zone.

II. Environment

1. Environment

The Kubernetes and GlusterFS clusters have been deployed in advance. Please refer to:

Kubernetes: https://www.cnblogs.com/netonline/tag/kubernetes/

GlusterFS: https://www.cnblogs.com/netonline/p/9102004.html

Note: GlusterFS only needs to be installed and started; there is no need to set up a trusted storage pool manually.
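For reference, a minimal sketch of what "installed and started" amounts to on each GlusterFS node (assuming CentOS-style nodes and the Storage SIG packages; adjust package names to your distribution):

[root@glusterfs01 ~]# yum install -y centos-release-gluster        # add the gluster repo
[root@glusterfs01 ~]# yum install -y glusterfs-server              # install the gluster daemon
[root@glusterfs01 ~]# systemctl enable glusterd
[root@glusterfs01 ~]# systemctl start glusterd                     # no "gluster peer probe" needed; the trusted pool is handled when heketi loads the topology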

Hostname       IP               Remark
kubenode1      172.30.200.21
kubenode2      172.30.200.22
kubenode3      172.30.200.23
heketi         172.30.200.80    selinux disabled
glusterfs01    172.30.200.81
glusterfs02    172.30.200.82
glusterfs03    172.30.200.83

2. Set iptables

# by default heketi provides its RESTful API service on tcp port 8080
[root@heketi ~]# vim /etc/sysconfig/iptables
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT
[root@heketi ~]# service iptables restart

III. Deploy Heketi

1. Install heketi

# add the gluster yum repo; the default yum repos do not contain the related packages;
# heketi: the heketi server;
# heketi-client: the heketi client / command line tool
[root@heketi ~]# yum install -y centos-release-gluster
[root@heketi ~]# yum install -y heketi heketi-client

2. Configure heketi.json

# note the values changed from the defaults (called out in the comments below)
[root@heketi ~]# vim /etc/heketi/heketi.json
{
  # default port is tcp 8080
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  # default is false, i.e. no authentication
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "admin@123"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "user@123"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development."
    ],
    # use the ssh executor
    "executor": "ssh",

    "_sshexec_comment": "SSH username and private keyfile information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host": "https://kubernetes.host:8443",
      "cert": "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    # the sample file ships with debug; if not set, the default is warning;
    # log output goes to /var/log/message
    "loglevel": "warning"
  }
}

3. Set up passwordless SSH access from heketi to GlusterFS

# with the ssh executor selected, the heketi server must be able to log in to every GlusterFS node without a password;
# -t: key type;
# -q: quiet mode;
# -f: path and name of the generated key, consistent with the "keyfile" value of the ssh executor in heketi.json;
# -N: key passphrase, "" means empty
[root@heketi ~]# ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ""

# the heketi service runs as the heketi user, which needs read access to the newly generated key, otherwise the service cannot start
[root@heketi ~]# chown heketi:heketi /etc/heketi/heketi_key

# distribute the public key;
# -i: specify the public key
[root@heketi ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@172.30.200.81
[root@heketi ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@172.30.200.82
[root@heketi ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub root@172.30.200.83

4. Start heketi

# the systemd unit installed by yum contains one error:
# in /usr/lib/systemd/system/heketi.service, "-config=/etc/heketi/heketi.json" must be changed to "--config=/etc/heketi/heketi.json";
# otherwise startup reports "Error: unknown shorthand flag: 'c' in -config=/etc/heketi/heketi.json" and the service fails to start
[root@heketi ~]# systemctl enable heketi
[root@heketi ~]# systemctl restart heketi
[root@heketi ~]# systemctl status heketi

# verify
[root@heketi ~]# curl http://localhost:8080/hello
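Since JWT authentication was enabled above, every heketi-cli call needs credentials. As a convenience, heketi-cli also reads them from environment variables; a small sketch, assuming the admin key configured above (check your version's help if these variables are not picked up):

[root@heketi ~]# export HEKETI_CLI_SERVER=http://localhost:8080
[root@heketi ~]# export HEKETI_CLI_USER=admin
[root@heketi ~]# export HEKETI_CLI_KEY=admin@123
# with these exported, the --server/--user/--secret flags shown below can be omitted
[root@heketi ~]# heketi-cli cluster list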

IV. Set up the GlusterFS cluster

1. Create a topology.json file

# heketi builds the GlusterFS cluster from the definition in topology.json;
# topology specifies the hierarchy: clusters --> nodes --> node/devices --> hostnames/zone;
# node/hostnames "manage": fill in the host ip; this is the management channel; note that if the heketi server cannot reach the GlusterFS nodes by hostname, a hostname cannot be used here;
# node/hostnames "storage": fill in the host ip; this is the storage data channel, and it can differ from manage;
# node/zone: the failure domain the node belongs to; heketi places replicas across failure domains to improve data availability; for example, zone values can distinguish racks to build cross-rack failure domains;
# devices: the drives of each GlusterFS node (multiple disks are allowed); they must be bare devices without a file system
[root@heketi ~]# vim /etc/heketi/topology.json
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": ["172.30.200.81"],
                            "storage": ["172.30.200.81"]
                        },
                        "zone": 1
                    },
                    "devices": ["/dev/sdb"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["172.30.200.82"],
                            "storage": ["172.30.200.82"]
                        },
                        "zone": 2
                    },
                    "devices": ["/dev/sdb"]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": ["172.30.200.83"],
                            "storage": ["172.30.200.83"]
                        },
                        "zone": 3
                    },
                    "devices": ["/dev/sdb"]
                }
            ]
        }
    ]
}

2. Build the GlusterFS cluster from topology.json

# the glusterd service on every GlusterFS node must already be running; there is no need to build a trusted storage pool by hand;
# heketi-cli can also add cluster, node, device and volume manually, layer by layer;
# "--server http://localhost:8080": may be omitted when heketi-cli is executed on the heketi host itself;
# "--user admin --secret admin@123": authentication is enabled in heketi.json, so heketi-cli needs credentials, otherwise it reports "Error: Invalid JWT token: Unknown user"
[root@heketi ~]# heketi-cli --server http://localhost:8080 --user admin --secret admin@123 topology load --json=/etc/heketi/topology.json
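For reference, the manual "layer by layer" alternative mentioned above looks roughly like this; a hedged sketch, where each command prints the ID consumed by the next (flags may vary by heketi-cli version):

[root@heketi ~]# heketi-cli --user admin --secret admin@123 cluster create
[root@heketi ~]# heketi-cli --user admin --secret admin@123 node add --cluster=<cluster-id> --zone=1 --management-host-name=172.30.200.81 --storage-host-name=172.30.200.81
[root@heketi ~]# heketi-cli --user admin --secret admin@123 device add --name=/dev/sdb --node=<node-id>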

# view the heketi topology; at this point no volume or brick has been created yet;
# "heketi-cli cluster info" shows cluster-related information;
# "heketi-cli node info" shows node-related information;
# "heketi-cli device info" shows device-related information
[root@heketi ~]# heketi-cli --user admin --secret admin@123 topology info

V. Dynamically provision GlusterFS storage for the K8S cluster

1. The dynamic provisioning process based on StorageClass

Kubernetes offers two models for provisioning shared storage:

Static mode (Static): the cluster administrator creates PVs by hand, setting the characteristics of the backend storage in each PV definition.

Dynamic mode (Dynamic): the cluster administrator does not create PVs by hand; instead, the backend storage is described by a StorageClass ("Class"). A PVC then only needs to state which class and how much storage it wants, and the system automatically creates a matching PV and binds it to the PVC. A PVC may also declare an empty class ("") to explicitly opt out of dynamic provisioning.
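As a small illustration of that last point, a claim can opt out of dynamic provisioning by declaring an empty class; a minimal sketch (hypothetical claim name, for illustration only):

[root@kubenode1 ~]# vim static-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: static-pvc               # hypothetical name, for illustration only
spec:
  storageClassName: ""           # empty class: no dynamic provisioning; only a pre-created PV can satisfy this claim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi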

The whole process of dynamic storage provisioning based on StorageClass is as follows:

The cluster administrator creates the storage class (StorageClass) in advance

The user creates a persistent storage claim (PVC: PersistentVolumeClaim) that references the storage class

The claim tells the system that it needs a persistent volume (PV: PersistentVolume)

The system reads the information of the storage class

Based on the information of the storage class, the system automatically creates the PV needed by PVC in the background.

The user creates a Pod that uses PVC

Applications in Pod persist data through PVC

PVC uses PV for the final persistence of the data.

2. Define the StorageClass

# provisioner: the storage provisioner; it must match the backend storage in use;
# reclaimPolicy: defaults to "Delete": when the pvc is deleted, the corresponding pv and the backend volume and brick (lvm) are deleted as well; when set to "Retain", the data is kept and must be cleaned up manually;
# resturl: the url of the heketi API service;
# restauthenabled: optional, defaults to "false"; must be set to "true" when the heketi service has authentication enabled;
# restuser: optional; set the appropriate user name when authentication is enabled;
# secretNamespace: optional; set together with secretName when authentication is enabled; the namespace that holds the secret used by the persistent storage;
# secretName: optional; when authentication is enabled, the authentication password of the heketi service must be saved in a secret resource;
# clusterid: optional; specifies the cluster id, or a list of cluster ids in the format "id1,id2";
# volumetype: optional; sets the volume type and its parameters; if no volume type is given, the provisioner decides the type itself; for example, "volumetype: replicate:3" is a 3-replica replicate volume, "volumetype: disperse:4:2" is a disperse volume where '4' is data and '2' is redundancy, and "volumetype: none" is a distribute volume
[root@kubenode1 ~]# mkdir -p heketi
[root@kubenode1 ~]# cd heketi/
[root@kubenode1 heketi]# vim gluster-heketi-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-storageclass
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
parameters:
  resturl: "http://172.30.200.80:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumetype: "replicate:2"

# generate the secret resource; the "key" value must be converted to base64 encoding
[root@kubenode1 heketi]# echo -n "admin@123" | base64

# note that the name/namespace must be consistent with what the storageclass resource references;
# the secret holding the password must be of type "kubernetes.io/glusterfs"
[root@kubenode1 heketi]# cat heketi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: YWRtaW5AMTIz
type: kubernetes.io/glusterfs

# create the secret resource
[root@kubenode1 heketi]# kubectl create -f heketi-secret.yaml

# create the storageclass resource;
# note: a storageclass resource cannot be changed after creation; to modify it, delete it and create it again
[root@kubenode1 heketi]# kubectl create -f gluster-heketi-storageclass.yaml
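As an aside, the same secret can also be created directly with kubectl instead of writing heketi-secret.yaml by hand; a minimal sketch using the same name, namespace and type as above:

[root@kubenode1 heketi]# kubectl create secret generic heketi-secret --namespace=default --type="kubernetes.io/glusterfs" --from-literal=key=admin@123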

# view the storageclass resource
[root@kubenode1 heketi]# kubectl describe storageclass gluster-heketi-storageclass

3. Define a PVC

1) Define the PVC

# note that "storageClassName" must correspond to the StorageClass defined above
[root@kubenode1 heketi]# vim gluster-heketi-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gluster-heketi-pvc
spec:
  storageClassName: gluster-heketi-storageclass
  # ReadWriteOnce: abbreviated RWO, read-write, can be mounted by a single node only;
  # ReadOnlyMany: abbreviated ROX, read-only, can be mounted by multiple nodes;
  # ReadWriteMany: abbreviated RWX, read-write, can be mounted by multiple nodes
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # note the format; do not write "GB"
      storage: 1Gi

# create the pvc resource
[root@kubenode1 heketi]# kubectl create -f gluster-heketi-pvc.yaml

2) View the K8S resources

# view the PVC; the status is "Bound";
# "Capacity" shows 2G because metadata is created at the same time
[root@kubenode1 heketi]# kubectl describe pvc gluster-heketi-pvc

# view the PV details: besides capacity, they show the referenced storageclass, status, reclaim policy, etc., as well as the GlusterFS endpoint and path
[root@kubenode1 heketi]# kubectl get pv
[root@kubenode1 heketi]# kubectl describe pv pvc-532cb8c3-cfc6-11e8-8fde-005056bfa8ba

# view the endpoints resource, whose name can be read from the pv information; the name format is fixed: glusterfs-dynamic-PVC_NAME;
# the endpoints resource provides the concrete addresses used when mounting the storage
[root@kubenode1 heketi]# kubectl describe endpoints glusterfs-dynamic-gluster-heketi-pvc

3) Check heketi

# the volume and bricks have now been created;
# the primary mount point (used for communication) is on the glusterfs01 node, with the other nodes as backups;
# with two replicas, the glusterfs03 node gets no brick
[root@heketi ~]# heketi-cli --user admin --secret admin@123 topology info

4) View the GlusterFS nodes

# take the glusterfs01 node as an example
[root@glusterfs01 ~]# lsblk

[root@glusterfs01 ~]# df -Th

# view the details of the volume: a 2-replica replicate volume;
# "vgscan" and "vgdisplay" can also be used to inspect the LVM volume group information, etc.
[root@glusterfs01 ~]# gluster volume list
[root@glusterfs01 ~]# gluster volume info vol_308342f1ffff3aea7ec6cc72f6d13cd7

4. Mount the storage in a Pod

# define 1 volume referenced by the pod; the volume type is "persistentVolumeClaim"
[root@kubenode1 heketi]# vim gluster-heketi-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: gluster-heketi-pod
spec:
  containers:
    - name: gluster-heketi-container
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: gluster-heketi-volume
          mountPath: "/pv-data"
          readOnly: false
  volumes:
    - name: gluster-heketi-volume
      persistentVolumeClaim:
        claimName: gluster-heketi-pvc

# create the pod
[root@kubenode1 heketi]# kubectl create -f gluster-heketi-pod.yaml

5. Verify

# create a file in the container's mount directory
[root@kubenode1 heketi]# kubectl exec -it gluster-heketi-pod /bin/sh
/ # cd /pv-data
/pv-data # echo "This is a file!" > a.txt

# view the created files in the corresponding mount directory on the GlusterFS node;
# the mount directory can be obtained via "df -Th" or "lsblk"
[root@glusterfs01 ~]# df -Th
[root@glusterfs01 ~]# cd /var/lib/heketi/mounts/vg_af339b60319a63a77b05ddbec1b21bbe/brick_d712f1543476c4198d3869c682cdaa9a/brick/
[root@glusterfs01 brick]# ls
[root@glusterfs01 brick]# cat a.txt
[root@glusterfs01 brick]# cat b.txt

6. Verify the ReclaimPolicy of the StorageClass

# delete the Pod application, then delete the pvc
[root@kubenode1 heketi]# kubectl delete -f gluster-heketi-pod.yaml
[root@kubenode1 heketi]# kubectl delete -f gluster-heketi-pvc.yaml

# k8s resources; because reclaimPolicy is "Delete", the pv and the backend volume and bricks are removed along with the pvc
[root@kubenode1 heketi]# kubectl get pvc
[root@kubenode1 heketi]# kubectl get pv
[root@kubenode1 heketi]# kubectl get endpoints

# heketi
[root@heketi ~]# heketi-cli --user admin --secret admin@123 topology info

# GlusterFS node
[root@glusterfs01 ~]# lsblk
[root@glusterfs01 ~]# df -Th
[root@glusterfs01 ~]# gluster volume list

This concludes "How to manage GlusterFS through Heketi to provide persistent storage for a K8S cluster".
