
Kubernetes deployment (10): storage with GlusterFS and Heketi


Related content:

Kubernetes deployment (1): architecture and function description

Kubernetes deployment (2): system environment initialization

Kubernetes deployment (3): CA certificate creation

Kubernetes deployment (4): etcd cluster deployment

Kubernetes deployment (5): HAProxy and Keepalived deployment

Kubernetes deployment (6): Master node deployment

Kubernetes deployment (7): Node (worker) deployment

Kubernetes deployment (8): Flannel network deployment

Kubernetes deployment (9): CoreDNS, Dashboard, and Ingress deployment

Kubernetes deployment (10): storage with GlusterFS and Heketi

Kubernetes deployment (11): Helm and Rancher deployment

Kubernetes deployment (12): deploying the Harbor enterprise image registry with Helm

Overview

This guide covers integrating, deploying, and managing containerized GlusterFS storage nodes in a Kubernetes cluster, which lets Kubernetes administrators provide reliable shared storage to their users.

It includes a setup walkthrough with a sample pod that uses dynamically provisioned GlusterFS volumes for storage. If you want to test or learn more about this topic, follow the quick-start instructions in the main README of the gluster-kubernetes project.

This guide is designed to demonstrate the smallest example of Heketi managing Gluster in a Kubernetes environment.

The infrastructure requires a running Kubernetes cluster with at least three Kubernetes worker nodes, each connected to at least one available raw block device (such as an EBS volume or a local disk).

Use file -s to inspect the disk: if it is reported as "data", it is a raw block device. If it is not of the "data" type, you can first clear it with pvcreate and pvremove:

[root@node-04 ~]# file -s /dev/sdc
/dev/sdc: x86 boot sector, code offset 0xb8
[root@node-04 ~]# pvcreate /dev/sdc
WARNING: dos signature detected on /dev/sdc at offset 510. Wipe it? [y/n]: y
Wiping dos signature on /dev/sdc.
Physical volume "/dev/sdc" successfully created.
[root@node-04 ~]# pvremove /dev/sdc
Labels on physical volume "/dev/sdc" successfully wiped.
[root@node-04 ~]# file -s /dev/sdc
/dev/sdc: data

The host of each glusterfs node needs the glusterfs-client, glusterfs-fuse, and socat packages installed:

yum install -y glusterfs-client glusterfs-fuse socat

The host of each Kubernetes node needs the dm_thin_pool kernel module loaded:

modprobe dm_thin_pool

Client installation

Heketi provides a CLI that gives users a way to manage the deployment and configuration of GlusterFS in Kubernetes. Download and install heketi-cli on your client machine, preferably with the same version as the heketi server; a version mismatch may cause errors.
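As a sketch, installing heketi-cli on the client might look like the following (the release URL and v8.0.0 version are assumptions; pick the release that matches your heketi server):

# release URL assumed; match the version to your heketi server
wget https://github.com/heketi/heketi/releases/download/v8.0.0/heketi-client-v8.0.0.linux.amd64.tar.gz
tar xzvf heketi-client-v8.0.0.linux.amd64.tar.gz
cp heketi-client/bin/heketi-cli /usr/local/bin/
heketi-cli --version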

Kubernetes deployment

Deploy GlusterFS DaemonSet

{"kind": "DaemonSet", "apiVersion": "extensions/v1beta1", "metadata": {"name": "glusterfs", "labels": {"glusterfs": "deployment"}, "annotations": {"description": "GlusterFS DaemonSet", "tags": "glusterfs"}} "spec": {"template": {"metadata": {"name": "glusterfs", "labels": {"glusterfs-node": "daemonset"}}, "spec": {"nodeSelector": {"storagenode": "glusterfs"} "hostNetwork": true, "containers": [{"image": "gluster/gluster-centos:latest", "imagePullPolicy": "Always", "name": "glusterfs" "volumeMounts": [{"name": "glusterfs-heketi", "mountPath": "/ var/lib/heketi"} {"name": "glusterfs-run", "mountPath": "/ run"}, {"name": "glusterfs-lvm" "mountPath": "/ run/lvm"}, {"name": "glusterfs-etc", "mountPath": "/ etc/glusterfs"} {"name": "glusterfs-logs", "mountPath": "/ var/log/glusterfs"}, {"name": "glusterfs-config" "mountPath": "/ var/lib/glusterd"}, {"name": "glusterfs-dev", "mountPath": "/ dev"} {"name": "glusterfs-cgroup", "mountPath": "/ sys/fs/cgroup"}], "securityContext": {"capabilities": {} "privileged": true}, "readinessProbe": {"timeoutSeconds": 3, "initialDelaySeconds": 60 "exec": {"command": ["/ bin/bash", "- c" "systemctl status glusterd.service"]}}, "livenessProbe": {"timeoutSeconds": 3, "initialDelaySeconds": 60 "exec": {"command": ["/ bin/bash", "- c" "systemctl status glusterd.service"]}], "volumes": [{"name": "glusterfs-heketi" "hostPath": {"path": "/ var/lib/heketi"}, {"name": "glusterfs-run"}, {"name": "glusterfs-lvm" "hostPath": {"path": "/ run/lvm"}}, {"name": "glusterfs-etc" "hostPath": {"path": "/ etc/glusterfs"}}, {"name": "glusterfs-logs" "hostPath": {"path": "/ var/log/glusterfs"}}, {"name": "glusterfs-config" "hostPath": {"path": "/ var/lib/glusterd"}}, {"name": "glusterfs-dev" "hostPath": {"path": "/ dev"}}, {"name": "glusterfs-cgroup" "hostPath": {"path": "/ sys/fs/cgroup"} $kubectl create-f glusterfs-daemonset.json gets the node name by running: $kubectl get nodes sets the label on this node through storagenode=glusterfs Deploy the gluster container to the specified node. [root@node-01 heketi] # kubectl label node 10.31.90.204 storagenode= glusterfs [root @ node-01 heketi] # kubectl label node 10.31.90.205 storagenode= glusterfs [root @ node-01 heketi] # kubectl label node 10.31.90.206 storagenode=glusterfs

Verify that the pods are running on the nodes; at least three pods should be running.

$ kubectl get pods

Next we will create a ServiceAccount for Heketi:

{"apiVersion": "v1", "kind": "ServiceAccount", "metadata": {"name": "heketi-service-account"}} $kubectl create-f heketi-service-account.json

We must now grant the service account the ability to control the gluster pods. We do this by creating a cluster role binding for the newly created service account.

$ kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account

Now we need to create a Kubernetes secret that will hold the configuration of our Heketi instance. The configuration file must set the executor to kubernetes so that the Heketi server can control the gluster pods. Beyond that, feel free to experiment with the configuration options.

heketi.json:

{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": { "key": "My Secret" },
    "_user": "User only has access to /volumes endpoint",
    "user": { "key": "My Secret" }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "kubernetes",
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "kubeexec": { "rebalance_on_expansion": true },
    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "fstab": "/etc/fstab",
      "port": "22",
      "user": "root",
      "sudo": false
    }
  },
  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false
}

$ kubectl create secret generic heketi-config-secret --from-file=./heketi.json

Next, we need to deploy an initial (bootstrap) Heketi pod and a service to access it, using a heketi-bootstrap.json file as follows.

Submit the file and verify that everything is working properly, as follows:

{
  "kind": "List",
  "apiVersion": "v1",
  "items": [
    {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "deploy-heketi",
        "labels": { "glusterfs": "heketi-service", "deploy-heketi": "support" },
        "annotations": { "description": "Exposes Heketi Service" }
      },
      "spec": {
        "selector": { "name": "deploy-heketi" },
        "ports": [ { "name": "deploy-heketi", "port": 8080, "targetPort": 8080 } ]
      }
    },
    {
      "kind": "Deployment",
      "apiVersion": "extensions/v1beta1",
      "metadata": {
        "name": "deploy-heketi",
        "labels": { "glusterfs": "heketi-deployment", "deploy-heketi": "deployment" },
        "annotations": { "description": "Defines how to deploy Heketi" }
      },
      "spec": {
        "replicas": 1,
        "template": {
          "metadata": {
            "name": "deploy-heketi",
            "labels": { "name": "deploy-heketi", "glusterfs": "heketi-pod", "deploy-heketi": "pod" }
          },
          "spec": {
            "serviceAccountName": "heketi-service-account",
            "containers": [
              {
                "image": "heketi/heketi:8",
                "imagePullPolicy": "Always",
                "name": "deploy-heketi",
                "env": [
                  { "name": "HEKETI_EXECUTOR", "value": "kubernetes" },
                  { "name": "HEKETI_DB_PATH", "value": "/var/lib/heketi/heketi.db" },
                  { "name": "HEKETI_FSTAB", "value": "/var/lib/heketi/fstab" },
                  { "name": "HEKETI_SNAPSHOT_LIMIT", "value": "14" },
                  { "name": "HEKETI_KUBE_GLUSTER_DAEMONSET", "value": "y" }
                ],
                "ports": [ { "containerPort": 8080 } ],
                "volumeMounts": [
                  { "name": "db", "mountPath": "/var/lib/heketi" },
                  { "name": "config", "mountPath": "/etc/heketi" }
                ],
                "readinessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 3,
                  "httpGet": { "path": "/hello", "port": 8080 }
                },
                "livenessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 30,
                  "httpGet": { "path": "/hello", "port": 8080 }
                }
              }
            ],
            "volumes": [
              { "name": "db" },
              { "name": "config", "secret": { "secretName": "heketi-config-secret" } }
            ]
          }
        }
      }
    }
  ]
}

# kubectl create -f heketi-bootstrap.json
service "deploy-heketi" created
deployment "deploy-heketi" created

[root@node-01 heketi]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
deploy-heketi-8888799fd-cmfp6   1/1     Running   0          6m
glusterfs-7t5ls                 1/1     Running   0          8m
glusterfs-drsx9                 1/1     Running   0          8m
glusterfs-pnnn8                 1/1     Running   0          8m

Now that the bootstrap Heketi service is running, we will configure port forwarding so that we can use the Heketi CLI to communicate with it. Using the name of the Heketi pod, run the following command:

kubectl port-forward deploy-heketi-8888799fd-cmfp6 :8080

If a known local port is free on the system where the command runs, you can bind it explicitly for convenience (here 18080):

kubectl port-forward deploy-heketi-8888799fd-cmfp6 18080:8080

Now verify that port forwarding is working properly by running a sample query against the Heketi service. The port-forward command prints the local port it is forwarding; combine it into a URL to test the service, as follows:

$ curl http://localhost:18080/hello
Handling connection for 18080
Hello from Heketi

Finally, set the environment variable for the Heketi CLI client so that it knows how to reach the Heketi server.

export HEKETI_CLI_SERVER=http://localhost:18080

Next, we will provide Heketi with information about the GlusterFS cluster to manage. We provide this information through a topology file. There is a sample topology file named topology-sample.json in the repo that you cloned. The topology specifies the Kubernetes node running the GlusterFS container and the corresponding raw block device for each node.

Make sure that hostnames/manage points to the exact node name shown by kubectl get nodes, and that hostnames/storage is the IP address on the storage network.

Important: at this point, you must load the topology file with a heketi-cli version that matches the server version. As a last resort, the Heketi container ships with a copy of heketi-cli that can be reached via kubectl exec.
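As a last-resort sketch (the pod name comes from the earlier kubectl get pod output; that heketi-cli is on the image's PATH and the /tmp path are assumptions):

# pod name from the earlier kubectl get pod output; /tmp path is arbitrary
kubectl cp topology.json deploy-heketi-8888799fd-cmfp6:/tmp/topology.json
kubectl exec -it deploy-heketi-8888799fd-cmfp6 -- heketi-cli --server http://localhost:8080 topology load --json=/tmp/topology.json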

Modify the topology file to reflect your choices, and then deploy it, as follows:

{"clusters": [{"nodes": [{"node": {"hostnames": {"manage": ["10.31.90.204"], "storage": ["10.31.90.204"]} "zone": 1}, "devices": ["/ dev/sdc"]}, {"node": {"hostnames": {"manage": ["10.31.90.205"] "storage": ["10.31.90.205"]}, "zone": 1}, "devices": ["/ dev/sdc"]} {"node": {"hostnames": {"manage": ["10.31.90.206"], "storage": ["10.31.90.206"]}, "zone": 1} "devices": ["/ dev/sdc"]}]} [root@node-01 ~] # heketi-cli topology load-- json=top.jsonCreating cluster. ID: e758afb77ee26d5f969d7efee1516e64 Allowing file volumes on cluster. Allowing block volumes on cluster. Creating node 10.31.90.204... ID: a6eedd58c118dcfe44a0db2af1a4f863 Adding device / dev/sdc... OK Creating node 10.31.90.205... ID: 4066962c14bcdebd28aca193b5690792 Adding device / dev/sdc... OK Creating node 10.31.90.206... ID: 91e42a2361f0266ae334354e5c34ce11 Adding device / dev/sdc... OK next we will use Heketi to configure a volume for it to store its database:

Run the setup command, which provisions that volume and generates a heketi-storage.json file:

# heketi-client/bin/heketi-cli setup-openshift-heketi-storage

In the generated heketi-storage.json, we had better change

"image": "heketi/heketi:dev"

to

"image": "heketi/heketi:8"

and then create the heketi-related resources:

# kubectl create -f heketi-storage.json
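For reference, that image edit can be scripted before the kubectl create step (a sketch, GNU sed assumed):

# hypothetical helper: pin the dev tag to the same version as the server
sed -i 's|heketi/heketi:dev|heketi/heketi:8|' heketi-storage.json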

Trap: if heketi-cli reports a "no space" error when running the setup-openshift-heketi-storage subcommand, you may inadvertently be running mismatched versions of the server and heketi-cli. Stop the running Heketi pod (kubectl scale deployment deploy-heketi --replicas=0), manually remove any signatures from the storage block devices, then start the Heketi pod again (kubectl scale deployment deploy-heketi --replicas=1). Then reload the topology using a matching version of heketi-cli and retry the step.
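A minimal sketch of that recovery (the device path comes from the topology above; using wipefs to clear the signatures is an assumption, and it destroys data on the device, so double-check the path first):

kubectl scale deployment deploy-heketi --replicas=0
# on each glusterfs node: clear leftover LVM/filesystem signatures from the raw device
wipefs -a /dev/sdc
kubectl scale deployment deploy-heketi --replicas=1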

Wait until the job has completed, then delete the bootstrap Heketi resources:

# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"

Create the long-term Heketi instance (heketi-deployment.json):

{
  "kind": "List",
  "apiVersion": "v1",
  "items": [
    {
      "kind": "Secret",
      "apiVersion": "v1",
      "metadata": {
        "name": "heketi-db-backup",
        "labels": { "glusterfs": "heketi-db", "heketi": "db" }
      },
      "data": {},
      "type": "Opaque"
    },
    {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "heketi",
        "labels": { "glusterfs": "heketi-service", "deploy-heketi": "support" },
        "annotations": { "description": "Exposes Heketi Service" }
      },
      "spec": {
        "selector": { "name": "heketi" },
        "ports": [ { "name": "heketi", "port": 8080, "targetPort": 8080 } ]
      }
    },
    {
      "kind": "Deployment",
      "apiVersion": "extensions/v1beta1",
      "metadata": {
        "name": "heketi",
        "labels": { "glusterfs": "heketi-deployment" },
        "annotations": { "description": "Defines how to deploy Heketi" }
      },
      "spec": {
        "replicas": 1,
        "template": {
          "metadata": {
            "name": "heketi",
            "labels": { "name": "heketi", "glusterfs": "heketi-pod" }
          },
          "spec": {
            "serviceAccountName": "heketi-service-account",
            "containers": [
              {
                "image": "heketi/heketi:8",
                "imagePullPolicy": "Always",
                "name": "heketi",
                "env": [
                  { "name": "HEKETI_EXECUTOR", "value": "kubernetes" },
                  { "name": "HEKETI_DB_PATH", "value": "/var/lib/heketi/heketi.db" },
                  { "name": "HEKETI_FSTAB", "value": "/var/lib/heketi/fstab" },
                  { "name": "HEKETI_SNAPSHOT_LIMIT", "value": "14" },
                  { "name": "HEKETI_KUBE_GLUSTER_DAEMONSET", "value": "y" }
                ],
                "ports": [ { "containerPort": 8080 } ],
                "volumeMounts": [
                  { "name": "heketi-db-secret", "mountPath": "/backupdb" },
                  { "name": "db", "mountPath": "/var/lib/heketi" },
                  { "name": "config", "mountPath": "/etc/heketi" }
                ],
                "readinessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 3,
                  "httpGet": { "path": "/hello", "port": 8080 }
                },
                "livenessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 30,
                  "httpGet": { "path": "/hello", "port": 8080 }
                }
              }
            ],
            "volumes": [
              {
                "name": "db",
                "glusterfs": { "endpoints": "heketi-storage-endpoints", "path": "heketidbstorage" }
              },
              {
                "name": "heketi-db-secret",
                "secret": { "secretName": "heketi-db-backup" }
              },
              {
                "name": "config",
                "secret": { "secretName": "heketi-config-secret" }
              }
            ]
          }
        }
      }
    }
  ]
}

# kubectl create -f heketi-deployment.json
service "heketi" created
deployment "heketi" created

With this done, the Heketi database is persisted in a GlusterFS volume and will not be reset each time the Heketi pod restarts.

Use commands such as heketi-cli cluster list and heketi-cli volume list to confirm that the previously created cluster still exists and that Heketi is aware of the db storage volume created during the bootstrap phase.
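With HEKETI_CLI_SERVER pointed at the new heketi service (via port-forward or the Ingress below), the checks look like this; the volume list should include the heketidbstorage volume, and the IDs will differ in your environment:

$ heketi-cli cluster list
$ heketi-cli volume list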

The next step of the demo is to create a storage volume and then mount it in a pod for testing.

Before testing, we need to expose the heketi service through an Ingress and resolve an A record for heketi.cnlinux.club to 10.31.90.200.

ingress-heketi.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-heketi
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: heketi.cnlinux.club
    http:
      paths:
      - path:
        backend:
          serviceName: heketi
          servicePort: 8080

[root@node-01 heketi]# kubectl create -f ingress-heketi.yaml

Access http://heketi.cnlinux.club/hello in a browser
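The same check can be scripted from any host that resolves the A record; based on the /hello responses seen earlier, it should answer with a greeting from Heketi:

curl http://heketi.cnlinux.club/hello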

Create the StorageClass (storageclass-gluster-heketi.yaml):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.cnlinux.club"
  restauthenabled: "false"
  volumetype: "replicate:2"

[root@node-01 heketi]# kubectl create -f storageclass-gluster-heketi.yaml
[root@node-01 heketi]# kubectl get sc
NAME             PROVISIONER               AGE
gluster-heketi   kubernetes.io/glusterfs   10s

Create the PVC (pvc-gluster-heketi.yaml):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-heketi
spec:
  storageClassName: gluster-heketi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

[root@node-01 heketi]# kubectl create -f pvc-gluster-heketi.yaml
[root@node-01 heketi]# kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
pvc-gluster-heketi   Bound    pvc-d978f524-0b74-11e9-875c-005056826470   1Gi        RWO            gluster-heketi   30s

Mount the PVC in a pod (pod-pvc.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
spec:
  containers:
  - name: pod-pvc
    image: busybox:latest
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi

[root@node-01 heketi]# kubectl create -f pod-pvc.yaml

Enter the container to see if it has been mounted successfully.

[root@node-01 heketi]# kubectl exec -it pod-pvc /bin/sh
/ # df -h
Filesystem                                          Size     Used    Available  Use%  Mounted on
overlay                                             47.8G    4.3G    43.5G      9%    /
tmpfs                                               64.0M    0       64.0M      0%    /dev
tmpfs                                               1.9G     0       1.9G       0%    /sys/fs/cgroup
10.31.90.204:vol_675cc9fe0e959157919c886ea7786d33   1014.0M  42.7M   971.3M     4%    /pv-data
/dev/sda3                                           47.8G    4.3G    43.5G      9%    /dev/termination-log
/dev/sda3                                           47.8G    4.3G    43.5G      9%    /etc/resolv.conf
/dev/sda3                                           47.8G    4.3G    43.5G      9%    /etc/hostname
/dev/sda3                                           47.8G    4.3G    43.5G      9%    /etc/hosts

# Write a file to /pv-data; dd aborts once the volume's 1Gi capacity is exhausted, which proves that the capacity limit is enforced.

/ # cd /pv-data
/pv-data # dd if=/dev/zero of=/pv-data/test.img bs=8M count=300
123+0 records in
122+0 records out
1030225920 bytes (982.5MB) copied, 24.255925 seconds, 40.5MB/s

Check the host disk to see if the test.img file has been created

[root@node-04 cfg]# mount /dev/vg_2631413b8b87bbd6cb526568ab697d37/brick_1691ef862dd504e12e8384af76e5a9f2 /mnt
[root@node-04 cfg]# ll -h /mnt/brick/
total 982M
-rw-r--r-- 2 root 2001 982M Jan  2 15:14 test.img
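If you need to discover which brick and volume group back the volume in the first place, one way (a sketch; the pod name comes from the earlier kubectl get pod output and the volume name from the df output above) is to query gluster from inside one of the glusterfs pods, which prints the brick paths on each node:

# pod name and volume name taken from earlier outputs in this guide
kubectl exec -it glusterfs-7t5ls -- gluster volume info vol_675cc9fe0e959157919c886ea7786d33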

At this point, all the operations have been completed.
