
Deploy rook-ceph storage system on kubernetes


2.5.2 Exposing the ceph monitor

This step only verifies whether the ceph monitor can be reached from outside kubernetes, and it turns out that it does not work.

The newly created monitor Service has type LoadBalancer so that it can be reached from outside the kubernetes cluster. Since I am using Alibaba Cloud kubernetes and only want an intranet (private network) load balancer, I also need to add the following Service:

vim rook-ceph-mon-svc.yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet"
  labels:
    app: rook-ceph-mon
    mon_cluster: rook-ceph
    rook_cluster: rook-ceph
  name: rook-ceph-mon
  namespace: rook-ceph
spec:
  ports:
  - name: msgr1
    port: 6789
    protocol: TCP
    targetPort: 6789
  - name: msgr2
    port: 3300
    protocol: TCP
    targetPort: 3300
  selector:
    app: rook-ceph-mon
    mon_cluster: rook-ceph
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: LoadBalancer

Note:

For a self-built kubernetes cluster, MetalLB is recommended to provide LoadBalancer load balancing; a minimal sketch follows.
Currently, rook does not support connecting to the ceph monitors from outside kubernetes.
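For reference, this is a minimal sketch of how MetalLB could provide LoadBalancer addresses on a self-built cluster. It assumes the older ConfigMap-based MetalLB configuration (current at the time of writing) and an example address range on the node network; adjust both to your environment.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example range, pick free IPs on the node subnet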

3. Configure rook-ceph

Configure ceph so that kubernetes can use dynamic volume provisioning.

vim rook-ceph-block-pool.yaml

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 2
  # Sets up the CRUSH rule for the pool to distribute data only on the specified device class.
  # If left empty or unspecified, the pool will use the cluster's default CRUSH root,
  # which usually distributes data over all OSDs, regardless of their class.
  # deviceClass: hdd

vim rook-ceph-filesystem.yaml

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: cephfs-k8s
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
  - replicated:
      size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true

vim rook-ceph-storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exist
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Retain
# Optional, if you want to add dynamic resize for PVC. Works for Kubernetes 1.14+
# For now only ext3, ext4, xfs resize support provided, like in Kubernetes itself.
allowVolumeExpansion: true
---
# apiVersion: storage.k8s.io/v1
# kind: StorageClass
# metadata:
#   name: cephfs
# # Change "rook-ceph" provisioner prefix to match the operator namespace if needed
# provisioner: rook-ceph.cephfs.csi.ceph.com
# parameters:
#   # clusterID is the namespace where operator is deployed.
#   clusterID: rook-ceph
#   # CephFS filesystem name into which the volume shall be created
#   fsName: cephfs-k8s
#   # Ceph pool into which the volume shall be created
#   # Required for provisionVolume: "true"
#   pool: cephfs-k8s-data0
#   # Root path of an existing CephFS volume
#   # Required for provisionVolume: "false"
#   # rootPath: /absolute/path
#   # The secrets contain Ceph admin credentials. These are generated automatically by the operator
#   # in the same namespace as the cluster.
#   csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
#   csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
#   csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
#   csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
# # reclaimPolicy: Retain
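The apply step is not shown in the article; presumably the three manifests above are applied before checking the result, for example:

kubectl apply -f rook-ceph-block-pool.yaml -f rook-ceph-filesystem.yaml -f rook-ceph-storage-class.yaml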

Enter the toolbox to view the results:

[root@linuxba-node5 /]# ceph osd pool ls
replicapool
cephfs-k8s-metadata
cephfs-k8s-data0
[root@linuxba-node5 /]# ceph fs ls
name: cephfs-k8s, metadata pool: cephfs-k8s-metadata, data pools: [cephfs-k8s-data0]
[root@linuxba-node5 /]#

4. Verify ceph from kubernetes with dynamic volumes

The flex-driver ceph rbd was verified successfully.
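The PVC and Deployment manifests used for this test are not reproduced in the article (they are in the GitHub repository linked at the end). A minimal sketch of what they might look like, assuming a hypothetical PVC name ceph-rbd-pvc, the ceph-rbd storageclass defined above, and the nginx-rbd-dy deployment seen in the output below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pvc          # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 1Gi             # matches the ~1G rbd volume seen in df below
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rbd-dy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-rbd-dy
  template:
    metadata:
      labels:
        app: nginx-rbd-dy
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   # mount point seen in the df output
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: ceph-rbd-pvc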

[root@linuxba-node1 ceph]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
curl-66bdcf564-9hhrt            1/1     Running   0          23h
curl-66bdcf564-ghq5s            1/1     Running   0          23h
curl-66bdcf564-sbv8b            1/1     Running   1          23h
curl-66bdcf564-t9gnc            1/1     Running   0          23h
curl-66bdcf564-v5kfx            1/1     Running   0          23h
nginx-rbd-dy-67d8bbfcb6-vnctl   1/1     Running   0          21s
[root@linuxba-node1 ceph]# kubectl exec -it nginx-rbd-dy-67d8bbfcb6-vnctl /bin/bash
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/# ps -ef
bash: ps: command not found
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         197G  9.7G  179G   6% /
tmpfs            64M     0   64M   0% /dev
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/vda1       197G  9.7G  179G   6% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/rbd0      1014M   33M  982M   4% /usr/share/nginx/html
tmpfs            32G   12K   32G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            32G     0   32G   0% /proc/acpi
tmpfs            32G     0   32G   0% /proc/scsi
tmpfs            32G     0   32G   0% /sys/firmware
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/# cd /usr/share/nginx/html/
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html# ls
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html# ls -la
total 4
drwxr-xr-x 2 root root    6 Nov  5 08:47 .
drwxr-xr-x 3 root root 4096 Oct 23 00:25 ..
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html# echo a > test.html
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html# ls -l
total 4
-rw-r--r-- 1 root root 2 Nov  5 08:47 test.html
root@nginx-rbd-dy-67d8bbfcb6-vnctl:/usr/share/nginx/html#

The cephfs verification, however, failed: the pod kept waiting for the volume to be mounted. This is described in more detail below.

5. Solving the problem that rook-ceph's csi-cephfs cannot be mounted on Alibaba Cloud kubernetes with the flex plug-in

Check /var/log/messages on every node running a pod that uses the cephfs PVC.

As prompted by the log, at first I thought it was a lack of permissions:

kubectl get clusterrole system:node -o yaml

Even after adding the relevant permissions to this clusterrole, the error stayed the same.

Then I remembered that when I created the cephfs storageclass, I used the csi plug-in.

Alibaba Cloud kubernetes supports either the flex or the csi plug-in, and my cluster was created with the flex plug-in.

In flex mode, the kubelet parameter enable-controller-attach-detach on the cluster nodes is set to false.

If you need to change it to csi mode, you need to change this parameter to true.

So let's do just that: log in to the node hosting the pod that is stuck in the ContainerCreating state.

Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with vim, change enable-controller-attach-detach to true, and then restart kubelet with systemctl daemon-reload && systemctl restart kubelet. After that, the pod mounts the volume normally.
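A hedged sketch of that change on the affected node (the exact position of the flag inside the drop-in file may differ on your nodes):

# check the current value of the flag in the kubelet drop-in
grep enable-controller-attach-detach /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# edit the file, change --enable-controller-attach-detach=false to =true, then restart kubelet
systemctl daemon-reload && systemctl restart kubelet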

This confirms that with the flex plug-in on Alibaba Cloud kubernetes, the kubelet parameter enable-controller-attach-detach is false, which is what prevents csi from working.

Modifying this parameter on every node is obviously not realistic: the flex plug-in was chosen when the managed Alibaba Cloud kubernetes cluster was purchased, and the whole point of a managed cluster is not having to maintain kubelet, whereas this change would have to be maintained on all nodes. So what can be done without modifying the kubelet parameter?

I have previously used the provisioner from kubernetes-incubator/external-storage/ceph; see my earlier article:

https://blog.51cto.com/ygqygq2/2163656

5.1 Creating cephfs-provisioner

First, copy the string after "key =" in /etc/ceph/keyring inside the toolbox into the file /tmp/ceph.client.admin.secret, create a secret from it, and start cephfs-provisioner.
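One hedged way to extract that key without copying it by hand, assuming the toolbox pod carries the usual app=rook-ceph-tools label:

# find the toolbox pod and save the admin key from its keyring to a local file
TOOLBOX=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec "$TOOLBOX" -- awk '/key = / {print $3}' /etc/ceph/keyring > /tmp/ceph.client.admin.secret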

kubectl create secret generic ceph-admin-secret --from-file=/tmp/ceph.client.admin.secret --namespace=rook-ceph
kubectl apply -f cephfs/rbac/

Wait for it to start successfully:

[root@linuxba-node1 ceph]# kubectl get pod -n rook-ceph | grep cephfs-provisioner
cephfs-provisioner-5f64bb484b-24bqf   1/1     Running   0          2m

Then create the cephfs storageclass.

vim cephfs-storageclass.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
reclaimPolicy: Retain
parameters:
  # svc IP:port of the ceph monitors
  monitors: 10.96.201.107:6789,10.96.105.92:6789,10.96.183.92:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: "rook-ceph"
  claimRoot: /volumes/kubernetes
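The monitor addresses above are the ClusterIP:port values of the per-mon services created by rook; a hedged way to look them up in your own cluster:

# list the per-mon ClusterIP services; their IP:6789 values go into "monitors" above
kubectl -n rook-ceph get svc -l app=rook-ceph-mon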

The kubernetes nodes also need to have ceph-common and ceph-fuse installed.

Use Alibaba Cloud's ceph yum repository; cat /etc/yum.repos.d/ceph.repo:

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.cloud.aliyuncs.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.cloud.aliyuncs.com/ceph/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.cloud.aliyuncs.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.cloud.aliyuncs.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.cloud.aliyuncs.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.cloud.aliyuncs.com/ceph/keys/release.asc
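With the repository in place, the packages mentioned above can then be installed on each node, for example:

yum install -y ceph-common ceph-fuse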

5.2 Verify cephfs

Continuing the earlier test, the volume can now be used normally.
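The rook-ceph-cephfs-pvc.yaml and rook-ceph-cephfs-nginx.yaml manifests used below are in the GitHub repository linked at the end of the article. A minimal sketch of what the PVC might look like, assuming the cephfs storageclass defined above and a hypothetical size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc            # hypothetical name
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi             # example size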

kubectl delete -f rook-ceph-cephfs-nginx.yaml -f rook-ceph-cephfs-pvc.yaml
kubectl apply -f rook-ceph-cephfs-pvc.yaml
kubectl apply -f rook-ceph-cephfs-nginx.yaml

[root@linuxba-node1 ceph]# kubectl get pod | grep cephfs
nginx-cephfs-dy-5f47b4cbcf-txtf9   1/1     Running   0          3m50s
[root@linuxba-node1 ceph]# kubectl exec -it nginx-cephfs-dy-5f47b4cbcf-txtf9 /bin/bash
root@nginx-cephfs-dy-5f47b4cbcf-txtf9:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         197G  9.9G  179G   6% /
tmpfs            64M     0   64M   0% /dev
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/vda1       197G  9.9G  179G   6% /etc/hosts
shm              64M     0   64M   0% /dev/shm
ceph-fuse       251G     0  251G   0% /usr/share/nginx/html
tmpfs            32G   12K   32G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            32G     0   32G   0% /proc/acpi
tmpfs            32G     0   32G   0% /proc/scsi
tmpfs            32G     0   32G   0% /sys/firmware
root@nginx-cephfs-dy-5f47b4cbcf-txtf9:/# echo test > /usr/share/nginx/html/test.html

6. Summary

Kubernetes cannot expose the ceph monitors externally; because of this limitation, deploying ceph directly on machines is a much better choice when external access is required.

Rook-ceph provides rbd storageclasses for both the flex and csi drivers, while for cephfs it currently only provides a csi-driver storageclass. For using cephfs storage volumes with the flex driver, see kube-registry.yaml.

Finally, the relevant YAML files used in this article are attached:

https://github.com/ygqygq2/kubernetes/tree/master/kubernetes-yaml/rook-ceph

Reference:

[1] https://rook.io/docs/rook/v1.1/ceph-quickstart.html

[2] https://rook.io/docs/rook/v1.1/helm-operator.html

[3] https://rook.io/docs/rook/v1.1/ceph-toolbox.html

[4] https://rook.io/docs/rook/v1.1/ceph-advanced-configuration.html#custom-cephconf-settings

[5] https://rook.io/docs/rook/v1.1/ceph-pool-crd.html

[6] https://rook.io/docs/rook/v1.1/ceph-block.html

[7] https://rook.io/docs/rook/v1.1/ceph-filesystem.html

[8] https://github.com/kubernetes-incubator/external-storage/tree/master/ceph
