In this article I'll share how to achieve the "trinity" of Rook Ceph, that is, its three storage interfaces: RBD block storage, CephFS file storage, and S3 object storage. Most people are not very familiar with it, so I'm sharing this walkthrough for your reference; I hope you learn a lot after reading it. Let's get started!
Get started quickly
Official website address: https://rook.io/
Project address: https://github.com/rook/rook
Install the cluster
Prepare OSD storage media
Device    Size    Role
sdb       50GB    OSD data
sdc       50GB    OSD data
sdd       50GB    OSD data
sde       50GB    OSD metadata
Before installing, run lvm pvs, lvm vgs, and lvm lvs to check whether these disks are already in use by LVM; remove any existing volumes, and make sure the disks carry no partitions or file systems.
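For example, a previously used disk can be wiped like this (a sketch assuming the disk is /dev/sdb; this destroys all data on it):

# remove GPT/MBR partition tables
sgdisk --zap-all /dev/sdb
# clear leftover LVM/filesystem signatures at the start of the disk
dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct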
Make sure the kernel rbd module is loaded and lvm2 is installed:
modprobe rbd
yum install -y lvm2
Install operator
git clone --single-branch --branch release-1.2 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
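Before moving on, it's worth checking that the operator has come up (a quick sanity check; exact pod names will vary):

# expect rook-ceph-operator and rook-discover pods in Running state
kubectl -n rook-ceph get pod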
Install ceph cluster
---
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.5
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  mon:
    count: 3
    allowMultiplePerNode: true
  mgr:
    modules:
    - name: pg_autoscaler
      enabled: true
  dashboard:
    enabled: true
    ssl: true
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  network:
    hostNetwork: false
  rbdMirroring:
    workers: 0
  annotations:
  resources:
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: false
    useAllDevices: false
    config:
    nodes:
    - name: "minikube"
      devices:
      - name: "sdb"
      - name: "sdc"
      - name: "sdd"
      config:
        storeType: bluestore
        metadataDevice: "sde"
        databaseSizeMB: "1024"
        journalSizeMB: "1024"
        osdsPerDevice: "1"
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
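Assuming the manifest above is saved as cluster.yaml, apply it and watch the cluster pods come up:

kubectl create -f cluster.yaml
# wait for the mon, mgr, and osd pods to reach Running
kubectl -n rook-ceph get pod -w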
Install command line tools
kubectl create -f toolbox.yaml
Use the command ceph -s inside the toolbox pod to view the cluster status.
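One way to get a shell in the toolbox pod (a sketch; the label matches the toolbox deployment shipped with Rook 1.2):

kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash
# then, inside the toolbox, check cluster health:
ceph -s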
You need to clean up the rook data directory (default: /var/lib/rook) when reinstalling the ceph cluster.
Add an ingress route for the ceph-dashboard service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/server-snippet: |
      proxy_ssl_verify off;
spec:
  tls:
  - hosts:
    - rook-ceph.minikube.local
    secretName: rook-ceph.minikube.local
  rules:
  - host: rook-ceph.minikube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-mgr-dashboard
          servicePort: https-dashboard
Get the admin account password required to access the dashboard
kubectl get secret rook-ceph-dashboard-password -n rook-ceph -o jsonpath='{.data.password}' | base64 -d
Add the domain name rook-ceph.minikube.local to /etc/hosts, then access the dashboard in a browser:
https://rook-ceph.minikube.local/
Use RBD storage
Create an RBD storage pool
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: osd
  replicated:
    size: 3
Since there is only one node but three OSDs, osd is used as the failure domain instead of the default host; otherwise the three replicas could not be placed.
After the pool is created, run ceph osd pool ls in rook-ceph-tools; you should see the following pool:
replicapool
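You can also confirm the pool's replication factor from the toolbox (standard Ceph commands):

# should report size: 3
ceph osd pool get replicapool size
# shows per-pool capacity and usage
ceph df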
Create a StorageClass backed by RBD
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
Use a StatefulSet to test mounting RBD storage through the StorageClass
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: storageclass-rbd-test
  namespace: default
  labels:
    app: storageclass-rbd-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-rbd-test
  template:
    metadata:
      labels:
        app: storageclass-rbd-test
    spec:
      restartPolicy: Always
      containers:
      - name: storageclass-rbd-test
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: data
          mountPath: /data
        image: 'centos:7'
        args:
        - 'sh'
        - '-c'
        - 'sleep 3600'
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: rook-ceph-block
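Once the pods are running, a quick way to verify the volumes (pod names follow the StatefulSet naming convention):

# both data-storageclass-rbd-test-* PVCs should be Bound
kubectl get pvc -n default
# write and read a file on the mounted RBD volume
kubectl exec -n default storageclass-rbd-test-0 -- sh -c 'echo hello > /data/test && cat /data/test'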
Use CephFS storage

Create an MDS service and a CephFS file system
---
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPools:
  - failureDomain: osd
    replicated:
      size: 3
  preservePoolsOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
    annotations:
    resources:
After the file system is created, run ceph osd pool ls in rook-ceph-tools; you should see the following pools:
myfs-metadata
myfs-data0
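The file system itself can be checked from the toolbox:

# expect: name: myfs, metadata pool: myfs-metadata, data pools: [myfs-data0]
ceph fs ls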
Create a StorageClass backed by CephFS
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: myfs
  pool: myfs-data0
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
mountOptions:
Use a Deployment to test mounting CephFS shared storage through the StorageClass
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs
  volumeMode: Filesystem
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: storageclass-cephfs-test
  namespace: default
  labels:
    app: storageclass-cephfs-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: storageclass-cephfs-test
  template:
    metadata:
      labels:
        app: storageclass-cephfs-test
    spec:
      restartPolicy: Always
      containers:
      - name: storageclass-cephfs-test
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: data
          mountPath: /data
        image: 'centos:7'
        args:
        - 'sh'
        - '-c'
        - 'sleep 3600'
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-storageclass-cephfs-test
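Because CephFS supports ReadWriteMany, both replicas mount the same volume; a file written from one pod should be visible in the other (the pod names below are placeholders to fill in):

kubectl get pod -n default -l app=storageclass-cephfs-test
# write from the first pod, read from the second
kubectl exec -n default <first-pod> -- sh -c 'echo shared > /data/test'
kubectl exec -n default <second-pod> -- cat /data/test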
Use S3 storage

Create an object storage gateway
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: osd
    replicated:
      size: 3
  dataPool:
    failureDomain: osd
    replicated:
      size: 3
  preservePoolsOnDelete: false
  gateway:
    type: s3
    sslCertificateRef:
    port: 80
    securePort:
    instances: 1
    placement:
    annotations:
    resources:
After the object store is created, run ceph osd pool ls in rook-ceph-tools; you should see the following pools:
.rgw.root
my-store.rgw.buckets.data
my-store.rgw.buckets.index
my-store.rgw.buckets.non-ec
my-store.rgw.control
my-store.rgw.log
my-store.rgw.meta
Add an ingress route for the ceph-rgw service
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-rgw
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - rook-ceph-rgw.minikube.local
    secretName: rook-ceph-rgw.minikube.local
  rules:
  - host: rook-ceph-rgw.minikube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-rgw-my-store
          servicePort: http
Add the domain name rook-ceph-rgw.minikube.local to /etc/hosts, then access it in a browser:
https://rook-ceph-rgw.minikube.local/
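A quick check that the gateway answers (the -k flag skips verification of the self-signed certificate):

# an anonymous request should return an XML ListAllMyBucketsResult document
curl -k https://rook-ceph-rgw.minikube.local/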
Use S3 users
Add an object storage user
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: my-user
  namespace: rook-ceph
spec:
  store: my-store
  displayName: "my display name"
Creating an object storage user generates a secret named after the convention {{ .metadata.namespace }}-object-user-{{ .spec.store }}-{{ .metadata.name }}, in which the AccessKey and SecretKey of the S3 user are saved.
Get AccessKey
kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.AccessKey}' | base64 -d
Get SecretKey
kubectl get secret rook-ceph-object-user-my-store-my-user -n rook-ceph -o jsonpath='{.data.SecretKey}' | base64 -d
With the gateway endpoint and the AccessKey/SecretKey obtained above, any S3 client can connect as this user.
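For example, with the AWS CLI (a sketch; substitute the keys you just extracted, and note that --no-verify-ssl is needed because of the self-signed certificate):

export AWS_ACCESS_KEY_ID=<AccessKey>
export AWS_SECRET_ACCESS_KEY=<SecretKey>
# create a bucket, then list buckets through the rgw endpoint
aws --endpoint-url https://rook-ceph-rgw.minikube.local --no-verify-ssl s3 mb s3://test-bucket
aws --endpoint-url https://rook-ceph-rgw.minikube.local --no-verify-ssl s3 ls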
Use S3 bucket
Create a StorageClass backed by S3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-delete-bucket
provisioner: ceph.rook.io/bucket
reclaimPolicy: Delete
parameters:
  objectStoreName: my-store
  objectStoreNamespace: rook-ceph
  region: default
> Currently PVCs cannot be created from S3 storage; this StorageClass can only be used to create buckets.
Create a bucket resource claim (ObjectBucketClaim) against the StorageClass
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ceph-delete-bucket
spec:
  generateBucketName: ceph-bkt
  storageClassName: rook-ceph-delete-bucket
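You can watch the claim become bound (obc is the short name for objectbucketclaim; the namespace matches where the claim was created):

kubectl get obc ceph-delete-bucket -n rook-ceph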
After the bucket is created, a secret with the same name as the ObjectBucketClaim is generated; it stores the AccessKey and SecretKey used to connect to the bucket.
Get AccessKey
kubectl get secret ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
Get SecretKey
kubectl get secret ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d
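Besides the secret, the claim also produces a configmap of the same name carrying the connection details; the generated bucket name, for instance, can be read like this (a sketch based on the lib-bucket-provisioner convention):

kubectl get configmap ceph-delete-bucket -n rook-ceph -o jsonpath='{.data.BUCKET_NAME}'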
> S3 users obtained this way have a quota limit set and can only use this one bucket.
That is all of "how to achieve the trinity of rook ceph". Thank you for reading! I hope this walkthrough gave you a working picture of Rook Ceph's block, file, and object storage.