2025-04-01 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
Rook Deployment

Rook is an open-source cloud-native storage orchestrator that provides the platform, framework, and support for a variety of storage solutions to integrate natively with cloud-native environments. Rook achieves this by automating deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. Rook uses the facilities provided by the underlying cloud-native container management, scheduling, and orchestration platform to implement its own functions.
1. Install Rook Operator using Helm
Ceph Operator Helm Chart
# helm repo add rook-release https://charts.rook.io/release
# helm install --namespace rook-ceph rook-release/rook-ceph --name rook-ceph
2. Add a disk (sdb) to each of the three Node nodes in the k8s cluster
[root@k8s-node01 ~]# lsblk
NAME   MAJ:MIN   RM   SIZE   RO   TYPE   MOUNTPOINT
sda    8:0       0    200G   0    disk
sdb    8:16      0    50G    0    disk
3. Create Rook Cluster
# git clone https://github.com/rook/rook.git
# cd rook/cluster/examples/kubernetes/ceph/
# kubectl apply -f cluster.yaml
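cluster.yaml defines a CephCluster resource that the operator reconciles into a running Ceph cluster. The trimmed sketch below is illustrative only, not a copy of the shipped file; the image tag and field values are assumptions, chosen to match this walkthrough (three mons, and only the sdb disk added in step 2):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4      # assumed Ceph release for illustration
  dataDirHostPath: /var/lib/rook  # on-host state; see the deletion note below
  mon:
    count: 3                      # one mon per node in a 3-node cluster
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: sdb             # consume only the sdb disk from step 2
```

Restricting deviceFilter to sdb keeps the operator from claiming the system disk (sda) as an OSD device.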
If you want to delete a created Ceph cluster and create it again, you first need to delete the contents of the /var/lib/rook/ directory on each node.
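The cleanup note above can be sketched as a small helper; wipe_rook_state is a hypothetical function (not part of Rook), assuming the default dataDirHostPath of /var/lib/rook. Run it on every node that hosted Ceph daemons before re-creating the cluster:

```shell
# Hypothetical cleanup helper: clears Rook's on-host state directory
# (dataDirHostPath, /var/lib/rook by default) so a new cluster starts clean.
wipe_rook_state() {
  local dir="${1:-/var/lib/rook}"
  # ${dir:?} aborts if the path is empty, guarding against "rm -rf /*"
  rm -rf "${dir:?}"/*
}
```

If cluster.yaml sets a non-default dataDirHostPath, pass that path as the first argument instead.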
4. Deploy Ceph dashboard
Ceph dashboard is enabled by default in the cluster.yaml file, but the default Service type is ClusterIP, which is only reachable from inside the cluster. For external access, the dashboard must be exposed through a NodePort Service.
# kubectl get svc -n rook-ceph | grep mgr-dashboard
rook-ceph-mgr-dashboard                  ClusterIP   10.106.163.135   <none>   8443/TCP         6h4m
# kubectl apply -f dashboard-external-https.yaml
# kubectl get svc -n rook-ceph | grep mgr-dashboard
rook-ceph-mgr-dashboard                  ClusterIP   10.106.163.135   <none>   8443/TCP         6h4m
rook-ceph-mgr-dashboard-external-https   NodePort    10.98.230.103    <none>   8443:31656/TCP   23h
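The NodePort Service that dashboard-external-https.yaml creates looks roughly like the sketch below; the selector labels and port are assumptions based on how the mgr dashboard Service shown above is wired (HTTPS on 8443):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
spec:
  type: NodePort            # exposes the dashboard on a port of every node
  ports:
    - name: dashboard
      port: 8443            # the mgr dashboard's HTTPS port
      targetPort: 8443
      protocol: TCP
  selector:
    app: rook-ceph-mgr      # route to the active mgr pod
    rook_cluster: rook-ceph
```

Kubernetes then allocates a node port (31656 in the output above) through which the dashboard is reachable from outside the cluster.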
5. Log in to Ceph dashboard
# kubectl cluster-info | grep master
Kubernetes master is running at https://20.0.20.200:6443
Get login password
# kubectl get pod -n rook-ceph | grep mgr
rook-ceph-mgr-a-6b9cf7f6f6-fdhz5   1/1   Running   0   6h21m
# kubectl -n rook-ceph logs rook-ceph-mgr-a-6b9cf7f6f6-fdhz5 | grep password
debug 2019-09-20 ... 7f51ba8d2700  0 log_channel(audit) log [DBG] : from='client.24290 -' entity='client.admin' cmd=[{"username": "admin", "prefix": "dashboard set-login-credentials", "password": "5PGcUfGey2", "target": ["mgr", ""], "format": "json"}]: dispatch
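Fishing the password out of that audit log line by eye is error-prone. A small hypothetical helper (not part of Rook) that pulls the "password" field out of whatever log text is piped into it:

```shell
# Hypothetical helper: extract the generated dashboard password from mgr
# log output (the audit entry written by "dashboard set-login-credentials").
extract_dashboard_password() {
  sed -n 's/.*"password": *"\([^"]*\)".*/\1/p'
}
```

Usage would be along the lines of `kubectl -n rook-ceph logs <mgr-pod> | grep password | extract_dashboard_password`; the username is admin.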
6. Deploy Ceph toolbox
The Ceph cluster started by default has Ceph authentication enabled, so logging into the Pod of a Ceph component is not enough to query cluster status or run CLI commands. For that, you need to deploy the Ceph toolbox.
# kubectl apply -f toolbox.yaml
deployment.apps/rook-ceph-tools created
# kubectl -n rook-ceph get pods -o wide | grep ceph-tools
rook-ceph-tools-7cf4cc7568-m6wbb   1/1   Running   0   84s   20.0.20.206   k8s-node03
# kubectl -n rook-ceph exec -it rook-ceph-tools-7cf4cc7568-m6wbb bash
[root@k8s-node03 /]# ceph status
  cluster:
    id:     aa31c434-13cd-4858-9535-3eb6fa1a441c
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum a,b,c (age 6h)
    mgr: a (active, since 6h)
    osd: 3 osds: 3 up (since 6h), 3 in (since 6h)
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 144 GiB / 147 GiB avail
    pgs:
[root@k8s-node03 /]# ceph df
RAW STORAGE:
    CLASS   SIZE      AVAIL     USED      RAW USED   %RAW USED
    hdd     147 GiB   144 GiB   4.7 MiB   3.0 GiB    2.04
    TOTAL   147 GiB   144 GiB   4.7 MiB   3.0 GiB    2.04
POOLS:
    POOL   ID   STORED   OBJECTS   USED   %USED   MAX AVAIL
7. Create Pool and StorageClass
# kubectl apply -f flex/storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created
# kubectl get storageclass
NAME              PROVISIONER          AGE
rook-ceph-block   ceph.rook.io/block   50s
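flex/storageclass.yaml creates the two objects shown in the output above: a CephBlockPool and a StorageClass that provisions volumes from it. A rough sketch of what such a manifest contains; field values are assumptions consistent with the names above (replicapool, rook-ceph-block, the ceph.rook.io/block flex provisioner), not a verbatim copy of the file:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3                    # one replica per OSD/node in this cluster
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block # flex-volume provisioner used in this walkthrough
parameters:
  blockPool: replicapool        # carve RBD images out of the pool above
  clusterNamespace: rook-ceph
```

With replicated size 3 on a 147 GiB raw cluster, usable capacity is roughly a third of the raw total.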
8. Create a PVC
# cat cephfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block
# kubectl apply -f cephfs-pvc.yaml
persistentvolumeclaim/cephfs-pvc created
# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
cephfs-pvc   Bound    pvc-cc158fa0-30f9-420b-96c8-b03b474eb9f7   10Gi       RWO            rook-ceph-block   4s
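To confirm the bound volume is actually usable, a Pod can mount the claim. The Pod below is a hypothetical test workload (the name rbd-test-pod and the busybox image are illustrative assumptions, not from the Rook examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]  # keep the pod alive for inspection
      volumeMounts:
        - name: data
          mountPath: /data                  # RBD volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: cephfs-pvc               # the PVC created above
```

Once the Pod is Running, writing a file under /data and re-reading it after a Pod restart verifies that data persists on the Ceph block volume.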