
Using Rook to Quickly Orchestrate Ceph on Kubernetes

2025-02-24 Update From: SLTechnology News&Howtos (Servers)


Shulou (Shulou.com) 06/02 Report --

Prerequisite: a running Kubernetes cluster (see the three-step Kubernetes cluster installation guide).

Install:

git clone https://github.com/rook/rook
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f operator.yaml

Check that the operator started successfully:

[root@dev-86-201 ~]# kubectl get pod -n rook-ceph-system
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-5z6p7                 1/1     Running   0          88m
rook-ceph-agent-6rj7l                 1/1     Running   0          88m
rook-ceph-agent-8qfpj                 1/1     Running   0          88m
rook-ceph-agent-xbhzh                 1/1     Running   0          88m
rook-ceph-operator-67f4b8f67d-tsnf2   1/1     Running   0          88m
rook-discover-5wghx                   1/1     Running   0          88m
rook-discover-lhwvf                   1/1     Running   0          88m
rook-discover-nl5m2                   1/1     Running   0          88m
rook-discover-qmbx7                   1/1     Running   0          88m

Then create a ceph cluster:

kubectl create -f cluster.yaml

View the ceph cluster:

[root@dev-86-201 ~]# kubectl get pod -n rook-ceph
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-mgr-a-8649f78d9b-jklbv   1/1     Running   0          64m
rook-ceph-mon-a-5d7fcfb6ff-2wq9l   1/1     Running   0          81m
rook-ceph-mon-b-7cfcd567d8-lkqff   1/1     Running   0          80m
rook-ceph-mon-d-65cd79df44-66rgz   1/1     Running   0          79m
rook-ceph-osd-0-56bd7545bd-5k9xk   1/1     Running   0          63m
rook-ceph-osd-1-77f56cd549-7rm4l   1/1     Running   0          63m
rook-ceph-osd-2-6cf58ddb6f-wkwp6   1/1     Running   0          63m
rook-ceph-osd-3-6f8b78c647-8xjzv   1/1     Running   0          63m

Parameter description:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
    image: ceph/ceph:v13.2.2-20181023
  dataDirHostPath: /var/lib/rook   # data directory on the host
  mon:
    count: 3
    allowMultiplePerNode: true
  dashboard:
    enabled: true
  storage:
    useAllNodes: true
    useAllDevices: false
    config:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"

Visit the Ceph dashboard:

[root@dev-86-201 ~]# kubectl get svc -n rook-ceph
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr             ClusterIP   10.98.183.33     <none>        9283/TCP         66m
rook-ceph-mgr-dashboard   NodePort    10.103.84.48     <none>        8443:31631/TCP   66m
rook-ceph-mon-a           ClusterIP   10.99.71.227     <none>        6790/TCP         83m
rook-ceph-mon-b           ClusterIP   10.110.245.119   <none>        6790/TCP         82m
rook-ceph-mon-d           ClusterIP   10.101.79.159    <none>        6790/TCP         81m

The rook-ceph-mgr-dashboard service has been changed to NodePort mode so it can be reached from outside the cluster.

Then visit https://10.1.86.201:31631.
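One way to expose the dashboard on a NodePort is a separate external Service rather than editing the generated one; Rook ships a similar manifest (dashboard-external-https.yaml) in the examples directory. A sketch, where the selector labels are assumptions based on the default mgr deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
spec:
  type: NodePort
  ports:
  - name: dashboard
    port: 8443
    targetPort: 8443
    protocol: TCP
  selector:
    app: rook-ceph-mgr   # assumed label; check with `kubectl -n rook-ceph get pod --show-labels`
```

This leaves the original ClusterIP service untouched, which keeps in-cluster consumers working.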

The management account is admin; obtain the login password with:

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode

Create a pool:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool   # the operator watches for this object and creates the pool; after apply, the corresponding pool is visible in Ceph
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block   # storage class; reference it in a PVC to dynamically provision a PV
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exists
  clusterNamespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
  fstype: xfs
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle"
# as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Retain

Create a PVC:
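The password pipeline above just extracts the base64-encoded `password:` field from the secret and decodes it. A standalone illustration of the decode step, using a made-up encoded value rather than one from a real cluster:

```shell
# Hypothetical base64-encoded password, as it would appear in the secret YAML
encoded="c2VjcmV0cGFzcw=="
# base64 --decode reverses the encoding Kubernetes applies to secret data fields
echo "$encoded" | base64 --decode
```

The same decoding works for any field of any secret retrieved with `kubectl get secret -o yaml`.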

Under the cluster/examples/kubernetes directory, the official examples include a WordPress demo that can be run directly:

kubectl create -f mysql.yaml
kubectl create -f wordpress.yaml

View the PV and PVC:

[root@dev-86-201 ~]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m
wp-pv-claim      Bound    pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            rook-ceph-block   144m
[root@dev-86-201 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-a910f8c2-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   rook-ceph-block            145m
pvc-af2dfbd4-1ee9-11e9-84fc-becbfc415cde   20Gi       RWO            Retain           Bound    default/wp-pv-claim      rook-ceph-block            145m

Look at the yaml file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block   # reference the storage class created above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi   # request a 20Gi volume

And in the Deployment's pod spec, the PVC is mounted into the container:

      volumeMounts:
      - name: mysql-persistent-storage
        mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim   # reference the PVC defined above

Isn't that simple?

To access WordPress from outside the cluster, change its Service to NodePort (the official example uses LoadBalancer):
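In `kubectl edit`, the change amounts to a single field in the Service spec. A sketch of the relevant portion only (the surrounding fields stay as generated):

```yaml
# wordpress Service, relevant portion only
spec:
  type: NodePort   # was LoadBalancer in the official example
  ports:
  - port: 80
```

NodePort is the simplest option on bare metal, where no cloud load balancer is available to satisfy a LoadBalancer Service.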

kubectl edit svc wordpress

[root@dev-86-201 kubernetes]# kubectl get svc
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
wordpress   NodePort   10.109.30.99   <none>        80:30130/TCP   148m

Summary

Distributed storage plays a very important role in container clusters. A core idea of running a container cluster is to treat the cluster as a whole: if you still care about individual hosts, for example by scheduling workloads to a specific node or mounting a node-local directory, you can never fully exploit the power of the cloud. Once compute and storage are separated, workloads can drift freely between nodes, which is a great boon for cluster maintenance.

For example, if a machine is out of warranty and needs to be decommissioned, a cloud-native architecture only needs to cordon and drain the node before taking it off the rack: nothing is a single point of failure, and both stateless and stateful workloads can recover automatically.

The biggest remaining challenge is the performance of distributed storage. In scenarios where performance requirements are not stringent, I highly recommend this compute-storage-separated architecture.
