
What are the daily operation and maintenance operations of a Ceph cluster in Docker


This article covers the daily operation and maintenance of a Ceph cluster running in Docker. The editor finds it quite practical and shares it here as a reference; follow along to have a look.

View all Ceph daemons

[root@k8s-node1 ceph]# systemctl list-unit-files | grep ceph
ceph-disk@.service                        static
ceph-mds@.service                         disabled
ceph-mgr@.service                         disabled
ceph-mon@.service                         enabled
ceph-osd@.service                         enabled
ceph-radosgw@.service                     disabled
ceph-mds.target                           enabled
ceph-mgr.target                           enabled
ceph-mon.target                           enabled
ceph-osd.target                           enabled
ceph-radosgw.target                       enabled
ceph.target                               enabled
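As a quick aside, the running state of these units on a single node can also be checked with systemctl directly. This is only a sketch; the instance names (ceph-osd@0, ceph-mon@k8s-node1) are examples and must be replaced with the OSD IDs and hostnames of your own cluster.

# Show only the ceph units currently loaded on this node
systemctl list-units 'ceph*'

# Check one specific daemon instance (osd.0 and the k8s-node1 mon are example names)
systemctl status ceph-osd@0
systemctl is-active ceph-mon@k8s-node1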

Start all daemons of a given type on a Ceph node:

systemctl start ceph-osd.target
systemctl start ceph-mon.target
systemctl start ceph-mds.target

Start a specific daemon instance:

systemctl start ceph-osd@{id}
systemctl start ceph-mon@{hostname}
systemctl start ceph-mds@{hostname}

Check the mon monitoring status:

[root@k8s-node1 ceph]# ceph -s
    cluster 2e6519d9-b733-446f-8a14-8622796f83ef
     health HEALTH_OK
     monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
            election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
        mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v13640: 64 pgs, 1 pools, 0 bytes data, 0 objects
            35913 MB used, 21812 MB / 57726 MB avail
                  64 active+clean

The same information is available from the interactive ceph shell:

[root@k8s-node1 ceph]# ceph
ceph> status
    cluster 2e6519d9-b733-446f-8a14-8622796f83ef
     health HEALTH_OK
     monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
            election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
        mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v13670: 64 pgs, 1 pools, 0 bytes data, 0 objects
            35915 MB used, 21810 MB / 57726 MB avail
                  64 active+clean
ceph> health
HEALTH_OK
ceph> mon_status
{"name": "k8s-node1", "rank": 0, "state": "leader", "election_epoch": 26, "quorum": [0, 1, 2], "features": {"required_con": "9025616074522624", "required_mon": ["kraken"], "quorum_con": "1152921504336314367", "quorum_mon": ["kraken"]}, "outside_quorum": [], "extra_probe_peers": ["172.16.22.202:6789/0", "172.16.22.203:6789/0"], "sync_provider": [], "monmap": {"epoch": 4, "fsid": "2e6519d9-b733-446f-8a14-8622796f83ef", "modified": "2018-10-28 21:30:09.197608", "created": "2018-10-28 09:49:11.509071", "features": {"persistent": ["kraken"], "optional": []}, "mons": [{"rank": 0, "name": "k8s-node1", "addr": "172.16.22.201:6789/0", "public_addr": "172.16.22.201:6789/0"}, {"rank": 1, "name": "k8s-node2", "addr": "172.16.22.202:6789/0", "public_addr": "172.16.22.202:6789/0"}, {"rank": 2, "name": "k8s-node3", "addr": "172.16.22.203:6789/0", "public_addr": "172.16.22.203:6789/0"}]}}
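For routine checks it can help to wrap the health command in a small script. The sketch below is illustrative only; it assumes the ceph CLI and an admin keyring are available on the node where it runs.

#!/bin/bash
# Minimal health-check sketch: print the status and exit non-zero when the cluster is not HEALTH_OK.
status=$(ceph health)

if [ "$status" != "HEALTH_OK" ]; then
    echo "Cluster health: $status"
    exit 1
fi
echo "Cluster health: OK"

A cron entry or monitoring agent can then alert on the non-zero exit code.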

Ceph logging

By default, Ceph logs are written to /var/log/ceph/ceph.log on each node. You can use ceph -w to follow the cluster log in real time.

Log in to any node that has reported an error and use the following command to read the log.

[root@k8s-node1 ceph]# ceph -w
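If ceph -w is too noisy, the on-disk log can be filtered directly. The path is the default mentioned above, and the WRN/ERR strings are the usual severity markers in the cluster log; adjust both to your environment.

# Follow the cluster log on the local node
tail -f /var/log/ceph/ceph.log

# Show the most recent warnings and errors only
grep -E 'WRN|ERR' /var/log/ceph/ceph.log | tail -n 50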

The Ceph monitors also continuously check the cluster state, and when a check fails they write the corresponding information to the cluster log.

[root@k8s-node1 ceph]# ceph mon stat
e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}, election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
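To dig further when a monitor check fails, the following standard subcommands are usually the next step; the json-pretty format flag is optional and only affects readability.

# Show which monitors are in quorum and which one is the leader
ceph quorum_status --format json-pretty

# Dump the current monitor map (addresses and ranks)
ceph mon dump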

Check OSD status

[root@k8s-node1 ceph]# ceph osd stat
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
[root@k8s-node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05516 root default
-2 0.01839     host k8s-node1
 0 0.01839         osd.0           up  1.00000          1.00000
-3 0.01839     host k8s-node2
 1 0.01839         osd.1           up  1.00000          1.00000
-4 0.01839     host k8s-node3
 2 0.01839         osd.2           up  1.00000          1.00000
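If an OSD is reported as down in the tree, a common first pass looks like the sketch below; OSD ID 1 is only an example, and the restart has to be run on the host that carries that OSD.

# Locate the host and CRUSH position of an OSD (example ID: 1)
ceph osd find 1

# On that host: inspect and, if needed, restart the daemon
systemctl status ceph-osd@1
systemctl restart ceph-osd@1

# Per-OSD utilisation and PG counts across the cluster
ceph osd df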

Check the size and availability of pools

[root@k8s-node1 ceph]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    57726M     21811M     35914M       62.21
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0      0        0         5817M         0
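For a more detailed per-pool view, the following commands complement ceph df; the pool name rbd matches the output above, so substitute your own pool names as needed.

# Per-pool object counts and read/write statistics
rados df

# More detailed space accounting than plain ceph df
ceph df detail

# Replication and placement-group settings of the rbd pool
ceph osd pool get rbd size
ceph osd pool get rbd pg_num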

Thank you for reading! This concludes the article on the daily operation and maintenance of a Ceph cluster in Docker. I hope the content above is of some help and lets you learn a bit more; if you found the article useful, feel free to share it so more people can see it.


