
What to do when ceph -s reports the cluster error "too many PGs per OSD"


In this article, the editor shares what to do when ceph -s reports the cluster error "too many PGs per OSD". Most readers may not be familiar with this issue, so the article is shared for your reference; I hope you learn a lot from it. Let's take a look!

Background

An error was reported in the cluster status, as follows:

# ceph -s
    cluster 1d64ac80-21be-430e-98a8-b4d8aeb18560
     health HEALTH_WARN
            too many PGs per OSD (... > max 300)
     monmap e1: 1 mons at {node1=109.105.115.67:6789/0}
            election epoch 4, quorum 0 node1
     osdmap e49: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v1256: 912 pgs, 23 pools, 4503 bytes data, 175 objects
            13636 MB used, 497 GB / 537 GB avail
                 912 active+clean

Analysis

The root cause is that the cluster has relatively few OSDs. During my testing, a large number of pools were created (for the RGW gateway, OpenStack integration, and so on), and each pool consumes some PGs. Ceph has a default per-OSD warning threshold, apparently 300 PGs per OSD. This default can be adjusted, but setting it too high or too low has some impact on cluster performance. Since this is a test environment, it is enough to make the warning go away. Query the current per-OSD PG warning threshold:

$ ceph --show-config | grep mon_pg_warn_max_per_osd
mon_pg_warn_max_per_osd = 300
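To see where those 912 PGs come from and how many land on each OSD, it can also help to list the per-pool pg_num values and the per-OSD PG counts. A minimal sketch, assuming a Jewel-era cluster; these commands are not from the original article:

$ ceph osd dump | grep pg_num    # per-pool pg_num / pgp_num
$ ceph osd df                    # the PGS column shows how many PGs each OSD carries

Roughly, PGs per OSD is the sum over all pools of pg_num times the replica size, divided by the number of OSDs, so 912 PGs across 23 pools on only 2 OSDs easily exceeds the default threshold of 300.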

Solution

Increase the warning threshold for the cluster by adding this option to the ceph.conf configuration file (/etc/ceph/ceph.conf) on the mon node, as follows:

$ vi /etc/ceph/ceph.conf
[global]
...
mon_pg_warn_max_per_osd = 1000
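As a side note that is not in the original article, the same threshold can usually also be injected into a running monitor without editing ceph.conf, for example when you only want to test the effect; a sketch, assuming the admin keyring is available on the node:

$ ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 1000'

Values set this way do not survive a monitor restart, so the ceph.conf change above is still what makes the setting persistent.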

Restart the monitor service:

$ systemctl restart ceph-mon.target
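Before re-checking the overall health, it may be worth confirming that the new threshold is actually in effect. A small check, reusing the query from the analysis step; the admin-socket form assumes the monitor is named mon.node1, as in the status output above:

$ ceph --show-config | grep mon_pg_warn_max_per_osd
$ ceph daemon mon.node1 config get mon_pg_warn_max_per_osd    # asks the running monitor via its admin socket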

Check the Ceph cluster status again:

$ ceph -s

    cluster 1d64ac80-21be-430e-98a8-b4d8aeb18560
     health HEALTH_OK
     monmap e1: 1 mons at {node1=109.105.115.67:6789/0}
            election epoch 6, quorum 0 node1
     osdmap e49: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v1273: 912 pgs, 23 pools, 4503 bytes data, 175 objects
            13636 MB used, 497 GB / 537 GB avail
                 912 active+clean

That is all the content of the article "What to do when ceph -s reports the cluster error 'too many PGs per OSD'". Thank you for reading! I believe you now have some understanding of the topic, and I hope the content shared here helps you. If you want to learn more, welcome to follow the industry information channel!
