Symptom: the cluster reports scrub errors and two inconsistent placement groups.
[root@node141 ~] # ceph health detail
HEALTH_ERR 2 scrub errors; Possible data damage: 2 pgs inconsistent
OSD_SCRUB_ERRORS 2 scrub errors
PG_DAMAGED Possible data damage: 2 pgs inconsistent
pg 3.3e is active+clean+inconsistent, acting [11,17,4]
pg 3.42 is active+clean+inconsistent, acting [17,6,0]
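Before repairing anything, it can help to see exactly what is inconsistent. A minimal inspection sketch, assuming the rados CLI is available on a node with the admin keyring and substituting your pool name for the placeholder:
[root@node141 ~] # rados list-inconsistent-pg <pool-name>   # list the inconsistent PGs in a pool
[root@node141 ~] # rados list-inconsistent-obj 3.3e --format=json-pretty   # show which objects and shards disagree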
The fix follows the procedure from the official Ceph post on manually repairing objects:
https://ceph.com/geen-categorie/ceph-manually-repair-object/
The steps are as follows:
(1) Find the inconsistent PGs, identify the OSDs in their acting sets, and do the repair work on the hosts that own those OSDs.
[root@node140 /] # ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 8.71826 root default
-2 3.26935 host node140
0 hdd 0.54489 osd.0 up 1.00000 1.00000
1 hdd 0.54489 osd.1 up 1.00000 1.00000
2 hdd 0.54489 osd.2 up 1.00000 1.00000
3 hdd 0.54489 osd.3 up 1.00000 1.00000
4 hdd 0.54489 osd.4 up 1.00000 1.00000
5 hdd 0.54489 osd.5 up 1.00000 1.00000
-3 3.26935 host node141
12 hdd 0.54489 osd.12 up 1.00000 1.00000
13 hdd 0.54489 osd.13 up 1.00000 1.00000
14 hdd 0.54489 osd.14 up 1.00000 1.00000
15 hdd 0.54489 osd.15 down 1.00000 1.00000
16 hdd 0.54489 osd.16 up 1.00000 1.00000
17 hdd 0.54489 osd.17 up 1.00000 1.00000
-4 2.17957 host node142
6 hdd 0.54489 osd.6 up 1.00000 1.00000
9 hdd 0.54489 osd.9 up 1.00000 1.00000
10 hdd 0.54489 osd.10 up 1.00000 1.00000
11 hdd 0.54489 osd.11 up 1.00000 1.00000
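Reading the whole tree is not strictly necessary; the acting set of a single PG can also be queried directly with the standard ceph CLI, as a quick cross-check of the health output above:
[root@node140 /] # ceph pg map 3.3e    # prints the up and acting OSD sets, [11,17,4] here
[root@node140 /] # ceph pg map 3.42    # should show [17,6,0]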
# This command also shows which host an OSD lives on:
[root@node140 /] # ceph osd find 11
{
    "osd": 11,
    "addrs": {
        "addrvec": [
            {
                "type": "v2",
                "addr": "10.10.202.142:6820",
                "nonce": 24423
            },
            {
                "type": "v1",
                "addr": "10.10.202.142:6821",
                "nonce": 24423
            }
        ]
    },
    "osd_fsid": "1e977e5f-f514-4eef-bd88-c3632d03b2c3",
    "host": "node142",
    "crush_location": {
        "host": "node142",
        "root": "default"
    }
}
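If jq happens to be installed (an assumption; it is not mentioned in the original steps), the host can be pulled straight out of that JSON:
[root@node140 /] # ceph osd find 11 --format json | jq -r '.host'
node142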
(2) The problem OSDs are 11 and 17. Switch to the host that owns each one and stop the OSD daemon.
[root@node142 ~] # systemctl stop ceph-osd@11
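Optionally, setting the noout flag before stopping the daemon keeps Ceph from rebalancing data while the OSD is briefly down. This is common maintenance practice rather than part of the original write-up:
[root@node142 ~] # ceph osd set noout    # suppress rebalancing during the maintenance window
[root@node142 ~] # systemctl status ceph-osd@11    # confirm the daemon has actually stopped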
(3) Flush the journal to disk (FileStore OSDs only; BlueStore has no separate journal, so this step is skipped there).
[root@node142 ~] # ceph-osd -i 11 --flush-journal
(4) Start the OSD again.
[root@node142 ~] # systemctl start ceph-osd@11
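Before issuing the repair, it is worth confirming the OSD has rejoined the cluster, and unsetting noout if it was set earlier:
[root@node142 ~] # ceph osd tree | grep osd.11    # should report "up" again
[root@node142 ~] # ceph osd unset noout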
(5) Tell Ceph to repair the PG.
[root@node142 ~] # ceph pg repair 3.3e
# Repeat steps (2)-(5) for osd 17, which is the primary for pg 3.42.
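The same repair command applies to the second PG, and ceph -w lets you follow the repair messages in the cluster log while it runs:
[root@node141 ~] # ceph pg repair 3.42
[root@node141 ~] # ceph -w | grep -i repair    # watch the cluster log for repair progress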
(6) Check the cluster health again.
[root@node141 ~] # ceph health detail
HEALTH_OK
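As a final sanity check, a fresh deep scrub of the repaired PGs should come back clean; this is optional and uses the standard ceph CLI:
[root@node141 ~] # ceph pg deep-scrub 3.3e
[root@node141 ~] # ceph pg deep-scrub 3.42
[root@node141 ~] # ceph health detail    # should remain HEALTH_OK after the scrubs finish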