This article explains how to restore a ceph osd that was mistakenly deleted in a docker-based cluster. It first simulates deleting an osd and then walks through recovering it, so you can use the steps as a practical reference.
Simulated deletion of an osd
First, record the current osd status:
[root@k8s-node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05516 root default
-2 0.01839     host k8s-node1
 0 0.01839         osd.0           up  1.00000          1.00000
-3 0.01839     host k8s-node2
 1 0.01839         osd.1           up  1.00000          1.00000
-4 0.01839     host k8s-node3
 2 0.01839         osd.2           up  1.00000          1.00000
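Besides the tree, it can also be worth keeping a fuller snapshot of the cluster state before touching anything. This is an optional precaution that is not part of the original steps; the output file names are only examples:

ceph osd dump > /root/osd-dump-before.txt     # full osd map, including fsids and weights
ceph -s > /root/cluster-status-before.txt     # overall cluster health and pg state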
Log in to k8s-node3 and simulate mistakenly deleting its osd.
First, mark osd.2 on k8s-node3 out of the ceph cluster:
[root@k8s-node3 ceph]# ceph osd out osd.2
marked out osd.2.
Stop the service:
[root@k8s-node3 ceph]# systemctl stop ceph-osd@2
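If you want to confirm that the daemon really stopped and that the cluster now reports the osd as down, a quick check such as the following works (an extra verification step, not in the original article):

systemctl status ceph-osd@2       # the unit should be inactive (dead)
ceph osd tree | grep osd.2        # after the grace period the osd shows as down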
Next, remove osd.2 from the CRUSH map:
[root@k8s-node3 ceph]# ceph osd crush remove osd.2
removed item id 2 name 'osd.2' from crush map
Delete the authentication key of osd.2:
[root@k8s-node3 ceph]# ceph auth del osd.2
updated
Finally, delete osd.2 completely from the cluster:
[root@k8s-node3 ceph]# ceph osd rm osd.2
removed osd.2
Checking, we find that osd.2 is still there:
[root@k8s-node3 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03677 root default
-2 0.01839     host k8s-node1
 0 0.01839         osd.0           up  1.00000          1.00000
-3 0.01839     host k8s-node2
 1 0.01839         osd.1           up  1.00000          1.00000
-4       0     host k8s-node3
Restart k8s-node3's mon service:
[root@k8s-node3 ceph]# systemctl restart ceph-mon@k8s-node3
Check again, and osd.2 is gone:
[root@k8s-node3 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03677 root default
-2 0.01839     host k8s-node1
 0 0.01839         osd.0           up  1.00000          1.00000
-3 0.01839     host k8s-node2
 1 0.01839         osd.1           up  1.00000          1.00000
-4       0     host k8s-node3
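To double-check that the osd has also disappeared from the osd map itself, you can grep the dump. This is an optional verification step, not part of the original procedure:

ceph osd dump | grep "^osd"       # osd.2 should no longer be listed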
To see which ceph services exist on this CentOS node:
[root@k8s-node3 ceph]# systemctl list-unit-files | grep ceph
ceph-disk@.service     static
ceph-mds@.service      disabled
ceph-mgr@.service      disabled
ceph-mon@.service      enabled
ceph-osd@.service      enabled
ceph-radosgw@.service  disabled
ceph-mds.target        enabled
ceph-mgr.target        enabled
ceph-mon.target        enabled
ceph-osd.target        enabled
ceph-radosgw.target    enabled
ceph.target            enabled
Make sure the osd service on k8s-node3 is stopped:
[root@k8s-node3 ceph]# systemctl stop ceph-osd@2
Although we mistakenly deleted the osd on the third node, its data is still on disk:
[root@k8s-node3 ceph]# ll /data/osd0/
total 5242932
-rw-r--r--.   1 ceph ceph        193 Oct 28 21:14 activate.monmap
-rw-r--r--.   1 ceph ceph          3 Oct 28 21:14 active
-rw-r--r--.   1 ceph ceph         37 Oct 28 21:12 ceph_fsid
drwxr-xr-x. 132 ceph ceph       4096 Oct 28 21:14 current
-rw-r--r--.   1 ceph ceph         37 Oct 28 21:12 fsid
-rw-r--r--.   1 ceph ceph 5368709120 Oct 28 22:01 journal
-rw-------.   1 ceph ceph         56 Oct 28 21:14 keyring
-rw-r--r--.   1 ceph ceph         21 Oct 28 21:12 magic
-rw-r--r--.   1 ceph ceph          6 Oct 28 21:14 ready
-rw-r--r--.   1 ceph ceph          4 Oct 28 21:14 store_version
-rw-r--r--.   1 ceph ceph         53 Oct 28 21:14 superblock
-rw-r--r--.   1 ceph ceph          0 Oct 28 21:14 systemd
-rw-r--r--.   1 ceph ceph         10 Oct 28 21:14 type
-rw-r--r--.   1 ceph ceph          2 Oct 28 21:13 whoami
Restore the mistakenly deleted osd
Go to the directory where the osd's data is mounted, for example:
[root@k8s-node3 ceph]# cd /data/osd0/
On the node whose osd was deleted, recreate the osd, reusing the fsid recorded in the data directory:
[root@k8s-node3 osd0]# cat fsid
29f7e64d-62ad-4e5e-96c1-d41f2cb1d3f2
[root@k8s-node3 osd0]# ceph osd create 29f7e64d-62ad-4e5e-96c1-d41f2cb1d3f2
2
Getting 2 back is expected, since that is the id of the osd we removed.
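As an extra sanity check (not in the original article), the id returned by ceph osd create should match the id recorded in the data directory's whoami file; /data/osd0 is the data path used throughout this example:

cat /data/osd0/whoami             # should print 2, the id we just got back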
Now re-add the osd's authorization, using the keyring still present in the data directory:
[root@k8s-node3 osd0]# ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /data/osd0/keyring
added key for osd.2
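To verify that the key and capabilities were registered, you can query the auth database (an optional check using the standard ceph auth get command):

ceph auth get osd.2               # should show the key plus the osd/mon caps added above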
Check the status:
[root@k8s-node3 osd0]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03677 root default
-2 0.01839     host k8s-node1
 0 0.01839         osd.0           up  1.00000          1.00000
-3 0.01839     host k8s-node2
 1 0.01839         osd.1           up  1.00000          1.00000
-4       0     host k8s-node3
 2       0         osd.2         down        0          1.00000
Next, add osd.2 back into the CRUSH map:
[root@k8s-node3 osd0]# ceph osd crush add 2 0.01839 host=k8s-node3
add item id 2 name 'osd.2' weight 0.01839 at location {host=k8s-node3} to crush map
Note: 2 above is the id of osd.2, and 0.01839 is its weight, taken from the earlier ceph osd tree output.
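If you are not sure which weight to use, you can also compare against the other osds. The command below is an optional helper; on very old releases you may have to fall back to reading the weights from ceph osd tree:

ceph osd df                       # shows each osd's crush weight, utilization and size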
Take another look at the status:
[root@k8s-node3 osd0]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05516 root default
-2 0.01839     host k8s-node1
 0 0.01839         osd.0           up  1.00000          1.00000
-3 0.01839     host k8s-node2
 1 0.01839         osd.1           up  1.00000          1.00000
-4 0.01839     host k8s-node3
 2 0.01839         osd.2         down        0          1.00000
Mark the osd back in:
[root@k8s-node3 osd0]# ceph osd in osd.2
marked in osd.2.
Then start the osd service:
[root@k8s-node3 osd0]# systemctl start ceph-osd@2
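If the osd held data, you can follow the backfill/recovery progress while it rejoins. These are standard monitoring commands, not specific to this article:

ceph -s                           # one-shot status, including recovery/backfill percentages
ceph -w                           # continuously watch cluster events (Ctrl-C to stop)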
Check the status again: osd.2 is back up. If the osd held data, you will also see the data recovery progress:
[root@k8s-node3 osd0]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05516 root default
-2 0.01839     host k8s-node1
 0 0.01839         osd.0           up  1.00000          1.00000
-3 0.01839     host k8s-node2
 1 0.01839         osd.1           up  1.00000          1.00000
-4 0.01839     host k8s-node3
 2 0.01839         osd.2           up  1.00000          1.00000
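For reference, the whole recovery can be condensed into a short sketch. Treat it as a template rather than a ready-made script: OSD_ID, OSD_DATA, WEIGHT and HOST are placeholder variables that must be adapted to your environment, and each command mirrors a step shown above.

OSD_ID=2                      # id of the osd that was removed
OSD_DATA=/data/osd0           # data directory that still holds the osd's files
WEIGHT=0.01839                # weight taken from the original ceph osd tree output
HOST=k8s-node3                # host bucket the osd belongs to

ceph osd create $(cat $OSD_DATA/fsid)        # should return $OSD_ID again
ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow rwx' -i $OSD_DATA/keyring
ceph osd crush add $OSD_ID $WEIGHT host=$HOST
ceph osd in osd.$OSD_ID
systemctl start ceph-osd@$OSD_ID

That completes the recovery of the mistakenly deleted osd.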