How to Upgrade Stand-alone Ceph from Firefly to Hammer
This article explains in detail how to upgrade a stand-alone Ceph cluster from Firefly to Hammer. I found the procedure very practical, so I am sharing it for reference; I hope you get something out of it.
The Firefly version of the Ceph cluster is deployed on a single machine and includes one mon, three osds, and one mds. The operating system on the machine is ubuntu-14.04-server-amd64. The requirements for this upgrade are that no data may be lost and the service must not be interrupted.
The current Firefly cluster was deployed with mkcephfs, but newer Ceph releases have replaced mkcephfs with ceph-deploy, so mkcephfs no longer exists in the Hammer version. The upgrade can be performed either with the ceph-deploy tool or through the system package manager; here I use ceph-deploy.
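For reference, the package-management route simply points APT at the Hammer repository (exactly as in step 1 below) and then upgrades the packages in place; a minimal sketch of that alternative on Ubuntu would be:
# apt-get update
# apt-get install ceph ceph-mds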
The specific upgrade procedure is as follows:
1. Install the ceph-deploy tool.
1) Update the software sources to point at the Hammer release.
# wget -q -O- 'https://git.ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
# apt-add-repository 'deb http://download.ceph.com/debian-hammer/ trusty main'
# echo deb https://download.ceph.com/debian-hammer/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
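After changing the APT sources, refresh the package index so that the Hammer packages become visible to the installer:
# apt-get update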
2) Update ceph-deploy.
# pip install -U ceph-deploy
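You can confirm the tool was updated via its standard version flag:
# ceph-deploy --version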
2. Update the monitor node.
1) Update the ceph version on all monitor nodes in the ceph cluster.
Since my cluster is deployed on a single device, this update upgrades the entire Ceph cluster at once.
# ceph-deploy install --release hammer ceph0
2) Restart each monitor node.
# /etc/init.d/ceph restart mon
3) Check the status of the monitor node.
# ceph mon stat
e1: 1 mons at {a=192.168.250.58:6789/0}, election epoch 1, quorum 0 a
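To verify that the restarted daemon is actually running the Hammer binary, you can also query its admin socket; a quick check, assuming the monitor id is a (as the keyring path at the end of this article suggests):
# ceph daemon mon.a version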
3. Update the OSD nodes.
1) Update the ceph version on all osd nodes in the ceph cluster.
# ceph-deploy install --release hammer ceph0
2) Restart each osd node.
# /etc/init.d/ceph restart osd
3) Check the status of the osd nodes.
# ceph osd stat
osdmap e191: 3 osds: 3 up, 3 in
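The same admin-socket check works for the OSD daemons (osd ids 0 through 2 assumed here):
# ceph daemon osd.0 version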
4. Update the MDS node.
1) Update the ceph version on all mds nodes in the ceph cluster.
# ceph-deploy install --release hammer ceph0
2) Restart each mds node.
# /etc/init.d/ceph restart mds
3) Check the status of the mds node.
# ceph mds stat
e27: 1/1/1 up {0=0=up:active}
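And likewise for the metadata server (assuming the mds id is 0, as the stat output suggests):
# ceph daemon mds.0 version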
5. Check the version number of the current ceph cluster.
# ceph --version
ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)
You can see here that ceph has been successfully upgraded to the Hammer version.
6. Check the running status of the current ceph cluster.
# ceph -s
    cluster e4251f73-2fe9-4dfc-947f-962843dc6ad9
     health HEALTH_WARN
            too many PGs per OSD (2760 > max 300)
     monmap e1: 1 mons at {a=192.168.250.58:6789/0}
            election epoch 1, quorum 0 a
     mdsmap e27: 1/1/1 up {0=0=up:active}
     osdmap e190: 3 osds: 3 up, 3 in
      pgmap v450486: 2760 pgs, 21 pools, 27263 MB data, 18280 objects
            85251 MB used, 1589 GB / 1672 GB avail
                2760 active+clean
At this point, you can see that the current state of Ceph is HEALTH_WARN. The problem is that, by default, the cluster warns when any OSD carries more than 300 PGs, while there are as many as 2760 PGs on the current system (with the pools replicated across only 3 OSDs, each OSD holds a copy of essentially every PG). Note that Firefly did not report this HEALTH_WARN; it only appears after upgrading to Hammer.
My solution is to raise the warning threshold for the maximum number of PGs per OSD in the ceph configuration file. Add mon pg warn max per osd = 4096 under the [mon] section of ceph.conf, save the file, and restart the ceph monitor node; ceph -s will then show the cluster as healthy.
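Concretely, the relevant fragment of ceph.conf is shown below; 4096 is simply a value comfortably above the 2760 PGs actually present, and any threshold larger than that would silence the warning just as well.
[mon]
mon pg warn max per osd = 4096
After saving the file, restart the monitor so the new threshold takes effect:
# /etc/init.d/ceph restart mon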
# ceph -s
    cluster e4251f73-2fe9-4dfc-947f-962843dc6ad9
     health HEALTH_OK
     monmap e1: 1 mons at {a=192.168.250.58:6789/0}
            election epoch 1, quorum 0 a
     mdsmap e27: 1/1/1 up {0=0=up:active}
     osdmap e191: 3 osds: 3 up, 3 in
      pgmap v450550: 2760 pgs, 21 pools, 27263 MB data, 18280 objects
            85245 MB used, 1589 GB / 1672 GB avail
                2760 active+clean
The problems to pay attention to when upgrading a ceph cluster from Firefly to Hammer are as follows:
1. For monitor access, the capability caps mon = "allow *" must be present in /var/lib/ceph/mon/ceph-a/keyring; an example keyring is shown after this list.
2. The cluster must use the default data path, that is, /var/lib/ceph.
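For reference, a monitor keyring containing that capability looks like the sketch below; the key value is elided here and should be whatever is already present in your keyring:
[mon.]
key = <existing key>
caps mon = "allow *"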
This is the end of the article on "how to upgrade stand-alone Ceph from Firefly to Hammer". I hope the above content was helpful and that you learned something from it. If you think the article is good, please share it so more people can see it.