How to add and remove monitors for Ceph

2025-04-07 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/01 Report --

This article explains how to add and delete monitors (mons) in a Ceph cluster. The steps are simple and easy to follow; please read along to learn the procedure.

1. Environmental preparation

1.1. Existing environment

There is already a three-node Ceph cluster with three mons; now we add another mon.

# ceph -s
    cluster 520d715f-adb5-4a6a-afb2-dcf586308166
     health HEALTH_OK
     monmap e3: 3 mons at {hadoop001=10.10.1.32:6789/0,hadoop002=10.10.1.33:6789/0,hadoop003=10.10.1.34:6789/0}
            election epoch 1850, quorum 0,1,2 hadoop001,hadoop002,hadoop003
     osdmap e127: 4 osds: 4 up, 4 in
            flags sortbitwise
      pgmap v22405: 64 pgs, 1 pools, 0 bytes data, 0 objects
            145 MB used, 334 GB / 334 GB avail
                  64 active+clean

1.2. System environment

To add a new mon node, the new node's system environment must be configured the same way as the existing nodes. The required items are listed briefly here without going into detail:

Hostname, /etc/hosts, SSH mutual trust, firewall, time synchronization, SELinux, maximum number of processes, file handles, maximum number of threads, and the Ceph yum repository.
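As an illustration only, the host list used throughout this article can be rendered into /etc/hosts entries with a small hypothetical helper (the IP-to-hostname mapping comes from the cluster's monmap shown above; the helper itself is made up for this sketch):

```python
# Hypothetical sketch: generate /etc/hosts lines for the nodes used in this
# article; append the output to /etc/hosts on every node.
nodes = {
    "hadoop001": "10.10.1.32",
    "hadoop002": "10.10.1.33",
    "hadoop003": "10.10.1.34",
    "hadoop004": "10.10.1.36",  # the new mon node
}

hosts_lines = [f"{ip} {name}" for name, ip in nodes.items()]
print("\n".join(hosts_lines))
```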

2. Using ceph-deploy

2.1. Add a mon with ceph-deploy

After the system environment is configured, install the Ceph software on the new mon node:

# yum install ceph

Then create the new mon directly with ceph-deploy on an existing mon node.

Note: public_network must be configured in the configuration file, otherwise the add may fail.
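For reference, a minimal ceph.conf [global] section with public_network set might look like the following; the fsid, hostnames, and addresses are taken from this article's cluster, while the subnet value is an assumption based on the node IPs:

```ini
[global]
fsid = 520d715f-adb5-4a6a-afb2-dcf586308166
mon_initial_members = hadoop001, hadoop002, hadoop003
mon_host = 10.10.1.32,10.10.1.33,10.10.1.34
; assumed subnet for the 10.10.1.x nodes in this article
public_network = 10.10.1.0/24
```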

# ceph-deploy mon create hadoop004
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy mon create hadoop004
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username        : None
[ceph_deploy.cli][INFO  ]  verbose         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf  : False
[ceph_deploy.cli][INFO  ]  subcommand      : create
[ceph_deploy.cli][INFO  ]  quiet           : False
[ceph_deploy.cli][INFO  ]  cd_conf         :
[ceph_deploy.cli][INFO  ]  cluster         : ceph
[ceph_deploy.cli][INFO  ]  mon             : ['hadoop004']
[ceph_deploy.cli][INFO  ]  func            :
[ceph_deploy.cli][INFO  ]  ceph_conf       : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.cli][INFO  ]  keyrings        : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts hadoop004
[ceph_deploy.mon][DEBUG ] detecting platform for host hadoop004 ...
[hadoop004][DEBUG ] connected to host: hadoop004
[hadoop004][DEBUG ] detect platform information from remote host
[hadoop004][DEBUG ] detect machine type
[hadoop004][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.2.1511 Core
[hadoop004][DEBUG ] determining if provided host has same hostname in remote
[hadoop004][DEBUG ] get remote short hostname
[hadoop004][DEBUG ] deploying mon to hadoop004
[hadoop004][DEBUG ] get remote short hostname
[hadoop004][DEBUG ] remote hostname: hadoop004
[hadoop004][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[hadoop004][DEBUG ] create the mon path if it does not exist
[hadoop004][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-hadoop004/done
[hadoop004][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-hadoop004/done
[hadoop004][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-hadoop004.mon.keyring
[hadoop004][DEBUG ] create the monitor keyring file
[hadoop004][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i hadoop004 --keyring /var/lib/ceph/tmp/ceph-hadoop004.mon.keyring --setuser 167 --setgroup 167
[hadoop004][DEBUG ] ceph-mon: renaming mon.noname-d 10.10.1.36:6789/0 to mon.hadoop004
[hadoop004][DEBUG ] ceph-mon: set fsid to 520d715f-adb5-4a6a-afb2-dcf586308166
[hadoop004][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-hadoop004 for mon.hadoop004
[hadoop004][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-hadoop004.mon.keyring
[hadoop004][DEBUG ] create a done file to avoid re-doing the mon deployment
[hadoop004][DEBUG ] create the init path if it does not exist
[hadoop004][INFO  ] Running command: systemctl enable ceph.target
[hadoop004][INFO  ] Running command: systemctl enable ceph-mon@hadoop004
[hadoop004][INFO  ] Running command: systemctl start ceph-mon@hadoop004
[hadoop004][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.hadoop004.asok mon_status
[hadoop004][DEBUG ] ********************************************************************************
[hadoop004][DEBUG ] status for monitor: mon.hadoop004
[hadoop004][DEBUG ] {
[hadoop004][DEBUG ]   "election_epoch": 0,
[hadoop004][DEBUG ]   "extra_probe_peers": [
[hadoop004][DEBUG ]     "10.10.1.32:6789/0",
[hadoop004][DEBUG ]     "10.10.1.33:6789/0",
[hadoop004][DEBUG ]     "10.10.1.34:6789/0"
[hadoop004][DEBUG ]   ],
[hadoop004][DEBUG ]   "monmap": {
[hadoop004][DEBUG ]     "created": "2016-12-19 09:59:12.970500",
[hadoop004][DEBUG ]     "epoch": 3,
[hadoop004][DEBUG ]     "fsid": "520d715f-adb5-4a6a-afb2-dcf586308166",
[hadoop004][DEBUG ]     "modified": "2017-08-02 17:22:40.247484",
[hadoop004][DEBUG ]     "mons": [
[hadoop004][DEBUG ]       {
[hadoop004][DEBUG ]         "addr": "10.10.1.32:6789/0",
[hadoop004][DEBUG ]         "name": "hadoop001",
[hadoop004][DEBUG ]         "rank": 0
[hadoop004][DEBUG ]       },
[hadoop004][DEBUG ]       {
[hadoop004][DEBUG ]         "addr": "10.10.1.33:6789/0",
[hadoop004][DEBUG ]         "name": "hadoop002",
[hadoop004][DEBUG ]         "rank": 1
[hadoop004][DEBUG ]       },
[hadoop004][DEBUG ]       {
[hadoop004][DEBUG ]         "addr": "10.10.1.34:6789/0",
[hadoop004][DEBUG ]         "name": "hadoop003",
[hadoop004][DEBUG ]         "rank": 2
[hadoop004][DEBUG ]       }
[hadoop004][DEBUG ]     ]
[hadoop004][DEBUG ]   },
[hadoop004][DEBUG ]   "name": "hadoop004",
[hadoop004][DEBUG ]   "outside_quorum": [],
[hadoop004][DEBUG ]   "quorum": [],
[hadoop004][DEBUG ]   "rank": -1,
[hadoop004][DEBUG ]   "state": "probing",
[hadoop004][DEBUG ]   "sync_provider": []
[hadoop004][DEBUG ] }
[hadoop004][DEBUG ] ********************************************************************************
[hadoop004][INFO  ] monitor: mon.hadoop004 is currently at the state of probing
[hadoop004][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.hadoop004.asok mon_status
[hadoop004][WARNIN] monitor hadoop004 does not exist in monmap

Check the status; the new monitor has been added successfully:

# ceph -s
    cluster 520d715f-adb5-4a6a-afb2-dcf586308166
     health HEALTH_OK
     monmap e4: 4 mons at {hadoop001=10.10.1.32:6789/0,hadoop002=10.10.1.33:6789/0,hadoop003=10.10.1.34:6789/0,hadoop004=10.10.1.36:6789/0}
            election epoch 1850, quorum 0,1,2,3 hadoop001,hadoop002,hadoop003,hadoop004
     osdmap e127: 4 osds: 4 up, 4 in
            flags sortbitwise
      pgmap v22405: 64 pgs, 1 pools, 0 bytes data, 0 objects
            145 MB used, 334 GB / 334 GB avail
                  64 active+clean

After adding the monitor, append the new mon node's hostname and IP address to the mon_initial_members and mon_host parameters of ceph.conf, respectively.
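The ceph.conf update can be scripted. The following is a minimal sketch, not the article's procedure: it uses Python's configparser to append the new mon to mon_initial_members and mon_host in an ini-style ceph.conf; the fsid, hostnames, and addresses mirror this article's cluster, and the public_network subnet is an assumption:

```python
# Sketch: append a new mon's hostname and IP to mon_initial_members / mon_host
# in a ceph.conf-style file. Values mirror the article's cluster; adjust for yours.
import configparser
import io

conf_text = """\
[global]
fsid = 520d715f-adb5-4a6a-afb2-dcf586308166
mon_initial_members = hadoop001, hadoop002, hadoop003
mon_host = 10.10.1.32,10.10.1.33,10.10.1.34
public_network = 10.10.1.0/24
"""

def add_mon(text: str, hostname: str, ip: str) -> str:
    cp = configparser.ConfigParser()
    cp.read_string(text)
    g = cp["global"]
    g["mon_initial_members"] = g["mon_initial_members"] + ", " + hostname
    g["mon_host"] = g["mon_host"] + "," + ip
    out = io.StringIO()
    cp.write(out)
    return out.getvalue()

updated = add_mon(conf_text, "hadoop004", "10.10.1.36")
print(updated)
```

In practice you would read and write /etc/ceph/ceph.conf in place and push it to every node (e.g. with ceph-deploy admin); the sketch works on a string to stay self-contained.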

2.2. Delete a mon with ceph-deploy

# ceph-deploy mon destroy hadoop004
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy mon destroy hadoop004
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username        : None
[ceph_deploy.cli][INFO  ]  verbose         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf  : False
[ceph_deploy.cli][INFO  ]  subcommand      : destroy
[ceph_deploy.cli][INFO  ]  quiet           : False
[ceph_deploy.cli][INFO  ]  cd_conf         :
[ceph_deploy.cli][INFO  ]  cluster         : ceph
[ceph_deploy.cli][INFO  ]  mon             : ['hadoop004']
[ceph_deploy.cli][INFO  ]  func            :
[ceph_deploy.cli][INFO  ]  ceph_conf       : None
[ceph_deploy.cli][INFO  ]  default_release : False
[ceph_deploy.mon][DEBUG ] Removing mon from hadoop004
[hadoop004][DEBUG ] connected to host: hadoop004
[hadoop004][DEBUG ] detect platform information from remote host
[hadoop004][DEBUG ] detect machine type
[hadoop004][DEBUG ] find the location of an executable
[hadoop004][DEBUG ] get remote short hostname
[hadoop004][INFO  ] Running command: ceph --cluster=ceph -n mon. -k /var/lib/ceph/mon/ceph-hadoop004/keyring mon remove hadoop004
[hadoop004][WARNIN] Error EINVAL: removing mon.hadoop004 at 10.10.1.36:6789/0, there will be 3 monitors
[hadoop004][INFO  ] polling the daemon to verify it stopped
[hadoop004][INFO  ] Running command: systemctl stop ceph-mon@hadoop004.service
[hadoop004][INFO  ] Running command: mkdir -p /var/lib/ceph/mon-removed
[hadoop004][DEBUG ] move old monitor data

To clean up thoroughly (operate with caution):

Note: this deletes all Ceph data, configuration files, and RPM packages on the mon node hadoop004.

# ceph-deploy purge hadoop004

If anything was left behind, you can also remove the leftover directories on hadoop004:

# rm -rf /var/lib/ceph
# rm -rf /var/run/ceph/*

3. Manual operation

hadoop004's mon was cleaned up in the previous section.

3.1. Add mon manually

Install the software on hadoop004 and create a mon directory

[root@hadoop004 ~] # yum install ceph

On hadoop001, copy ceph.conf and the client admin keyring to hadoop004's /etc/ceph directory:

# scp ceph.conf ceph.client.admin.keyring hadoop004:/etc/ceph/

On hadoop004, get the mon keyring:

# mkdir dlw
# cd dlw/
# ceph auth get mon. -o keying
exported keyring for mon.

Get the monitor map (monmap):

# ceph mon getmap -o monmap
got monmap epoch 5

Create a monitor data directory

# ceph-mon -i hadoop004 --mkfs --monmap monmap --keyring keying
ceph-mon: set fsid to 520d715f-adb5-4a6a-afb2-dcf586308166
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-hadoop004 for mon.hadoop004

Start a new monitor

# ceph-mon -i hadoop004 --public-addr 10.10.1.36:6789

Check the status:

# ceph -s
    cluster 520d715f-adb5-4a6a-afb2-dcf586308166
     health HEALTH_OK
     monmap e6: 4 mons at {hadoop001=10.10.1.32:6789/0,hadoop002=10.10.1.33:6789/0,hadoop003=10.10.1.34:6789/0,hadoop004=10.10.1.36:6789/0}
            election epoch 1854, quorum 0,1,2,3 hadoop001,hadoop002,hadoop003,hadoop004
     osdmap e127: 4 osds: 4 up, 4 in
            flags sortbitwise
      pgmap v22405: 64 pgs, 1 pools, 0 bytes data, 0 objects
            145 MB used, 334 GB / 334 GB avail
                  64 active+clean

The cluster now has four mons and the addition succeeded, but it is not over yet: the monitor was started by hand, and you should not have to run ceph-mon manually every time the mon needs to start, so register it as a systemd service.
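As background on why an even mon count like four is unusual: the monitors form a quorum that requires a strict majority, so four mons tolerate the same number of failures (one) as three. A quick sketch of the majority rule:

```python
# Majority size needed for a Ceph monitor quorum of n mons.
def quorum_size(num_mons: int) -> int:
    return num_mons // 2 + 1

# Mon failures tolerated while still keeping quorum.
def failures_tolerated(num_mons: int) -> int:
    return num_mons - quorum_size(num_mons)

for n in (3, 4, 5):
    print(f"{n} mons: quorum needs {quorum_size(n)}, tolerates {failures_tolerated(n)} down")
```

This is why production clusters usually run an odd number of mons (3 or 5).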

Add ceph-mon@hadoop004 service

First find the mon process that was just started and terminate it:

# ps -ef | grep ceph
root     30514     1  0 18:25 pts/1    00:00:00 ceph-mon -i hadoop004 --public-addr 10.10.1.36:6789
root     30899  9739  0 18:30 pts/1    00:00:00 grep --color=auto ceph
# kill 30514

Add the hostname and ip addresses of the new mon node to the mon_initial_members and mon_host parameters in ceph.conf, respectively

Before starting the service, change the ownership of the mon data directory to the ceph user:

# cd /var/lib/ceph/mon
# chown -R ceph:ceph ceph-hadoop004/

Start the mon service

# systemctl reset-failed ceph-mon@hadoop004.service
# systemctl restart ceph-mon@`hostname`
# systemctl enable ceph-mon@`hostname`
# systemctl restart ceph-mon.target
# systemctl status ceph-mon@`hostname`
● ceph-mon@hadoop004.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-08-02 18:37:36 CST; 3s ago
 Main PID: 31115 (ceph-mon)
   CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@hadoop004.service
           └─31115 /usr/bin/ceph-mon -f --cluster ceph --id hadoop004 --setuser ceph --setgroup ceph
Aug 02 18:37:36 hadoop004 systemd[1]: Started Ceph cluster monitor daemon.
Aug 02 18:37:36 hadoop004 systemd[1]: Starting Ceph cluster monitor daemon...
Aug 02 18:37:36 hadoop004 ceph-mon[31115]: starting mon.hadoop004 rank 3 at 10.10.1.36:6789/0 mon_data /var/lib/ceph/mon/ceph-hadoop004 fsid 520d715f-adb5-4a6a-afb2-dcf586308166

3.2. Manually delete a mon

# ceph mon remove hadoop004
Error EINVAL: removing mon.hadoop004 at 10.10.1.36:6789/0, there will be 3 monitors

Clean up the data directory and uninstall the software package

# rm -rf /var/lib/ceph
# rm -rf /var/run/ceph/*
# yum remove ceph

4. Command accumulation

View the cluster's mon election status

# ceph quorum_status -f json-pretty

Get a monmap

# ceph mon getmap -o /opt/monmap

View a monmap

# monmaptool --print /opt/monmap

Add a mon to a monmap

# monmaptool /opt/monmap --add hadoop004 10.10.1.36:6789

Remove a mon from a monmap

# monmaptool /tmp/monmap --rm hadoop004

Inject a monmap (stop all mons before injecting)

# systemctl stop ceph-mon@`hostname`
# ceph-mon -i `hostname` --inject-monmap /opt/monmap

Thank you for reading. That covers "how to add and delete monitors for Ceph"; you should now have a deeper understanding of the procedure, and the specifics are best verified in practice on your own cluster.
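The json-pretty output of quorum_status is easy to post-process. A small sketch is shown below; the sample document is abbreviated and its values are hypothetical, but the field names (quorum, quorum_names, quorum_leader_name) are ones the command really emits:

```python
# Sketch: parse `ceph quorum_status -f json-pretty` output. In a real script the
# sample string would come from running the command, e.g. via subprocess.
import json

sample = """
{
  "election_epoch": 1854,
  "quorum": [0, 1, 2, 3],
  "quorum_names": ["hadoop001", "hadoop002", "hadoop003", "hadoop004"],
  "quorum_leader_name": "hadoop001"
}
"""

status = json.loads(sample)
print("leader:", status["quorum_leader_name"])
print("in quorum:", ", ".join(status["quorum_names"]))
```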
