
Ceph distributed deployment tutorial


This article explains the "Ceph distributed deployment tutorial". The content is simple and clear and easy to follow; work through the steps below with the editor to study the procedure in depth.

1. Environment preparation

The deployment uses three machines (Ubuntu 14.04): two for OSDs and one for the MON and MDS. The service layout is as follows:

ceph2 (192.168.21.140): osd.0

ceph3 (192.168.21.141): osd.1, osd.2

ceph4 (192.168.21.142): mon, mds

Set each machine's hostname accordingly and make sure the nodes can reach one another by hostname.
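For example, hostname resolution can be handled with /etc/hosts entries like the following on every node (a sketch based on the addresses above; DNS works just as well):

192.168.21.140 ceph2
192.168.21.141 ceph3
192.168.21.142 ceph4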

The nodes must also be able to ssh to one another without entering a password (set up with the ssh-keygen command).

2. Configure passwordless login

On each node:

ssh-keygen -t rsa
touch /root/.ssh/authorized_keys

Configure ceph2 first

scp /root/.ssh/id_rsa.pub ceph3:/root/.ssh/id_rsa.pub_ceph2
scp /root/.ssh/id_rsa.pub ceph4:/root/.ssh/id_rsa.pub_ceph2
ssh ceph3 "cat /root/.ssh/id_rsa.pub_ceph2 >> /root/.ssh/authorized_keys"
ssh ceph4 "cat /root/.ssh/id_rsa.pub_ceph2 >> /root/.ssh/authorized_keys"

Nodes ceph3 and ceph4 need to be configured in the same way, following the commands above.
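As a sketch, the analogous commands on ceph3 would be (assuming the same naming convention for the copied keys):

scp /root/.ssh/id_rsa.pub ceph2:/root/.ssh/id_rsa.pub_ceph3
scp /root/.ssh/id_rsa.pub ceph4:/root/.ssh/id_rsa.pub_ceph3
ssh ceph2 "cat /root/.ssh/id_rsa.pub_ceph3 >> /root/.ssh/authorized_keys"
ssh ceph4 "cat /root/.ssh/id_rsa.pub_ceph3 >> /root/.ssh/authorized_keys"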

3. Install the ceph library

Install the Ceph library on each node:

apt-get install ceph ceph-common ceph-mds

Display ceph version information:

ceph -v

4. Create the Ceph configuration file on ceph2

vim /etc/ceph/ceph.conf

[global]
    max open files = 131072
    auth cluster required = none
    auth service required = none
    auth client required = none
    osd pool default size = 2

[osd]
    osd journal size = 1000
    filestore xattr use omap = true
    osd mkfs type = xfs
    osd mkfs options xfs = -f          # default for xfs is "-f"
    osd mount options xfs = rw,noatime # default mount option is "rw,noatime"

[mon.a]
    host = ceph4
    mon addr = 192.168.21.142:6789

[osd.0]
    host = ceph2
    devs = /dev/sdb

[osd.1]
    host = ceph3
    devs = /dev/sdc

[osd.2]
    host = ceph3
    devs = /dev/sdb

[mds.a]
    host = ceph4

Once the configuration file has been created, it must be copied to every node except pure clients (and kept consistent afterwards):

scp /etc/ceph/ceph.conf ceph3:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.conf ceph4:/etc/ceph/ceph.conf
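To confirm that every node holds an identical copy, a quick checksum comparison can be run from ceph2 (not part of the original tutorial, just a sanity check):

md5sum /etc/ceph/ceph.conf
ssh ceph3 md5sum /etc/ceph/ceph.conf
ssh ceph4 md5sum /etc/ceph/ceph.conf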

5. Create the data directories

Execute on each node

mkdir -p /var/lib/ceph/osd/ceph-0
mkdir -p /var/lib/ceph/osd/ceph-1
mkdir -p /var/lib/ceph/osd/ceph-2
mkdir -p /var/lib/ceph/mon/ceph-a
mkdir -p /var/lib/ceph/mds/ceph-a

6. Create partitions and mount them

For the OSD nodes ceph2 and ceph3, the new partitions need to be formatted as xfs and mounted at the directories specified above:

On ceph2:

mkfs.xfs -f /dev/sdb
mount /dev/sdb /var/lib/ceph/osd/ceph-0

On ceph3:

mkfs.xfs -f /dev/sdc
mount /dev/sdc /var/lib/ceph/osd/ceph-1
mkfs.xfs -f /dev/sdb
mount /dev/sdb /var/lib/ceph/osd/ceph-2
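These mounts do not survive a reboot. If persistence is wanted, /etc/fstab entries along these lines could be added (an assumption beyond the original steps; shown here for ceph2):

/dev/sdb    /var/lib/ceph/osd/ceph-0    xfs    rw,noatime    0    0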

7. Initialization

Note: before each initialization, stop the Ceph service on every node and empty the original data directories:

/etc/init.d/ceph stop
rm -rf /var/lib/ceph/*/ceph-*/*

Initialization can then be performed on the node ceph4 where the mon is located:

sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph4.keyring

Note: whenever ceph.conf changes, it is best to run the initialization again.

8. Start ceph

Execute on the node ceph4 where mon is located:

sudo service ceph -a start

Note that this step may produce the following error:

=== osd.0 ===

Mounting xfs on ceph5:/var/lib/ceph/osd/ceph-0

Error ENOENT: osd.0 does not exist. create it before updating the crush map

Run the following command, then repeat the start command above, and the problem is resolved:

ceph osd create
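After creating the OSD entry and restarting the service, the cluster state can be double-checked with the commands that appear again later in this tutorial:

ceph osd tree
ceph -s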

9. Pitfalls encountered

Ubuntu does not allow remote login as root by default.

Edit /etc/ssh/sshd_config (vim /etc/ssh/sshd_config), change PermitRootLogin to yes (PermitEmptyPasswords stays no), then restart the ssh service.

Running ceph osd tree showed that all three OSDs had the same host, ubuntu. The experiment used virtual machines cloned from one image, so they all shared the same hostname; fix this by editing /etc/hostname (vim /etc/hostname) on each node.
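A minimal sketch of the hostname fix on one of the clones, for example renaming it to ceph2 (the node then needs a reboot, or the running hostname set explicitly, for the change to take effect):

echo ceph2 > /etc/hostname
hostname ceph2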

Running ceph osd tree showed the host of all three OSDs as ceph4; restart the Ceph service on ceph2 and ceph3 (/etc/init.d/ceph restart).

Running ceph -s showed a warning instead of HEALTH_OK:

root@ceph4:/var/lib/ceph/osd# ceph -s
    cluster 57b27255-c63d-4a70-8561-99e76615e10f
     health HEALTH_WARN 576 pgs stuck unclean
     monmap e1: 1 mons at {a=192.168.21.142:6789/0}, election epoch 1, quorum 0 a
     mdsmap e6: 1/1/1 up {0=a=up:active}
     osdmap e57: 3 osds: 3 up, 3 in
      pgmap v108: 576 pgs, 3 pools, 1884 bytes data, 20 objects
            3125 MB used, 12204 MB / 15330 MB avail
                 576 active+remapped

Solution: add the following to the [global] section of /etc/ceph/ceph.conf:

osd pool default size = 2

The guess is that Ceph can only operate normally when the number of OSDs is at least as large as the replica count.
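Changing ceph.conf only affects pools created afterwards; the replica count of the already-existing pools can also be lowered at runtime. A sketch, assuming the three pools reported by ceph -s are the defaults of that release (data, metadata and rbd):

ceph osd pool set data size 2
ceph osd pool set metadata size 2
ceph osd pool set rbd size 2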

Thank you for reading. That covers the "Ceph distributed deployment tutorial". After studying this article you should have a deeper understanding of deploying Ceph, though the specific steps still need to be verified in practice. The editor will continue to push more articles on related topics; welcome to follow!
