
Example Analysis of Ceph Single-Node and Multi-Node Installation under Ubuntu


This article explains in detail, with examples, the single-node and multi-node installation of Ceph under Ubuntu. The content is quite practical, so it is shared here for reference; I hope you get something out of it after reading.

1 Overview

Ceph is a distributed file system that adds replication and fault tolerance while maintaining POSIX compatibility. Its most distinctive feature is the distributed metadata server, which determines file placement through the pseudo-random CRUSH algorithm (Controlled Replication Under Scalable Hashing). At the core of Ceph is RADOS (Reliable Autonomic Distributed Object Store), an object cluster store that itself provides high availability, error detection, and repair for objects.

The Ceph ecosystem architecture can be divided into four parts:

Client: the data user. The client exports a POSIX file system interface for applications to call and talks to mon/mds/osd for metadata and data. The original client was implemented in FUSE; it has since been moved into the kernel, so a ceph.ko kernel module must be compiled to use it.

Mon: the cluster monitor, whose daemon is cmon (Ceph Monitor). Mon monitors and manages the entire cluster and exports a network file system to clients; a client can mount the Ceph file system with the command mount -t ceph monitor_ip:/ mount_point. According to the official documentation, three monitors are enough to guarantee the reliability of the cluster.

Mds: the metadata server, whose daemon is cmds (Ceph Metadata Server). Ceph can run multiple MDS instances that form a distributed metadata server cluster, in which case Ceph performs dynamic directory partitioning for load balancing.

Osd: the object storage cluster, whose daemon is cosd (Ceph Object Storage Device). Osd wraps the local file system and provides an object storage interface, storing both data and metadata as objects. The local file system can be ext2/3, but the Ceph developers consider these file systems ill-suited to osd's particular access pattern; they previously implemented their own file system, EBOFS, and Ceph now uses btrfs.

Ceph supports hundreds of nodes or more, and the four parts above are best distributed across different nodes. Of course, for basic testing you can install mon and mds on one node, or even deploy all four parts on the same node.

2 Environment preparation

2.1 Versions

Linux system version: Ubuntu Server 12.04.1 LTS

Ceph version: 0.72.2 (installed later)

2.2 Update source (optional)

In the case of "slow network speed" or "failed to install software", you can consider replacing it with a domestic image:

# sudo sed -i 's/us.archive.ubuntu.com/mirrors.163.com/g' /etc/apt/sources.list

# sudo apt-get update

The default Ceph version on Ubuntu 12.04 is 0.41. If you want to install a newer version of Ceph, add the Ceph release key to APT and update sources.list:

# sudo wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -

# echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

# sudo apt-get update
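To confirm that APT now sees the newer Ceph packages from the added repository, a quick check with apt-cache (standard on Ubuntu) is:

# apt-cache policy ceph   # the candidate version should now come from ceph.com rather than the stock 0.41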

2.3 system time

Check whether the system time is correct; if it is, skip the following two steps:

# sudo date-s "2013-11-0415 purl 05purr 57" # set system time

# sudo hwclock -w  # write the system time to the hardware clock
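If the machine can reach the network, an alternative to setting the time by hand is to sync from an NTP server; this assumes the ntpdate package is installed (sudo apt-get install ntpdate):

# sudo ntpdate pool.ntp.org   # sync the system clock from a public NTP pool
# sudo hwclock -w             # then write it to the hardware clock as above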

2.4 turn off the firewall

Make sure SELinux is turned off (it is not enabled by default on Ubuntu).

In addition, it is recommended that you turn off the firewall:

# sudo ufw disable  # turn off the firewall
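To verify that the firewall is really off, ufw can report its own state:

# sudo ufw status   # should print "Status: inactive"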

3 Ceph single-node installation

3.1 Node IP

192.168.73.129 (hostname is ceph2; two partitions /dev/sdb1 and /dev/sdb2 are provided to osd, and client/mon/mds is also installed)

3.2 Install the Ceph library

# apt-get install ceph ceph-common ceph-mds

# ceph -v  # displays the Ceph version and key information

3.3 Create a Ceph configuration file

# vim /etc/ceph/ceph.conf

[global]
    max open files = 131072
    # For version 0.55 and beyond, you must explicitly enable
    # or disable authentication with "auth" entries in [global].
    auth cluster required = none
    auth service required = none
    auth client required = none

[osd]
    osd journal size = 1000
    # The following assumes ext4 filesystem.
    filestore xattr use omap = true
    # For Bobtail (v0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following settings and replace the values
    # in braces with appropriate values, or leave the following settings
    # commented out to accept the default values. You must specify the
    # --mkfs option with mkcephfs in order for the deployment script to
    # utilize the following settings, and you must define the 'devs'
    # option for each osd instance; see below.
    osd mkfs type = xfs
    osd mkfs options xfs = -f            # default for xfs is "-f"
    osd mount options xfs = rw,noatime   # default mount option is "rw,noatime"
    # For example, for ext4, the mount option might look like this:
    # osd mkfs options ext4 = user_xattr,rw,noatime

# Execute $hostname to retrieve the name of your host
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.

[mon.a]
    host = ceph2
    mon addr = 192.168.73.129:6789

[osd.0]
    host = ceph2
    # For Bobtail (v0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following setting for each OSD and specify
    # a path to the device if you use mkcephfs with the --mkfs option.
    devs = /dev/sdb1

[osd.1]
    host = ceph2
    devs = /dev/sdb2

[mds.a]
    host = ceph2

Note that for older Ceph versions (for example, 0.42), you need to add a line under [mon]: mon data = /data/$name, and a line under [osd]: osd data = /data/$name, to serve as the data directories; the later steps that create the data directories must then be adjusted accordingly (see the sketch below).
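A minimal sketch of what those extra lines would look like in ceph.conf, assuming /data/$name is the chosen data path as in the note above:

[mon]
    mon data = /data/$name
[osd]
    osd data = /data/$name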

3.4 create a data directory

# mkdir -p /var/lib/ceph/osd/ceph-0
# mkdir -p /var/lib/ceph/osd/ceph-1
# mkdir -p /var/lib/ceph/mon/ceph-a
# mkdir -p /var/lib/ceph/mds/ceph-a

3.5 create partitions and mounts for osd

Format the new partition with xfs or btrfs:

# mkfs.xfs -f /dev/sdb1
# mkfs.xfs -f /dev/sdb2
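If you prefer btrfs, the other file system mentioned above, the equivalent formatting step is sketched below; this assumes the btrfs tools are installed and, as with xfs, it destroys any existing data on the partitions:

# mkfs.btrfs /dev/sdb1   # alternative to mkfs.xfs, only if you chose btrfs
# mkfs.btrfs /dev/sdb2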

The partitions must be mounted before the first initialization so that the initialization data can be written:

# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
# mount /dev/sdb2 /var/lib/ceph/osd/ceph-1
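These mounts do not survive a reboot. A sketch of /etc/fstab entries that would make them persistent, assuming the same devices and the rw,noatime options used in ceph.conf above:

/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  rw,noatime  0 0
/dev/sdb2  /var/lib/ceph/osd/ceph-1  xfs  rw,noatime  0 0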

3.6 perform initialization

Note that before each initialization, you need to stop the Ceph service and empty the original data directory:

# /etc/init.d/ceph stop
# rm -rf /var/lib/ceph/*/ceph-*/*

You can then perform initialization on the same node as mon:

# sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph2.keyring

Note that whenever the configuration file ceph.conf changes, it is best to run the initialization again.

3.7 start the Ceph service

Execute on the same node as mon:

# sudo service ceph -a start

Note that when performing this step, you may encounter the following prompt:

=== osd.0 ===
Mounting xfs on ceph5:/var/lib/ceph/osd/ceph-0

Error ENOENT: osd.0 does not exist. Create it before updating the crush map

Run the following command, then start the service again, and the problem is resolved:

# ceph osd create
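To confirm that the OSD has been registered before restarting the service, a quick check with the standard ceph CLI is:

# ceph osd tree   # the newly created osd.0 should now appear in the output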

3.8 perform a health check

# sudo ceph health  # you can also use "ceph -s" to view the status

If HEALTH_OK is returned, it means success!

Note that if you encounter the following prompt:

HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds

Or you may encounter the following prompt:

HEALTH_WARN 178 pgs peering; 178 pgs stuck inactive; 429 pgs stuck unclean; recovery 2/24 objects degraded (8.333%)

You can solve the problem by executing the following command:

# ceph pg dump_stuck stale && ceph pg dump_stuck inactive && ceph pg dump_stuck unclean

If you encounter the following prompt:

HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42 objects degraded (50.000%)

This means the number of osds is insufficient; by default, Ceph requires at least two osds.
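To check how many OSDs the cluster knows about and how many are up, one option with the standard ceph CLI is:

# ceph osd stat   # shows something like "2 osds: 2 up, 2 in" when both OSDs are running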

3.9 Ceph test

On the client, mount from the node where mon resides:

# sudo mkdir /mnt/mycephfs

# sudo mount -t ceph 192.168.73.129:/ /mnt/mycephfs

Verify on the client:

# df -h  # if /mnt/mycephfs shows up with its usage, Ceph is installed successfully
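As a further sanity check, you can write and read back a small file on the mounted file system (test.txt is just an arbitrary name used for this illustration):

# echo "hello ceph" | sudo tee /mnt/mycephfs/test.txt
# cat /mnt/mycephfs/test.txt   # should print "hello ceph"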

4 Ceph multi-node installation

In the case of multiple nodes, Ceph has the following requirements:

Each node must have its own hostname set, and the nodes must be able to reach one another by hostname.

The nodes must be able to ssh to one another without a password (set up with the ssh-keygen command).

4.1 Node IPs

192.168.73.129 (hostname is ceph2; a partition /dev/sdb1 is provided to osd)

192.168.73.130 (hostname is ceph3; a partition /dev/sdb1 is provided to osd)

192.168.73.131 (hostname is ceph4; client/mon/mds is installed)

4.2 configure Hostname

Set the appropriate hostname on each node, for example:

# vim /etc/hostname

ceph2

Modify /etc/hosts by adding the following lines:

192.168.73.129 ceph2

192.168.73.130 ceph3

192.168.73.131 ceph4
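A quick way to confirm that hostname resolution works before going further, run from any of the nodes (ceph3 here is just one example):

# ping -c 1 ceph3   # should resolve to 192.168.73.130 and receive a reply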

4.3 configure password-free access

Create an RSA key pair on each node:

# ssh-keygen -t rsa  # just press Enter at every prompt

# touch /root/.ssh/authorized_keys

Configure ceph2 first so that ceph2 can access ceph3 and ceph4 without a password:

ceph2# scp /root/.ssh/id_rsa.pub ceph3:/root/.ssh/id_rsa.pub_ceph2
ceph2# scp /root/.ssh/id_rsa.pub ceph4:/root/.ssh/id_rsa.pub_ceph2
ceph2# ssh ceph3 "cat /root/.ssh/id_rsa.pub_ceph2 >> /root/.ssh/authorized_keys"
ceph2# ssh ceph4 "cat /root/.ssh/id_rsa.pub_ceph2 >> /root/.ssh/authorized_keys"

The nodes ceph3 and ceph4 need to be configured in the same way, following the commands above.
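To verify that passwordless access works, a remote command should run without any password prompt, for example:

ceph2# ssh ceph3 hostname   # should print "ceph3" without asking for a password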

4.4 Install the Ceph library

Install the Ceph library on each node:

# apt-get install ceph ceph-common ceph-mds

# ceph -v  # displays the Ceph version and key information

4.5 Create a Ceph configuration file

# vim /etc/ceph/ceph.conf

[global]
    max open files = 131072
    # For version 0.55 and beyond, you must explicitly enable
    # or disable authentication with "auth" entries in [global].
    auth cluster required = none
    auth service required = none
    auth client required = none

[osd]
    osd journal size = 1000
    # The following assumes ext4 filesystem.
    filestore xattr use omap = true
    # For Bobtail (v0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following settings and replace the values
    # in braces with appropriate values, or leave the following settings
    # commented out to accept the default values. You must specify the
    # --mkfs option with mkcephfs in order for the deployment script to
    # utilize the following settings, and you must define the 'devs'
    # option for each osd instance; see below.
    osd mkfs type = xfs
    osd mkfs options xfs = -f            # default for xfs is "-f"
    osd mount options xfs = rw,noatime   # default mount option is "rw,noatime"
    # For example, for ext4, the mount option might look like this:
    # osd mkfs options ext4 = user_xattr,rw,noatime

# Execute $hostname to retrieve the name of your host
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.

[mon.a]
    host = ceph4
    mon addr = 192.168.73.131:6789

[osd.0]
    host = ceph2
    # For Bobtail (v0.56) and subsequent versions, you may
    # add settings for mkcephfs so that it will create and mount
    # the file system on a particular OSD for you. Remove the comment `#`
    # character for the following setting for each OSD and specify
    # a path to the device if you use mkcephfs with the --mkfs option.
    devs = /dev/sdb1

[osd.1]
    host = ceph3
    devs = /dev/sdb1

[mds.a]
    host = ceph4

After the configuration file is created, it needs to be copied to every node except pure clients (and kept consistent from then on):

ceph2# scp /etc/ceph/ceph.conf ceph3:/etc/ceph/ceph.conf
ceph2# scp /etc/ceph/ceph.conf ceph4:/etc/ceph/ceph.conf
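One simple way to check that the copies stay identical is to compare checksums on each node (md5sum is part of coreutils):

# md5sum /etc/ceph/ceph.conf   # the hash should be the same on ceph2, ceph3 and ceph4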

4.6 create a data directory

Create a data directory on each node:

# mkdir -p /var/lib/ceph/osd/ceph-0
# mkdir -p /var/lib/ceph/osd/ceph-1
# mkdir -p /var/lib/ceph/mon/ceph-a
# mkdir -p /var/lib/ceph/mds/ceph-a

4.7 create partitions and mounts for osd

For the nodes ceph2 and ceph3 where osd resides, you need to format the new partition with xfs or btrfs:

# mkfs.xfs -f /dev/sdb1

For nodes ceph2 and ceph3, the partition must be mounted before the first initialization so that the initialization data can be written:

ceph2# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
ceph3# mount /dev/sdb1 /var/lib/ceph/osd/ceph-1

4.8 perform initialization

Note that before each initialization, you need to stop the Ceph service on each node and empty the original data directory:

# /etc/init.d/ceph stop
# rm -rf /var/lib/ceph/*/ceph-*/*

Initialization can then be performed on the node ceph4 where the mon is located:

# sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph4.keyring

Note that whenever the configuration file ceph.conf changes, it is best to run the initialization again.

4.9 start the Ceph service

Execute on the node ceph4 where mon is located:

# sudo service ceph -a start

Note that when performing this step, you may encounter the following prompt:

=== osd.0 ===
Mounting xfs on ceph5:/var/lib/ceph/osd/ceph-0

Error ENOENT: osd.0 does not exist. Create it before updating the crush map

Run the following command, then start the service again, and the problem is resolved:

# ceph osd create

4.10 perform a health check

# sudo ceph health  # you can also use "ceph -s" to view the status

If HEALTH_OK is returned, it means success!

Note that if you encounter the following prompt:

HEALTH_WARN 576 pgs stuck inactive; 576 pgs stuck unclean; no osds

Or you may encounter the following prompt:

HEALTH_WARN 178 pgs peering; 178 pgs stuck inactive; 429 pgs stuck unclean; recovery 2/24 objects degraded (8.333%)

You can solve the problem by executing the following command:

# ceph pg dump_stuck stale && ceph pg dump_stuck inactive && ceph pg dump_stuck unclean

If you encounter the following prompt:

HEALTH_WARN 384 pgs degraded; 384 pgs stuck unclean; recovery 21/42 objects degraded (50.000%)

This means the number of osds is insufficient; by default, Ceph requires at least two osds.

4.11 Ceph Test

On the client (node ceph4), mount from the node where mon resides (also ceph4):

# sudo mkdir /mnt/mycephfs

# sudo mount -t ceph 192.168.73.131:/ /mnt/mycephfs

Verify on the client:

# df -h  # if /mnt/mycephfs shows up with its usage, Ceph is installed successfully

This concludes the article "Example Analysis of Ceph Single-Node and Multi-Node Installation under Ubuntu". I hope the content above has been helpful and that you have learned something from it; if you found the article useful, please share it so more people can see it.
