
How to quickly deploy ceph with ceph-ansible


This article shares how to quickly deploy ceph with ceph-ansible. The editor finds it quite practical and shares it here for your study, hoping you will get something out of it.

1 prepare the experimental environment

1.1 basic environment

Operating system: CentOS 7.6. It requires 4 nodes (1 monitor + 3 osd nodes) or 6 nodes (3 monitors + 3 osd nodes). Of course, the monitors can also be deployed on the osd nodes.

Ceph version: Luminous (12.2.11), using the packages in CentOS's official storage repo

Ansible version: 2.6.16 (can be installed using virtualenv and pip, and the version must match the ceph-ansible requirements)

Ceph-ansible version: 3.2.15 (download the corresponding version directly on github)

Cluster network: 192.168.122.0/24, used for data synchronization and storage management within the cluster

Public network (service network): 10.0.122.0/24, used by clients to read and write data

1.2 prepare the node

Prepare 7 nodes for the ceph cluster: 3 monitor nodes, 3 osd nodes and 1 admin node, each node with two network cards. The ip configuration is as follows:

Node         cluster ip     public ip
-----------  -------------  ----------------
mon-11       -              192.168.122.11
mon-12       -              192.168.122.12
mon-13       -              192.168.122.13
osd-21       10.0.122.21    192.168.122.21
osd-22       10.0.122.22    192.168.122.22
osd-23       10.0.122.23    192.168.122.23
admin-node   -              192.168.122.100

The monitor nodes do not need a cluster ip; they only need the public network. Because we rely on online software repositories, the public network must also have Internet access.

The admin-node is used as the deployment node, on which ansible and ceph-ansible are installed.

Configure /etc/hosts on the admin-node:

192.168.122.11 mon-11
192.168.122.12 mon-12
192.168.122.13 mon-13
192.168.122.21 osd-21
192.168.122.22 osd-22
192.168.122.23 osd-23

Configure passwordless ssh login from the admin-node to the 6 ceph nodes, for example:

# ssh-keygen
# ssh-copy-id root@mon-11
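If you would rather push the key to all six nodes in one pass, a small shell loop works (a convenience sketch only; it assumes the hostnames above already resolve via /etc/hosts, and you will still be prompted for each node's password):

# for h in mon-11 mon-12 mon-13 osd-21 osd-22 osd-23; do ssh-copy-id root@$h; done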

Disable firewalld

# systemctl stop firewalld
# systemctl disable firewalld
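To confirm the firewall is really down on a node, firewall-cmd --state should report that firewalld is not running:

# firewall-cmd --state
not running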

Disable selinux and reboot for the change to take effect

# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# reboot
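After the reboot, getenforce should print Disabled if the change took effect:

# getenforce
Disabled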

1.3 prepare the disks

The following is a list of disks on the osd nodes, where vda is the disk on which the operating system resides.

sda               8:0    0  100G  0 disk
sdb               8:16   0  100G  0 disk
sdc               8:32   0  100G  0 disk
sdd               8:48   0   20G  0 disk
vda             252:0    0   40G  0 disk
├─vda1          252:1    0    1G  0 part /boot
└─vda2          252:2    0   39G  0 part
  ├─centos-root 253:0    0   35G  0 lvm  /
  └─centos-swap 253:1    0    4G  0 lvm  [SWAP]

Ceph Luminous already supports bluestore as the backend, but we will use filestore this time. sda, sdb and sdc are used as osd disks (100 GB each), and sdd is used as the journal disk (20 GB).

1.4 get the software package

Install the following software on all cluster nodes:

# yum install centos-release-ceph-luminous epel-release -y

After the installation is complete, refresh the repo index

# yum repolist

1.5 get ansible and ceph-ansible

On admin-node, do the following

Install ansible

$ yum install python-pip -y
$ pip install ansible==2.6.16
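It is worth confirming that pip installed the expected version, since the ansible version must match the ceph-ansible requirements:

$ ansible --version
ansible 2.6.16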

Get the ceph-ansible and extract it

$ wget https://codeload.github.com/ceph/ceph-ansible/zip/v3.2.15
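The codeload URL serves a zip archive. Assuming wget kept the default file name v3.2.15, extracting it produces a ceph-ansible-3.2.15 directory:

$ unzip v3.2.15
$ cd ceph-ansible-3.2.15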

Create an ansible_hosts file and copy it to the ceph-ansible folder

[ceph:children]
mons
osds

[mons]
mon-11
mon-12
mon-13

[osds]
osd-21
osd-22
osd-23
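Before going further, a quick connectivity check with ansible's ping module is useful, run from the ceph-ansible folder where ansible_hosts now lives (a sanity check, not part of the original procedure):

# ansible all -i ./ansible_hosts -m ping -u root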

2 configure ceph-ansible

Enter the decompressed ceph-ansible directory and copy the playbook template

# cp site.yml.sample site.yml

Edit the site.yml file and change the target node group at the beginning, leaving only mons and osds

---
# Defines deployment design and assigns role to server groups
- hosts:
  - mons
  - osds
#  - mdss
#  - agents
#  - mgrs
#  - rgws
#  - nfss
#  - restapis
#  - rbdmirrors
#  - clients
#  - iscsigws
#  - iscsi-gws # for backward compatibility only!

Open the group_vars directory and copy the template file

# cd group_vars
# cp all.yml.sample all.yml
# cp mons.yml.sample mons.yml
# cp osds.yml.sample osds.yml

Edit the all.yml file. I only made the following changes; not all of them may be strictly necessary. In addition, the monitor nodes must use public ips, because they provide services to clients.

# Inventory host group variables
mon_group_name: mons
osd_group_name: osds

# PACKAGES #
centos_package_dependencies:
  - python-pycurl
  - epel-release
  - python-setuptools
  - libselinux-python
upgrade_ceph_packages: False
#ceph_use_distro_backports: false # DEBIAN ONLY

# INSTALL #
ceph_repository_type: dummy
ceph_origin: distro

## Monitor options
monitor_interface: eth2
#monitor_address: 0.0.0.0
monitor_address_block: 10.0.122.0/24

# OSD options
journal_size: 3072 # OSD journal size in MB
#block_db_size: -1 # block db size in bytes for the ceph-volume lvm batch. -1 means use the default of 'as big as possible'.
public_network: 10.0.122.0/24
cluster_network: 192.168.122.0/24
#osd_mkfs_type: xfs
#osd_mkfs_options_xfs: -f -i size=2048
#osd_mount_options_xfs: noatime,largeio,inode64,swalloc
osd_objectstore: filestore

Edit the osds.yml file. There are several scenarios; we use scenario 2, non-collocated, that is, the journal and the data are stored on different disks.

---
# Variables here are applicable to all host groups NOT roles
osd_scenario: non-collocated
devices:
  - /dev/sda
  - /dev/sdb
  - /dev/sdc
dedicated_devices:
  - /dev/sdd
  - /dev/sdd
  - /dev/sdd
# As a result of this configuration, three new partitions /dev/sdd1, /dev/sdd2 and /dev/sdd3
# will be created on /dev/sdd, serving as the journal partitions of /dev/sda, /dev/sdb and /dev/sdc, respectively.
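As a quick sizing check: with journal_size: 3072 set in all.yml, the three journal partitions consume 3 × 3072 MB = 9216 MB, roughly 9 GB, which fits comfortably on the 20 GB sdd.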

Because this is just a simple experimental cluster, mons.yml does not need any special configuration; just keep the defaults.

3 start deployment

Go back to the ceph-ansible directory, start the deployment, and wait for the deployment to complete.

# ansible-playbook site.yml -u root -i ./ansible_hosts
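If you want to catch typos in the playbook or inventory before touching any node, ansible-playbook offers a syntax-only pass (optional, but cheap):

# ansible-playbook site.yml -i ./ansible_hosts --syntax-check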

After the deployment is complete, log in to a mon node to view the cluster status and osd daemon distribution

# ceph -w
# ceph osd tree

The above is how to quickly deploy ceph with ceph-ansible. The editor believes these are knowledge points we may see or use in daily work, and hopes you can learn more from this article.
