
How to build Ceph


The editor shares here how to build Ceph. I hope you learn something after reading this article; let's discuss it together!

Basic introduction

Ceph is a unified, distributed file system designed for excellent performance, reliability, and scalability.

Origin

Its name is related to the mascot of UCSC, the birthplace of Ceph. The mascot is "Sammy", a banana-colored slug, a shell-less mollusk; the multi-tentacled cephalopod serves as a metaphor for a highly parallel distributed file system.

Ceph began as a PhD research project on storage systems, implemented by Sage Weil at the University of California, Santa Cruz (UCSC).

Development goal

Simply put, the design goals are the following three:

Scale easily to multi-petabyte (PB) capacity

Support high performance across varied workloads (input/output operations per second [IOPS] and bandwidth)

High reliability

However, these goals compete with one another (for example, scalability can reduce or suppress performance, or affect reliability). Ceph's design also includes fault tolerance to protect against single points of failure, on the assumption that in large-scale (PB-level) storage, failures are the norm rather than the exception.

It is not designed around a particular workload; instead it adapts to changing workloads and delivers the best possible performance. It does all of this while remaining POSIX-compatible, so applications that currently rely on POSIX semantics can be deployed transparently (with improvements targeted at Ceph).

System architecture

The Ceph ecosystem architecture can be divided into four parts:

Clients: clients (data users)

cmds: metadata server cluster (caches and synchronizes distributed metadata)

cosd: object storage cluster (stores data and metadata as objects and performs other key functions)

cmon: cluster monitors (perform monitoring functions)

The above content comes from Baidu Baike: https://baike.baidu.com/item/CEPH/1882855

Operation process

CEPH environment configuration

192.168.27.210 master (ceph-deploy)

192.168.27.211 client1 osd0, mds1, mon1

192.168.27.212 client2 osd1, mds2, mon2

192.168.27.213 client3 osd2, mds3, mon3

Set the hostname on each node:

hostnamectl set-hostname master

hostnamectl set-hostname client1

hostnamectl set-hostname client2

hostnamectl set-hostname client3

Map hostnames by adding the following entries to /etc/hosts on every node (one way to append them is shown after the list):

192.168.27.210 master

192.168.27.211 client1

192.168.27.212 client2

192.168.27.213 client3
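One way to append these entries on every node, as a sketch assuming the default /etc/hosts is in use:

# cat >> /etc/hosts << 'EOF'
192.168.27.210 master
192.168.27.211 client1
192.168.27.212 client2
192.168.27.213 client3
EOF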

Confirm that the hostname mappings between nodes resolve and are reachable:

ping -c 3 master

ping -c 3 client1

ping -c 3 client2

ping -c 3 client3

Tip: passwordless SSH login was set up in advance, so that step is omitted here; it needs to be done once for every fresh installation.
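As a rough sketch (assuming root logins and a passphrase-less key, which are assumptions rather than part of the original steps), passwordless SSH from the master to the other nodes can be set up like this:

# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# ssh-copy-id root@client1
# ssh-copy-id root@client2
# ssh-copy-id root@client3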

Shut down the firewall and SELinux on each node

# systemctl stop firewalld

# systemctl disable firewalld

# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# setenforce 0

Install and configure NTP on each node (it is officially recommended that all nodes in the cluster install and configure NTP, and the system time of every node must be kept consistent; here we synchronize NTP online rather than deploying our own NTP server)

# yum install -y ntp ntpdate ntp-doc

# systemctl restart ntpd

# systemctl status ntpd
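Optionally (an extra step, not in the original), enable ntpd on boot and force an initial synchronization against a public server; ntp.aliyun.com is used here only as an example:

# systemctl enable ntpd
# ntpdate -u ntp.aliyun.com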

Prepare the yum repositories on each node

Back up and remove the default repos; the overseas mirrors are slow

# yum clean all

# mkdir /mnt/bak

# mv /etc/yum.repos.d/* /mnt/bak/

Download the Aliyun base and EPEL repo files

# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Add the Ceph repo

# vim /etc/yum.repos.d/ceph.repo

[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
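After the repo files are in place, it can help (an optional extra step not in the original) to rebuild the yum cache:

# yum clean all && yum makecache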

Prepare the disks (in practice this can be skipped; as long as the disk is healthy it can be used directly, and this step is not needed)

fdisk /dev/sdb

parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%

mkfs.xfs /dev/sdb -f

blkid /dev/sdb # a quick visual check (can be run across all nodes with ansible)

Deployment phase (rapid deployment using ceph-deploy on the admin node)

Install ceph-deploy

sudo yum update -y && sudo yum install -y ceph-deploy

Create a cluster directory

[root@master ~]# mkdir -pv /data/idc-cluster

mkdir: created directory '/data/idc-cluster'

Create the cluster (the names that follow are the cluster member nodes; the master node here is used only for ceph-deploy)

[root@master idc-cluster]# ceph-deploy new client1 client2 client3

Add the highlighted items (the osd_pool_default_size and public_network lines below) to the ceph.conf file and save it:

[root@client2 ceph]# cat ceph.conf

[global]
fsid = d5a5f367-97d2-45a5-8b6b-b462bd65fe3d
mon_initial_members = client1, client2, client3
mon_host = 192.168.27.211,192.168.27.212,192.168.27.213
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3
public_network = 192.168.27.0/22

Re-push the configuration information to each node:

[root@master idc-cluster]# ceph-deploy --overwrite-conf config push master client1 client2 client3

Initialize the cluster:

ceph-deploy mon create-initial

Add OSD to the cluster

Prepare OSD (use the prepare command)

[root@master idc-cluster]# ceph-deploy osd prepare client1:/dev/sdb client2:/dev/sdb client3:/dev/sdb

Activate the OSDs (note: because ceph-deploy partitions the disk, the data partition on /dev/sdb is /dev/sdb1)

[root@master idc-cluster]# ceph-deploy osd activate client1:/dev/sdb1 client2:/dev/sdb1 client3:/dev/sdb1

[root@master idc-cluster]# ceph-deploy admin master client1 client2 client3
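This last command distributes the config and admin keyring to all nodes. At this point a quick sanity check can be run from the master; these are standard ceph commands, not part of the original write-up:

[root@master idc-cluster]# ceph -s # overall cluster status and health
[root@master idc-cluster]# ceph osd tree # the three OSDs should show as up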

Create a file system

First check the MDS (metadata server) status; by default there is no MDS.

[root@master idc-cluster]# ceph mds stat

e1:

[root@master idc-cluster]#

Create an MDS (the master node acts as the metadata server)

[root@master idc-cluster]# ceph-deploy mds create master

Check the MDS status again; it is now up and on standby

[root@master idc-cluster]# ceph mds stat

e2:, 1 up:standby

Create pools. A pool is a logical partition in which Ceph stores data, and it acts as a namespace.

Take a look at the existing pools:

ceph osd lspools

A newly created Ceph cluster has only the default rbd pool; new pools need to be created.

[root@master idc-cluster]# ceph osd pool create cephfs_data 128 # the number after the pool name is the PG count

pool 'cephfs_data' created

Then create the metadata pool:

[root@master idc-cluster]# ceph osd pool create cephfs_metadata 128 # create the metadata pool

pool 'cephfs_metadata' created

Check the pools again (ceph osd lspools), then create the CephFS file system from the two pools:

ceph fs new myceph cephfs_metadata cephfs_data
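To confirm the file system exists, a standard check (not shown in the original) is:

[root@master idc-cluster]# ceph fs ls # should list myceph with its metadata and data pools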

Check the MDS status:

[root@master idc-cluster]# ceph mds stat

Check the cluster status; there is a health warning (too many PGs per OSD):

Solution

Raise the cluster's warning threshold for this option by adding the following to the ceph.conf (/etc/ceph/ceph.conf) configuration file on the mon nodes:

vi /etc/ceph/ceph.conf

[global]
...
mon_pg_warn_max_per_osd = 666

Push the updated configuration to each node
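Reusing the push command from earlier as a sketch (run from the cluster directory on the master):

[root@master idc-cluster]# ceph-deploy --overwrite-conf config push master client1 client2 client3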

Restart the monitor service:

systemctl restart ceph-mon.target
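The cluster health can then be re-checked with the standard commands (output not shown in the original):

# ceph health
# ceph -s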

1. Mount CephFS on a client with ceph-fuse:

[root@BDDB ceph]# ceph-fuse -m 192.168.27.211 /ceph/cephsys

2. Use the kernel driver to mount CephFS:

[root@BDDB]# mount -t ceph 192.168.27.213:6789:/ /ceph/cephsys/ -o name=admin,secretfile=/etc/ceph/admin.secret
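The mount point and the secret file are assumed to already exist; one way to prepare them, assuming the ceph CLI and admin keyring are available on the client (otherwise copy the key over from a cluster node), is:

[root@BDDB]# mkdir -p /ceph/cephsys
[root@BDDB]# ceph auth get-key client.admin > /etc/ceph/admin.secret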

Effect of storing files:

After reading this article, I believe you have some understanding of "how to build Ceph". If you want to know more about it, you are welcome to follow the industry information channel. Thank you for reading!
