Building a Ceph cluster (mimic) manually

1 Introduction to core components

Object

The lowest-level storage unit in Ceph is the Object; each Object contains metadata and raw data.

OSD

OSD stands for Object Storage Device; it is the daemon responsible for storing data and returning specific data in response to client requests. A Ceph cluster generally has many OSDs.

PG

PG, whose full name is Placement Groups, is a logical concept. A PG maps to multiple OSDs. The PG layer is introduced to distribute and locate data more effectively.

Monitor

A Ceph cluster requires a small cluster of Monitors, which synchronize their data through Paxos and store OSD metadata.

RADOS

RADOS, whose full name is Reliable Autonomic Distributed Object Store, is the core of a Ceph cluster; it handles data distribution, failover and other cluster operations on behalf of users.

Librados

Librados is a library for accessing RADOS. Because RADOS is a protocol that is difficult to access directly, the upper-layer RBD, RGW and CephFS all access the cluster through librados. Bindings are currently provided for PHP, Ruby, Java, Python, C and C++.
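
As a hedged illustration of this layering (not part of the original walkthrough), raw objects can also be written and read directly through the librados-based rados CLI once the cluster from the later sections is running; this assumes the pool ec created in section 5 and a host holding the admin keyring:

rados -p ec put demo-object /etc/hosts
rados -p ec ls
rados -p ec get demo-object /tmp/demo-object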

CRUSH

CRUSH is the data distribution algorithm used by Ceph; similar in spirit to consistent hashing, it deterministically distributes data to the desired locations.
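
As an optional aside (not in the original text), once the cluster is up, the CRUSH hierarchy of hosts and OSDs that the algorithm walks can be inspected with:

ceph osd tree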

RBD

RBD, whose full name is RADOS block device, is a block device service provided by Ceph.

RGW

RGW, whose full name is RADOS gateway, is an object storage service provided by Ceph. The interface is compatible with S3 and Swift.

MDS

MDS, whose full name is Ceph Metadata Server, is the metadata service that CephFS services rely on.

CephFS

CephFS, whose full name is Ceph File System, is a file system service provided by Ceph.

2 Installation preparation

Operating system: CentOS Linux release 7.4.1708 (Core)

Kernel version: 3.10.0-693.el7.x86_64

Server usage planning (roles as deployed in the sections below; the admin keyring is later pushed to all five nodes)

Node             Host name   Roles
192.168.2.241    node1       ceph-deploy, mon, mgr
192.168.2.242    node2       mon, mgr, mds
192.168.2.243    node3       mon
192.168.2.244    node4       osd
192.168.2.245    node5       osd

Storage preparation

The OSD nodes node4 and node5 each have a separate 5G disk attached.
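
A quick sanity check (not part of the original steps) to confirm that the extra disk shows up as /dev/sdb on node4 and node5 before the OSDs are created:

lsblk /dev/sdb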

Directory planning

/install    # used to store installation files

Create the above directory (node1)

mkdir /install

Set the hostname on each node in turn, replacing xx with 1 to 5.

hostnamectl set-hostname nodexx

Configure password-free SSH access (management node)

ssh-keygen
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
ssh-copy-id node4
ssh-copy-id node5

Modify the hosts configuration (management node)

vi /etc/hosts
192.168.2.241 node1
192.168.2.242 node2
192.168.2.243 node3
192.168.2.244 node4
192.168.2.245 node5

Synchronize hosts configuration to other nodes

scp /etc/hosts node2:/etc/
scp /etc/hosts node3:/etc/
scp /etc/hosts node4:/etc/
scp /etc/hosts node5:/etc/

Disable SELinux enforcement and turn off the firewall (all nodes)

setenforce 0
systemctl stop firewalld && systemctl disable firewalld
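
Note that setenforce 0 only lasts until the next reboot. If SELinux should stay permissive permanently (not covered in the original), something like the following also works:

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config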

Time synchronization (all nodes)

yum install ntp ntpdate -y
ntpdate cn.pool.ntp.org
systemctl enable ntpd
systemctl start ntpd

Kernel optimization (node1)

vi /etc/sysctl.conf
net.ipv4.tcp_fin_timeout = 2
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.somaxconn = 16384
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_orphans = 16384

Synchronize to other nodes

scp /etc/sysctl.conf node2:/etc
scp /etc/sysctl.conf node3:/etc
scp /etc/sysctl.conf node4:/etc
scp /etc/sysctl.conf node5:/etc

Modify the file descriptor limits (node1)

vi /etc/security/limits.conf
* soft nofile 1024000
* hard nofile 1024000
root soft nofile 1024000
root hard nofile 1024000

Synchronize to other nodes

scp /etc/security/limits.conf node2:/etc/security
scp /etc/security/limits.conf node3:/etc/security
scp /etc/security/limits.conf node4:/etc/security
scp /etc/security/limits.conf node5:/etc/security

Apply the configuration (all nodes)

sysctl -p

Modify the environment configuration (all nodes)

Echo "ulimit-HSn 102400" > > / etc/profile

Add the Ceph yum repository

cat > /etc/yum.repos.d/ceph.repo << EOM
[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
EOM

Synchronize to other nodes

scp /etc/yum.repos.d/ceph.repo node2:/etc/yum.repos.d
scp /etc/yum.repos.d/ceph.repo node3:/etc/yum.repos.d
scp /etc/yum.repos.d/ceph.repo node4:/etc/yum.repos.d
scp /etc/yum.repos.d/ceph.repo node5:/etc/yum.repos.d

Rebuild the yum cache (all nodes)

yum makecache

3 Cluster deployment

Install dependencies (all nodes)

yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm

Install ceph (all nodes)

yum -y install epel-release ceph ceph-radosgw ceph-common deltarpm

Install ceph-deploy (node1 execution)

yum install -y ceph-deploy

Create a cluster named mycluster. If --cluster is not specified, the default name is ceph. The cluster (mon) nodes are node{1,2,3} (node1 execution)

cd /install
ceph-deploy --cluster mycluster new node{1,2,3}

Modify the ceph configuration file to add the network segments (adapt to the actual production environment; it is recommended that the public and cluster networks use different segments)

vi ceph.conf
public network = 192.168.2.0/24
cluster network = 192.168.2.0/24
mon_allow_pool_delete = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 64
osd pool default pgp num = 64

Create and initialize the monitor node and collect all keys (node1 execution)

ceph-deploy mon create-initial

Copy the configuration file and management key to the management node and the ceph node (node1 execution)

ceph-deploy admin node{1,2,3,4,5}

Deploy mgr (node1 execution)

ceph-deploy mgr create node{1,2}

Deploy osd (node1 execution)

ceph-deploy osd create --data /dev/sdb node4
ceph-deploy osd create --data /dev/sdb node5
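
At this point it is worth checking the cluster state (an optional verification step, not in the original); both OSDs should appear in the tree and the cluster should report its health:

ceph -s
ceph osd tree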

4 Enable Dashboard

Check the dashboard status first

ceph mgr services

Enable dashboard

ceph mgr module enable dashboard

Generate a self-signed certificate

ceph dashboard create-self-signed-cert
cd /install
openssl req \
  -new -nodes -x509 \
  -subj "/O=IT/CN=ceph-mgr-dashboard" \
  -days 3650 \
  -keyout dashboard.key \
  -out dashboard.crt \
  -extensions v3_ca

Enable certificate

ceph config-key set mgr mgr/dashboard/crt -i dashboard.crt
ceph config-key set mgr mgr/dashboard/key -i dashboard.key

If you do not want to use the certificate feature, you can disable SSL instead

ceph config set mgr mgr/dashboard/ssl false

Note: after changing the certificate, you need to restart dashboard

ceph mgr module disable dashboard
ceph mgr module enable dashboard

Configure the service address and port (in production, a domain name or load-balanced IP is recommended as the mgr access address)

ceph config set mgr mgr/dashboard/server_addr 192.168.2.241
ceph config set mgr mgr/dashboard/server_port 8443

Create a user

ceph dashboard set-login-credentials admin admin

Restart mgr

systemctl restart ceph-mgr@node1

Browser access

https://192.168.2.241:8443

5 Block storage

Run the following on any ceph cluster host

Create a storage pool named ec with pg and pgp both set to 64 (this is only a test; for production, calculate the pg and pgp counts according to the formula, see the note after the command)

ceph osd pool create ec 64 64
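
For reference, a commonly cited rule of thumb (an approximation, not from the original article) is total PGs ≈ (number of OSDs × 100) / replica size, rounded to a power of two and shared across all pools. With the 2 OSDs and "osd pool default size = 2" used here, that gives (2 × 100) / 2 = 100, or roughly 128 PGs for the whole cluster, so 64 for a single test pool is a plausible choice.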

Enable the storage pool in rbd mode

ceph osd pool application enable ec rbd

Create a user client.ec for client mount access

ceph auth get-or-create client.ec \
  mon 'allow r' \
  osd 'allow rwx pool=ec' \
  -o /etc/ceph/ceph.client.ec.keyring

Remote client mount

Check the client kernel version. The kernel version is recommended to be above 2.6.34.

uname -r

Check whether the current kernel version supports rbd

modprobe rbd && lsmod | grep rbd

Add ceph cluster host mapping in hosts configuration

vi /etc/hosts
192.168.2.241 node1
192.168.2.242 node2
192.168.2.243 node3
192.168.2.244 node4
192.168.2.245 node5

Since downloads from the overseas source are slow, configure a domestic mirror source.

cat > /etc/yum.repos.d/ceph.repo << EOM
[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
EOM

Rebuild the yum cache

yum makecache

Install dependencies

yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm

Install the ceph client

yum -y install ceph-common

Copy the user key (client.ec) generated by the server to the client server

scp -r /etc/ceph/ceph.client.ec.keyring node5:/etc/ceph

Verify user permissions. Since the admin account is not used here, the access user must be specified on every command.

ceph -s --name client.ec

Create a block device (disk) on the storage pool ec, named ec, and allocate a maximum storage space of 10G

rbd create ec/ec --size 10G --image-format 2 --image-feature layering --name client.ec

View the created block device

rbd info ec/ec --name client.ec

Or

rbd info -p ec ec --name client.ec

Map block devices to local disk

rbd map ec/ec --name client.ec

Format the disk; it can be formatted as ext4 or xfs (an xfs example follows the command below)

mkfs.ext4 /dev/rbd0
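
If xfs is preferred instead (the alternative mentioned above but not shown in the original):

mkfs.xfs /dev/rbd0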

Create a mount directory

mkdir /ecdata

Mount the device to this directory

mount /dev/rbd0 /ecdata

View mount status

df -h
rbd showmapped

Expand capacity

rbd resize --size xxG pool-name/image-name
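
For example (a hypothetical value, not from the original), to grow the ec/ec image created above from 10G to 20G:

rbd resize --size 20G ec/ec --name client.ec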

After expanding the storage space, it will not take effect immediately on the client. You need to perform the following operations on the client.

# For an ext file system (rbd0 is the mapped local device name):
blockdev --getsize64 /dev/rbd0
resize2fs /dev/rbd0
# For an xfs file system, run the following instead, where /mnt is the mount point:
xfs_growfs /mnt

Create a snapshot

rbd snap create ec/ec@ec-snap

Roll back the snapshot (if the rollback fails, unmount the file system and unmap the local mapping first; see the commands after the rollback command)

rbd snap rollback ec/ec@ec-snap
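
The unmount and unmap mentioned above would look like this on the client, assuming the /ecdata mount and /dev/rbd0 mapping from the earlier steps:

umount /ecdata
rbd unmap /dev/rbd0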

Clone snapshot

rbd snap protect ec/ec@ec-snap
rbd clone ec/ec@ec-snap new-ec-pool
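
Optionally (not covered in the original), the clone can later be detached from its parent snapshot by flattening it:

rbd flatten new-ec-pool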

6 File storage

Server side

Deploy mds (node1 execution); this is only needed when file storage is used

ceph-deploy mds create node2

Create a storage pool

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

Create a file system

ceph fs new cephfs cephfs_metadata cephfs_data

View MDS server status

ceph mds stat

Create a user

ceph auth get-or-create client.cephfs \
  mon 'allow r' \
  mds 'allow r, allow rw path=/' \
  osd 'allow rw pool=cephfs_data' \
  -o /etc/ceph/ceph.client.cephfs.keyring

Synchronize key to other nodes

scp /etc/ceph/ceph.client.cephfs.keyring node2:/etc/ceph
scp /etc/ceph/ceph.client.cephfs.keyring node3:/etc/ceph

View user key

ceph auth get-key client.cephfs

Client

Mount the ceph file system (with cephx authentication enabled, an authorized user must be specified), where node1 (192.168.2.241) is a mon address; create the /data mount point first if it does not exist

mount -t ceph node1:6789:/ /data -o name=cephfs,secret=AQAHs9RdRVLkOBAAYl1JZqYupcHnORttIo+Udg==

Or

Echo "AQAHs9RdRVLkOBAAYl1JZqYupcHnORttIo+Udg==" > / etc/ceph/cephfskeymount-t ceph node1:6789:/ / data-o name=cephfs,secretfile=/etc/ceph/cephfskey

Write data test (1G)

dd if=/dev/zero of=/data/file2 bs=1M count=1024
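
A quick check (optional, not in the original) that the write landed on the CephFS mount:

df -h /data
ls -lh /data/file2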

Mount automatically at boot

Echo "node1:6789:/ / data/ ceph name=cephfs,secret=AQAHs9RdRVLkOBAAYl1JZqYupcHnORttIo+Udg==,_netdev,noatime 0" > > / etc/fstab

Or

Echo "node1:6789:/ / data/ ceph name=cephfs,secretfile=/etc/ceph/cephfskey,_netdev,noatime 0" > > / etc/fstab
