
Install and deploy Ceph on CentOS 7.2 and add PGs


Preliminary preparation:

Prepare three CentOS 7.2 systems, each with three spare hard drives. On every machine, disable SELinux, disable iptables, configure time synchronization, add local hostname resolution, and set up passwordless SSH trust between the machines.

192.168.10.101 ceph-node1

192.168.10.22 ceph-node2

192.168.10.33 ceph-node3

Each machine hosts three OSDs, so the whole cluster has nine OSDs.
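As a rough sketch, the preparation steps above might look like the following on each node (the NTP server name is only an example; adjust to your environment):

# disable SELinux now and across reboots
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# stop the firewall (firewalld on CentOS 7; use the iptables service if that is what you run)
systemctl stop firewalld && systemctl disable firewalld

# time synchronization
yum install -y ntpdate && ntpdate ntp.aliyun.com

# local hostname resolution
cat >> /etc/hosts <<EOF
192.168.10.101 ceph-node1
192.168.10.22 ceph-node2
192.168.10.33 ceph-node3
EOF

# passwordless SSH trust (run once on ceph-node1)
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in ceph-node1 ceph-node2 ceph-node3; do ssh-copy-id root@$host; done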

Create a ceph.repo file under the /etc/yum.repos.d/ directory on each machine with the following content:

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
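If you prefer to write the file only once, you can create it on ceph-node1 and copy it to the other nodes, for example:

for host in ceph-node2 ceph-node3; do scp /etc/yum.repos.d/ceph.repo $host:/etc/yum.repos.d/; done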

Create a working directory on ceph-node1 and run ceph-deploy from inside it, because ceph-deploy generates a number of files in the current directory.

mkdir /home/ceph && cd /home/ceph

Install ceph-deploy

yum install -y ceph-deploy

Create a ceph cluster

ceph-deploy new ceph-node1 ceph-node2 ceph-node3
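After this step the working directory typically contains the generated configuration, the monitor keyring, and the ceph-deploy log, roughly:

[root@ceph-node1 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring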

Ceph defaults to three replicas; the replica count can be changed by adding the following line to the generated ceph.conf, under the [global] section:

osd pool default size = 2

If the machines have multiple network interfaces, you can also declare the public network under the [global] section of the Ceph configuration file:

public network = 192.168.10.0/24
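Putting both settings together, the [global] section of the generated ceph.conf would look roughly like this (the fsid and monitor entries are written by ceph-deploy new; the values shown here come from this cluster):

[global]
fsid = dc045fd8-0851-4052-8791-25cb6e5b3e8e
mon_initial_members = ceph-node1, ceph-node2, ceph-node3
mon_host = 192.168.10.101,192.168.10.22,192.168.10.33
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public network = 192.168.10.0/24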

Install ceph

ceph-deploy install ceph-node1 ceph-node2 ceph-node3
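Depending on the ceph-deploy version and workflow, the initial monitors usually also need to be created and their keys gathered before the OSDs can be prepared; if that has not been done yet, the step is typically:

ceph-deploy mon create-initial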

Prepare the OSD disks for the cluster

ceph-deploy osd prepare ceph-node1:/dev/sdb ceph-node1:/dev/sdc ceph-node1:/dev/sdd ceph-node2:/dev/sdb ceph-node2:/dev/sdc ceph-node2:/dev/sdd ceph-node3:/dev/sdb ceph-node3:/dev/sdc ceph-node3:/dev/sdd

Activate the OSDs in the cluster

ceph-deploy osd activate ceph-node1:/dev/sdb ceph-node1:/dev/sdc ceph-node1:/dev/sdd ceph-node2:/dev/sdb ceph-node2:/dev/sdc ceph-node2:/dev/sdd ceph-node3:/dev/sdb ceph-node3:/dev/sdc ceph-node3:/dev/sdd
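To confirm that all nine OSDs registered and came up, you can optionally check the OSD tree:

ceph osd tree   # should list three hosts with three OSDs each, all up and in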

Run ceph -s to check the health status. The deployment is complete.

[root@ceph-node1 local]# ceph -s
    cluster dc045fd8-0851-4052-8791-25cb6e5b3e8e
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 3 mons at {ceph-node1=192.168.10.101:6789/0,ceph-node2=192.168.10.22:6789/0,ceph-node3=192.168.10.33:6789/0}
            election epoch 8, quorum 0,1,2 ceph-node2,ceph-node3,ceph-node1
     osdmap e44: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
      pgmap v113: 64 pgs, 1 pools, 0 bytes data, 0 objects
            971 MB used, 45009 MB / 45980 MB avail
                  64 active+clean

Because the cluster is newly built and has only one pool, each OSD holds too few PGs, which triggers the HEALTH_WARN shown above.

View the number of pg in a pool

ceph osd pool get rbd pg_num
pg_num: 64

pg_num is 64. With a three-replica configuration and 9 OSDs, each OSD holds on average 64 x 3 / 9 ≈ 21 PGs, which is below the minimum of 30 and produces the warning above.
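Raising pg_num to 256 (the value used below) brings the per-OSD count well clear of the minimum; as a rough check:

64 PGs x 3 replicas / 9 OSDs ≈ 21 PGs per OSD (< 30, triggers the warning)
256 PGs x 3 replicas / 9 OSDs ≈ 85 PGs per OSD (clears the warning)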

Modify the pg_num of the rbd pool

ceph osd pool set rbd pg_num 256
set pool 0 pg_num to 256

The number of PGPs should be the same as the number of PGs. Modify the pgp_num of the rbd pool.

sudo ceph osd pool set rbd pgp_num 256
set pool 0 pgp_num to 256

Once the change is made, wait for the cluster to finish rebalancing; the warning is then resolved.

Note: in a production environment, pg_num and pgp_num must not be increased in one big jump. Increase them step by step, and wait for the cluster to finish rebalancing before the next increase.
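A minimal sketch of that incremental approach, assuming the rbd pool and example step values:

for pg in 128 192 256; do
    ceph osd pool set rbd pg_num $pg
    ceph osd pool set rbd pgp_num $pg
    # wait until the cluster settles (all PGs active+clean) before the next increase
    until ceph health | grep -q HEALTH_OK; do sleep 30; done
done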

The number of PGs for each pool can be calculated with the following formula (a worked example for this cluster follows the rules below):

(target PGs per OSD) x (number of OSDs) x (% of data in the pool) / (replica size)

1. The target PG per OSD is about 100.

2. If the value calculated above is less than (number of OSDs) / (replica size), update it to (number of OSDs) / (replica size). This ensures a uniform load and data distribution by allocating at least one primary or secondary PG of each pool to every OSD.

3. The output value is then rounded to the nearest power of 2.

Hint: the nearest power of 2 provides a slight improvement in the efficiency of the CRUSH algorithm.

4. If the nearest power of 2 is more than 25% lower than the original value, use the next higher power of 2.
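Applied to this cluster as a rough check (9 OSDs, a single pool holding 100% of the data, 3 replicas):

(100 x 9 x 1.0) / 3 = 300
The nearest power of 2 is 256, which is only about 15% below 300 (less than 25%), so 256 is kept, matching the pg_num set above.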
