Summary of commands commonly used in Ceph

This article summarizes the commands commonly used in Ceph. The methods introduced here are simple, fast, and practical, so interested readers may wish to follow along.

1. Create a custom pool

ceph osd pool create <pool-name> <pg_num> <pgp_num>

Here pgp_num is the effective number of placement groups used for placement and is an optional parameter. pg_num should be large enough; rather than sticking rigidly to the calculation in the official documentation, choose 256, 512, 1024, 2048, or 4096 according to the actual situation.
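
For instance, a minimal sketch of creating a pool with 256 pgs and 256 pgps (the pool name testpool is just an illustrative placeholder):

ceph osd pool create testpool 256 256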

2. Set the number of replicas, and the minimum and maximum replicas, of a pool

ceph osd pool set <pool-name> size 2
ceph osd pool set <pool-name> min_size 1
ceph osd pool set <pool-name> max_size 10

If resources are limited and you do not want to keep 3 replicas, you can use this command to change the replica count for a specific pool.

Using get, you can get the number of copies of a particular pool.

ceph osd pool get <pool-name> size

3. Add osd

You can use ceph-deploy to add osd:

ceph-deploy osd prepare monosd1:/mnt/ceph osd2:/mnt/ceph
ceph-deploy osd activate monosd1:/mnt/ceph osd2:/mnt/ceph
# Equivalent to:
ceph-deploy osd create monosd1:/mnt/ceph osd2:/mnt/ceph
# There is also a way to specify the path of the corresponding journal when installing the osd:
ceph-deploy osd create osd1:/cephmp1:/dev/sdf1 /cephmp2:/dev/sdf2

You can also manually add:

## Prepare the disk first: create a partition and format it
mkfs.xfs -f /dev/sdd
mkdir /cephmp1
mount /dev/sdd /cephmp1
cd /cephmp1
ceph-osd -i 12 --mkfs --mkkey
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /cephmp1/keyring
## Change the crushmap
ceph osd getcrushmap -o map
crushtool -d map -o map.txt
vim map.txt
crushtool -c map.txt -o map
ceph osd setcrushmap -i map
## Start it
/etc/init.d/ceph start osd.12

4. Delete osd

Stop this osd from working first:

## Mark it out
ceph osd out 5
## Wait for data migration to complete (ceph -w), then stop it
service ceph -a stop osd.5
## Now it is marked out and down

Then delete it:

## If deleting from an active stack, be sure to follow the steps above to mark it out and down first
ceph osd crush remove osd.5
## Remove auth for the disk
ceph auth del osd.5
## Remove the disk
ceph osd rm 5
## Remove it from ceph.conf and copy the new conf to all hosts

5. View the overall osd status, osd details, and crush details

ceph osd tree
ceph osd dump --format=json-pretty
ceph osd crush dump --format=json-pretty

6. Get and modify CRUSH maps

## Save the current crushmap in binary
ceph osd getcrushmap -o crushmap.bin
## Convert it to txt
crushtool -d crushmap.bin -o crushmap.txt
## Edit it and re-convert to binary
crushtool -c crushmap.txt -o crushmap.bin.new
## Inject it into the running system
ceph osd setcrushmap -i crushmap.bin.new
## If you've added a new ruleset and want to use that for new pools, do something like (ceph.conf):
osd pool default crush rule = <rule-id>
## A rule can also be set for a specific pool this way:
ceph osd pool set testpool crush_ruleset <rule-id>

Here -o means output, -d means decompile, -c means compile, and -i means input.

With these abbreviations in mind, the above command is easy to understand.

7. Add / remove journal

To improve performance, the journal of ceph is usually placed on a separate disk or partition:

First use the following command to set the ceph cluster to nodown:

ceph osd set nodown

# Relevant ceph.conf options -- existing setup --
[osd]
    osd data = /srv/ceph/osd$id
    osd journal = /srv/ceph/osd$id/journal
    osd journal size = 512
# Stop the OSDs:
/etc/init.d/ceph osd.0 stop
/etc/init.d/ceph osd.1 stop
/etc/init.d/ceph osd.2 stop
# Flush the journals:
ceph-osd -i 0 --flush-journal
ceph-osd -i 1 --flush-journal
ceph-osd -i 2 --flush-journal
# Now update ceph.conf - this is very important or you'll just recreate the journal on the same disk again
# -- change to [file based journal] --
[osd]
    osd data = /srv/ceph/osd$id
    osd journal = /srv/ceph/journal/osd$id/journal
    osd journal size = 10000
# -- change to [partition based journal (journal in this case would be on /dev/sda2)] --
[osd]
    osd data = /srv/ceph/osd$id
    osd journal = /dev/sda2
    osd journal size = 0
# Create the new journal on each disk
ceph-osd -i 0 --mkjournal
ceph-osd -i 1 --mkjournal
ceph-osd -i 2 --mkjournal
# Done, now start all OSDs again
/etc/init.d/ceph osd.0 start
/etc/init.d/ceph osd.1 start
/etc/init.d/ceph osd.2 start

Remember to set nodown back:

ceph osd unset nodown

8. Ceph cache pool

In preliminary tests, the performance of Ceph's cache pool was not good; sometimes it was even lower than without a cache pool. Alternatives such as flashcache can be used instead to optimize Ceph's caching.

ceph osd tier add satapool ssdpool
ceph osd tier cache-mode ssdpool writeback
ceph osd pool set ssdpool hit_set_type bloom
ceph osd pool set ssdpool hit_set_count 1
# In this example 80-85% of the cache pool is equal to 280GB
ceph osd pool set ssdpool target_max_bytes $((280*1024*1024*1024))
ceph osd tier set-overlay satapool ssdpool
ceph osd pool set ssdpool hit_set_period 300
ceph osd pool set ssdpool cache_min_flush_age 300    # 10 minutes
ceph osd pool set ssdpool cache_min_evict_age 1800   # 30 minutes
ceph osd pool set ssdpool cache_target_dirty_ratio .4
ceph osd pool set ssdpool cache_target_full_ratio .8

9. View run-time configuration

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show
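
The same admin socket can also return a single option rather than the full dump; a minimal sketch, assuming osd.0's socket path from above and osd_max_backfills as an arbitrary example option:

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get osd_max_backfills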

10. View the status of the monitoring cluster

ceph health
ceph health detail
ceph status
ceph -s    # --format=json-pretty can be appended
ceph osd stat
ceph osd dump
ceph osd tree
ceph mon stat
ceph quorum_status
ceph mon dump
ceph mds stat
ceph mds dump

11. View all pool

ceph osd lspools
rados lspools

12. Check whether kvm and qemu support rbd

qemu-system-x86_64 -drive format=?
qemu-img -h | grep rbd

13. View a specific pool and the files in it

rbd ls testpool
rbd create testpool/test.img -s 1024 --image-format=2
rbd info testpool/test.img
rbd rm testpool/test.img
# Count the number of blocks
rados -p testpool ls | grep ^rb.0.11a1 | wc -l
# Import and view a file
rados mkpool testpool
rados put -p testpool logo.png logo.png
ceph osd map testpool logo.png
rbd import logo.png testpool/logo.png
rbd info testpool/logo.png

14. Mount / unmount the created block device

ceph osd pool create testpool 256 256
rbd create testpool/test.img -s 1024 --image-format=2
rbd map testpool/test.img
rbd showmapped
mkfs.xfs /dev/rbd0
rbd unmap /dev/rbd0

15. Create a snapshot

# Create a snapshot
rbd snap create testpool/test.img@test.img-snap1
# List snapshots
rbd snap ls testpool/test.img
# Roll back
rbd snap rollback testpool/test.img@test.img-snap1
# Delete
rbd snap rm testpool/test.img@test.img-snap1
# Purge all snapshots
rbd snap purge testpool/test.img

16. Calculate the reasonable number of pg

The official recommendation is 50-100 pgs per OSD. Total pgs = (number of OSDs * 100) / replica count. For example, in an environment with 6 OSDs and 2 replicas, the pg count is 6 * 100 / 2 = 300.

The number of pgs can only be increased, never decreased, and after increasing pg_num, pgp_num must be increased to match it.
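
A minimal sketch of increasing both values, assuming a pool named testpool being raised to 512 placement groups:

ceph osd pool set testpool pg_num 512
ceph osd pool set testpool pgp_num 512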

17. Operation on pool

ceph osd pool create testpool 256 256
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
ceph osd pool rename testpool anothertestpool
ceph osd pool mksnap testpool testpool-snap

18. Formatting before reinstallation

ceph-deploy purge osd0 osd1
ceph-deploy purgedata osd0 osd1
ceph-deploy forgetkeys
ceph-deploy disk zap --fs-type xfs osd0:/dev/sdb1

19. Modify the storage path of osd journal

# The noout flag prevents the osd from being marked out (and its weight being set to 0)
ceph osd set noout
service ceph stop osd.1
ceph-osd -i 1 --flush-journal
mount /dev/sdc /journal
ceph-osd -i 1 --mkjournal /journal
service ceph start osd.1
ceph osd unset noout

20. Xfs mount parameters

mkfs.xfs -n size=64k /dev/sdb1
# /etc/fstab mount parameters
rw,noexec,nodev,noatime,nodiratime,nobarrier

21. Authentication configuration

[global]
auth cluster required = none
auth service required = none
auth client required = none
# Before version 0.56:
auth supported = none

22. When pg_num is not enough: migrate and rename the pool

ceph osd pool create new-pool <pg_num>
rados cppool old-pool new-pool
ceph osd pool delete old-pool
ceph osd pool rename new-pool old-pool
# Or simply increase the pool's pg_num directly

23. Push config file

ceph-deploy --overwrite-conf config push mon1 mon2 mon3

24. Modify config parameters online

ceph tell osd.* injectargs '--mon_clock_drift_allowed 1'

To use this command, you need to know whether the parameter being configured belongs to mon, mds, or osd.
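
A rough sketch with arbitrary example options (not taken from the article): a mon option is injected into a monitor, while an osd option goes to an OSD:

ceph tell mon.a injectargs '--mon_clock_drift_allowed 1'
ceph tell osd.0 injectargs '--osd_max_backfills 2'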

At this point, I believe you have a deeper understanding of the commands commonly used in Ceph. You might as well try them out in practice.
