

Common pool-related Ceph commands

2025-01-18 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains in detail the commonly used Ceph commands related to pools. The editor finds them very practical and shares them here for reference; I hope you gain something from reading this article.

1.1 List all pools in the system

Command format:

ceph osd lspools or rados lspools

1.2 Create a pool

(1) Create a pool, specifying the number of PGs, the number of PGPs, etc.

Command format:

ceph osd pool create {pool-name} {pg-num} {pgp-num} {replicated|erasure} [erasure-code-profile] [crush-rule-name]

Meaning of the command:

Pools come in two types: replicated pools and erasure-coded (EC) pools. A replicated pool protects data by keeping multiple full copies, while an EC pool uses erasure codes to provide data safety.

When creating an erasure-coded pool, you can also specify an erasure_code_profile, which is defined with the command ceph osd erasure-code-profile set. The meaning of the erasure-code-profile parameters:

directory=<dir> # absolute path to the plugin directory

plugin=jerasure # plugin name (only jerasure)

k=<num> # number of data chunks (default 2)

m=<num> # number of coding chunks (default 2)

technique=<technique> # coding technique

Jerasure is an open-source library implementing many kinds of erasure-coding algorithms; it is written in C++, very active, and widely used. The optional values for technique are: reed_sol_van, reed_sol_r6_op, cauchy_orig, cauchy_good, liberation, blaum_roth and liber8tion.
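To make the k/m trade-off concrete, the following sketch (with hypothetical object sizes, not tied to any particular cluster) computes the storage overhead and fault tolerance implied by an erasure-code profile:

```python
def ec_profile_stats(k, m, object_size):
    """For a jerasure-style profile with k data chunks and m coding
    chunks: an object is split into k data chunks, m coding chunks
    are added, and any m chunks may be lost without losing data."""
    chunk_size = object_size / k              # size of each data chunk
    raw_stored = object_size * (k + m) / k    # total bytes written to OSDs
    overhead = (k + m) / k                    # storage amplification factor
    return chunk_size, raw_stored, overhead

# Default profile k=2, m=2: same 2x overhead as two-way replication,
# but it tolerates the loss of any 2 chunks.
print(ec_profile_stats(2, 2, 4096))   # (2048.0, 8192.0, 2.0)

# k=4, m=2 still tolerates 2 failures, with only 1.5x overhead.
print(ec_profile_stats(4, 2, 4096)[2])  # 1.5
```

This is why EC pools are attractive for capacity-oriented storage: raising k lowers the overhead while m alone determines how many simultaneous chunk losses are survivable.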

Each PG consumes some memory and CPU, and increasing the number of PGs increases the amount of peering, so there is a limit on the number of PGs configured per pool; otherwise the performance of the whole cluster suffers. The approximate number of PGs needed per pool is:

Total PGs = (OSDs * 100) / (replicas per object)

Here "replicas per object" is the replica count (size) for a replicated pool, and k+m for an EC pool.
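The formula above can be sketched as follows; the rounding up to a power of two is common PG-sizing advice, not part of the formula itself:

```python
def recommended_pgs(num_osds, replicas_per_object):
    """Total PGs = (OSDs * 100) / replicas-per-object, where
    replicas_per_object is the pool size for a replicated pool
    and k+m for an EC pool; rounded up to a power of two, as is
    commonly recommended."""
    raw = num_osds * 100 / replicas_per_object
    pgs = 1
    while pgs < raw:
        pgs *= 2
    return pgs

# 9 OSDs, 3-way replicated pool: 9*100/3 = 300 -> 512
print(recommended_pgs(9, 3))        # 512

# 9 OSDs, EC pool with k=4, m=2: 9*100/6 = 150 -> 256
print(recommended_pgs(9, 4 + 2))    # 256
```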

(2) Create a pool with a specified auid

Command format:

rados mkpool {pool-name} [123 [4]]

Meaning of the command:

Create a pool with auid 123 and crush rule 4

1.3 Modify pool parameters

Command format:

ceph osd pool set {pool-name} {key} {value}

Meaning of the command:

The meaning of each key:

size: number of replicas kept in the pool

min_size: minimum number of replicas; when the number of replicas of an object in the pool falls below min_size, the object stops receiving I/O

crash_replay_interval: how long (in seconds, default 45) clients may replay acknowledged but uncommitted requests during PG repair

pgp_num: the pgp_num used to calculate PG placement (pg id)

crush_ruleset: the crush rule id used by the pool; ceph osd crush rule dump queries all configured rules

auid: set the pool owner's user id

hit_set_type: type of cache-hit tracking; default is bloom, other types are explicit_hash and explicit_object

hit_set_period: length (in seconds) of each hit-set period

hit_set_count: number of hit sets to retain

hit_set_fpp: false positive probability of the bloom hit set

cache_target_dirty_ratio: default .4; when dirty data in the cache reaches 40%, it is flushed to the backend pool

cache_target_full_ratio: default .8; when the data in the cache reaches 80% of capacity, cold data is evicted from the cache

target_max_bytes: maximum capacity of the cache pool

target_max_objects: maximum number of objects stored in the cache pool

cache_min_flush_age: minimum age (in seconds) before an object is flushed to the backend pool

cache_min_evict_age: minimum age (in seconds) before an object is evicted from the cache
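A small sketch of how the cache sizing keys interact, using hypothetical values (a 100 GiB target_max_bytes and the default ratios):

```python
def cache_thresholds(target_max_bytes, dirty_ratio=0.4, full_ratio=0.8):
    """Given a cache pool's target_max_bytes, return the byte counts at
    which dirty data starts being flushed to the backend pool
    (cache_target_dirty_ratio) and at which cold objects start being
    evicted (cache_target_full_ratio)."""
    flush_at = target_max_bytes * dirty_ratio
    evict_at = target_max_bytes * full_ratio
    return flush_at, evict_at

gib = 1024 ** 3
flush_at, evict_at = cache_thresholds(100 * gib)
print(flush_at / gib, evict_at / gib)   # 40.0 80.0
```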

1.4 Query pool parameters

Command format:

ceph osd pool get {pool-name} size | min_size | crash_replay_interval | pg_num | pgp_num | crush_ruleset | hit_set_type | hit_set_period | hit_set_count | hit_set_fpp | auid | target_max_bytes | cache_target_dirty_ratio | cache_target_full_ratio | cache_min_flush_age | cache_min_evict_age | erasure_code_profile, or use ceph osd dump | grep pool to display all pool parameters

1.5 Delete pool

Command format:

ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it] or rados rmpool {pool-name} [{pool-name} --yes-i-really-really-mean-it]

Meaning of the command:

Delete a pool. If the pool still holds data or associated configuration such as users, that information must be removed manually; otherwise it will remain behind after the pool is deleted.

1.6 Rename a pool

Command format:

ceph osd pool rename {current-pool-name} {new-pool-name}

1.7 Query pool quota

Command format:

ceph osd pool get-quota {pool-name}

Meaning of the command:

Query the pool's quota on capacity and on the maximum number of objects

1.8 Set pool quota

Command format:

ceph osd pool set-quota {pool-name} max_objects|max_bytes {value}

Meaning of the command:

Set the pool's quota on capacity and on the maximum number of objects

1.9 Query pool statistics

Command format:

ceph osd pool stats [{pool-name}]

1.10 Create a pool snapshot

Command format:

ceph osd pool mksnap {pool-name} {snap-name}; this command is equivalent to: rados mksnap -p {pool-name} {snap-name}

Meaning of the command: take a snapshot of all objects in the pool

1.11 Query pool snapshots

Command format:

rados lssnap -p {pool-name}

Meaning of the command:

List the pool's snapshots

1.12 pool Snapshot rollback

Command format:

rados rollback -p {pool-name} {object-name} {snap-name}

Meaning of the command: currently only an individual object in a pool can be rolled back, not the entire pool; implementing whole-pool rollback yourself is something to consider.

1.13 cache pool configuration and deletion

(1) Configure one pool as the cache of another pool. Command:

ceph osd tier add {storagepool} {cachepool} or ceph osd tier add-cache {storagepool} {cachepool} {size}

(2) Set the cache pool mode:

ceph osd tier cache-mode {cachepool} {cache-mode}

There are four modes:

'none', 'writeback', 'forward', and 'readonly'

For 'writeback' and 'readonly' see Section 2.3

'forward': before shutting down a cache pool, switch its mode to 'forward' so that it stops accepting client I/O while the cache pool's data is flushed to the backend pool

(3) If the cache pool's working mode is set to 'writeback', you also need to execute the following command so that client I/O is mapped onto the cache pool; only then does the cache pool actually take effect:

ceph osd tier set-overlay {storagepool} {cachepool}

(4) Flush and evict the cache pool's data to the backend pool:

rados -p {cachepool} cache-flush-evict-all

(5) Remove the mapping between the cache pool and the backend pool:

ceph osd tier remove-overlay {storagepool} and ceph osd tier remove {storagepool} {cachepool}

This concludes the article on common pool-related Ceph commands. I hope the content above is helpful and that you learned something from it; if you found the article good, please share it so more people can see it.
