
Configuration information for ceph pool

2025-02-24 Update From: SLTechnology News&Howtos



This article introduces the configuration information of a Ceph pool. Many people have questions about how pool configuration is represented inside Ceph, so this article walks through the relevant data structures in the Ceph source code. I hope it helps answer those questions; please follow along.

1.1 The pool-related definitions in OSDMap are as follows:

map<int64_t, pg_pool_t> pools;                   // configuration information for each pool
map<int64_t, string> pool_name;                  // key is pool id, value is pool name
map<string, map<string,string>> erasure_code_profiles;
map<string, int64_t> name_pool;                  // find id by name
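These three maps mirror one another, so a pool can be resolved in two hops: name to id via name_pool, then id to configuration via pools. A minimal sketch of that lookup, assuming simplified stand-in types (pg_pool_t_sketch and lookup_by_name are illustrative names, not part of the Ceph source):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

struct pg_pool_t_sketch {   // stand-in for the real pg_pool_t
  uint32_t pg_num = 0;
};

// The three mirrored maps, as in OSDMap (simplified value types).
std::map<int64_t, pg_pool_t_sketch> pools;      // pool id -> configuration
std::map<int64_t, std::string>      pool_name;  // pool id -> pool name
std::map<std::string, int64_t>      name_pool;  // pool name -> pool id

// Resolve a pool's configuration by name; returns nullptr if unknown.
const pg_pool_t_sketch* lookup_by_name(const std::string& name) {
  auto it = name_pool.find(name);
  if (it == name_pool.end())
    return nullptr;
  auto pit = pools.find(it->second);
  return pit == pools.end() ? nullptr : &pit->second;
}
```

Keeping both directions (pool_name and name_pool) trades a little memory for O(log n) lookups by either key, which is why OSDMap carries all three maps.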

1.2 PGPool, defined in PG.h, which caches each pool's snapshot state:

struct PGPool {
  int64_t id;
  string name;
  uint64_t auid;

  pg_pool_t info;
  SnapContext snapc;   // the default pool snapc, ready to go.

  interval_set<snapid_t> cached_removed_snaps;  // current removed_snaps set
  interval_set<snapid_t> newly_removed_snaps;   // newly removed in the last epoch

  PGPool(int64_t i, const string& _name, uint64_t au)
    : id(i), name(_name), auid(au) {}

  void update(OSDMapRef map);
};
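PGPool::update() refreshes this cached snapshot state whenever a new OSDMap arrives; conceptually, newly_removed_snaps is the set difference between the map's current removed snaps and the cached copy from the previous epoch. A rough sketch of that bookkeeping, with std::set standing in for interval_set and all names (pgpool_sketch, snap_set) invented for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <iterator>
#include <set>

using snap_set = std::set<uint64_t>;  // stand-in for interval_set<snapid_t>

struct pgpool_sketch {
  snap_set cached_removed_snaps;  // removed snaps as of the last map we saw
  snap_set newly_removed_snaps;   // removed since the last epoch

  // On each new map: newly_removed = current - cached, then cache current.
  void update(const snap_set& current_removed) {
    newly_removed_snaps.clear();
    std::set_difference(current_removed.begin(), current_removed.end(),
                        cached_removed_snaps.begin(),
                        cached_removed_snaps.end(),
                        std::inserter(newly_removed_snaps,
                                      newly_removed_snaps.end()));
    cached_removed_snaps = current_removed;
  }
};
```

The real code works on interval_set, which stores contiguous snap-id ranges compactly, but the subtraction-then-cache pattern is the same.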

1.3 pg_pool_t, defined in osd_types.h:

private:
  __u32 pg_num, pgp_num;        ///< number of pgs
public:
  map<string,string> properties;     ///< OBSOLETE
  string erasure_code_profile;       ///< name of the erasure code profile in OSDMap
  epoch_t last_change;          ///< most recent epoch changed, excluding snapshot changes
  epoch_t last_force_op_resend; ///< last epoch that forced clients to resend
  snapid_t snap_seq;            ///< seq for per-pool snapshot
  epoch_t snap_epoch;           ///< osdmap epoch of last snap
  uint64_t auid;                ///< who owns the pg
  __u32 crash_replay_interval;  ///< seconds to allow clients to replay ACKed but unCOMMITted requests

  uint64_t quota_max_bytes;     ///< maximum number of bytes for this pool
  uint64_t quota_max_objects;   ///< maximum number of objects for this pool

  /*
   * Pool snaps (global to this pool). These define a SnapContext for
   * the pool, unless the client manually specifies an alternate
   * context.
   */
  map<snapid_t, pool_snap_info_t> snaps;
  /*
   * Alternatively, if we are defining non-pool snaps (e.g. via the
   * Ceph MDS), we must track @removed_snaps (since @snaps is not
   * used). Snaps and removed_snaps are to be used exclusive of each
   * other!
   */
  interval_set<snapid_t> removed_snaps;

  unsigned pg_num_mask, pgp_num_mask;

  set<uint64_t> tiers;          ///< pools that are tiers of us
  int64_t tier_of;              ///< pool for which we are a tier
  // Note that write wins for read+write ops
  int64_t read_tier;            ///< pool/tier for objecter to direct reads to
  int64_t write_tier;           ///< pool/tier for objecter to direct writes to

  cache_mode_t cache_mode;      ///< cache pool mode

  uint64_t target_max_bytes;    ///< tiering: target max pool size
  uint64_t target_max_objects;  ///< tiering: target max pool size

  uint32_t cache_target_dirty_ratio_micro; ///< cache: fraction of target to leave dirty
  uint32_t cache_target_full_ratio_micro;  ///< cache: fraction of target to fill before we evict in earnest

  uint32_t cache_min_flush_age; ///< minimum age (seconds) before we can flush
  uint32_t cache_min_evict_age; ///< minimum age (seconds) before we can evict

  HitSet::Params hit_set_params;         ///< The HitSet params to use on this pool
  uint32_t hit_set_period;               ///< periodicity of HitSet segments (seconds)
  uint32_t hit_set_count;                ///< number of periods to retain
  uint32_t min_read_recency_for_promote; ///< minimum number of HitSet to check before promote

  uint32_t stripe_width;        ///< erasure coded stripe size in bytes

  uint64_t expected_num_objects; ///< expected number of objects on this pool, a value of 0 indicates

  uint64_t flags;               ///< FLAG_*
  __u8 type;                    ///< TYPE_*
  __u8 size, min_size;          ///< number of osds in each pg
  __u8 crush_ruleset;           ///< crush placement ruleset
  __u8 object_hash;             ///< hash mapping object name to ps

At this point, the study of "the configuration information of ceph pool" is over. I hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it! If you want to continue learning more related knowledge, please keep following the site for more practical articles.
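As a footnote on the pg_num / pg_num_mask pair: the mask is the smallest value of the form 2^n - 1 covering pg_num - 1, and Ceph's "stable mod" uses it to fold an object's placement seed into [0, pg_num) in a way that keeps most mappings stable when pg_num is not a power of two. A simplified sketch of the idea (calc_pg_mask and stable_mod are illustrative names, not the exact source functions):

```cpp
#include <cassert>
#include <cstdint>

// Smallest (2^n - 1) mask covering pg_num - 1.
uint32_t calc_pg_mask(uint32_t pg_num) {
  uint32_t mask = 0;
  while (mask < pg_num - 1)
    mask = (mask << 1) | 1;
  return mask;
}

// Fold a hash value x into [0, b) given bmask = calc_pg_mask(b).
// If the masked value already lands below b, use it; otherwise drop
// one more bit so the result stays in range and most inputs keep
// their mapping when b changes.
uint32_t stable_mod(uint32_t x, uint32_t b, uint32_t bmask) {
  if ((x & bmask) < b)
    return x & bmask;
  return x & (bmask >> 1);
}
```

For example, with pg_num = 12 the mask is 15; seeds whose low four bits are below 12 map directly, and the rest fall back to their low three bits, so splitting a pool only remaps a fraction of the objects.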
