2025-03-29 Update From: SLTechnology News&Howtos
Before you begin, refer to my previous article, Introduction and Installation of Ceph Object Storage.
The Ceph object gateway stores the bucket index data in the index pool, which defaults to default.rgw.buckets.index. Sometimes users like to put many objects (hundreds of thousands to millions) in a single bucket. If you do not use the gateway admin interface to set a quota on the maximum number of objects per bucket, the bucket index can suffer serious performance degradation when users put large numbers of objects into a bucket.
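A commonly cited guideline is to keep each index shard at roughly 100,000 objects (the default of the rgw_max_objs_per_shard option in later Ceph releases). The helper below is an illustrative sketch, not part of Ceph, for estimating how many shards a bucket of a given expected size would need under that assumption:

```python
import math

def recommended_index_shards(expected_objects: int, objs_per_shard: int = 100_000) -> int:
    """Estimate a bucket index shard count, assuming the rule of thumb
    of roughly 100,000 objects per index shard. Illustrative only."""
    return max(1, math.ceil(expected_objects / objs_per_shard))

# A bucket expected to hold 250,000 objects would want 3 shards.
print(recommended_index_shards(250_000))  # → 3
```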
View this pool:
[root@ceph-node1 ~]# ceph osd lspools | grep buckets.index
31 default.rgw.buckets.index
Since Ceph 0.94, the bucket index can be sharded to prevent a performance bottleneck when a large number of objects is allowed in a bucket. The rgw_override_bucket_index_max_shards setting controls the maximum number of index shards per bucket; its default value is 0, which means bucket index sharding is off by default.
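Conceptually, sharding works by hashing each object name and taking the result modulo the shard count, so index entries are spread across several RADOS objects instead of one. The sketch below illustrates that mapping; note that RGW uses its own internal string hash, not MD5, so this is only a model of the idea:

```python
import hashlib

def shard_for_object(object_name: str, num_shards: int) -> int:
    """Illustrative only: map an object name to a bucket index shard by
    hashing the name and reducing it modulo the shard count.
    (RGW's real implementation uses its own string hash, not MD5.)"""
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % num_shards

# With 2 shards, object names are distributed deterministically across shards.
names = ["a.txt", "b.txt", "c.txt", "d.txt"]
print({n: shard_for_object(n, 2) for n in names})
```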
The following command shows the value of this parameter for the OSD with id 0:
[root@ceph-node1 ~]# ceph-osd -i 0 --show-config | grep "rgw_override_bucket"
rgw_override_bucket_index_max_shards = 0
To enable bucket index sharding, set rgw_override_bucket_index_max_shards to a value greater than 0. For a simple configuration, add rgw_override_bucket_index_max_shards to the [global] section of the Ceph configuration file; alternatively, set it per instance in the configuration file, as follows:
[root@ceph-node1 ~]# vim /etc/ceph/ceph.conf
[global]
fsid = 6355eb2f-2b5c-4280-9747-2d77a307b3d9
mon_initial_members = ceph-node1,ceph-node2,ceph-node3
mon_host = 172.16.4.78
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.16.4.0/21
rgw_override_bucket_index_max_shards = 2  ## configure this parameter globally; you also need to push this configuration file to the other nodes
[client.rgw.ceph-node1]
rgw_frontends = "civetweb port=7481"
rgw_override_bucket_index_max_shards = 2  ## configure this parameter only for the rgw instance on node 1
[root@ceph-node1 ~]# systemctl restart ceph-radosgw@rgw.ceph-node1.service
In a federated configuration, each zone may have a different index_pool setting for failover. To make this value consistent across the zones of a zone group, set rgw_override_bucket_index_max_shards in the gateway's zone group configuration, as follows:
[root@ceph-node1 ~]# radosgw-admin zonegroup get > zonegroup.json
Open the zonegroup.json file, edit the bucket_index_max_shards setting for each named zone, save the file, and then reset the zone group, as follows:
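For orientation, a trimmed zonegroup.json might look like the fragment below; the zone names and most other fields (omitted here) depend on your deployment, and only the bucket_index_max_shards value is being changed:

```json
{
    "name": "default",
    "zones": [
        {
            "name": "default",
            "bucket_index_max_shards": 2
        }
    ]
}
```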
[root@ceph-node1 ~]# radosgw-admin zonegroup set < zonegroup.json
After updating the zone group, commit the period update, as follows:
[root@ceph-node1 ~]# radosgw-admin period update --commit