This article explains how to mitigate the impact of oversized bucket index shards. The explanation is kept simple and clear so that it is easy to follow and learn from.
The operations below are emergency measures, meant to stop the bleeding and relieve the pain quickly. Some of them are high risk and must be used with caution.
Adjust several OSD op timeout parameters
The following values are only examples; adjust them to your own environment, but do not make them too large.
osd_op_thread_timeout = 90                  # default is 15
osd_op_thread_suicide_timeout = 300         # default is 150
filestore_op_thread_timeout = 180           # default is 60
filestore_op_thread_suicide_timeout = 300   # default is 180
osd_scrub_thread_suicide_timeout = 300      # if scrub is causing op timeouts, increase this as appropriate
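To apply the new values without restarting the OSDs, they can also be injected at runtime. This is only a minimal sketch, assuming an admin keyring is available on the node and that your release accepts these options via injectargs; runtime changes are not persistent, so keep the ceph.conf entries as well. osd.0 is just an example daemon id, and the ceph daemon check must be run on the host where that OSD lives.

# inject the timeouts into all running OSDs (runtime only, not persistent)
ceph tell osd.* injectargs '--osd_op_thread_timeout 90 --osd_op_thread_suicide_timeout 300'
ceph tell osd.* injectargs '--filestore_op_thread_timeout 180 --filestore_op_thread_suicide_timeout 300'

# spot-check one OSD via its admin socket to confirm the value was applied
ceph daemon osd.0 config get osd_op_thread_timeout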
Compact the OMAP directory of an OSD
If an OSD can be stopped, its omap store can be compacted. Ceph 0.94.6 or later is recommended, as earlier versions have bugs in this area.
1. Set the noout flag: ceph osd set noout
2. Stop the OSD service: systemctl stop ceph-osd@<id> or /etc/init.d/ceph stop osd.<id>
3. Confirm the OSD process has stopped: ps -ef | grep "osd.<id>"
4. In ceph.conf, add the following under the corresponding [osd.<id>] section: leveldb_compact_on_mount = true
5. Start the OSD service: systemctl start ceph-osd@<id> or /etc/init.d/ceph start osd.<id>
6. Confirm the process is running: ps -ef | grep "osd.<id>"
7. Watch the result with ceph -s, and ideally tail the corresponding OSD log as well. Wait until all PGs are active+clean before continuing with the following steps.
8. After the compaction finishes, check the omap size: du -sh /var/lib/ceph/osd/ceph-<id>/current/omap
9. Remove the temporarily added leveldb_compact_on_mount setting from the [osd.<id>] section.
10. Unset the noout flag (depending on the situation; keeping noout set is often recommended in production): ceph osd unset noout
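The per-OSD cycle above can be strung together into a small script. The sketch below is only an outline under a few assumptions: systemd-managed OSDs, ceph.conf already edited by hand to add leveldb_compact_on_mount = true for the OSD in question, and a simplified health check (the criterion above is that all PGs return to active+clean; grepping for HEALTH_OK is only a rough stand-in).

#!/bin/bash
# rough sketch: compact the omap of a single OSD whose id is passed as $1
id=$1

ceph osd set noout                                   # avoid rebalancing while the OSD is down
systemctl stop ceph-osd@${id}
du -sh /var/lib/ceph/osd/ceph-${id}/current/omap     # omap size before compaction

systemctl start ceph-osd@${id}                       # leveldb_compact_on_mount triggers the compaction

until ceph health | grep -q HEALTH_OK; do            # simplified wait; also watch ceph -s and the OSD log
    sleep 30
done

du -sh /var/lib/ceph/osd/ceph-${id}/current/omap     # omap size after compaction
ceph osd unset noout                                 # or keep noout set, as suggested above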
Perform a reshard operation on the bucket
Resharding a bucket adjusts its number of index shards and redistributes the index data across them.
This is only supported on Ceph 0.94.10 and later. All reads and writes to the bucket must be stopped first, and there is a risk of data loss, so use it with great care and at your own risk.
Note that before doing any of the following, all operations against the bucket must have been stopped.

First back up the bucket's index:
radosgw-admin bi list --bucket=<bucket_name> > <bucket_name>.list.backup

If needed, the index can be restored from that backup:
radosgw-admin bi put --bucket=<bucket_name> < <bucket_name>.list.backup

Check the bucket's index id:
root@demo:/home/user# radosgw-admin bucket stats --bucket=bucket-maillist
{
    "bucket": "bucket-maillist",
    "pool": "default.rgw.buckets.data",
    "index_pool": "default.rgw.buckets.index",
    "id": "0a6967a5-2c76-427a-99c6-8a788ca25034.54133.1",   # note this id
    "marker": "0a6967a5-2c76-427a-99c6-8a788ca25034.54133.1",
    "owner": "user",
    "mtime": "2017-08-23 13:42:59.007081",
    ...
    "usage": {},
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    }
}

The reshard of the bucket index then proceeds as follows. Use the command below to change the number of shards of "bucket-maillist" to 4; note that it prints both the old and the new bucket instance id:
root@demo:/home/user# radosgw-admin bucket reshard --bucket="bucket-maillist" --num-shards=4
*** NOTICE: operation will not remove old bucket index objects ***
***         these will need to be removed manually             ***
old bucket instance id: 0a6967a5-2c76-427a-99c6-8a788ca25034.54133.1
new bucket instance id: 0a6967a5-2c76-427a-99c6-8a788ca25034.54147.1
total entries: 3

Then delete the old bucket instance:
root@demo:/home/user# radosgw-admin bi purge --bucket="bucket-maillist" --bucket-id=0a6967a5-2c76-427a-99c6-8a788ca25034.54133.1

Finally check the result:
root@demo:/home/user# radosgw-admin bucket stats --bucket=bucket-maillist
{
    "bucket": "bucket-maillist",
    "pool": "default.rgw.buckets.data",
    "index_pool": "default.rgw.buckets.index",
    "id": "0a6967a5-2c76-427a-99c6-8a788ca25034.54147.1",   # the id has changed
    "marker": "0a6967a5-2c76-427a-99c6-8a788ca25034.54133.1",
    "owner": "user",
    ...
    "usage": {
        "rgw.main": {
            "size_kb": 50,
            "size_kb_actual": 60,
            "num_objects": 3
        }
    },
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    }
}
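The backup, reshard and purge steps can be wrapped in a small script. A rough sketch, assuming the bucket has already been quiesced, that jq is installed to extract the instance id from the stats output, and that the bucket name and target shard count are passed as arguments:

#!/bin/bash
# sketch: back up the index of a bucket, reshard it, then purge the old index
bucket=$1     # e.g. bucket-maillist
shards=$2     # e.g. 4

# back up the current index, as above
radosgw-admin bi list --bucket="${bucket}" > "${bucket}.list.backup"

# remember the old bucket instance id before resharding
old_id=$(radosgw-admin bucket stats --bucket="${bucket}" | jq -r '.id')

# reshard to the new shard count (the bucket must be idle)
radosgw-admin bucket reshard --bucket="${bucket}" --num-shards="${shards}"

# the old index objects are not removed automatically; purge them
radosgw-admin bi purge --bucket="${bucket}" --bucket-id="${old_id}"

# confirm the bucket now points at a new instance id
radosgw-admin bucket stats --bucket="${bucket}" | grep '"id"'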
Disable scrub and deep-scrub on a pool
This is available on Jewel and later releases.
Use the following commands to set the noscrub and nodeep-scrub flags on a pool:
# ceph osd pool set <pool-name> noscrub 1
# ceph osd pool set <pool-name> nodeep-scrub 1

Confirm the settings:
# ceph osd dump | grep <pool-name>
pool 11 'pool-name' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 800 flags hashpspool,noscrub,nodeep-scrub stripe_width 0

To remove the noscrub and nodeep-scrub flags again:
# ceph osd pool set <pool-name> noscrub 0
# ceph osd pool set <pool-name> nodeep-scrub 0
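When more than one pool is affected (for example both the index and data pools), the same flags can be set in a loop. A short sketch, with the pool names as examples to adapt to your cluster:

# set noscrub and nodeep-scrub on several pools, then verify the flags
for pool in default.rgw.buckets.index default.rgw.buckets.data; do
    ceph osd pool set "${pool}" noscrub 1
    ceph osd pool set "${pool}" nodeep-scrub 1
done
ceph osd dump | grep -E 'noscrub|nodeep-scrub'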
Thank you for reading. That is all on how to mitigate the impact of oversized index shards. Hopefully this article has given you a better understanding of the problem; the specific procedures should still be verified in practice before being relied on.