This article explains in detail several tips for using rbd blocks in ceph. The material is quite practical, so it is shared here for reference; I hope you get something out of it after reading.
1. The true size of the rbd block
Because ceph uses thin provisioning, the corresponding blocks are allocated only when data is written. So creating even a very large block completes instantly: apart from some metadata, ceph does not allocate any space up front. How much space does an rbd block actually occupy, then? Take my environment as an example:
[root@osd1 /]# rbd ls myrbd
hello.txt
rbd1
[root@osd1 /]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.13446b8b4567
        format: 2
        features: layering
[root@osd1 /]# rbd diff myrbd/rbd1 | awk '{SUM += $2} END {print SUM/1024/1024 " MB"}'
14.2812 MB
[root@osd1 /]# rbd diff myrbd/rbd1
Offset     Length  Type
0          131072  data
4194304    16384   data
130023424  16384   data
260046848  16384   data
390070272  16384   data
520093696  4194304 data
524288000  4194304 data
528482304  2129920 data
650117120  16384   data
780140544  16384   data
910163968  16384   data
1040187392 16384   data
1069547520 4194304 data
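The rbd diff trick above can be wrapped into a small script that reports provisioned versus actually allocated size for every image in a pool. A minimal sketch, assuming the myrbd pool from this environment; the output formatting is my own:

#!/bin/bash
# report provisioned vs. actually allocated size for each image in a pool
POOL=${1:-myrbd}

for IMG in $(rbd ls "$POOL"); do
    # sum the Length column of "rbd diff" to get the bytes really written
    USED=$(rbd diff "$POOL/$IMG" | awk '{sum += $2} END {printf "%.2f", sum/1024/1024}')
    # "rbd info" reports the provisioned (thin) size, e.g. "1024 MB"
    SIZE=$(rbd info "$POOL/$IMG" | awk '/size/ {print $2, $3; exit}')
    echo "$POOL/$IMG: provisioned $SIZE, allocated $USED MB"
done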
2. rbd format1 and rbd format2

rbd format1:
[root@osd1 /]# rbd create myrbd/rbd1 -s 8
[root@osd1 /]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 8192 kB in 2 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.13fb.6b8b4567
        format: 1
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd1.rbd
[root@osd1 /]# rbd map myrbd/rbd1
[root@osd1 /]# rbd showmapped
id pool  image snap device
0  myrbd rbd1  -    /dev/rbd0
[root@osd1 /]# dd if=/dev/zero of=/dev/rbd0
dd: writing to `/dev/rbd0': No space left on device
16385+0 records in
16384+0 records out
8388608 bytes (8.4 MB) copied, 2.25155 s, 3.7 MB/s
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd1.rbd
rb.0.13fb.6b8b4567.000000000001
rb.0.13fb.6b8b4567.000000000000
$image_name.rbd: contains the id (rb.0.13fb.6b8b4567) of this block
$rbd_id.$fragment: data block
rbd_directory: list of the rbd blocks in the current pool
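To verify where that id comes from, you can pull the format1 header object out of the pool and inspect it. A minimal sketch, assuming the myrbd/rbd1 image from the example above; the header is a small binary blob, so we just grep its readable strings (the exact on-disk encoding may differ between ceph versions):

# fetch the format1 header object and look for the block name prefix
rados -p myrbd get rbd1.rbd /tmp/rbd1.hdr
strings /tmp/rbd1.hdr | grep 'rb\.'    # expect: rb.0.13fb.6b8b4567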
rbd format2:
[root@osd1 /]# rbd create myrbd/rbd1 -s 8 --image-format=2
[root@osd1 /]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 8192 kB in 2 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.13436b8b4567
        format: 2
        features: layering
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd_header.13436b8b4567
rbd_id.rbd1
[root@osd1 /]# rbd map myrbd/rbd1
[root@osd1 /]# rbd showmapped
id pool  image snap device
0  myrbd rbd1  -    /dev/rbd0
[root@osd1 /]# dd if=/dev/zero of=/dev/rbd0
dd: writing to `/dev/rbd0': No space left on device
16385+0 records in
16384+0 records out
8388608 bytes (8.4 MB) copied, 2.14407 s, 3.9 MB/s
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd_data.13436b8b4567.0000000000000000
rbd_data.13436b8b4567.0000000000000001
rbd_header.13436b8b4567
rbd_id.rbd1
rbd_data.$rbd_id.$fragment: data block
rbd_directory: list of the rbd blocks in the current pool
rbd_header.$rbd_id: metadata for the rbd block
rbd_id.$image_name: contains the id (13436b8b4567) of this block
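Putting the naming scheme together, every RADOS object belonging to one format2 image can be listed from the prefix alone. A minimal sketch, assuming the myrbd/rbd1 image above and that rbd info prints a block_name_prefix line as shown:

#!/bin/bash
# list every RADOS object that belongs to one format2 image
POOL=${1:-myrbd}
IMG=${2:-rbd1}

# the data objects share the block_name_prefix, e.g. rbd_data.13436b8b4567
PREFIX=$(rbd info "$POOL/$IMG" | awk -F': ' '/block_name_prefix/ {print $2}')
ID=${PREFIX#rbd_data.}    # e.g. 13436b8b4567

# id object, header object, and all data fragments
rados -p "$POOL" ls | egrep "^(rbd_id\.$IMG|rbd_header\.$ID|$PREFIX)"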
3. Ceph Primary Affinity

Primary affinity controls how likely an OSD is to be chosen as the primary of a placement group; lowering it shifts primary duty onto other OSDs without moving any data. The option must be enabled first:

[root@mon0 yum.repos.d]# ceph --admin-daemon /var/run/ceph/ceph-mon.*.asok config show | grep 'primary_affinity'
  "mon_osd_allow_primary_affinity": "false",
# add to ceph.conf:
mon osd allow primary affinity = true
[root@mon0 yum.repos.d]# ceph pg dump | grep active+clean | egrep "\[0," | wc -l
dumped all in format plain
109
[root@mon0 yum.repos.d]# ceph pg dump | grep active+clean | egrep ", 0\]" | wc -l
dumped all in format plain
123
# ceph osd primary-affinity osd.0 0.5
set osd.0 primary-affinity to 0.5 (8327682)
# ceph pg dump | grep active+clean | egrep "\[0," | wc -l
dumped all in format plain
4
# ceph pg dump | grep active+clean | egrep ", 0\]" | wc -l
dumped all in format plain
13
# ceph osd primary-affinity osd.0 0
set osd.0 primary-affinity to 0 (802)
# ceph pg dump | grep active+clean | egrep "\[0," | wc -l
dumped all in format plain
0
# ceph pg dump | grep active+clean | egrep ", 0\]" | wc -l
dumped all in format plain
180
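For a quick overview of how primary roles are spread across all OSDs, the same egrep trick can be wrapped in a loop. A minimal sketch, assuming the plain pg dump format shown above, where acting sets are printed like [0,2,1] with the primary first:

#!/bin/bash
# count how many active+clean PGs each OSD is primary for
DUMP=$(ceph pg dump 2>/dev/null | grep active+clean)

for id in $(ceph osd ls); do
    # the primary is the first id in the set, so match "[<id>,"
    n=$(echo "$DUMP" | egrep -c "\[$id,")
    echo "osd.$id is primary for $n PGs"
done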
4. Upgrade ceph

On October 29, Ceph released version 0.87 (Giant), and we upgraded to it as soon as possible. The upgrade process is very simple: just modify ceph.repo and then run yum update ceph. Restart the various services after the upgrade is complete. The ceph.repo is as follows:
[root@mon0 software]# cat /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-giant/el6/$basearch
priority=1
gpgcheck=1
type=rpm-md

[ceph-source]
name=Ceph source packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-giant/el6/SRPMS
priority=1
gpgcheck=1
type=rpm-md

[ceph-noarch]
name=Ceph noarch packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-giant/el6/noarch
priority=1
gpgcheck=1
type=rpm-md
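After the packages are updated, the daemons keep running the old code until restarted; the usual order is monitors first, then OSDs, one node at a time. A rough sketch for the el6 sysvinit setup used here (the init script arguments may vary with your installation):

# on each monitor node:
yum update ceph
/etc/init.d/ceph restart mon      # restart the local monitor
ceph -s                           # wait for HEALTH_OK before moving on

# then on each OSD node:
yum update ceph
/etc/init.d/ceph restart osd      # restart the local OSDs
ceph -s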
5. Ceph admin socket

The runtime parameters of ceph can be read through the ceph admin socket, which is very helpful for verification and debugging.
$ ceph --admin-daemon /path/to/your/ceph/socket <command>
[root@osd2 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.4.asok help
{ "config diff": "dump diff of current config and default config",
  "config get": "config get <field>: get the config value",
  "config set": "config set <field> <val> [<val> ...]: set a config variable",
  "config show": "dump current config settings",
  "dump_blacklist": "dump blacklisted clients and times",
  "dump_historic_ops": "show slowest recent ops",
  "dump_op_pq_state": "dump op priority queue state",
  "dump_ops_in_flight": "show the ops currently in flight",
  "dump_reservations": "show recovery reservations",
  "dump_watchers": "show clients which have active watches, and on which objects",
  "flush_journal": "flush the journal to permanent store",
  "get_command_descriptions": "list available commands",
  "getomap": "output entire object map",
  "git_version": "get git sha1",
  "help": "list available commands",
  "injectdataerr": "inject data error into omap",
  "injectmdataerr": "inject metadata error",
  "log dump": "dump recent log entries to log file",
  "log flush": "flush log entries to log file",
  "log reopen": "reopen log file",
  "objecter_requests": "show in-progress osd requests",
  "perf dump": "dump perfcounters value",
  "perf schema": "dump perfcounters schema",
  "rmomapkey": "remove omap key",
  "setomapheader": "set omap header",
  "setomapval": "set omap value",
  "status": "high-level status of OSD",
  "truncobj": "truncate object to length",
  "version": "get ceph version"}
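Besides config show, the socket can also read or change a single option at runtime, which is handy when chasing a problem on a live daemon. For example, with the same osd.4 socket as above (the change only lives in memory and is not written back to ceph.conf):

# read one option, then raise the osd debug level on the fly
ceph --admin-daemon /var/run/ceph/ceph-osd.4.asok config get debug_osd
ceph --admin-daemon /var/run/ceph/ceph-osd.4.asok config set debug_osd 5/5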
Get the parameter settings related to journal:
[root@osd2 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.osd2.asok config show | grep journal
  "debug_journaler": "0\/5",
  "debug_journal": "1\/3",
  "journaler_allow_split_entries": "true",
  "journaler_write_head_interval": "15",
  "journaler_prefetch_periods": "10",
  "journaler_prezero_periods": "5",
  "journaler_batch_interval": "0.001",
  "journaler_batch_max": "0",
  "mds_kill_journal_at": "0",
  "mds_kill_journal_expire_at": "0",
  "mds_kill_journal_replay_at": "0",
  "mds_journal_format": "1",
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-osd2\/journal",
  "osd_journal_size": "5120",
  "filestore_fsync_flushes_journal_data": "false",
  "filestore_journal_parallel": "false",
  "filestore_journal_writeahead": "false",
  "filestore_journal_trailing": "false",
  "journal_dio": "true",
  "journal_aio": "true",
  "journal_force_aio": "false",
  "journal_max_corrupt_search": "10485760",
  "journal_block_align": "true",
  "journal_write_header_frequency": "0",
  "journal_max_write_bytes": "10485760",
  "journal_max_write_entries": "10485760",
  "journal_queue_max_ops": "33554432",
  "journal_queue_max_bytes": "33554432",
  "journal_align_min_size": "65536",
  "journal_replay_from": "0",
  "journal_zero_on_create": "false",
  "journal_ignore_corruption": "false",

This is the end of this article on "what are the tips for using rbd blocks in ceph?". I hope the above content is helpful to you and helps you learn more. If you think the article is good, please share it for more people to see.