2025-01-28 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/02 Report
One: Introduction to RBD
A block is a sequence of bytes (for example, a 512-byte block). Block-based storage interfaces are the most common way to store data on rotating media such as hard drives, CDs, floppy disks, and even traditional 9-track tape. The ubiquity of block device interfaces makes a virtual block device an ideal candidate for interacting with a massive data storage system such as Ceph.
Ceph block devices are thin-provisioned, resizable, and store data striped over multiple OSDs in the Ceph cluster. Ceph block devices also take advantage of RADOS capabilities such as snapshots, replication, and consistency. Ceph's RADOS Block Device (RBD) interacts with OSDs using kernel modules or the librbd library.
Ceph block devices deliver high performance and virtually unlimited scalability to kernel modules, to KVMs such as QEMU, and to cloud-based computing systems such as OpenStack and CloudStack. You can use the same cluster to operate the Ceph RADOS Gateway, the Ceph File System, and Ceph block devices at the same time.
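By default an RBD image is striped over 4 MiB RADOS objects (the object size is configurable at image-creation time). As a quick sketch of that striping math, the 2048 MiB image created later in this walkthrough maps to 512 backing objects:

```shell
# Number of RADOS objects backing an RBD image (sizes in MiB);
# 4 MiB is the default object size, 2048 MiB matches the image
# created below
image_mib=2048
object_mib=4
echo $(( image_mib / object_mib ))   # prints 512
```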
Two: Create and Use Block Devices
Create a pool and a block device
[root@ceph-node1 ~]# ceph osd pool create block 6
pool 'block' created
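A placement-group count of 6 is unusually low for anything beyond a test setup. A common rule of thumb is (number of OSDs x 100) / replica count, rounded up to the next power of two. A minimal sketch of that calculation, assuming a hypothetical 3-OSD cluster with 3-way replication:

```shell
# Rule-of-thumb PG count: (osds * 100) / replicas, rounded up to
# the next power of two. osds=3 and replicas=3 are assumptions
# made purely for illustration.
osds=3
replicas=3
target=$(( osds * 100 / replicas ))       # 100
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # prints 128
```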
Create a user for the client and scp the keyring file to the client
[root@ceph-node1 ~]# ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=block' | tee ./ceph.client.rbd.keyring
[client.rbd]
key = AQA04PpdtJpbGxAAd+lCJFQnDfRlWL5cFUShoQ==
[root@ceph-node1 ~]# scp ceph.client.rbd.keyring root@ceph-client:/etc/ceph
Create a 2 GiB block device from the client
[root@ceph-client /]# rbd create block/rbd0 --size 2048 --name client.rbd
Map this block device on the client
[root@ceph-client /]# rbd map --image block/rbd0 --name client.rbd
/dev/rbd0
[root@ceph-client /]# rbd showmapped --name client.rbd
id pool  image snap device
0  block rbd0  -    /dev/rbd0
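Besides the /dev/rbdN node that rbd map prints, udev also creates a stable /dev/rbd/&lt;pool&gt;/&lt;image&gt; symlink (the rbd-mount script later in this article relies on it). The path is built directly from the pool and image names:

```shell
# Construct the stable udev symlink path for a mapped RBD image;
# the pool and image names match the ones used in this walkthrough
pool=block
image=rbd0
echo "/dev/rbd/$pool/$image"   # prints /dev/rbd/block/rbd0
```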
Note: the following error may be reported here
[root@ceph-client /]# rbd map --image block/rbd0 --name client.rbd
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (2) No such file or directory
This usually means the client kernel's rbd module does not support some of the image's enabled features; the common fix is to disable them with "rbd feature disable" (as the rbd-mount script below does). There are three solutions; see my blog post on resolving "rbd: sysfs write failed".
Create a file system and mount the block device
[root@ceph-client /]# fdisk -l /dev/rbd0
Disk /dev/rbd0: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
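The fdisk figures line up with the 2048 MiB size requested at image creation; a quick arithmetic check:

```shell
# 2048 MiB image -> total bytes and 512-byte sectors,
# matching the fdisk output above
size_mib=2048
bytes=$(( size_mib * 1024 * 1024 ))
sectors=$(( bytes / 512 ))
echo "$bytes $sectors"   # prints 2147483648 4194304
```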
[root@ceph-client /]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@ceph-client /]# mkdir -p /ceph-rbd0
[root@ceph-client /]# mount /dev/rbd0 /ceph-rbd0
[root@ceph-client /]# df -Th /ceph-rbd0
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/rbd0      xfs   2.0G   33M  2.0G   2% /ceph-rbd0
Write data test
[root@ceph-client /]# dd if=/dev/zero of=/ceph-rbd0/file count=100 bs=1M
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0674301 s, 1.6 GB/s
[root@ceph-client /]# ls -lh /ceph-rbd0/file
-rw-r--r-- 1 root root 100M Dec 19 10:50 /ceph-rbd0/file
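The reported rate checks out: 104857600 bytes over 0.0674301 s is roughly 1.6 GB/s (a figure likely flattered by page-cache buffering, since dd was run without oflag=direct):

```shell
# Recompute dd's reported throughput from its byte count and elapsed time
awk 'BEGIN { printf "%.1f GB/s\n", 104857600 / 0.0674301 / 1e9 }'
# prints 1.6 GB/s
```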
Make it a system service
[root@ceph-client /]# cat /usr/local/bin/rbd-mount
#!/bin/bash
# Pool name where the block device image is stored
export poolname=block
# Disk image name
export rbdimage0=rbd0
# Mount directory
export mountpoint0=/ceph-rbd0
# The mount/unmount action is passed from the systemd service as an argument
if [ "$1" == "m" ]; then
    modprobe rbd
    rbd feature disable $poolname/$rbdimage0 object-map fast-diff deep-flatten
    rbd map $poolname/$rbdimage0 --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
    mkdir -p $mountpoint0
    mount /dev/rbd/$poolname/$rbdimage0 $mountpoint0
fi
if [ "$1" == "u" ]; then
    umount $mountpoint0
    rbd unmap /dev/rbd/$poolname/$rbdimage0
fi
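The script's m/u dispatch can be exercised in isolation; a minimal stub, with the real rbd/mount commands replaced by echo purely for illustration, so the branching can be tried without a Ceph cluster:

```shell
# Hypothetical stub of rbd-mount's argument dispatch; the actual
# map/mount and unmount/unmap commands are mocked with echo
rbd_mount_stub() {
  if [ "$1" = "m" ]; then echo "map and mount"; fi
  if [ "$1" = "u" ]; then echo "unmount and unmap"; fi
}
rbd_mount_stub m   # prints "map and mount"
rbd_mount_stub u   # prints "unmount and unmap"
```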
[root@ceph-client ~]# cat /etc/systemd/system/rbd-mount.service
[Unit]
Description=RADOS block device mapping for rbd0 in pool block
Conflicts=shutdown.target
Wants=network-online.target
After=NetworkManager-wait-online.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rbd-mount m
ExecStop=/usr/local/bin/rbd-mount u
[Install]
WantedBy=multi-user.target
Mount automatically at boot
[root@ceph-client ~]# systemctl daemon-reload
[root@ceph-client ~]# systemctl enable rbd-mount.service