Make sure the cluster status is healthy before starting (the detailed cluster configuration process is omitted here); you can refer to the configuration in the first part of https://blog.51cto.com/jdonghong/244175.
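As a quick sanity check before going further, cluster health can be confirmed from the admin node (a minimal sketch; both commands are standard Ceph CLI):

# Expect HEALTH_OK before proceeding
ceph health
# Fuller view: monitors, OSDs, and placement-group states
ceph -s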
Ceph environment configuration
Start deploying RBD (RADOS Block Device).
Install ceph on the client (the clients in this case are 192.168.27.210 and 192.168.26.112):
ceph-deploy install bddb.com
Push the configuration file to the client.
[root@master idc-cluster]# ceph-deploy admin bddb.com
The client creates an RBD block image:
[root@BDDB ceph]# rbd create idc --size 4096 --image-feature layering
[root@BDDB ceph]# rbd ls
idc
[root@BDDB ceph]#
The client maps the block image it just created:
[root@BDDB ceph]# rbd map idc
/dev/rbd0
[root@BDDB ceph]#
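To confirm the mapping, the kernel's current RBD mappings can be listed (a small sketch; rbd showmapped is part of the standard rbd CLI):

# idc should appear here, mapped to /dev/rbd0
rbd showmapped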
Format the block device to create a filesystem (on the client node):
[root@BDDB ceph]# mkfs.xfs /dev/rbd0
(Note: the lsblk output below shows rbd0 carrying ext4, so the device was evidently reformatted as ext4 at some point.)
Mount the directory:
[root@BDDB ceph]# mount /dev/rbd/rbd/idc /ceph/rbd
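Note that the mount point must already exist before mounting; a minimal sketch creating the directories used in this walkthrough:

# Create the mount points used below if they do not exist yet
mkdir -p /ceph/rbd /ceph/rbd2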
Check the status and create some test files:
[root@BDDB ceph]# ls /ceph
rbd  rbd2
[root@BDDB ceph]# ls /ceph -l
total 0
drwxr-xr-x 2 ceph ceph 6 Aug 23 11:17 rbd
drwxr-xr-x 2 ceph ceph 6 Aug 26 10:01 rbd2
[root@BDDB ceph]# mount /dev/rbd
rbd/  rbd0
[root@BDDB ceph]# mount /dev/r
random  raw/  rbd/  rbd0  rtc  rtc0
[root@BDDB ceph]# mount /dev/rbd
rbd/  rbd0
[root@BDDB ceph]# mount /dev/rbd/rbd/idc /ceph/rbd
[root@BDDB ceph]# df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sda2   60G   32G   29G    53%   /
/dev/sda5   60G   3.5G  57G    6%    /data
/dev/sda1   497M  148M  350M   30%   /boot
tmpfs       184M  0     184M   0%    /run/user/0
/dev/rbd0   3.9G  16M   3.6G   1%    /ceph/rbd
[root@BDDB ceph]# ls
ceph.client.admin.keyring  ceph.conf  rbdmap  tmp3nza8m  tmpGs8qYv  tmpNTb5P9  tmpOJovru
[root@BDDB ceph]# cd /ceph/rbd
[root@BDDB rbd]# ls
123.txt  1.txt  lost+found  my1.txt  my2.txt  my3.txt
Map a second image, isc (evidently created and formatted as xfs beforehand in the same way):
[root@BDDB rbd]# rbd map isc
/dev/rbd1
[root@BDDB rbd]# lsblk -f
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sda
├─sda1 xfs          5f75d5de-3e02-43e3-a36d-57bc39e9a5ae /boot
├─sda2 xfs          bab8e8ae-cb5e-4299-b879-54960f1a24b9 /
├─sda3 swap         ce44af87-b8a7-40d2-8504-1c8fae81f613 [SWAP]
├─sda4
└─sda5 xfs          cb8c9f72-8154-4ae9-aa57-54703dedfd06 /data
sr0
rbd0   ext4         cf1f5bc8-5dd1-44d4-87a6-0c55d39405fe /ceph/rbd
rbd1   xfs          42989fcc-0746-4848-a957-0c01704865b8
[root@BDDB rbd]# mount /dev/rbd
rbd/  rbd0  rbd1
[root@BDDB rbd]# mount /dev/rbd/rbd/isc
123.txt  1.txt  lost+found/  my1.txt  my2.txt  my3.txt
[root@BDDB rbd]# mount /dev/rbd/rbd/isc /ceph/rbd
rbd/  rbd2/
[root@BDDB rbd]# mount /dev/rbd/rbd/isc /ceph/rbd2/
[root@BDDB rbd]# df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sda2   60G   32G   29G    53%   /
/dev/sda5   60G   3.5G  57G    6%    /data
/dev/sda1   497M  148M  350M   30%   /boot
tmpfs       184M  0     184M   0%    /run/user/0
/dev/rbd0   3.9G  16M   3.6G   1%    /ceph/rbd
/dev/rbd1   10G   33M   10G    1%    /ceph/rbd2
[root@BDDB rbd]# cd /ceph/rbd2/
[root@BDDB rbd2]# ls
[root@BDDB rbd2]# touch {1..3}.txt
[root@BDDB rbd2]# ls
1.txt  2.txt  3.txt
[root@BDDB rbd2]# echo my test > 1.txt
[root@BDDB rbd2]# ls
1.txt  2.txt  3.txt
[root@BDDB rbd2]# cat 1.txt
my test
[root@BDDB rbd2]#
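Mappings made with rbd map do not survive a reboot. The rbdmap file seen in the /etc/ceph listing above can make them persistent; a hedged sketch, assuming the default admin keyring:

# /etc/ceph/rbdmap -- one image per line: pool/image  id=...,keyring=...
# rbd/idc id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# Enable the rbdmap service so the images are mapped at boot
systemctl enable rbdmap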
Then map the block device on another client and mount it to observe the behavior of RBD.
First, check the image status from the other client:
[root@master rbd]# rbd ls
idc
isc
[root@master rbd]# rbd info idc isc
rbd: too many arguments
[root@master rbd]# rbd info idc
rbd image 'idc':
    size 4096 MB in 1024 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.85456b8b4567
    format: 2
    features: layering
    flags:
[root@master rbd]# rbd info isc
rbd image 'isc':
    size 10240 MB in 2560 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.148a56b8b4567
    format: 2
    features: layering
    flags:
[root@master rbd]#
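The "too many arguments" error above is because rbd info takes a single image; to inspect several at once, loop over them (a small sketch in plain shell):

# Print info for every image in the default pool
for img in $(rbd ls); do echo "=== $img ==="; rbd info "$img"; done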
Map the RBD block image to the local host (map):
[root@master rbd]# lsblk -f
NAME            FSTYPE      LABEL UUID                                   MOUNTPOINT
fd0
sda
├─sda1          xfs               fb314ba6-93e0-4d7d-bb80-9c6e5a92fd61   /boot
└─sda2          LVM2_member       3Zne0f-m5MZ-OP67-TQ2a-Lnzr-SGME-UNJnMK
  ├─centos-root xfs               d009c83b-2ca3-4642-a956-9f967fa249e6   /
  ├─centos-swap swap              cf124d61-2df9-44a4-bba2-cee94056f547   [SWAP]
  └─centos-data xfs               538b2348-c8cd-4755-9ba9-2f3b10fb8f33   /data
sr0
rbd0            ext4              cf1f5bc8-5dd1-44d4-87a6-0c55d39405fe   /ceph/rbd
[root@master rbd]# rbd map isc
/dev/rbd1
[root@master rbd]#
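With the image now mapped on two hosts, it can be useful to see both clients registered as watchers (a sketch; rbd status is available on recent Ceph releases):

# List the clients that currently have the image open
rbd status isc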
View the mapping after the map:
[root@master rbd]# lsblk -f
Mount the mapped device and observe the effect (note: do not format the device here, since the filesystem was already created on the other client; reformatting would destroy the data and could lead to errors or cluster problems):
[root@master rbd]# mount /dev/rbd/rbd/isc /ceph/rbd2
[root@master rbd]# df -h
Enter the mount directory to see whether the files and contents created by the other client exist:
[root@master rbd]# cd /ceph/rbd2/
[root@master rbd2]# ls
1.txt  2.txt  3.txt
[root@master rbd2]# cat 1.txt
my test
[root@master rbd2]#
Switch back to the first client to observe the file changes: the file content has not changed there. Try remapping to observe the change:
[root@BDDB rbd2]# cd
[root@BDDB ~]# umount /ceph/rbd2/
[root@BDDB ~]# rbd unmap isc
[root@BDDB ~]# df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sda2   60G   32G   29G    53%   /
/dev/sda5   60G   3.5G  57G    6%    /data
/dev/sda1   497M  148M  350M   30%   /boot
tmpfs       184M  0     184M   0%    /run/user/0
/dev/rbd0   3.9G  16M   3.6G   1%    /ceph/rbd
[root@BDDB ~]#
The contents of the file are updated after remapping:
[root@BDDB ~]# lsblk -f
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sda
├─sda1 xfs          5f75d5de-3e02-43e3-a36d-57bc39e9a5ae /boot
├─sda2 xfs          bab8e8ae-cb5e-4299-b879-54960f1a24b9 /
├─sda3 swap         ce44af87-b8a7-40d2-8504-1c8fae81f613 [SWAP]
├─sda4
└─sda5 xfs          cb8c9f72-8154-4ae9-aa57-54703dedfd06 /data
sr0
rbd0   ext4         cf1f5bc8-5dd1-44d4-87a6-0c55d39405fe /ceph/rbd
rbd1   xfs          42989fcc-0746-4848-a957-0c01704865b8
[root@BDDB ~]# mount /dev/rbd/rbd/isc /ceph/rbd2/
[root@BDDB ~]# cd /ceph/rbd2/
[root@BDDB rbd2]# ls
1.txt  2.txt  3.txt
[root@BDDB rbd2]# cat 1.txt
my test
my test
my test
my test
my test
Observe the effect of editing the same file from both clients: neither client gets any warning, but after the content is changed, the two clients see inconsistent data.
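The inconsistency arises because two clients are writing to the same non-clustered filesystem with no coordination. RBD itself offers the exclusive-lock feature mentioned in the summary below, which serializes writers; a hedged sketch using a hypothetical image name, demo:

# Create an image with exclusive-lock so only one client can write at a time
rbd create demo --size 1024 --image-feature layering,exclusive-lock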
Expand (or shrink) the image capacity:
[root@BDDB ~]# rbd resize --size 20480 rbd/idc
[root@BDDB ~]# rbd info idc
rbd image 'idc':
    size 20480 MB in 5120 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.85456b8b4567
    format: 2
    features: layering
    flags:
[root@BDDB ~]# resize2fs /dev/rbd0
(resize2fs grows the ext4 filesystem on rbd0 online; for an xfs filesystem you would use xfs_growfs on its mount point instead.)
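For completeness, shrinking goes the other way and requires an explicit flag (a sketch; shrink the filesystem first, or any data beyond the new size is lost):

# --allow-shrink is required; rbd refuses to shrink without it
rbd resize --size 4096 rbd/idc --allow-shrink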
Verify that the files still exist:
[root@BDDB ~]# cd /ceph/rbd
[root@BDDB rbd]# ls
123.txt  1.txt  lost+found  my1.txt  my2.txt  my3.txt
[root@BDDB rbd]# cat my1.txt
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.27.210 master
192.168.27.211 client1
192.168.27.212 client2
192.168.27.213 client3
192.168.26.112 BDDB.com
192.168.26.112 bddb.com bddb
Summary: RBD lets multiple remote clients map an RBD block device and mount it as local storage, but it does not support simultaneous mounting by multiple clients: writes are not synchronized between them, which produces inconsistent and eventually corrupted data. It is therefore a non-shared, asynchronous access mode. RBD supports online expansion (and reduction) of capacity, and recent Ceph releases add features such as layering, striping, exclusive-lock, object-map, fast-diff, and deep-flatten.