This article introduces a handful of practical Ceph tips. Each one is covered in detail with a worked example, and they should be a useful reference; read through to the end.
1. Ceph rbd online resize
Before capacity expansion
[root@mon0 ceph]# rbd create myrbd/rbd1 -s 1024 --image-format=2
[root@mon0 ceph]# rbd ls myrbd
rbd1
[root@mon0 ceph]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.12ce6b8b4567
        format: 2
        features: layering
Expand capacity
[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 2048
Resizing image: 100% complete...done.
If rbd1 has not yet been formatted and mounted, the resize alone is enough. If rbd1 has already been formatted and mounted, some additional steps are required:
[root@mon0 ceph]# rbd map myrbd/rbd1
[root@mon0 ceph]# rbd showmapped
id pool  image    snap device
0  test  test.img -    /dev/rbd0
1  myrbd rbd1     -    /dev/rbd1
[root@mon0 ceph]# mkfs.xfs /dev/rbd1
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@mon0 ceph]# mount /dev/rbd1 /mnt
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 4096
Resizing image: 100% complete...done.
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# xfs_growfs /mnt
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 524288 to 1048576
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       4.0G   33M  4.0G   1% /mnt
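The example above uses XFS, so the online grow step is xfs_growfs. If the image had been formatted with ext4 instead, the sequence would be the same except for the last step; a minimal sketch, reusing the device name /dev/rbd1 and mount point /mnt from above (the ext4 formatting itself is an assumption, not part of the original example):

# hypothetical ext4 variant of the same online resize
rbd resize myrbd/rbd1 -s 4096      # grow the image to 4 GB
resize2fs /dev/rbd1                # ext4 can be grown online while mounted
df -h /mnt                         # the mount should now report ~4 GB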
In another case, rbd1 has been attached to a VM:
virsh domblklist myvm
rbd resize myrbd/rbd1
# the resize must also be applied through virsh blockresize
virsh blockresize --domain myvm --path vdb --size 100G
rbd info myrbd/rbd1

2. Using ceph-deploy
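virsh blockresize only enlarges the virtual disk as the guest sees it; the filesystem inside the VM still has to be grown. A minimal sketch of the in-guest follow-up, assuming the disk appears as /dev/vdb and carries an XFS filesystem mounted at /data (both names are assumptions):

# run inside the guest after virsh blockresize
lsblk /dev/vdb        # confirm the guest now sees the enlarged disk
xfs_growfs /data      # grow the XFS filesystem to fill the device
# for ext4 the equivalent would be: resize2fs /dev/vdb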
Installing ceph with ceph-deploy is very simple, and you can adjust the ceph.conf as needed after installation.
mkdir ceph-deploy
cd ceph-deploy
ceph-deploy install $cluster
ceph-deploy new cephnode-01 cephnode-02 cephnode-03
ceph-deploy --overwrite-conf mon create cephnode-01 cephnode-02 cephnode-03
ceph-deploy gatherkeys cephnode-01
ceph-deploy osd create \
    cephnode-01:/dev/sdb:/dev/sda5 \
    cephnode-01:/dev/sdc:/dev/sda6 \
    cephnode-01:/dev/sdd:/dev/sda7 \
    cephnode-02:/dev/sdb:/dev/sda5 \
    cephnode-02:/dev/sdc:/dev/sda6 \
    cephnode-02:/dev/sdd:/dev/sda7 \
    cephnode-03:/dev/sdb:/dev/sda5 \
    cephnode-03:/dev/sdc:/dev/sda6 \
    cephnode-03:/dev/sdd:/dev/sda7 \
    cephnode-04:/dev/sdb:/dev/sda5 \
    cephnode-04:/dev/sdc:/dev/sda6 \
    cephnode-04:/dev/sdd:/dev/sda7 \
    cephnode-05:/dev/sdb:/dev/sda5 \
    cephnode-05:/dev/sdc:/dev/sda6 \
    cephnode-05:/dev/sdd:/dev/sda7
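As noted above, ceph.conf can be tuned after installation. A minimal sketch of distributing an edited ceph.conf from the ceph-deploy working directory to the monitor nodes (the setting shown is only an example, not part of the original article):

# edit ceph.conf in the ceph-deploy working directory, for example:
#   [global]
#   osd pool default size = 2
ceph-deploy --overwrite-conf config push cephnode-01 cephnode-02 cephnode-03
# restart the ceph daemons on each node for the new settings to take effect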
Uninstall with ceph-deploy:
ceph-deploy purgedata $cluster
ceph-deploy purge $cluster
for host in $cluster; do ssh $host ... ; done
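The per-host loop is typically used to clean out whatever the purge leaves behind. A hedged sketch, assuming $cluster expands to the node host names and passwordless ssh is available; the exact directories removed are an assumption:

for host in $cluster; do
    # remove leftover configuration and data directories (paths are assumptions)
    ssh $host "rm -rf /etc/ceph /var/lib/ceph"
done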
3. Locate and verify where an object is stored

For an object already stored in the myrbd pool (epel.rpm in this example), ceph osd map reports the placement group and acting OSD set; the mapping for epel.rpm ends in up ([4,2], p4) acting ([4,2], p4), i.e. pg 4.be with osd.4 as the primary. Check the OSD tree to see which host holds osd.4, then look inside that OSD's pg directory and verify the file:
[root@osd2 software]# ceph osd tree
# id    weight  type name       up/down reweight
-1      10.92   root default
-2      3.64            host mon0
0       1.82                    osd.0   up      1
1       1.82                    osd.1   up      1
-3      3.64            host osd1
2       1.82                    osd.2   up      1
3       1.82                    osd.3   up      1
-4      3.64            host osd2
4       1.82                    osd.4   up      1
5       1.82                    osd.5   up      1
[root@osd2 software]# cd /cephmp1/current/4.be_head/
[root@osd2 4.be_head]# ls
epel.rpm__head_E9DDF5BE__4
[root@osd2 4.be_head]# md5sum epel.rpm__head_E9DDF5BE__4
2cd0ae668a585a14e07c2ea4f264d79b  epel.rpm__head_E9DDF5BE__4
[root@osd2 4.be_head]# ll -h
total 20K
-rw-r--r-- 1 root root 15K Nov  4 17:59 epel.rpm__head_E9DDF5BE__4
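The same locate step works for any object: ceph osd map prints the placement group and acting OSD set, and the pg id tells you which <pg>_head directory to inspect on the primary OSD's host. A small helper sketch, assuming the pool and object name are passed as arguments:

#!/bin/bash
# locate_object.sh <pool> <object>
# prints: osdmap eNN pool '<pool>' (id) object '<object>' -> pg ... -> up ([a,b], pa) acting ([a,b], pa)
pool="$1"
obj="$2"
ceph osd map "$pool" "$obj"
# then ssh to the primary OSD's host and look in that OSD's <pg>_head directory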
4. Upload with rbd import and verify:
[root@osd2 software]# touch hello.txt
[root@osd2 software]# echo "hello world" >> hello.txt
[root@osd2 software]# rbd import ./hello.txt myrbd/hello.txt
Importing image: 100% complete...done.
[root@osd2 software]# rbd info myrbd/hello.txt
rbd image 'hello.txt':
        size 12 bytes in 1 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.1365.6b8b4567
        format: 1
[root@osd2 software]# rados ls -p myrbd
rbd_data.13446b8b4567.00000000000000ba
rbd_directory
rbd_data.13446b8b4567.000000000000007d
rbd_data.13446b8b4567.000000000000007c
rbd_data.13446b8b4567.000000000000005d
rbd_data.13446b8b4567.000000000000007e
rbd_data.13446b8b4567.00000000000000ff
rb.0.1365.6b8b4567.000000000000
hello.txt.rbd
rbd_data.13446b8b4567.00000000000000d9
rbd_data.13446b8b4567.00000000000000f8
rbd_data.13446b8b4567.000000000000009b
rbd_data.13446b8b4567.0000000000000001
rbd_header.13446b8b4567
epel.rpm
rbd_data.13446b8b4567.000000000000001f
rbd_data.13446b8b4567.000000000000003e
rbd_id.rbd1
rbd_data.13446b8b4567.0000000000000000
# queried this way, the location information is wrong:
[root@osd2 software]# ceph osd map myrbd hello.txt
osdmap e88 pool 'myrbd' (4) object 'hello.txt' -> pg 4.d92fd82b (4.2b) -> up ([4,3], p4) acting ([4,3], p4)
# the object name needs the .rbd suffix:
[root@osd2 current]# ceph osd map myrbd hello.txt.rbd
osdmap e88 pool 'myrbd' (4) object 'hello.txt.rbd' -> pg 4.9b9bf373 (4.73) -> up ([3,1], p3) acting ([3,1], p3)
[root@osd2 current]# ssh osd1
[root@osd1 ~]# cd /cephmp2/current/4.73_head/
[root@osd1 4.73_head]# ll -h
total 8.0K
-rw-r--r-- 1 root root 112 Nov  4 18:08 hello.txt.rbd__head_9B9BF373__4
[root@osd1 4.73_head]# cat hello.txt.rbd__head_9B9BF373__4
rb.0.1365.6b8b4567RBD001.005
# For ordinary rbd block devices:
# format-1 rbd block:  ceph osd map test test.img.rbd
# format-2 rbd block:  ceph osd map test rbd_id.test.img
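For a format-2 image, the block_name_prefix reported by rbd info can be used to list every RADOS object backing the image, and each of those objects can then be located with ceph osd map as above. A minimal sketch, reusing the myrbd/rbd1 image from earlier:

# extract the block_name_prefix (e.g. rbd_data.12ce6b8b4567) from rbd info
prefix=$(rbd info myrbd/rbd1 | awk '/block_name_prefix/ {print $2}')
# list all data objects of this image in the pool
rados -p myrbd ls | grep "^${prefix}"
# locate any one of them, e.g.:
# ceph osd map myrbd ${prefix}.0000000000000000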