
How to install ceph rbd client in docker


This article mainly introduces how to install the ceph rbd client for docker. It is fairly detailed and has some reference value, so interested readers are encouraged to read it to the end!

Ceph rbd client requirement

The client system kernel must be 2.6.32 or above.

In addition, in my environment, k8s-master1 acts as the ceph client, while the server side is k8s-node1.
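To confirm the kernel requirement on the client, a quick check (the version shown below is only an example from a CentOS 7 machine; any kernel at or above 2.6.32 with the rbd module will do):

[root@k8s-master1 ~]# uname -r
3.10.0-957.el7.x86_64      ## example output; 3.10 is well above 2.6.32
[root@k8s-master1 ~]# modprobe rbd      ## load the kernel rbd module; no output means it loaded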

Install ceph rbd client

[root@k8s-master1 ~]# yum search ceph
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.tuna.tsinghua.edu.cn
 * updates: mirrors.aliyun.com
=============== N/S matched: ceph ===============
centos-release-ceph-hammer.noarch : Ceph Hammer packages from the CentOS Storage SIG repository
centos-release-ceph-jewel.noarch : Ceph Jewel packages from the CentOS Storage SIG repository
centos-release-ceph-luminous.noarch : Ceph Luminous packages from the CentOS Storage SIG repository
ceph-common.x86_64 : Ceph Common
[root@k8s-master1 ~]# yum -y install centos-release-ceph-luminous.noarch
[root@k8s-master1 ~]# yum -y install ceph
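Once the packages are installed, you can confirm the client tools are present (the version string is just an example; the exact build depends on the repository used):

[root@k8s-master1 ~]# ceph --version      ## the rbd command is installed by the same packages
ceph version 12.2.x (...) luminous (stable)      ## example output only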

Copy the keyring of the ceph server to the ceph client

In production you should not copy the admin keyring; instead, create a dedicated user and grant it only the permissions it needs.

I copy the keyring here purely for convenience.
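For reference, a production-style alternative would look roughly like the following, run on a ceph monitor node (a hedged sketch: the user name client.k8s is an example, and the caps assume a Luminous-era cluster and the rbd pool):

[root@k8s-node1 ~]# ceph auth get-or-create client.k8s mon 'profile rbd' osd 'profile rbd pool=rbd' -o /etc/ceph/ceph.client.k8s.keyring
## copy ceph.client.k8s.keyring to the client instead of the admin keyring, then pass --id k8s to the rbd/ceph commands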

Log in to any node in the ceph server cluster and copy its keyring to the ceph client.

[root@k8s-node1 ~]# cd /etc/ceph/
[root@k8s-node1 ceph]# scp ceph.conf 172.16.22.197:/etc/ceph/
[root@k8s-node1 ceph]# scp ceph.client.admin.keyring 172.16.22.197:/etc/ceph/

View rbd in the ceph client
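Before inspecting a specific image, it is worth confirming that the client can reach the cluster with the copied configuration and keyring, and listing the images in the pool (the status output is omitted here and will differ per environment):

[root@k8s-master1 ~]# ceph -s      ## should print the cluster status without authentication errors
[root@k8s-master1 ~]# rbd ls      ## list images in the default rbd pool
data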

[root@k8s-master1 ~]# rbd --image data info
rbd image 'data':
    size 1024 MB in 256 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.1149238e1f29
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:

The output above indicates that we can use rbd.
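(For reference, an image like this would have been created on the server side with something along these lines; the pool, image name and size simply mirror the output above.)

[root@k8s-node1 ~]# rbd create rbd/data --size 1024      ## 1024 MB image named "data" in the "rbd" pool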

Map rbd to the client and mount it

Map the rbd image to the client using rbd map rbd/data:

[root@k8s-master1 ~]# rbd map rbd/data      ## rbd is the name of the pool, and data is the name of the block storage image
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address

Because of the error reported above, we need to disable the unsupported features and map again with the following commands:

[root@k8s-master1 ~]# rbd feature disable rbd/data exclusive-lock object-map fast-diff deep-flatten
[root@k8s-master1 ~]# rbd map rbd/data
/dev/rbd0
[root@k8s-master1 ~]# fdisk -l
Disk /dev/rbd0: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

We can see that the ceph rbd block device has been mapped to the k8s-master1 machine.
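As a side note, instead of disabling features after the fact, images can be created with only the layering feature in the first place, which older kernel clients can map (a sketch: the image name data2 is a hypothetical example, and a similar effect can be achieved cluster-wide via the rbd_default_features setting):

[root@k8s-node1 ~]# rbd create rbd/data2 --size 1024 --image-feature layering      ## "data2" is an example image name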

At this point, /dev/rbd0 is still a bare device, so let's format it and create a file system on it.

[root@k8s-master1 ~]# mkfs.ext4 /dev/rbd0

Mount it to /mnt:

[root@k8s-master1 ~]# mount /dev/rbd0 /mnt/
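You can verify the mount before using it:

[root@k8s-master1 ~]# df -h /mnt      ## should show /dev/rbd0 mounted on /mnt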

In this way, we can use rbd block devices.
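Since the point of all this is to use the storage from docker, the mounted directory can then be handed to a container as a bind mount (a minimal sketch, assuming docker is already installed on k8s-master1; the nginx image and container name are only examples):

[root@k8s-master1 ~]# docker run -d --name rbd-test -v /mnt:/usr/share/nginx/html nginx
## anything the container writes under /usr/share/nginx/html now lands on the ceph rbd device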

That is all of the content of the article "How to install the ceph rbd client in docker". Thank you for reading! I hope the shared content has helped you; for more related knowledge, you are welcome to follow the industry information channel!
