
How to share a hard disk through Ceph RBD and iSCSI-target


Today I will walk through how to share a hard disk using Ceph RBD and iSCSI-target. Many people may not be familiar with this, so the following content is summarized to make it easier to follow; I hope you get something out of this article.

1 Problem statement

The goal is to pool all of the storage devices in the server room into a storage pool from which volumes can be created on demand, and the created disks must be mountable across platforms. The lab's server room already runs a Ceph cluster whose storage pool has reached 257 TB, so the plan is to create an RBD (RADOS Block Device) and then export it through iSCSI-target for clients to mount.

2 Feasibility analysis

The task breaks into two parts: creating the RBD, and exporting it by installing iSCSI-target so that hosts outside the Ceph cluster can mount it. In the current experimental environment we can already create an RBD and mount it on hosts inside the Ceph cluster that have the RBD client, so what remains is the second part. Since Ceph already supports exposing rbd over the iSCSI protocol, the approach is feasible in principle.

3 Implementation

3.1 Environmental description

Experimental platform: Ubuntu 14.04 Server, kernel 3.13.0-32-generic

Ceph version: 0.94.2

iSCSI target on the server: Linux SCSI target framework (tgt) (http://stgt.sourceforge.net/)

Server IP: 172.25.1.55

3.2 Create RBD

Execute the following command on the ceph-admin node:

rbd create --size {megabytes} {pool-name}/{image-name}

For example, create an RBD with a size of 1 GB named ceph-rbd1:

rbd create --size 1024 ceph-rbd1

If pool-name is not specified, it will be created into the rbd pool by default.
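For instance, to place the image in a specific pool instead (the pool name mypool below is just a hypothetical illustration):

rbd create --size 1024 mypool/ceph-rbd1    # 1 GB image created in the 'mypool' pool rather than the default rbd pool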

View RBDs: rbd ls {pool-name} (without a pool name this lists the RBDs in the default rbd pool)

View RBD details:

rbd info {pool-name}/{image-name}

For example, to check ceph-rbd1 in the default rbd pool (no pool-name needed): rbd info ceph-rbd1

3.3 Mapping RBD

The created RBD must be mapped on the client before it can be used, and the client kernel must support Ceph block devices and file systems; kernel 2.6.34 or later is recommended.

Check the Linux version and support for RBD:

modprobe rbd

In keeping with the Unix philosophy that no news is good news, modprobe rbd returning nothing indicates that the kernel supports rbd.
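To also confirm the kernel meets the 2.6.34 recommendation mentioned above, a quick check (nothing Ceph-specific) is:

uname -r    # should report 2.6.34 or later, e.g. 3.13.0-32-generic on this platform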

Execute the following command on the ceph-osd node:

rbd map rbd/ceph-rbd1

The format is rbd map {pool-name}/{image-name}; without {pool-name}, the default rbd pool is used.

Check the device name of the mapped RBD in the operating system:

rbd showmapped

You can see that the device name of the created RBD in the operating system is /dev/rbd1.

fdisk -l /dev/rbd1    # view the partition table

If it is used for local mount, perform the following steps:

mkfs.xfs /dev/rbd1                 # format the RBD
mkdir /mnt/ceph-vol1               # create a mount point
mount /dev/rbd1 /mnt/ceph-vol1     # mount the RBD; you can write data to test it
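As a minimal sanity check of the local mount (the test file name below is only an illustration):

df -h /mnt/ceph-vol1                   # confirm the RBD is mounted and shows the expected size
echo test > /mnt/ceph-vol1/test.txt    # write a small file to verify the filesystem is writable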

However, our goal is to mount the RBD on hosts outside the Ceph cluster, so the steps above are only for reference; mounting the RBD on hosts outside the cluster requires iSCSI-target.

3.4 Configure rbdmap

According to documentation found online, after creating an RBD and mapping it with rbd map, if you do not rbd unmap it in time the system will hang while unmounting the rbd device at shutdown. So we configure rbdmap. The official documentation does not mention rbdmap and the script is not officially released; to avoid the shutdown problem, download it and enable it at boot for now, and consider dropping the script later.
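For reference, the manual cleanup that rbdmap is meant to automate at shutdown would look roughly like this (device name taken from rbd showmapped above):

umount /mnt/ceph-vol1    # only if the RBD was mounted locally
rbd unmap /dev/rbd1      # release the kernel mapping before shutdown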

sudo wget https://raw.github.com/ceph/ceph/a4ddf704868832e119d7949e96fe35ab1920f06a/src/init-rbdmap -O /etc/init.d/rbdmap    # fetch the script
sudo chmod +x /etc/init.d/rbdmap    # make it executable
sudo update-rc.d rbdmap defaults    # enable it at boot

Modify the rbdmap configuration file /etc/ceph/rbdmap to add the mapped rbd. Note that the /etc/ceph/rbdmap path corresponds to the directory that was current when running wget; I was in /etc/ceph when I ran wget.

vi /etc/ceph/rbdmap
# RbdDevice            Parameters
# poolname/imagename   id=client,keyring=/etc/ceph/ceph.client.keyring
rbd/ceph-rbd1          id=client,keyring=/etc/ceph/ceph.client.keyring

If cephx authentication is enabled, the keyring=/etc/ceph/ceph.client.keyring parameter must be added.
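As a rough sanity check (assuming the init script installed above supports restart), the mapping can be re-applied and verified:

sudo service rbdmap restart    # map the devices listed in /etc/ceph/rbdmap
rbd showmapped                 # the mapped device should reappear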

3.5 Configure iSCSI-target

The Linux SCSI target framework (tgt) is used to make a Linux system act as an iSCSI target. Install tgt and check whether rbd is supported:

apt-get install tgt
tgtadm --lld iscsi --op show --mode system | grep rbd

rbd (bsoflags sync:direct)    # this output indicates that rbd is supported

An overview of tgt's files:

/etc/tgt/targets.conf: the main configuration file, which defines which disks are shared and how

/usr/sbin/tgt-admin: a tool for querying and deleting targets online (see the example after this list)

/usr/sbin/tgt-setup-lun: a tool that creates a target and sets up the shared disk and the clients allowed to use it

/usr/sbin/tgtadm: the manual administration tool (its job can also be done through the configuration file)

/usr/sbin/tgtd: the main daemon that provides the iSCSI target service

/usr/sbin/tgtimg: a tool for building image files to be shared (using an image file to emulate a disk)
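For instance, the following queries are handy while setting things up (a small sketch; they only list configured and live targets):

tgt-admin --show                              # list targets defined through targets.conf
tgtadm --lld iscsi --op show --mode target    # show the live targets known to tgtd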

iSCSI has its own convention for naming shared targets. Target names shared through iSCSI begin with iqn, meaning "iSCSI Qualified Name". What follows iqn usually takes this form:

iqn.yyyy-mm.<reversed domain name>:<identifier>

where the reversed domain name identifies the organization's network and the identifier names this particular shared target.

For example, the target iqn.2015-9.localhost:iscsi will be used in the configuration below.

We mainly modify /etc/tgt/targets.conf to add the information for the RBD we created.

vim /etc/tgt/targets.conf

# the syntax of this file is as follows:

<target iqn.2015-9.localhost:iscsi>
    backing-store <storage device name 1>    # general syntax: one backing-store line per shared device
    backing-store <storage device name 2>
    driver iscsi                             # driver
    bs-type rbd                              # backend store type: default is rdwr, aio etc. are optional; choose rbd here
    backing-store rbd/ceph-rbd1              # the RBD created above
</target>
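After editing targets.conf, one way to apply and verify the configuration (a sketch, assuming tgtd is already running) is:

sudo tgt-admin --update ALL    # (re)create the targets defined in /etc/tgt/targets.conf
sudo tgt-admin --show          # the iqn.2015-9.localhost:iscsi target with its rbd backing store should be listed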
