
Example Analysis of the Integration of OpenStack Cinder with various back-end Storage Technologies


This article walks through, with examples, how OpenStack Cinder integrates with various back-end storage technologies. It should be a useful reference; I hope you learn something from it.

The Cinder project exists to manage block devices. Its central concern is how to adapt cleanly to a wide range of storage back ends and make good use of their capabilities.

1. LVM

LVM is the entry-level back end for getting hands-on with OpenStack Cinder: cinder.conf needs no special configuration, because the default driver uses LVM.

The principle: first use pvcreate to create physical volumes, combine one or more physical volumes into a volume group, and then, whenever a Cinder volume is created, allocate an LVM logical volume with lvcreate.
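As a rough sketch of what the driver does for each volume (the size and the volume name volume-demo are placeholders; cinder-volumes is the default volume group name):

# assuming the volume group cinder-volumes already exists
lvcreate -L 1G -n volume-demo cinder-volumes
lvs cinder-volumes    # the new logical volume should be listed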

During deployment, use dd to create a file (cinder-volumes) of a set size (10 GB in this example) in the current directory, map it to a loop device (a virtual block device) with losetup, initialize that block device as a physical volume, and then build the volume group on it. A volume group can contain multiple physical volumes; only one is used in this example.

dd if=/dev/zero of=/vol/cinder-volumes bs=1 count=0 seek=10G
# Mount the file.
loopdev=`losetup -f`
losetup $loopdev /vol/cinder-volumes
# Initialize as a physical volume.
pvcreate $loopdev
# Create the volume group.
vgcreate cinder-volumes $loopdev
# Verify the volume group has been created correctly.
pvscan

Once the volume group exists, the stock cinder.conf configuration can be used as is.
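For reference, a minimal sketch of the corresponding cinder.conf entries for the LVM back end (option names as used in releases of that era; these are effectively the defaults, so they usually need no change):

volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = cinder-volumes
iscsi_helper = tgtadm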

Restart the cinder-volume service.

You can then create, attach, and detach volumes normally.
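A minimal sketch of that lifecycle with the CLI of that era (instance and volume IDs are placeholders):

cinder create --display-name demo-vol 1          # create a 1 GB volume
cinder list                                      # it should appear as "available"
nova volume-attach <instance-id> <volume-id>     # attach it to a running instance
nova volume-detach <instance-id> <volume-id>     # detach it again
cinder delete <volume-id>                        # remove it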

Question 1: how is an LVM volume attached?

Creating a volume is easy, just the lvcreate command; attaching is a bit more involved. First the volume must be exported as a SCSI target device (a target with a LUN ID), and then the compute node connects to that target with Linux SCSI initiator software. Two pieces of software are involved: a SCSI target management tool (there are several, such as Tgt, Lio, Iet, ISERTgt; the default is Tgt, all of which provide block-level SCSI storage to operating systems with a SCSI initiator) and the Linux SCSI initiator. The two operations therefore correspond to the tgtadm and iscsiadm commands, respectively.
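To make that concrete, a minimal sketch of both sides, assuming a hypothetical target name, backing logical volume, and storage-node IP (Cinder drives these tools itself; the commands are only for illustration):

# target side (storage node): export the LV as an iSCSI target with one LUN
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2010-10.org.openstack:volume-demo
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/cinder-volumes/volume-demo
# initiator side (compute node): discover the target and log in
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-demo -p 192.168.1.10:3260 --login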

2. FC (Fibre Channel) + SAN device

Requirements: a) the compute node machine must have an HBA card (fibre channel adapter).

To check whether the host has an HBA card:

Method one:

$ lspci
20:00.0 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)
20:00.1 Fibre Channel: Emulex Corporation Zephyr-X LightPulse Fibre Channel Host Adapter (rev 02)

Method two:

Look under /sys/class/fc_host/. When there are two fibre channel HBAs, there are two directories, host1 and host2:

$ cat /sys/class/fc_host/host1/port_name
0x10000090fa1b825a      # the WWPN (acts like a MAC address)

b) The HBA must be connected to the back-end storage with a fibre cable. Taking IBM SVC as an example, you must make sure the connection is up: log in to the SVC graphical interface and check whether the host is active, or ssh into the SVC and run the command:

IBM_2145:SVC:superuser> svcinfo lsfabric -delim ! -wwpn "10000090fa1b825a"
(the "!"-delimited output line shows the host WWPN 10000090FA1B825A, the SVC node WWPN 500507680130DBEA, the state "active", and the host name x3560m4-06MFZF1)

Only when the state is active can you be sure that attaching and detaching volumes will work.

The following is a practical example, taking a Storwize device as the back end:

volume_driver = cinder.volume.drivers.storwize_svc.StorwizeSVCDriver
san_ip = 10.2.2.123
san_login = superuser
#san_password = passw0rd
san_private_key = /svc_rsa
storwize_svc_volpool_name = DS3524_DiskArray1
storwize_svc_connection_protocol = FC

san_password and san_private_key: choose one of the two; san_private_key is recommended. The private key file is generated with ssh-keygen: keep the private key locally and put the public key on the SAN device. When other hosts later need to connect to the same storage device, they can reuse this private key without generating a new pair.
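A minimal sketch of preparing that key (the path matches san_private_key above; the public key is installed for the superuser account on the SVC, e.g. through its management GUI):

# on the cinder-volume node: generate the key pair with an empty passphrase
ssh-keygen -t rsa -f /svc_rsa -N ""
# after uploading /svc_rsa.pub for the superuser account on the SVC,
# verify passwordless access:
ssh -i /svc_rsa superuser@10.2.2.123 svcinfo lssystem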

For testing, create a volume:

[root@localhost ~]# cinder create --display-name test55 1
[root@localhost ~]# nova volume-list
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 24f7e457-f71a-43ce-9ca6-4454fbcfa31f | available | test55       | 1    | None        |             |
+--------------------------------------+-----------+--------------+------+-------------+-------------+

Use an existing instance for the attach operation, which saves booting a new instance:

[root@localhost ~]# nova list
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks           |
+--------------------------------------+-------+--------+------------+-------------+--------------------+
| 77d7293f-7a20-4f36-ac86-95f4c24b29ae | test2 | ACTIVE | -          | Running     | net_local=10.0.1.5 |
+--------------------------------------+-------+--------+------------+-------------+--------------------+

[root@localhost ~]# nova volume-attach 77d7293f-7a20-4f36-ac86-95f4c24b29ae 24f7e457-f71a-43ce-9ca6-4454fbcfa31f
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 24f7e457-f71a-43ce-9ca6-4454fbcfa31f |
| serverId | 77d7293f-7a20-4f36-ac86-95f4c24b29ae |
| volumeId | 24f7e457-f71a-43ce-9ca6-4454fbcfa31f |
+----------+--------------------------------------+

[root@localhost ~]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Display Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 24f7e457-f71a-43ce-9ca6-4454fbcfa31f | in-use | test55       | 1    | None        | false    | 77d7293f-7a20-4f36-ac86-95f4c24b29ae |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

3. iSCSI + SAN device

Here the storage device is reached over TCP/IP. You only need to ensure that the storage service node can ping the SAN management IP and that the compute nodes can ping the iSCSI node IP on the storage device.
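A quick way to confirm that connectivity (the management and portal IPs below are placeholders):

# from the cinder-volume node: reach the SAN management IP
ping -c 3 10.2.2.123
# from a compute node: reach the iSCSI portal and list the targets it offers
ping -c 3 192.168.1.20
iscsiadm -m discovery -t sendtargets -p 192.168.1.20:3260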

Take IBM SVC or V7000 as an example.

The only difference from the FC configuration is:

storwize_svc_connection_protocol = FC  ==>  storwize_svc_connection_protocol = iSCSI

The testing process is the same as above; everything works.

4. VMware

This back end uses vCenter to manage block storage. Cinder wraps a layer on top and ultimately calls vCenter's storage management functions, acting as a relay. Modify the following items in cinder.conf:

volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = $VCENTER_HOST_IP
vmware_host_username = $VCENTER_HOST_USERNAME
vmware_host_password = $VCENTER_HOST_PASSWORD
vmware_wsdl_location = $WSDL_LOCATION
# VIM Service WSDL Location
# example: 'file:///home/SDK5.5/SDK/vsphere-ws/wsdl/vim25/vimService.wsdl'

The testing process is the same as in section 2; everything works.

5. NFS

A very common network file system; its principles are easy to look up, so let's go straight to using it with Cinder.

Step 1: plan the NFS server side, i.e. which nodes and directories will provide the storage. Here two nodes act as NFS servers, 10.11.0.16:/var/volume_share and 10.11.1.178:/var/volume_share: create the directory /var/volume_share on both machines, export it via NFS, and start the NFS service on both nodes.
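A minimal sketch of that server-side setup on each of the two nodes (the export options are only an example; adjust them to your environment):

mkdir -p /var/volume_share
echo "/var/volume_share *(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra          # publish everything listed in /etc/exports
service nfs start     # on some distributions: service nfs-server start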

Step 2: create /etc/cinder/shares.txt with the following content, telling Cinder which NFS shares it can mount (the filename must match nfs_shares_config in cinder.conf):

10.11.0.16:/var/volume_share
10.11.1.178:/var/volume_share

Modify permissions and user groups

$ chmod 0640 /etc/cinder/shares.txt
$ chown root:cinder /etc/cinder/shares.txt

Step 3: edit /etc/cinder/cinder.conf

volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/shares.txt
nfs_mount_point_base=$state_path/mnt

Restart the cinder-volume service; the testing process is the same as in section 2.

Once, after the environment changed, volume-attach reported an error:

2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     connector)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 239, in attach
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     device_type=self['device_type'], encryption=encryption)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1263, in attach_volume
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     disk_dev)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1250, in attach_volume
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 179, in doit
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 139, in proxy_call
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     rv = execute(f, *args, **kwargs)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 77, in tworker
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     rv = meth(*args, **kwargs)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib64/python2.6/site-packages/libvirt.py", in attachDeviceFlags
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher     if ret == -1: raise libvirtError('virDomainAttachDeviceFlags() failed', dom=self)
2014-06-12 11:41:58.659 19312 TRACE oslo.messaging.rpc.dispatcher libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized

This error comes from libvirt. The fix is an SELinux boolean. First check whether virt_use_nfs is off or on:

$ /usr/sbin/getsebool virt_use_nfs

If it is off, make the following settings

$ /usr/sbin/setsebool -P virt_use_nfs on

6. GlusterFS

Having written this much, I think this one is the best; no wonder Red Hat acquired it. GlusterFS is a distributed file system that can scale out to clusters of several petabytes. It aggregates storage bricks of different types into one large parallel network file system over InfiniBand RDMA or TCP/IP.

Briefly, two features I have experienced:

1. Strong scale-out capability: brick servers on different nodes can be combined into one large parallel network file system.

2. Soft RAID: striping [stripe] improves concurrent read/write speed, and mirrored volumes [replica] provide disaster recovery (see the sketch after this list).
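To illustrate the difference, a sketch with placeholder volume names and brick paths (the combined stripe + replica form used in this article appears in step 3 below):

# pure mirroring (RAID 1 style): every file is written to both bricks
gluster volume create vol_mirror replica 2 10.11.0.16:/var/brick_m 10.11.1.178:/var/brick_m
# pure striping (RAID 0 style): each file is split across both bricks
gluster volume create vol_stripe stripe 2 10.11.0.16:/var/brick_s 10.11.1.178:/var/brick_s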

The following walks through a full cinder + glusterfs setup, interspersed with descriptions and use of glusterfs features.

Step 1: install and deploy the glusterfs server environment.

In this example, 10.11.0.16 and 10.11.1.178 are the interconnected nodes; install the packages on them first.

Two ways: a yum repository or RPM packages.

Option 1: yum -y install glusterfs glusterfs-fuse glusterfs-server

Option 2: download the packages from a URL such as http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.0/RHEL/epel-6.5/x86_64/

glusterfs-3.5.0-2.el6.x86_64.rpm glusterfs-fuse-3.5.0-2.el6.x86_64.rpm glusterfs-server-3.5.0-2.el6.x86_64.rpm glusterfs-cli-3.5.0-2.el6.x86_64.rpm glusterfs-libs-3.5.0-2.el6.x86_64.rpm

I downloaded version 3.5 and installed it using rpm.

Once installed, plan the brick servers across the nodes. In this example, the /var/data_cinder and /var/data_cinder2 directories are created on both 10.11.1.178 and 10.11.0.16, and the storage cluster cfs is created from 10.11.1.178.

1. Start the glusterd service on 10.11.1.178 and 10.11.0.16

[root@chen ~]# /etc/init.d/glusterd start

2. Build the trusted storage pool from 10.11.1.178 (peer probe the nodes):

[root@kvm-10-11-1-178 ~]# gluster peer probe 10.11.0.16
[root@kvm-10-11-1-178 ~]# gluster peer probe 10.11.1.178    # probing the local node is optional

3. Create a storage cluster

Usage: $ gluster volume create <volname> [stripe <count>] [replica <count>] [transport <tcp|rdma>] <brick> ... [force]

stripe: striping, similar to RAID 0, improves read/write performance.

replica: mirroring, similar to RAID 1; data is written to mirrored copies.

stripe + replica together give RAID 10; in that case stripe COUNT * replica COUNT = brick COUNT. Enough talk; here is the example:

[root@kvm-10-11-0-16 var]# mkdir data_cinder
[root@kvm-10-11-0-16 var]# mkdir data_cinder2
[root@kvm-10-11-1-178 var]# mkdir data_cinder
[root@kvm-10-11-1-178 var]# mkdir data_cinder2
[root@kvm-10-11-1-178 var]# gluster volume create cfs stripe 2 replica 2 10.11.0.16:/var/data_cinder2 10.11.1.178:/var/data_cinder 10.11.0.16:/var/data_cinder 10.11.1.178:/var/data_cinder2 force
volume create: cfs: success: please start the volume to access data

Note: do not run gluster volume create cfs stripe 2 replica 2 10.11.0.16:/var/data_cinder2 10.11.0.16:/var/data_cinder 10.11.1.178:/var/data_cinder 10.11.1.178:/var/data_cinder2 force, because the first two bricks form a replica (RAID 1) pair; being on the same node, they would provide no disaster recovery capability.

4. Start the storage cluster

Usage: $ gluster volume start <volname>

[root@kvm-10-11-1-178 var]# gluster volume start cfs
volume start: cfs: success
[root@kvm-10-11-1-178 ~]# gluster volume info all
Volume Name: cfs
Type: Striped-Replicate
Volume ID: ac614af9-11b8-4ff3-98e6-fe8c3a2568b6
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.11.0.16:/var/data_cinder2
Brick2: 10.11.1.178:/var/data_cinder
Brick3: 10.11.0.16:/var/data_cinder
Brick4: 10.11.1.178:/var/data_cinder2

Step 2: on the client side, i.e. the node where the cinder-volume service runs, install all of the packages above except glusterfs-server. This side works like the NFS case: make sure the share can be mounted when the service starts.
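A minimal sketch of that client-side preparation, including an optional manual mount to confirm the gluster volume is reachable before handing it to Cinder:

# on the cinder-volume node
yum -y install glusterfs glusterfs-fuse
# optional sanity check
mkdir -p /mnt/cfs_test
mount -t glusterfs 10.11.1.178:/cfs /mnt/cfs_test
umount /mnt/cfs_test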

Create /etc/cinder/shares.conf with the following content, telling Cinder which gluster volume it can mount (the filename must match glusterfs_shares_config in cinder.conf):

10.11.1.178:/cfs

Modify permissions and user groups

$ chmod 0640 /etc/cinder/shares.conf
$ chown root:cinder /etc/cinder/shares.conf

Cinder.conf configuration

glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver

[root@chen ~]# for i in api scheduler volume; do sudo service openstack-cinder-${i} restart; done

[root@chen ~]# cinder create --display-name chenxiao-glusterfs 1
[root@chen ~]# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| 866f7084-c624-4c11-a592-8c00fcabfb23 | available | chenxiao-glusterfs | 1    | None        | false    |             |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+

The volume's data is distributed across the brick servers, 512 MB on each: each piece is only half of the 1 GB volume because of the RAID 0 style striping, while the total on-disk usage is 2 GB because of the RAID 1 style replication. Taking one brick as an example:

[root@kvm-10-11-1-178 data_cinder]# ls -al
total 20
drwxrwxr-x    3 root cinder      4096 Jun 18 20:43 .
drwxr-xr-x.  27 root root        4096 Jun 18 10:24 ..
drw-------  240 root root        4096 Jun 18 20:39 .glusterfs
-rw-rw-rw-    2 root root   536870912 Jun 18 20:39 volume-866f7084-c624-4c11-a592-8c00fcabfb23

Boot an instance and perform the attach operation:

[root@chen data_cinder]# nova volume-attach f5b7527e-2ab8-424c-9842-653bd73e8f26 866f7084-c624-4c11-a592-8c00fcabfb23
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdd                             |
| id       | 866f7084-c624-4c11-a592-8c00fcabfb23 |
| serverId | f5b7527e-2ab8-424c-9842-653bd73e8f26 |
| volumeId | 866f7084-c624-4c11-a592-8c00fcabfb23 |
+----------+--------------------------------------+

[root@chen data_cinder]# cinder list
+--------------------------------------+--------+--------------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Display Name       | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+--------------------+------+-------------+----------+--------------------------------------+
| 866f7084-c624-4c11-a592-8c00fcabfb23 | in-use | chenxiao-glusterfs | 1    | None        | false    | f5b7527e-2ab8-424c-9842-653bd73e8f26 |
+--------------------------------------+--------+--------------------+------+-------------+----------+--------------------------------------+

