Connecting OpenStack to Two Ceph Clusters


Environment description

Connecting OpenStack Pike to a single Ceph RBD cluster is straightforward to configure; refer to the official OpenStack or Ceph documentation.

1. OpenStack official reference configuration:

https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/ceph-rbd-volume-driver.html

2. Ceph official reference configuration:

https://docs.ceph.com/docs/master/install/install-ceph-deploy/

Due to changes in the physical environment and business requirements, the cloud environment now requires a single OpenStack deployment to use two Ceph RBD storage clusters of different versions as back ends.

The configuration described here was deployed in the following working environment:

1) OpenStack Pike

2) Ceph Luminous 12.2.5

3) Ceph Nautilus 14.2.7

The OpenStack-to-Ceph Luminous configuration is already complete and running normally. On top of this existing openstack+ceph environment, a new Ceph Nautilus storage cluster is added so that OpenStack can consume both sets of storage resources at the same time.

Configuration steps

1. Copy configuration files

# copy the second cluster's configuration file and cinder account keyring to the OpenStack cinder node

/etc/ceph/ceph3.conf

/etc/ceph/ceph.client.cinder2.keyring

# the cinder2 account is used here, so only the cinder2 account's key needs to be copied
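A minimal copy sketch, assuming the files are pulled from a monitor of the second cluster; the hostname ceph3-mon1 and the exact source paths are placeholders, not from the original post:

# run on the openstack cinder node (hostnames and source paths are illustrative)
scp root@ceph3-mon1:/etc/ceph/ceph.conf /etc/ceph/ceph3.conf
scp root@ceph3-mon1:/etc/ceph/ceph.client.cinder2.keyring /etc/ceph/ceph.client.cinder2.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder2.keyring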

2. Create storage pools

# after the OSDs have been added, create the storage pools, specify each pool's pg/pgp count, and enable the corresponding application mode

ceph osd pool create volumes 512 512
ceph osd pool create backups 128 128
ceph osd pool create vms 512 512
ceph osd pool create images 128 128

ceph osd pool application enable volumes rbd
ceph osd pool application enable backups rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable images rbd
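To confirm the pools exist and are tagged for RBD, the stock Ceph commands below can be run on the new cluster (nothing assumed beyond the pool names above):

ceph osd pool ls
ceph osd pool application get volumes
ceph osd pool get volumes pg_num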

3. Create cluster access accounts

ceph auth get-or-create client.cinder2 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

ceph auth get-or-create client.cinder2-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
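get-or-create only prints the keys to stdout; to write them into keyring files that can then be copied to the OpenStack nodes, a common follow-up is shown below (the output paths are illustrative):

ceph auth get client.cinder2 -o /etc/ceph/ceph.client.cinder2.keyring
ceph auth get client.cinder2-backup -o /etc/ceph/ceph.client.cinder2-backup.keyring
ceph auth get client.glance -o /etc/ceph/ceph.client.glance.keyring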

4. View process information

# view the current OpenStack cinder service processes

source /root/keystonerc.admin

cinder service-list

5. Modify the configuration file

# modify cinder configuration file

[DEFAULT]
enabled_backends = ceph2,ceph3

[ceph2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph2
rbd_pool = volumes1
rbd_ceph_conf = /etc/ceph2/ceph2.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder1
rbd_secret_uuid = ***

[ceph3]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph3
rbd_pool = volumes2
rbd_ceph_conf = /etc/ceph/ceph3/ceph3.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder2
rbd_secret_uuid = ***

6. Restart the services

# restart the cinder-volume and cinder-scheduler services

service openstack-cinder-volume restart
Redirecting to /bin/systemctl restart openstack-cinder-volume.service

service openstack-cinder-scheduler restart
Redirecting to /bin/systemctl restart openstack-cinder-scheduler.service

7. View the services

cinder service-list
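With both back ends registered, the list should now show one cinder-volume entry per back end, roughly like the illustrative output below (host names will differ in a real deployment):

+------------------+------------------+------+---------+-------+
|      Binary      |       Host       | Zone |  Status | State |
+------------------+------------------+------+---------+-------+
| cinder-scheduler | controller       | nova | enabled |   up  |
| cinder-volume    | controller@ceph2 | nova | enabled |   up  |
| cinder-volume    | controller@ceph3 | nova | enabled |   up  |
+------------------+------------------+------+---------+-------+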

8. Bind volume types

# bind a volume type to each back end

cinder type-create ceph2
cinder type-key ceph2 set volume_backend_name=ceph2

cinder type-create ceph3
cinder type-key ceph3 set volume_backend_name=ceph3
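The bindings can be checked with the standard cinder client listing commands, which simply show the defined types and their extra specs:

cinder type-list
cinder extra-specs-list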

9. Create volumes to verify the binding

cinder create --volume-type ceph2 --display_name {volume-name} {volume-size}

cinder create --volume-type ceph3 --display_name {volume-name} {volume-size}
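For example, with the hypothetical names test-vol-ceph2 / test-vol-ceph3 and a 10 GB size:

cinder create --volume-type ceph2 --display_name test-vol-ceph2 10
cinder create --volume-type ceph3 --display_name test-vol-ceph3 10
cinder list

Each volume should be scheduled onto the back end its type is bound to; as admin, cinder show on a volume reports the back end in the os-vol-host-attr:host field.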

Configure libvirt

1. Add the second Ceph cluster's key to libvirt on the nova-compute nodes

# for VMs to attach RBD volumes from the second Ceph cluster, the cinder user's key from that cluster must be added to libvirt on each nova-compute node

ceph -c /etc/ceph3/ceph3/ceph3.conf -k /etc/ceph3/ceph.client.cinder2.keyring auth get-key client.cinder2 | tee client.cinder2.key

# the uuid used below must match the rbd_secret_uuid configured for the second ceph cluster in cinder.conf

cat > secret2.xml
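The original post truncates at this point; the remaining steps follow the standard Ceph-with-libvirt secret workflow. A hedged sketch, assuming the uuid is the same value set as rbd_secret_uuid for the [ceph3] back end in cinder.conf (the uuid shown is a placeholder):

# generate (or reuse) the uuid that cinder.conf references as rbd_secret_uuid
uuidgen

# secret2.xml - libvirt secret definition for the client.cinder2 key
cat > secret2.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>PASTE-THE-UUID-HERE</uuid>
  <usage type='ceph'>
    <name>client.cinder2 secret</name>
  </usage>
</secret>
EOF

# register the secret with libvirt and load the key collected above
virsh secret-define --file secret2.xml
virsh secret-set-value --secret PASTE-THE-UUID-HERE --base64 $(cat client.cinder2.key)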
