How to use ceph rbd and openstack together

This article mainly shows how to use ceph rbd with openstack. The content is straightforward and clearly organized; I hope it helps clear up your doubts as we go through it step by step.

Openstack starts a virtual machine by driving qemu through libvirt; qemu in turn talks to librados through the librbd library, and librados is the unified API library of the ceph cluster. That chain is how openstack ends up connected to ceph rbd.
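If you want to confirm this chain on a compute node, a quick, non-authoritative check is whether the locally installed qemu was built with rbd support; the binary name and output format vary by distribution, so treat these commands as examples only:

# qemu-img --help | grep rbd

# ldd $(which qemu-system-x86_64) | grep librbd

If neither command mentions rbd/librbd, that usually means qemu on that node was packaged without rbd support, and nova ephemeral disks on rbd will not work.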

The actions you need to do on the ceph side are as follows:

1. Create a set of pools for the ceph clients.

Create pools for the ceph clients according to the clients' requirements and classification. When creating a pool, allocate the number of PGs and the number of object replicas reasonably according to the number of OSDs in the current cluster (see the sizing example after the commands below).

# ceph osd pool create volumes 128

This creates a pool named volumes with 128 PGs; the next two commands create the images and vms pools in the same way.

# ceph osd pool create images 128

# ceph osd pool create vms 128
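As a sizing guideline (not from the original text, just a widely used rule of thumb), target roughly 100 PGs per OSD divided by the replica count across all pools, rounded to a power of two. For example, with 12 OSDs and 3 replicas, 12 * 100 / 3 = 400, so something on the order of 128 PGs per pool is reasonable. The replica count itself is set per pool; 3 below is only an example:

# ceph osd pool set volumes size 3

# ceph osd pool set images size 3

# ceph osd pool set vms size 3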

2. Create users for the ceph clients and grant them permissions on the cluster.

Different ceph client users have different requirements for the cluster, so it is necessary to set the permissions for client users to access the cluster according to their actual needs.

# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=images, allow rwx pool=vms'

# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
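To double-check what was actually granted, you can print the capabilities of the new users on any monitor node (the exact output format differs between ceph releases):

# ceph auth get client.cinder

# ceph auth get client.glance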

3. Distribute the ceph configuration file and the ceph client keyrings to the client nodes that need them.

Openstack uses ceph through glance, cinder, and nova, so you need to copy the ceph configuration file and the ceph client keyrings to the glance, cinder, and nova nodes. The ceph configuration file lives at /etc/ceph/ceph.conf on each client node, and the client keyrings go into the /etc/ceph directory: on the glance node the keyring is /etc/ceph/ceph.client.glance.keyring, on the cinder node it is /etc/ceph/ceph.client.cinder.keyring, and on the nova node it is also /etc/ceph/ceph.client.cinder.keyring (the nova node uses the cinder user to access the ceph cluster).

Besides the keyrings for the ordinary client users, you also need to copy the ceph.client.admin.keyring from the ceph cluster to the /etc/ceph/ directory of every client node.
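One way to do the copying, assuming you run it from a ceph admin node with ssh access and with hostnames written as placeholders (the service account names used for chown may differ by distribution), is:

# ceph auth get-or-create client.glance | ssh {glance-host} sudo tee /etc/ceph/ceph.client.glance.keyring

# ssh {glance-host} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring

# ceph auth get-or-create client.cinder | ssh {cinder-host} sudo tee /etc/ceph/ceph.client.cinder.keyring

# ssh {cinder-host} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

# scp /etc/ceph/ceph.conf {glance-host}:/etc/ceph/ceph.conf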

4. Glance node configuration.

For the glance node, you need to modify its configuration file so that glance can use ceph. The configuration file is located at /etc/glance/glance-api.conf. The following information needs to be added to the file:

[glance_store]

default_store = rbd

stores = rbd

rbd_store_ceph_conf = /etc/ceph/ceph.conf

rbd_store_user = glance

rbd_store_pool = images

rbd_store_chunk_size = 8

In addition, to support the copy-on-write image cloning feature, you need to add the following to the [DEFAULT] section of the configuration file:

[DEFAULT]

show_image_direct_url = True

Also, to keep glance from using its local image cache, set the following in the paste_deploy section of the configuration file:

[paste_deploy]

flavor = keystone
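After restarting the glance API service (the service name varies by distribution), a simple way to confirm that new images really land in the images pool is to upload one and list the pool as the glance ceph user; the image file and image name here are only examples:

# systemctl restart openstack-glance-api

# openstack image create --disk-format raw --container-format bare --file cirros.raw cirros-rbd-test

# rbd ls images --id glance

Uploading images in raw format matters if you want to benefit from the copy-on-write cloning that show_image_direct_url enables.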

5. Cinder node configuration.

For the cinder node, you need to modify its configuration file so that cinder can use ceph. The configuration file is located at /etc/cinder/cinder.conf. The following information needs to be added to the file:

# cinder volume

volume_driver = cinder.volume.drivers.rbd.RBDDriver

rbd_pool = volumes

rbd_user = cinder

rbd_secret_uuid = {uuid}

rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot = True

rbd_max_clone_depth = 5

rbd_store_chunk_size = 4

rados_connect_timeout = -1

glance_api_version = 2

Setting rbd_user and rbd_secret_uuid here is also what allows nova to attach ceph volumes through cinder.
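A quick way to verify the cinder side, after restarting the cinder-volume service (again, the service name differs by distribution), is to create a small test volume and check that a matching rbd image appears in the volumes pool; the volume name is only an example:

# systemctl restart openstack-cinder-volume

# openstack volume create --size 1 rbd-test-vol

# rbd ls volumes --id cinder

The rbd image shows up as volume-<volume id>.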

6. Nova node configuration.

1) Obtain the key of client.cinder. It can be retrieved on any node in the ceph cluster with the following command and sent to the nova node:

# ceph auth get-key client.cinder | ssh {nova-ip} tee ceph.client.cinder.key

2) Generate a UUID on the nova node using the uuidgen command:

# uuidgen

3) To let libvirt on the nova node use ceph, create a secret.xml file and register it, together with the UUID, in libvirt, as follows:

# cat > secret.xml
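A minimal sketch of the secret.xml contents and the follow-up libvirt commands, following the commonly documented ceph-with-libvirt procedure and writing the value produced by uuidgen as {uuid}, looks like this:

<secret ephemeral='no' private='no'>
  <uuid>{uuid}</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>

# virsh secret-define --file secret.xml

# virsh secret-set-value --secret {uuid} --base64 $(cat ceph.client.cinder.key)

This {uuid} is the same value referenced by rbd_secret_uuid in cinder.conf, and the nova node's nova.conf typically points its [libvirt] section at the same user and secret so nova can boot from and attach rbd volumes. A sketch of that section (option names from the nova libvirt driver, values matching the pools and user created above) is:

[libvirt]

images_type = rbd

images_rbd_pool = vms

images_rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_user = cinder

rbd_secret_uuid = {uuid}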
