
An example walkthrough of using Ceph as a storage backend for OpenStack

2025-04-03 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article walks through, step by step, how to combine Ceph with OpenStack to provide a storage backend. It is quite practical, and is shared here for reference; hopefully you will gain something from reading it.

CEPH BLOCK DEVICES AND OPENSTACK

Official Document: ceph.com/docs/master/rbd/rbd-openstack/

Environment: Ubuntu 14.04 Server, OpenStack Icehouse, Ceph 0.94.2

libvirt configures the QEMU interface to librbd, which allows you to use Ceph block device images with OpenStack. Ceph stripes block device images as objects across the cluster, which means that large Ceph block device images perform better than a standalone server could.

To use Ceph block devices with OpenStack, you must first install QEMU, libvirt, and OpenStack. A separate physical node is recommended for the OpenStack installation, with at least 8 GB of RAM and a quad-core processor. (The original article includes a diagram of the OpenStack and Ceph technology stack at this point.)

Important: To use Ceph in OpenStack, you must first run the Ceph storage cluster

OpenStack integrates with Ceph at three points:

Images: OpenStack Glance manages virtual machine images. Images are immutable; OpenStack treats them as binary objects and downloads them as such.

Volumes: A volume is a block device. OpenStack uses volumes to boot virtual machines, or attaches volumes to running virtual machines. OpenStack manages volumes through the Cinder service.

Guest disks: Guest disks are guest operating system disks. By default, when a virtual machine boots, its system disk appears as a file on the hypervisor filesystem (usually under /var/lib/nova/instances/). Prior to OpenStack Havana, the only way to boot a VM in Ceph was to use Cinder's boot-from-volume feature. It is now possible to boot a VM directly in Ceph without relying on Cinder, which is advantageous because it makes live migration straightforward. In addition, if a hypervisor goes down, it is easy to trigger nova evacuate and resume the virtual machine elsewhere almost seamlessly.
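Booting VMs directly in Ceph, as described above, is driven by Nova's image backend settings. A sketch of the relevant nova.conf options on the compute nodes, following the official rbd-openstack guide (the UUID shown matches the libvirt secret generated later in this article):

```ini
# nova.conf on the compute nodes -- a sketch, per the official guide.
# On the Icehouse-era Nova used here these are flat, libvirt_-prefixed keys
# in [DEFAULT]; later releases move them into a [libvirt] section as
# images_type, images_rbd_pool, and so on.
[DEFAULT]
libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```

With images_type set to rbd, ephemeral disks are created as RBD images in the vms pool instead of files under /var/lib/nova/instances/.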

You can use OpenStack Glance to store images in Ceph block devices, and you can use Cinder to boot virtual machines from copy-on-write clones of those images.

The configuration process for Glance, Cinder, and Nova is described in detail below, although they need not all be used together. You can store images in Ceph block devices while virtual machines run on local disks, or vice versa.
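For orientation, the Glance and Cinder side of that configuration typically looks like the following sketch (option names follow the official rbd-openstack guide for this era; the pool and user names match those created in this article, and the rbd_secret_uuid must match the libvirt secret defined further below):

```ini
# /etc/glance/glance-api.conf -- sketch, per the official guide
[DEFAULT]
default_store = rbd
rbd_store_user = glance
rbd_store_pool = images
show_image_direct_url = True

# /etc/cinder/cinder.conf -- sketch, per the official guide
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```

show_image_direct_url = True is what lets Cinder and Nova create copy-on-write clones of Glance images instead of full copies.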

Important: Ceph does not support virtual machine disks in QCOW2 format. So, if you want to boot a VM in Ceph (whether from an ephemeral backend or from a volume), the Glance image must be in RAW format.
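As a concrete illustration of the RAW requirement, a QCOW2 cloud image can be converted before uploading it to Glance (qemu-img ships with QEMU; the file and image names here are placeholders):

```shell
# Convert a QCOW2 image to RAW so Ceph can clone it copy-on-write.
qemu-img convert -f qcow2 -O raw ubuntu.qcow2 ubuntu.raw

# Upload the RAW image to Glance (Icehouse-era CLI syntax).
glance image-create --name ubuntu-raw --disk-format raw \
  --container-format bare --file ubuntu.raw
```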

Create a Pool

By default, Ceph block devices use the rbd pool. You may use any available pool, but it is recommended to create separate pools for Cinder and for Glance. Make sure the Ceph cluster is up and running, then create the pools:

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128   (note: may be skipped if cinder-backup is not installed)
ceph osd pool create vms 128

See Create a Pool and Placement Groups for details on how many placement groups (PGs) you should assign to your pools.
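The 128 used above comes from the common heuristic of roughly 100 PGs per OSD, divided by the replication factor and the number of pools, rounded up to a power of two. A small shell sketch (the OSD and pool counts are assumptions chosen for illustration):

```shell
# Heuristic PG sizing: (OSDs * 100) / replicas / pools, rounded up to a power of 2.
osds=12       # assumed number of OSDs in the cluster
replicas=3    # assumed pool replication factor
pools=4       # volumes, images, backups, vms
target=$(( osds * 100 / replicas / pools ))   # 1200 / 3 / 4 = 100

# Round up to the next power of two.
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"    # -> 128
```

For other cluster sizes, adjust osds and replicas and re-run the heuristic rather than reusing 128 blindly.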

Configure OpenStack Ceph Client

The nodes running glance-api, cinder-volume, nova-compute, and cinder-backup (if installed) act as Ceph clients. Each of these nodes requires the ceph.conf file:

ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf

Install the Ceph client packages

On the glance-api node, the Python bindings for librbd are required:

sudo apt-get install python-ceph

The nova-compute, cinder-backup, and cinder-volume nodes need the Python bindings and the Ceph client command-line tools:

sudo apt-get install ceph-common

Set up Ceph client authentication

If cephx authentication is enabled, create new users for Nova/Cinder and for Glance. Execute the following commands:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

Add the keyrings for client.glance and client.cinder to the appropriate nodes and change their ownership:

ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

The nodes running nova-compute need the keyring file for the nova-compute process. They also need to store the secret key of the client.cinder user in libvirt: the libvirt process needs it to access the cluster while attaching a block device from Cinder.

Create a temporary copy of the secret key on the nodes running nova-compute:

ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key

Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy:

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml
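The listing above breaks off at the secret.xml step. Per the official rbd-openstack guide linked at the top of this article, the remaining commands look roughly like the following (the UUID is the example value produced by uuidgen above):

```shell
# Sketch of the remaining libvirt secret setup, following the official guide.
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

# Register the secret with libvirt, set its value from the temporary key
# file, then remove the temporary copies.
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
  --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
```

The same UUID is what goes into rbd_secret_uuid in cinder.conf and nova.conf, which is how libvirt finds the client.cinder key when attaching RBD volumes.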
