How to use Ceph for back-end storage in the OpenStack Pike release


In this article, the editor shares how to use Ceph as back-end storage in the OpenStack Pike release. Since most people are not very familiar with this topic, the article is shared for your reference; I hope you learn a lot from reading it. Let's get started!

## Node distribution

10.1.1.1 controller

10.1.1.2 compute

10.1.1.3 middleware

10.1.1.4 network

10.1.1.5 compute2

10.1.1.6 compute3

10.1.1.7 cinder

## Distributed storage

The back-end storage uses Ceph, with mon_host = 10.1.1.2, 10.1.1.5, 10.1.1.6.
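
For reference, a minimal /etc/ceph/ceph.conf matching these monitors could look like the sketch below; the fsid is a placeholder and the mon_initial_members names are assumed from the node list above, so substitute the values from your own cluster.

[global]
fsid = <your-cluster-fsid>
mon_initial_members = compute, compute2, compute3
mon_host = 10.1.1.2,10.1.1.5,10.1.1.6
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx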

## Create the database, service, and endpoints for cinder

mysql -u root -p

create database cinder;
grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';

cat admin-openrc

export OS_USER_DOMAIN_ID=default
export OS_PROJECT_DOMAIN_ID=default
export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PASSWORD=123456
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_AUTH_URL=http://controller:35357/v3

source admin-openrc
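
As an optional sanity check that the credentials load correctly, you can request a token:

openstack token issue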

Create a cinder user

openstack user create --domain default --password-prompt cinder

Grant the cinder user the admin role in the service project

openstack role add --project service --user cinder admin

Create the services

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

Create the API endpoints

openstack endpoint create --region RegionOne volumev2 public http://cinder:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://cinder:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://cinder:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 public http://cinder:8776/v3/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://cinder:8776/v3/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://cinder:8776/v3/%\(tenant_id\)s
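
To confirm that all six endpoints registered, an optional check:

openstack endpoint list | grep volume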

## Create Ceph pools

Execute the following commands on a Ceph node

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
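
Note: if the cluster runs Ceph Luminous (12.x) or later, each pool should also be tagged for RBD use, otherwise the cluster reports a health warning. A sketch, assuming Luminous:

ceph osd pool application enable volumes rbd
ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd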

## Ceph user authorization

Because the back-end storage uses Ceph, the Ceph clients must be authorized so that each service can access the appropriate Ceph pools. Glance, cinder, and nova-compute all use Ceph.

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

ceph auth list

client.cinder
    key: AQDQEWdaNU9YGBAAcEhKd6KQKHN9HeFIIS4+fw==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images
client.glance
    key: AQD4EWdaTdZjJhAAuj8CvNY59evhiGtEa9wLzw==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images

Create the /etc/ceph directory on the controller, cinder, and compute nodes, then push the authorization keyrings to them:

ceph auth get-or-create client.glance | ssh controller sudo tee /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh cinder sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute3 sudo tee /etc/ceph/ceph.client.cinder.keyring

Give the glance user ownership of ceph.client.glance.keyring and the cinder user ownership of ceph.client.cinder.keyring:

chown glance:glance /etc/ceph/ceph.client.glance.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

Copy the Ceph configuration file /etc/ceph/ceph.conf to the /etc/ceph directory of the glance, cinder, and compute nodes, for example as shown below.
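
From a Ceph node this can be done with scp (host names as in the node list above):

scp /etc/ceph/ceph.conf controller:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.conf cinder:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.conf compute:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.conf compute2:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.conf compute3:/etc/ceph/ceph.conf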

## Install and configure components

### Cinder node

yum install -y openstack-cinder python-ceph ceph-common python-rbd

You should have the following files in the /etc/ceph directory:

[root@cinder ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 cinder cinder  64 Jan 26 15:52 ceph.client.cinder.keyring
-rw-r--r-- 1 root   root   263 Jan 26 15:53 ceph.conf

cp /etc/cinder/cinder.conf{,.bak}
> /etc/cinder/cinder.conf
cat /etc/cinder/cinder.conf

[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@middleware
log_dir = /var/log/cinder
enabled_backends = ceph

[database]
connection = mysql+pymysql://cinder:123456@middleware/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = f85def47-c1ac-46fe-a1d5-c0139c46d91a

Restart the cinder services

systemctl restart openstack-cinder-api.service
systemctl restart openstack-cinder-scheduler.service
systemctl restart openstack-cinder-volume.service
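
To verify the Ceph back end is working, you can list the volume services and create a small test volume; it should then appear as an RBD image in the volumes pool. The volume name test-vol is just an example:

openstack volume service list    # cinder-volume on the ceph backend should show "up"
openstack volume create --size 1 test-vol
rbd ls volumes                   # should list volume-<uuid> for the new volume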

### Glance node

Install the ceph client

yum install -y python-ceph ceph-common python-rbd

In the /etc/ceph directory, you need the following files:

[root@controller ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 glance glance  64 Jan 23 19:31 ceph.client.glance.keyring
-rw-r--r-- 1 root   root   416 Jan 24 10:32 ceph.conf

The Ceph-related configuration goes in /etc/glance/glance-api.conf:

[DEFAULT]
# enable image locations and take advantage of copy-on-write cloning for images
show_image_direct_url = true

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

Restart the glance service

systemctl restart openstack-glance-api.service
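
As a quick check, upload a small test image and confirm it lands in the images pool; the image file name below is an example, and raw format is recommended so Ceph can use copy-on-write cloning:

openstack image create --disk-format raw --container-format bare --file cirros-0.3.5-x86_64-disk.raw cirros-test
rbd ls images    # should list the new image's UUID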

### Compute node

Install the ceph client

yum install -y python-ceph ceph-common python-rbd

A UUID generated with uuidgen is used here; it must match the rbd_secret_uuid value in /etc/cinder/cinder.conf:

f85def47-c1ac-46fe-a1d5-c0139c46d91a

Create a secret file

cat secret.xml

<secret ephemeral='no' private='no'>
  <uuid>f85def47-c1ac-46fe-a1d5-c0139c46d91a</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>

Define the secret and set its value:

sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret f85def47-c1ac-46fe-a1d5-c0139c46d91a --base64 $(cat ceph.client.cinder.keyring | awk '/key/ {print $3}')

virsh secret-list

UUID                                  Usage
--------------------------------------------------------------------------------
f85def47-c1ac-46fe-a1d5-c0139c46d91a  ceph client.cinder secret

/etc/nova/nova.conf configuration:

[libvirt]
virt_type = qemu
cpu_mode = none
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = f85def47-c1ac-46fe-a1d5-c0139c46d91a
disk_cachemodes = "network=writeback"
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
inject_password = false
inject_key = false
inject_partition = -2

You should have the following files in the /etc/ceph directory:

[root@compute ~]# ll /etc/ceph/
total 12
-rw-r--r-- 1 cinder cinder  64 Jan 26 15:52 ceph.client.cinder.keyring
-rw-r--r-- 1 root   root   263 Jan 26 15:53 ceph.conf

Restart the nova-compute service

systemctl restart openstack-nova-compute.service
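
For an end-to-end check, boot a test instance and confirm its disk lands in the vms pool; the flavor, image, and network names below are placeholders for your own environment:

openstack server create --flavor m1.tiny --image cirros-test --network <network-name> test-vm
rbd ls vms    # should show <instance-uuid>_disk once the instance is active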

That is the full content of the article "How to use Ceph for back-end storage in the OpenStack Pike release". Thank you for reading! I hope the content shared here helps you; if you want to learn more, welcome to follow the industry information channel!
