
OpenStack Building (3)


OpenStack Storage - Cinder

The first two articles introduced some of OpenStack's infrastructure, but no storage was added. This article introduces the OpenStack storage module, Cinder.

There are three main types of storage:

Block storage: hard disks, storage devices, disk arrays, etc.

File storage: NFS, FTP, and similar; mainly used for file sharing.

Object storage: distributed storage such as Swift; each object carries metadata that describes the data.

Cinder supports many of the above storage methods.

Cinder components

cinder-api: accepts API requests and routes them to cinder-volume for execution.

cinder-volume: responds to requests, reads and writes the database to maintain state, interacts with other processes (such as cinder-scheduler) through the message queue, and interacts directly with the hardware or software that provides the block storage. It manages the storage itself.

cinder-scheduler: a daemon. Similar to nova-scheduler, it selects the optimal block storage node for a volume.

cinder-backup: a daemon. It backs up volumes of any type to a backup storage provider. Like the cinder-volume service, it interacts with a variety of storage providers through a driver architecture.
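For orientation, the sketch below maps these components onto the systemd services configured later in this article (the node layout is just the one used here; the unit names and port come from the installation steps below):

# Control node:
#   openstack-cinder-api.service        cinder-api, listens on port 8776
#   openstack-cinder-scheduler.service  cinder-scheduler
# Storage node (a compute node in this setup):
#   openstack-cinder-volume.service     cinder-volume with the LVM backend
#   target.service                      iSCSI target used by the LVM backend
# cinder-backup is not configured in this article.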

Cinder database configuration and service registration

http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/cinder-controller-install.html

Create the database and authorize:

MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';

Create the service credentials by completing these steps:

# source admin-openstack.sh
# openstack user create --domain default --password-prompt cinder

Add the admin role to the cinder user:

# openstack role add --project service --user cinder admin

Create cinder and cinderv2 service entities:

# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 27b797388aaa479ea5542048df32b3d8 |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 85f9890df5444a5d9a989c96b630c7a7 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+

Create the API endpoints for the Block Storage service. Both the v1 and v2 API versions need to be registered:

# openstack endpoint create --region RegionOne volume public http://172.16.10.50:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volume internal http://172.16.10.50:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volume admin http://172.16.10.50:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 public http://172.16.10.50:8776/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 internal http://172.16.10.50:8776/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 admin http://172.16.10.50:8776/v2/%\(tenant_id\)s
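As a quick sanity check (an extra step, not part of the original guide), the registered volume endpoints can be listed:

# openstack endpoint list | grep volume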

Cinder installation and configuration

Install the cinder components on the control node:

# yum install -y openstack-cinder

Edit /etc/cinder/cinder.conf and complete the following actions:

In the [database] section, configure database access (the password is cinder):

connection = mysql+pymysql://cinder:cinder@172.16.10.50/cinder

Sync the Block Storage database:

# su -s /bin/sh -c "cinder-manage db sync" cinder

Confirm that the database synchronization is successful:

# mysql -h 172.16.10.50 -ucinder -pcinder -e "use cinder;show tables;"

In the "[DEFAULT]" and "[oslo_messaging_rabbit]" sections, configure "RabbitMQ" message queue access:

[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = 172.16.10.50
rabbit_userid = openstack
rabbit_password = openstack

In the "[DEFAULT]" and "[keystone_authtoken]" sections, configure authentication service access:

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://172.16.10.50:5000
auth_url = http://172.16.10.50:35357
memcached_servers = 172.16.10.50:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

In the [oslo_concurrency] section, configure the lock path:

lock_path = /var/lib/cinder/tmp

Edit the file /etc/nova/nova.conf and add the following:

[cinder]
os_region_name = RegionOne

Restart nova-api:

# systemctl restart openstack-nova-api.service

Start cinder-api (port 8776) and cinder-scheduler:

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
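To confirm that cinder-api came up and is listening on port 8776 (an extra check, not in the original guide), you can run:

# ss -tnlp | grep 8776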

Install and configure the storage node

The storage node can be configured on a compute node, or a dedicated storage server can be used. Here a compute node is used to provide the storage service.

An additional disk must be attached to the compute node.
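Before creating the LVM objects, it is worth confirming that the new disk is visible (assuming it appears as /dev/sdb, as in the steps below):

# lsblk /dev/sdb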

Create an LVM physical volume on /dev/sdb:

Create an LVM volume group cinder-volumes:

# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created

Configure LVM so that only instances can access the block storage volume group:

By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and tries to cache them, which can cause a variety of problems with both the underlying operating system and the project volumes. LVM must be reconfigured to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following:

In the devices section, add a filter that accepts only the /dev/sdb device and rejects all other devices:

devices {
...
filter = [ "a/sdb/", "r/.*/" ]

Each item in the filter array begins with a (accept) or r (reject) and includes a regular expression for the device name. The array must end with "r/.*/" to reject any remaining devices. You can use the vgs -vvvv command to test the filter.
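After editing lvm.conf, a quick check (an extra step, not in the original guide) is to confirm that LVM still sees the cinder-volumes group through the new filter:

# vgs cinder-volumes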

Install and configure cinder on the storage node

Install the package:

# yum install -y openstack-cinder targetcli python-keystone

Configure the cinder of the storage node:

The cinder configuration on the storage node differs little from that on the control node, so the file can be copied directly from the control node and then modified:

# scp /etc/cinder/cinder.conf 172.16.10.51:/etc/cinder/

In the [lvm] section of cinder.conf, configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service. If this section does not exist, add it manually:

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

In the [DEFAULT] section, enable the LVM backend:

enabled_backends = lvm

Start the Block Storage volume service and its dependencies:

# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service

Verify the configuration from the control node. The services should normally show a state of "up"; if a service is not up, cloud disks cannot be added:

# source admin-openstack.sh
# cinder service-list
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host   | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   node1   | nova | enabled |   up  | 2016-11-02T09:16:34.000000 |        -        |
|  cinder-volume   | node2@lvm | nova | enabled |   up  | 2016-11-02T09:16:39.000000 |        -        |
+------------------+-----------+------+---------+-------+----------------------------+-----------------+

Add a volume to the virtual machine

http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/launch-instance-cinder.html

Horizon was installed earlier, so cloud disks can be added directly through the dashboard. They can also be added on the command line by following the official documentation.
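For reference, a minimal command-line sketch (the volume name and instance name are placeholders, and the size is arbitrary) looks roughly like this:

# source admin-openstack.sh
# openstack volume create --size 1 volume1
# openstack server add volume INSTANCE_NAME volume1
# openstack volume list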

If the cloud disk information appears in the Web management interface, it means the cloud disk has been added successfully.

Check to see if this virtual hard disk is available on the virtual machine:

$ sudo fdisk -l
Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Format the hard drive and mount:

$ sudo fdisk /dev/vdb     # interactive: n (new partition), p (primary), then w (write)
$ sudo mkfs.ext4 /dev/vdb1
$ sudo mount /dev/vdb1 /data/
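If the mount should survive a reboot (not covered in the original steps), an /etc/fstab entry along these lines can be added; nofail keeps the boot from hanging if the volume happens to be detached:

/dev/vdb1  /data  ext4  defaults,nofail  0  0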

A cloud disk can be attached to a running virtual machine. Dynamically expanding or shrinking disk capacity is not recommended, as it may result in data loss. In a real production environment, it is also best not to rely on Cinder's more complex features.
