
OpenStack Queens version deployment


Detailed instructions for deploying and installing the community OpenStack Queens release (additional nodes install all components)

I. Software environment for the deployment

Operating system:

CentOS 7

Kernel version:

[root@controller ~]# uname -m
x86_64
[root@controller ~]# uname -r
3.10.0-693.21.1.el7.x86_64

Node and network interface configuration

Controller node

[root@controller ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

Compute node

[root@compute ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1

Storage Cinder node

1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

Description: this deployment uses three physical nodes to build a community OpenStack Queens environment.

II. Overview of OpenStack

The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims to achieve simplicity, large-scale scalability and rich functionality.

OpenStack provides infrastructure-as-a-service (IaaS) solutions through a variety of complementary services. Each service provides an application programming interface (API) to facilitate this integration.

This article walks step by step through deploying the major OpenStack services on Linux, using a functional sample architecture suitable for new OpenStack users who have sufficient Linux experience. It is intended only as a minimal environment for learning OpenStack.

III. Overview of OpenStack architecture

1. Conceptual architecture

The following figure shows the relationship between OpenStack services:

2. Logical architecture

The following figure shows the most common but not the only possible architecture in the OpenStack cloud:

For designing, deploying, and configuring OpenStack, learners must understand the logical architecture.

As shown in the conceptual architecture, OpenStack consists of several separate parts called OpenStack services. All services are authenticated through the keystone service.

Services interact with each other through the public API unless privileged administrator commands are required.

Internally, OpenStack services are made up of multiple processes. All services have at least one API process that listens for API requests, preprocesses them, and passes them on to the rest of the service. With the exception of the Identity service, the actual work is done by distinct processes.

For communication between processes of a service, an AMQP message broker is used. The status of the service is stored in the database. When deploying and configuring the OpenStack cloud, you can choose from a variety of message broker and database solutions, such as RabbitMQ, MySQL, MariaDB, and SQLite.

Users can access OpenStack through the web-based user interface implemented by the Horizon Dashboard, through command-line clients, and by issuing API requests with tools such as browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all of these access methods issue REST API calls to the various OpenStack services.

IV. Deployment of OpenStack component services

Deployment prerequisites (the following commands are executed on all nodes)

1. Configure the node network interface IPs (omitted)

2. Set the hostname

hostnamectl set-hostname <hostname>
bash    ## make the setting take effect immediately

3. Configure name resolution: edit the /etc/hosts file and add the following entries

10.71.11.12 controller

10.71.11.13 compute

10.71.11.14 cinder

4. Verify network connectivity

Execute on the controller node

[root@controller ~]# ping -c 4 openstack.org
PING openstack.org (162.242.140.107) 56(84) bytes of data.

Execute on the compute node

[root@compute ~]# ping -c 4 openstack.org
PING openstack.org (162.242.140.107) 56(84) bytes of data.

5. Configure the Aliyun yum repository

Backup

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

Download

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Or

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

6. Install the NTP clock service (all nodes)

## controller node ##

Install the package
yum install chrony -y

Edit the /etc/chrony.conf file and configure the clock source server
server controller iburst    ## all nodes synchronize time with the controller node
allow 10.71.11.0/24         ## network segment allowed to synchronize time

Set the NTP service to start at boot
systemctl enable chronyd.service
systemctl start chronyd.service

## other nodes ##

Install the package
yum install chrony -y

Configure all nodes to point to the controller for time synchronization
vi /etc/chrony.conf
server controller iburst

Restart the NTP service (the restart commands are omitted here; a sketch follows)
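The omitted restart is just the standard systemd restart of the chrony daemon (a minimal sketch):

systemctl restart chronyd.service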

Verify clock synchronization service

Execute on the controller node

[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
^* time4.aliyun.com              2   10   377   1015   +115us[+142us] +/- 14ms

An * in the MS column indicates the NTP server with which the node is currently synchronized.

Execute on other nodes

[root@compute ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
^* leontp.ccgs.wa.edu.au         1   10   377    752

[root@cinder ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
^+ 61-216153-104.HINET-IP.>      3   10   377    748   -3373us[-

Note: clock drift is a common problem in day-to-day operations and can lead to split-brain conditions in clustered services.

Installation and configuration of OpenStack services

Description: unless otherwise noted, the following operations are performed on all nodes.

1. Download and install the OpenStack software repository (Queens release)
yum install centos-release-openstack-queens -y

2. Update the packages on all nodes
yum upgrade

3. Install the OpenStack client (controller and compute nodes)
yum install python-openstackclient -y

4. Install openstack-selinux
yum install openstack-selinux -y

Install the database (executed on the controller node)

Most OpenStack services use an SQL database to store information, which usually runs on the controller node. This article uses MariaDB (MySQL).

Install the package

yum install mariadb mariadb-server python2-PyMySQL -y

Edit /etc/my.cnf.d/mariadb-server.cnf and complete the following

[root@controller ~]# vim /etc/my.cnf.d/mariadb-server.cnf
#
# These groups are read by MariaDB server.
[server]
# this is only for the mysqld standalone daemon
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
bind-address = 192.168.10.102
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Description: bind-address uses the management IP of the controller node

Set the service to boot

systemctl enable mariadb.service
systemctl start mariadb.service

Secure the database service by running the mysql_secure_installation script (the root password is set to 123456)

[root@controller ~]# mysql_secure_installation

Thanks for using MariaDB!
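For reference, the script walks through a handful of interactive questions. A typical run for this deployment might be answered roughly as follows (a sketch, not the verbatim output of the run above):

Enter current password for root (enter for none):  <Enter>
Set root password? [Y/n]                            Y    (set to 123456)
Remove anonymous users? [Y/n]                       Y
Disallow root login remotely? [Y/n]                 Y
Remove test database and access to it? [Y/n]        Y
Reload privilege tables now? [Y/n]                  Y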

Install and configure RabbitMQ on the controller node

1. Install and configure message queuing components

yum install rabbitmq-server -y

2. Set the service to start at boot
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

3. Add the openstack user
rabbitmqctl add_user openstack openstack

4. Configure permissions for the openstack user
rabbitmqctl set_permissions openstack ".*" ".*" ".*"

RabbitMQ message queue installation and configuration (controller node)

yum install rabbitmq-server -y
/usr/lib/rabbitmq/bin/rabbitmq-plugins list                          // view installed plug-ins
/usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management    // enable the rabbitmq_management service
systemctl restart rabbitmq-server.service
systemctl enable rabbitmq-server
rabbitmqctl add_user openstack openstack                             // add the openstack user; openstack is the password
rabbitmqctl set_permissions openstack ".*" ".*" ".*"                 // grant the openstack user configure, write, and read permissions

Visit http://192.168.0.17:15672 to open the web management page.
If it cannot be accessed, grant the user the administrator tag:
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl list_users    ## view users and their tags

Install the Memcached caching service (controller node)

Description: the Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached service typically runs on the controller node. For production deployments, we recommend enabling a combination of firewalling, authentication, and encryption to protect it.

1. Install the components
yum install memcached python-memcached -y

2. Edit /etc/sysconfig/memcached
vim /etc/sysconfig/memcached

OPTIONS= "- l 10.71.11.12 Magazine 1 controller"

3. Set the service to boot

systemctl enable memcached.service
systemctl start memcached.service

Etcd service installation (controller)

1. Install the service
yum install etcd -y

2. Edit the /etc/etcd/etcd.conf file and modify the following parameters
vim /etc/etcd/etcd.conf

ETCD_INITIAL_CLUSTER

ETCD_INITIAL_ADVERTISE_PEER_URLS

ETCD_ADVERTISE_CLIENT_URLS

ETCD_LISTEN_CLIENT_URLS

#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.71.11.12:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.71.11.12:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.71.11.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.71.11.12:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.71.11.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

3. Set the service to boot

systemctl enable etcd
systemctl start etcd

Install keystone components (controller)

1. Create the keystone database and grant privileges
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';

2. Install and configure the components
yum install openstack-keystone httpd mod_wsgi -y

3. Edit /etc/keystone/keystone.conf
[database]    # around line 737
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]    # around line 2878
provider = fernet

4. Synchronize the keystone database
su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Bootstrap the Identity service
keystone-manage bootstrap --bootstrap-password 123456 --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Configure the Apache HTTP service

1. Edit /etc/httpd/conf/httpd.conf and configure the ServerName parameter
ServerName controller

2. Create a link to /usr/share/keystone/wsgi-keystone.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

3. Set the service to start at boot
systemctl enable httpd.service
systemctl restart httpd.service

An error is reported when starting the service
[root@controller ~]# systemctl start httpd.service

After investigation, the problem is caused by SELinux.
Solution: disable SELinux
[root@controller ~]# vi /etc/selinux/config
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
SELINUXTYPE=targeted

Enable and start the service again; the error is resolved
[root@controller ~]# systemctl enable httpd.service; systemctl start httpd.service

4. Configure the administrative account environment variables
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

Create domains, projects, users, and roles

1. Create a domain
openstack domain create --description "Domain" example

[root@controller ~]# openstack domain create --description "Domain" example
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Domain                           |
| enabled     | True                             |
| id          | 199658b1d0234c3cb8785c944aa05780 |
| name        | example                          |
| tags        | []                               |
+-------------+----------------------------------+

2. Create the service project
openstack project create --domain default --description "Service Project" service

[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 03e700ff43e44b29b97365bac6c7d723 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

3. Create the demo project
openstack project create --domain default --description "Demo Project" demo

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 61f8c9005ca84477b5bdbf485be1a546 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

4. Create the demo user (password demo)
openstack user create --domain default --password-prompt demo

[root@controller ~]# openstack user create --domain default --password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fa794c034a53472c827a94e6a6ad12c1 |
| name                | demo                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

5. Create the user role
openstack role create user

[root@controller ~]# openstack role create user
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 15ea413279a74770b79630b75932a596 |
| name      | user                             |
+-----------+----------------------------------+

6. Add the user role to the demo project and user
openstack role add --project demo --user demo user

Description: this command returns no output when it succeeds.

Verification operation

1. Unset the temporary environment variables
unset OS_AUTH_URL OS_PASSWORD

2. Request an authentication token as the admin user (password 123456)
[root@controller ~]# unset OS_AUTH_URL OS_PASSWORD
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
Password:
+-------+-------+
| Field | Value |
(output truncated)

3. Request an authentication token as the demo user (password demo)
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue
Password:
+-------+-------+
| Field | Value |
+-------+-------+
(output truncated)

Create the OpenStack client environment scripts

1. Create the admin-openrc script (vim admin-openrc)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

2. Create the demo-openrc script (vim demo-openrc)
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

3. Use the scripts: grant execute permission, source a script, and request an authentication token (a sketch of these commands follows)
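A minimal sketch of those three steps, assuming the admin-openrc file created above is in root's home directory:

chmod +x admin-openrc    ## grant the script execute permission
. admin-openrc           ## load the admin credentials into the current shell
openstack token issue    ## request a token to confirm the credentials work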

[root@controller ~]# openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2018-04-01T08:17:29+0000         |
| id         | gAAAAABawIeJ0z-3R2ltY6ublCGqZX80AIi4tQUxqEpw0xvPsFP9BLV8ALNsB2B7bsVivGB14KvhUncdoRl_G2ng5BtzVKAfzHyB-OxwiXeqAttkpQsuLCDKRHd3l-K6wRdaDqfNm-D1QjhtFoxHOTotOcjtujBHF12uP49TjJtl1Rrd6uVDk0g |
| project_id | 4205b649750d4ea68ff5bea73de0faae |
| user_id    | 475b31138acc4cc5bb42ca64af418963 |
+------------+----------------------------------+

Install the Glance service (controller)

1. Create the glance database and grant privileges
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';

2. Source the admin credentials and create the service credentials
. admin-openrc

Create the glance user (password 123456)
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | dd2363d365624c998dfd788b13e1282b |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the admin role to the glance user and service project
openstack role add --project service --user glance admin

Description: this command produces no output.

Create the glance service entity
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 5927e22c745449869ff75b193ed7d7c6 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

3. Create the Image service API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0822449bf80f4f6897be5e3240b6bfcc |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f18ae583441b4d118526571cdc204d8a |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 79eadf7829274b1b9beb2bfb6be91992 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 5927e22c745449869ff75b193ed7d7c6 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+

Install and configure components

1. Install the package

yum install openstack-glance -y

2. Edit the /etc/glance/glance-api.conf file
[database]    # around line 1924
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]    # around line 3472
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
[glance_store]    # around line 2039
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

3. Edit /etc/glance/glance-registry.conf
[database]    # around line 1170
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]    # around line 1285
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]    # around line 2272
flavor = keystone

4. Synchronize the Image service database
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

Verification operation

Use CirrOS to verify the operation of the Image service, which is a small Linux image that helps you test your OpenStack deployment.

For more information about how to download and build an image, see the OpenStack virtual machine image guide https://docs.openstack.org/image-guide/

For information about how to manage images, refer to the OpenStack end user Guide https://docs.openstack.org/queens/user/

1. Source the admin credentials and download the image
. admin-openrc
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

2. Upload the image
Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it:
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
(output omitted)

3. View the uploaded image
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 916faa2b-e292-46e0-bfe4-0f535069a1a0 | cirros | active |
+--------------------------------------+--------+--------+

Description: glance specific configuration options: https://docs.openstack.org/glance/queens/configuration/index.html

Controller node: install and configure the Compute service

1. Create the nova_api, nova, and nova_cell0 databases
mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

Grant access to the databases
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller' IDENTIFIED BY 'nova';

2. Create the nova user (password 123456)
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 8e72103f5cc645669870a630ffb25065 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

3. Add the admin role to the nova user
openstack role add --project service --user nova admin

4. Create the nova service entity
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 9f8f8d8cb8e542b09694bee6016cc67c |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

5. Create the Compute API service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | cf260d5a56344c728840e2696f44f9bc |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f308f29a78e04b888c7418e78c3d6a6d |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 022d96fa78de4b73b6212c09f13d05be |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 9f8f8d8cb8e542b09694bee6016cc67c |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

Create the placement service user (password 123456)
[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fa239565fef14492ba18a649deaa6f3c |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

6. Add the placement user to the service project with the admin role
openstack role add --project service --user placement admin

7. Create the Placement API service entry in the service catalog
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 32bb1968c08747ccb14f6e4a20cd509e |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+

8. Create the Placement API service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | b856962188484f4ba6fad500b26b00ee |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 62e5a3d82a994f048a8bb8ddd1adc959 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f12f81ff7b72416aa5d035b8b8cc2605 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 32bb1968c08747ccb14f6e4a20cd509e |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

Install and configure components

1. Install the package

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

2. Edit /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 10.71.11.12
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[database]
connection = mysql+pymysql://nova:nova@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456

3. Because of a packaging bug, you need to add the following configuration to the /etc/httpd/conf.d/00-nova-placement-api.conf file

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

4. Restart the httpd service
systemctl restart httpd

5. Synchronize the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova

The synchronization reports an error
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
Traceback (most recent call last):
  File "/usr/bin/nova-manage", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1597, in main
    config.parse_args(sys.argv)
  File "/usr/lib/python2.7/site-packages/nova/config.py", line 52, in parse_args
    default_config_files=default_config_files)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2502, in __call__
    else sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3166, in _parse_cli_opts
    return self._parse_config_files()
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 3183, in _parse_config_files
    ConfigParser._parse_file(config_file, namespace)
  File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 1950, in _parse_file
    raise ConfigFileParseError(pe.filename, str(pe))
oslo_config.cfg.ConfigFileParseError: Failed to parse /etc/nova/nova.conf: at /etc/nova/nova.conf:8, No ':' or '=' found in assignment: '/etc/nova/nova.conf'

Based on the error message, comment out line 8 of /etc/nova/nova.conf to resolve the problem (a one-line sketch follows).
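A hypothetical one-liner for that edit (it simply prefixes line 8 of the file with a # character; editing the file with vim works just as well):

sed -i '8s/^/#/' /etc/nova/nova.conf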

[root@controller] # su-s / bin/sh-c "nova-manage api_db sync" nova

/ usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option (s) ['use_tpool'] not supported

Exception.NotSupportedWarning

6. Register the cell0 database

Su-s / bin/sh-c "nova-manage cell_v2 map_cell0" nova

[root@controller] # su-s / bin/sh-c "nova-manage cell_v2 map_cell0" nova

/ usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option (s) ['use_tpool'] not supported

Exception.NotSupportedWarning

7. Create cell1 cell

[root@controller] # su-s / bin/sh-c "nova-manage cell_v2 create_cell-- name=cell1-- verbose" nova

/ usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option (s) ['use_tpool'] not supported

Exception.NotSupportedWarning

6c689e8c-3e13-4e6d-974c-c2e4e22e510b

8. Synchronize nova database

[root@controller] # su-s / bin/sh-c "nova-manage db sync" nova

/ usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option (s) ['use_tpool'] not supported

Exception.NotSupportedWarning

/ usr/lib/python2.7/site-packages/pymysql/cursors.py:165: Warning: (1831, u'Duplicate index block_device_mapping_instance_uuid_virtual_name_device_name_idx. This is deprecated and will be disallowed in a future release.')

Result = self._query (query)

/ usr/lib/python2.7/site-packages/pymysql/cursors.py:165: Warning: (1831, u'Duplicate index uniq_instances0uuid. This is deprecated and will be disallowed in a future release.')

Result = self._query (query)

9. Verify that the nova, cell0, and cell1 databases are registered correctly

[root@controller ~]# nova-manage cell_v2 list_cells
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
+-------+--------------------------------------+--------------------------------+---------------------------------------------+
| Name  | UUID                                 | Transport URL                  | Database Connection                         |
+-------+--------------------------------------+--------------------------------+---------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                         | mysql+pymysql://nova:@controller/nova_cell0 |
| cell1 | 6c689e8c-3e13-4e6d-974c-c2e4e22e510b | rabbit://openstack:@controller | mysql+pymysql://nova:****@controller/nova   |
+-------+--------------------------------------+--------------------------------+---------------------------------------------+

10. Set the service to boot.

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node: install and configure the Compute service

1. Install the package
yum install openstack-nova-compute -y

2. Edit /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 10.71.11.13
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456

3. Set the service to boot

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

Note: if the nova-compute service cannot be started, check /var/log/nova/nova-compute.log; error messages similar to the following appear
2018-04-01 12:03:15 18612 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge
2018-04-01 12:03:43.431 18612 WARNING oslo_config.cfg [-] ...

The error message about the AMQP server on controller:5672 being unreachable may indicate that the firewall on the controller node is blocking access to port 5672. Configure the firewall to open port 5672 on the controller node and then restart the nova-compute service on the compute node.

Clear the firewall rules on the controller
[root@controller ~]# iptables -F
[root@controller ~]# iptables -X
[root@controller ~]# iptables -Z
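A narrower alternative, assuming firewalld is managing the rules, is to open only the AMQP port instead of flushing all rules:

firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --reload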

Restarting the compute service now succeeds.

4. Add a compute node to the cell database (controller)

Verify that the compute node is registered in the database
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary       | Host    | Zone | Status  | State | Updated At                 |
+----+--------------+---------+------+---------+-------+----------------------------+
| 8  | nova-compute | compute | nova | enabled | up    | 2018-04-01T22:24:14.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+

5. Discover compute nodes

[root@controller] # su-s / bin/sh-c "nova-manage cell_v2 discover_hosts-- verbose" nova

/ usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option (s) ['use_tpool'] not supported

Exception.NotSupportedWarning

Found 2 cell mappings.

Skipping cell0 since it does not contain hosts.

Getting compute nodes from cell 'cell1': 6c689e8c-3e13-4e6d-974c-c2e4e22e510b

Found 1 unmapped computes in cell: 6c689e8c-3e13-4e6d-974c-c2e4e22e510b

Checking host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6bd

Creating host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6b
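Note: whenever another compute node is added later, discover_hosts has to be run again. Alternatively, the Queens documentation describes a periodic-discovery option that can be set in /etc/nova/nova.conf on the controller (shown here as a sketch):

[scheduler]
discover_hosts_in_cells_interval = 300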

Validate Compute service operation on the controller node

1. List the service components
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list
+----+------------------+----------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host           | Zone     | Status  | State | Updated At                 |
+----+------------------+----------------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | controller     | internal | enabled | up    | 2018-04-01T22:25:29.000000 |
| 2  | nova-conductor   | controller     | internal | enabled | up    | 2018-04-01T22:25:33.000000 |
| 3  | nova-scheduler   | controller     | internal | enabled | up    | 2018-04-01T22:25:30.000000 |
| 6  | nova-conductor   | ansible-server | internal | enabled | up    | 2018-04-01T22:25:55.000000 |
| 7  | nova-scheduler   | ansible-server | internal | enabled | up    | 2018-04-01T22:25:59.000000 |
| 8  | nova-compute     | compute        | nova     | enabled | up    | 2018-04-01T22:25:34.000000 |
| 9  | nova-consoleauth | ansible-server | internal | enabled | up    | 2018-04-01T22:25:57.000000 |
+----+------------------+----------------+----------+---------+-------+----------------------------+

2. List the API endpoints in the Identity service to verify connectivity:
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| placement | placement | RegionOne                               |
|           |           |   internal: http://controller:8778     |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778        |
| keystone  | identity  | RegionOne                               |
|           |           |   public: http://controller:5000/v3/   |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/   |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/ |
| glance    | image     | RegionOne                               |
|           |           |   public: http://controller:9292       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292     |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1|
+-----------+-----------+-----------------------------------------+

3. List images
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 916faa2b-e292-46e0-bfe4-0f535069a1a0 | cirros | active |
+--------------------------------------+--------+--------+

4. Check that the cells and placement API are working
[root@controller ~]# nova-status upgrade check
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Option "os_region_name" from group "placement" is deprecated. Use option "region-name" from group "placement".
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+

More on Nova: https://docs.openstack.org/nova/queens/admin/index.html

Controller node: install and configure the Neutron networking service

1. Create the neutron database and grant privileges
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';

2. Create the service credentials
. admin-openrc

Create the neutron user (password 123456)
openstack user create --domain default --password-prompt neutron

Add the admin role to the neutron user
openstack role add --project service --user neutron admin

Create the neutron service entity
openstack service create --name neutron --description "OpenStack Networking" network

3. Create the network service endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

Configure the networking options (controller node)

1. Install the components
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

2. Configure the server component: edit /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:123456@controller/neutron

[DEFAULT]
auth_strategy = keystone
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:openstack@controller
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in
Edit /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true

Configure the Linux bridge agent
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37
[vxlan]
enable_vxlan = false    ## if this is set to true, also add the following two lines
l2_population = true
local_ip = 192.168.10.18
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the layer-3 agent
[root@controller ~]# vim /etc/neutron/l3_agent.ini
interface_driver = linuxbridge

Configure the DHCP service

Edit /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure the metadata agent
Edit /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456

Configure the Compute service to use the Networking service
Edit /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

Complete the installation

1. Create the plug-in symbolic link
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

2. Synchronize the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

3. Restart the Compute API service
systemctl restart openstack-nova-api.service

4. Set the networking services to start at boot
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute node: configure the networking service

1. Install the components
yum -y install openstack-neutron-linuxbridge ebtables ipset

2. Configure the common components
Edit /etc/neutron/neutron.conf
[DEFAULT]
auth_strategy = keystone
transport_url = rabbit://openstack:123456@controller
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the network

1. Configure the Linux bridge agent: edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens6f0
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the Compute service to use the Networking service
Edit /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

Complete the installation

1. Restart the compute service

systemctl restart openstack-nova-compute.service

2. Set the Linux bridge agent to start at boot
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

Verification

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack extension list --network
[root@controller ~]# openstack network agent list

Install the Horizon service on the controller node

1. Install the package

yum install openstack-dashboard -y

Edit /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']

Configure memcached session storage
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

Configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

Disable support for layer-3 networking services (this deployment uses provider networks only):
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_***': False,
    'enable_fip_topology_check': False,
}

To prevent the server from reporting 500 errors, add the following
[root@controller ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}

2. Complete the installation: restart the web server and session storage
systemctl restart httpd.service memcached.service

Enter http://10.71.11.12/dashboard in a browser to access the OpenStack web interface and log in with:
Domain: Default
User: admin
Password: 123456

Controller node: install and configure Cinder (shell history of the steps)

mysql -u root -p123456
354  source admin-openrc
357  openstack user create --domain default --password-prompt cinder
358  openstack role add --project service --user cinder admin
359  openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
     openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
361  openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
362  openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
363  openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
364  openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v2/%\(project_id\)s
365  openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v2/%\(project_id\)s
366  openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v2/%\(project_id\)s
367  yum install openstack-cinder python-keystone -y
368  vim /etc/cinder/cinder.conf            ## see the configuration sketch after this listing
369  clear
370  su -s /bin/sh -c "cinder-manage db sync" cinder
371  mysql -uroot -p123456 -e "use cinder;show tables;"
372  clear
373  vim /etc/nova/nova.conf                ## see the configuration sketch after this listing
374  systemctl restart openstack-nova-api.service
375  systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
376  systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
377  history
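The contents of the two files edited in steps 368 and 373 are not shown in the history above. The following is a sketch of what they typically contain for this deployment, reusing the passwords and host names from elsewhere in this article (verify against your own environment):

## /etc/cinder/cinder.conf on the controller (step 368)
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 10.71.11.12
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

## /etc/nova/nova.conf on the controller (step 373): tell Nova which region Cinder runs in
[cinder]
os_region_name = RegionOne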

Install and configure the Cinder node

This section describes how to install and configure storage nodes for Block Storage services. For simplicity, this configuration uses an empty local block storage device to reference a storage node.

The service uses the LVM driver to configure logical volumes on the device and provides it to the instance through iSCSI transport. You can follow these instructions to make small modifications to scale your environment horizontally with other storage nodes.

1. Install supported software packages

Install LVM

yum install lvm2 device-mapper-persistent-data

Set the LVM service to start at boot
systemctl enable lvm2-lvmetad.service
systemctl restart lvm2-lvmetad.service

2. Create the LVM physical volume on /dev/sdb1
[root@cinder ~]# pvcreate /dev/sdb1
Device /dev/sdb not found (or ignored by filtering).

Solution:
Edit /etc/lvm/lvm.conf, find the global_filter line, and configure it as follows
global_filter = [ "a|.*/|", "a|sdb1|" ]

Then run the pvcreate command again; the problem is resolved.
[root@cinder ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.

3. Create the cinder-volumes volume group
[root@cinder ~]# vgcreate cinder-volumes /dev/sdb1
Volume group "cinder-volumes" successfully created

4. Install and configure components

Install the package

yum install openstack-cinder targetcli python-keystone -y

Edit /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 10.71.11.14
enabled_backends = lvm
glance_api_servers = http://controller:9292
[database]
connection = mysql+pymysql://cinder:123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456

In the [lvm] section, configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the corresponding iSCSI service. If the [lvm] section does not exist, create it:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Set the storage service to boot

systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service

Verification on the controller node
source admin-openrc
openstack volume service list

V. Log in to the Dashboard interface

The community Queens web interface displays three sections:

Project

Administrator

Identity management

VI. Upload an image from the command line

1. Upload the local ISO image to the controller node

2. Create an image from the local ISO in qcow2 format
[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare --public --file /root/CentOS-7-x86_64-Minimal-1708.iso CentOS-7-x86_64

3. View the newly created image (a sketch of the commands follows)
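A sketch of the commands for viewing the newly created image (the image name CentOS-7-x86_64 comes from the create command above):

openstack image list
openstack image show CentOS-7-x86_64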

VII. Process for creating a virtual machine

1. Create a network
. admin-openrc
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

Parameters:
--share allows all projects to use the virtual network
--external defines the virtual network as external; if you need an internal network, use --internal instead
--provider-physical-network provider and --provider-network-type flat map the flat virtual network to the provider physical network

2. Create a subnet
openstack subnet create --network provider --allocation-pool start=10.71.11.50,end=10.71.11.60 --dns-nameserver 114.114.114.114 --gateway 10.71.11.254 --subnet-range 10.71.11.0/24 provider

3. Create a flavor
openstack flavor create --id 1 --vcpus 4 --ram 128 --disk 1 m2.nano

4. Generate a key pair on the controller node. Before launching an instance, add the public key to the Compute service.
. demo-openrc
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub liukey

5. Add security group rules to allow ICMP (ping) and secure shell (SSH)
openstack security group rule create --proto icmp default

6. Allow secure shell (SSH) access
openstack security group rule create --proto tcp --dst-port 22 default

7. List flavors
openstack flavor list

8. List available images

9. List networks

10. List security groups

11. Create a virtual machine

12. View the instance status (a sketch of the commands for steps 8-12 follows)
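The commands for steps 8 through 12 are not listed above. A sketch of them, reusing the names created earlier in this walkthrough (the instance name vm1 and the network ID placeholder are illustrative):

openstack image list              ## 8. list available images
openstack network list            ## 9. list networks (note the provider network ID)
openstack security group list     ## 10. list security groups
## 11. create a virtual machine
openstack server create --flavor m2.nano --image cirros \
  --nic net-id=<provider-network-id> --security-group default \
  --key-name liukey vm1
openstack server list             ## 12. view the instance status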

Components installed on the controller node:
78  yum install centos-release-openstack-queens -y
79  yum install python-openstackclient -y
80  yum install openstack-selinux -y
81  yum install mariadb mariadb-server python2-PyMySQL -y
82  yum install rabbitmq-server -y
83  yum install memcached python-memcached -y
84  yum install etcd -y
85  yum install openstack-keystone httpd mod_wsgi -y
86  yum install openstack-glance -y
87  yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
88  yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
89  yum install openstack-dashboard -y
90  yum install openstack-cinder -y

Components installed on the compute node:
75  yum install centos-release-openstack-queens -y
76  yum install python-openstackclient -y
77  yum install openstack-selinux -y
78  yum install openstack-nova-compute
81  yum install openstack-neutron-linuxbridge ebtables ipset
89  yum -y install libvirt*    ## install this before openstack-nova-compute, otherwise an error will be reported
91  yum install -y openstack-nova-compute

Components installed on the storage node:
53  yum install centos-release-openstack-queens -y
54  yum -y install lvm2 openstack-cinder targetcli python-keystone

Connect from a client using VNC
[root@192 ~]# yum -y install vnc
[root@192 ~]# yum -y install vncview
[root@192 ~]# vncviewer 192.168.0.19:5901

Report