
CentOS 7.3 OpenStack Liberty installation and deployment record


I. Environment 1.1 Security

This guide shows how to install OpenStack on Red Hat Enterprise Linux 7 and its derivatives using the EPEL repository.

Note: this record uses CentOS 7.3 to install the OpenStack Liberty release. The deployment was tested by creating and running virtual machine instances in a KVM environment.
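Since the test environment itself runs on KVM, nested virtualization should be enabled on the physical host, otherwise the instances created later fall back to very slow emulation. A quick check (an addition to the original notes, assuming an Intel CPU):

cat /sys/module/kvm_intel/parameters/nested    # Y (or 1) means nested virtualization is enabled
# on AMD hosts check /sys/module/kvm_amd/parameters/nested instead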

1.2 Host network 1. Disable the firewall and SELinux on both the control node and the compute node

systemctl stop iptables

systemctl stop firewalld

systemctl disable firewalld

setenforce 0

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux

yum install -y vim net-tools

2. Configure the hosts file

Configure /etc/hosts on both the control node and the compute node

Echo "192.168.0.231 controller" > > / etc/hosts

Echo "192.168.0.232 compute1" > > / etc/hosts

1.3 Set up time synchronization 1) Control node

# yum install -y chrony

# vim /etc/chrony.conf

allow 192.168.0.0/16    # allow servers in this range to synchronize time with this node

# systemctl enable chronyd.service    # start on boot

# systemctl start chronyd.service

# timedatectl set-timezone Asia/Shanghai    # set the time zone

# timedatectl status
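As an optional check (not in the original record), the chrony sources on the control node can be listed the same way as on the compute node:

# chronyc sources -v    # a '*' marks the source the node is currently synchronized to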

2) Compute node

# yum install -y chrony

# vim /etc/chrony.conf

server 192.168.1.17 iburst    # keep only this one server line

# systemctl enable chronyd.service

# systemctl start chronyd.service

# timedatectl set-timezone Asia/Shanghai

# chronyc sources

1.4 Install OpenStack packages 1. Prepare the OpenStack yum repository

# vi CentOS-OpenStack-liberty.repo

[centos-openstack-liberty]
name=CentOS-7-OpenStack liberty
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-liberty/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Centos-7

[centos-openstack-liberty-test]
name=CentOS-7-OpenStack liberty Testing
baseurl=http://buildlogs.centos.org/centos/7/cloud/$basearch/openstack-liberty/
gpgcheck=0
enabled=0

# Alternatively, install the OpenStack repository package provided for CentOS 7:

# yum install -y centos-release-openstack-liberty

2. Install OpenStack packages 1) Control node package installation

## Base
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install -y centos-release-openstack-liberty
yum install -y python-openstackclient

## MySQL
yum install -y mariadb mariadb-server MySQL-python

## RabbitMQ
yum install -y rabbitmq-server

## Keystone
yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached

## Glance
yum install -y openstack-glance python-glance python-glanceclient

## Nova
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

## Neutron
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset

## Dashboard
yum install -y openstack-dashboard

## Cinder
yum install -y openstack-cinder python-cinderclient

2) Compute node package installation

## Base
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install centos-release-openstack-liberty
yum install python-openstackclient

## Nova
yum install -y openstack-nova-compute sysfsutils

## Neutron
yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset

## Cinder
yum install -y openstack-cinder python-cinderclient targetcli python-oslo-policy

1.5 Install the SQL database 1. Install the database

[root@controller ~]# yum install mariadb mariadb-server MySQL-python

[root@controller ~]# vi /etc/my.cnf.d/mariadb_openstack.cnf

[mysqld]
bind-address = 192.168.0.231
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
max_connections=1000

[root@controller ~]# systemctl enable mariadb.service

[root@controller ~]# systemctl start mariadb.service

2. Secure the installation and set the root password to openstack

[root@controller ~]# mysql_secure_installation

3. Adjust the maximum number of database connections:

[root@controller ~]# vi /usr/lib/systemd/system/mariadb.service

Add the following two parameter lines under the [Service] section:

LimitNOFILE=10000
LimitNPROC=10000

systemctl --system daemon-reload

systemctl restart mariadb.service

mysql -uroot -popenstack

MariaDB [(none)]> show variables like 'max_connections';

4. Create the databases

# mysql -u root -p

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'openstack';

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'openstack';

CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'openstack';

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';

FLUSH PRIVILEGES;

SHOW DATABASES;
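As an optional check (not in the original record), the grants can be confirmed by logging in with one of the service accounts; each account should see only information_schema and its own database:

# mysql -h 192.168.0.231 -ukeystone -popenstack -e "SHOW DATABASES;"
# mysql -h 192.168.0.231 -unova -popenstack -e "SHOW DATABASES;"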

1.6 Message queue. RabbitMQ supports clustering. 1) Install and start rabbitmq-server on the control node; it listens on port 5672

yum install rabbitmq-server

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service

# netstat -tunlp | grep 5672
tcp   0  0 0.0.0.0:15672   0.0.0.0:*   LISTEN   1694/beam.smp
tcp   0  0 0.0.0.0:25672   0.0.0.0:*   LISTEN   1694/beam.smp
tcp6  0  0 :::5672         :::*        LISTEN   1694/beam.smp

2) Add the openstack user:

# rabbitmqctl add_user openstack openstack

3) Grant the openstack user configure, write, and read permissions

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

4) List the installed plug-ins

# rabbitmq-plugins list

5) Enable the management plug-in

# rabbitmq-plugins enable rabbitmq_management

The following plugins have been enabled:
  mochiweb
  webmachine
  rabbitmq_web_dispatch
  amqp_client
  rabbitmq_management_agent
  rabbitmq_management
Applying plugin configuration to rabbit@controller... started 6 plugins.

# rabbitmq-plugins list

# The following plug-ins are now enabled:
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status:   * = running on rabbit@controller
 |/
[e*] amqp_client                 3.6.5
[e*] mochiweb                    2.13.1
[E*] rabbitmq_management         3.6.5
[e*] rabbitmq_management_agent   3.6.5
[e*] rabbitmq_web_dispatch       3.6.5
[e*] webmachine                  1.10.3

6) Restart the service

# systemctl restart rabbitmq-server.service

The web management interface now listens on port 15672.

7) Log in and make the openstack user an administrator

http://192.168.0.231:15672

The default login is guest/guest, which is also an administrator account.

Note: in the management UI, set the openstack user's tags to administrator (user openstack, password openstack).
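If you prefer the command line to the web UI, the same administrator tag can be applied with rabbitmqctl (equivalent to the note above):

# rabbitmqctl set_user_tags openstack administrator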

2. Configure keystone

Keystone is installed on the controller node. To improve service performance, Apache httpd serves the web requests and memcached caches token information.

2.1 install the keystone package

# yum install openstack-keystone httpd mod_wsgi memcached python-memcached

2.2 configure keystone

Note: the default keystone configuration may differ between versions.

openssl rand -hex 10

c885b63d0ce5760ff23e

This random value becomes the admin_token value.
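One way to write the generated value into the configuration without editing the file by hand is a small sed one-liner (a sketch; it assumes the stock commented-out admin_token line is still present in keystone.conf):

ADMIN_TOKEN=$(openssl rand -hex 10)
sed -i "s/^#*\s*admin_token\s*=.*/admin_token = $ADMIN_TOKEN/" /etc/keystone/keystone.conf
grep ^admin_token /etc/keystone/keystone.conf    # confirm the value that was written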

cat /etc/keystone/keystone.conf | grep -v "^#" | grep -v "^$"

[DEFAULT]
admin_token = c885b63d0ce5760ff23e
[database]
connection = mysql://keystone:openstack@192.168.0.231/keystone
[memcache]
servers = 192.168.0.231:11211
[revoke]
driver = sql
[token]
provider = uuid
driver = memcache

2.3 Initialize the database 1) Initialize the database

# chown -R keystone:keystone /var/log/keystone

# su -s /bin/sh -c "keystone-manage db_sync" keystone

# A keystone.log file is created under /var/log/keystone/; keystone writes to it when it starts.

2) Verify the database

# mysql -h 192.168.0.231 -ukeystone -popenstack -e "use keystone;show tables;"

3) Start memcached

systemctl enable memcached.service

systemctl start memcached.service

# netstat -tunlp | grep 11211
tcp   0  0 127.0.0.1:11211   0.0.0.0:*   LISTEN   3288/memcached
tcp6  0  0 ::1:11211         :::*        LISTEN   3288/memcached
udp   0  0 127.0.0.1:11211   0.0.0.0:*            3288/memcached
udp6  0  0 ::1:11211         :::*                 3288/memcached

2.4 Configure the HTTP server 1) Set the server name

# vi /etc/httpd/conf/httpd.conf

ServerName 192.168.0.231:80

2) Add a virtual host configuration for keystone

# vi /etc/httpd/conf.d/wsgi-keystone.conf

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

3) Start the HTTP server

systemctl enable httpd.service

systemctl start httpd.service

Verify:

[root@controller ~]# ss -ntl | grep -E "5000|35357"
LISTEN   0   128   *:35357   *:*
LISTEN   0   128   *:5000    *:*

2.5 Register service entities and API endpoints 2.5.1 Configure environment variables

[root@controller ~]#

export OS_TOKEN=c885b63d0ce5760ff23e

export OS_URL=http://192.168.0.231:35357/v3

export OS_IDENTITY_API_VERSION=3

2.5.2 keystone service registration

[root@controller ~]# openstack service create --name keystone --description "OpenStack Identity" identity

2.5.3 Register the keystone API endpoints: admin, public, and internal

[root@controller ~]# openstack endpoint create --region RegionOne identity public http://192.168.0.231:5000/v2.0

[root@controller ~]# openstack endpoint create --region RegionOne identity internal http://192.168.0.231:5000/v2.0

[root@controller ~]# openstack endpoint create --region RegionOne identity admin http://192.168.0.231:35357/v2.0

[root@controller ~]# openstack endpoint list

+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                             |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 05a5e9b559664d848b45d353d12594c1 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.0.231:35357/v2.0 |
| 9a240664c4dc438aa8b9f892c668cb27 | RegionOne | keystone     | identity     | True    | internal  | http://192.168.0.231:5000/v2.0  |
| e63642b80e4f45b69866825e9e1b9837 | RegionOne | keystone     | identity     | True    | public    | http://192.168.0.231:5000/v2.0  |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+

2.6 Create projects, users and roles 2.6.1 Create the admin project, admin user, and admin role, then associate them

[root@controller ~]# openstack project create --domain default --description "Admin Project" admin

[root@controller ~]# openstack user create --domain default --password=openstack admin

[root@controller ~]# openstack role create admin

[root@controller ~]# openstack role add --project admin --user admin admin

[root@controller ~]# openstack user list

+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| e2bae88d31b54e4ab1a4cb2251da8a6a | admin |
+----------------------------------+-------+

2.6.2 Create the demo project, demo user, and user role, then associate them

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo

[root@controller ~]# openstack user create --domain default --password=openstack demo

[root@controller ~]# openstack role create user

[root@controller ~]# openstack role add --project demo --user demo user

[root@controller ~]# openstack user list

+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 4151e2b9b78842d282250d4cfb31ebba | demo  |
| 508b377f6f3a478f80a5a019e2c5b10a | admin |
+----------------------------------+-------+

2.6.3 Create the service project

[root@controller ~]# openstack project create --domain default --description "Service Project" service

View the projects:

[root@controller ~]# openstack project list

+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 184655bf46de4c3fbc0f8f13d1d9bfb8 | service |
| 3bfa1c4208d7482a8f21709d458f924e | demo    |
| 77f86bae2d344a658f26f71d03933c45 | admin   |
+----------------------------------+---------+

If an endpoint was registered incorrectly, it can be removed by ID:

[root@linux-node1 ~]# openstack endpoint delete ID

2.7 Verify keystone with password authentication

To verify, temporarily switch to username/password authentication; unset the token environment variables since token authentication is no longer needed.

[root@controller ~]# unset OS_TOKEN OS_URL

Request a token for the admin user

[root@controller ~]# openstack --os-auth-url http://192.168.0.231:35357/v3 \
  --os-project-domain-id default --os-user-domain-id default \
  --os-project-name admin --os-username admin --os-auth-type password token issue

Password:

+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2017-05-10T03:12:15.764769Z      |
| id         | b28410f9c6314cd8aebeca0beb478bf9 |
| project_id | 79d295e81e5a4255a02a8ea26ae4606a |
| user_id    | 4015e1151aee4ab7811f320378ce6031 |
+------------+----------------------------------+

Request a token for the demo user

[root@controller ~]# openstack --os-auth-url http://192.168.0.231:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue

Password:

+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2017-05-10T03:12:59.252178Z      |
| id         | 110b9597c5fd49ac9ac3c1957648ede7 |
| project_id | ce0af495eb844e199db649d7f7baccb4 |
| user_id    | afd908684eee42aaa7d73e22671eee24 |
+------------+----------------------------------+

2.8 Use environment variable scripts 1. Create an environment script for the admin user

[root@controller ~]# vim admin-openrc.sh

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://192.168.0.231:35357/v3
export OS_IDENTITY_API_VERSION=3

2. Create an environment script for the demo user

[root@controller ~]# vim demo-openrc.sh

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://192.168.0.231:5000/v3
export OS_IDENTITY_API_VERSION=3

3. Use the scripts to test

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# openstack token issue

+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2017-05-10T03:19:08.928697Z      |
| id         | df25646c15cb433ab7251dcd0308ecbf |
| project_id | 79d295e81e5a4255a02a8ea26ae4606a |
| user_id    | 4015e1151aee4ab7811f320378ce6031 |
+------------+----------------------------------+

3. Install the image service (glance)

Glance provides services for discovering, registering, and retrieving virtual machine images. Images are stored in the /var/lib/glance/images/ directory by default.

3.1 Prepare the environment; use admin credentials

[root@controller ~] # source admin-openrc.sh

1) Create the glance user and associate the service project, admin role, and glance user

[root@controller ~]# openstack user create --domain default --password=openstack glance

[root@controller ~]# openstack role add --project service --user glance admin

2) Register a service named image

[root@controller ~]# openstack service create --name glance --description "OpenStack Image service" image

3) Register the API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne image public http://192.168.0.231:9292

[root@controller ~]# openstack endpoint create --region RegionOne image internal http://192.168.0.231:9292

[root@controller ~]# openstack endpoint create --region RegionOne image admin http://192.168.0.231:9292

3.2 Install and configure 3.2.1 Install the packages

[root@controller ~]# yum install openstack-glance python-glance python-glanceclient

3.2.2 Modify the glance configuration 1. Configure the image creation, deletion, and reclamation service (glance-api)

cat /etc/glance/glance-api.conf | grep -v "^#" | grep -v "^$"

[DEFAULT]
verbose=True
notification_driver = noop
[database]
connection = mysql://glance:openstack@192.168.0.231/glance
[glance_store]
default_store=file
filesystem_store_datadir=/var/lib/glance/images/
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=glance
password=openstack
[paste_deploy]
flavor=keystone

2. Configure the image registry service (glance-registry)

cat /etc/glance/glance-registry.conf | grep -v "^#" | grep -v "^$"

[DEFAULT]
verbose=True
notification_driver = noop
[database]
connection = mysql://glance:openstack@192.168.0.231/glance
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=glance
password=openstack
[paste_deploy]
flavor=keystone

3.2.3 Import data 1) Initialize the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

No handlers could be found for logger "oslo_config.cfg"

/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py:450: Warning: Duplicate index `ix_image_properties_image_id_name`. This is deprecated and will be disallowed in a future release.

cursor.execute(statement, parameters)

This warning can be ignored; use mysql to confirm that the database login works and the tables were created.

2) Verify the database

# mysql -h 192.168.0.231 -uglance -popenstack -e "use glance;show tables;"

3.3 Start the openstack-glance services

# systemctl enable openstack-glance-api.service openstack-glance-registry.service

# systemctl start openstack-glance-api.service openstack-glance-registry.service

Verify:

[root@controller ~]# ss -ntl | grep -E "9191|9292"
LISTEN   0   128   *:9292   *:*
LISTEN   0   128   *:9191   *:*

3.4 glance installation verification

We use a small system image to verify that glance was deployed successfully.

1. Modify the environment variable scripts

[root@controller ~]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
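If the CentOS cloud image is slow to download, a much smaller CirrOS test image can be used for the same verification (an alternative, not part of the original record):

[root@controller ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
[root@controller ~]# glance image-create --name "cirros-0.3.4" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress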

2. Upload the image to glance

[root@controller ~]# glance image-create --name "CentOS-7-x86_64" --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare \
  --visibility public --progress

[=============================>] 100%

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 212b6a881800cad892347073f0de2117     |
| container_format | bare                                 |
| created_at       | 2017-05-22T10:13:24Z                 |
| disk_format      | qcow2                                |
| id               | e7e2316a-f585-488e-9fd9-85ce75b098d4 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | CentOS-7-x86_64                      |
| owner            | be420231d13848809da36178cbac4d22     |
| protected        | False                                |
| size             | 741539840                            |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2017-05-22T10:13:31Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+

3. View the uploaded image

[root@controller ~]# glance image-list

+--------------------------------------+-----------------+
| ID                                   | Name            |
+--------------------------------------+-----------------+
| 2ac90c0c-b923-43ff-8f99-294195a64ced | CentOS-7-x86_64 |
+--------------------------------------+-----------------+

View the files on disk:

[root@controller ~]# ll /var/lib/glance/images/

total 12980

-rw-r-----. 1 glance glance 1569390592 Aug 26 12:50 2ac90c0c-b923-43ff-8f99-294195a64ced

Install the Compute service (nova) 4.1 Install and configure the control node (controller)

This section describes the deployment of nova on the control node (controller).

4.2 On the control node, create the nova user and register the service 1. Use admin credentials

[root@controller ~]# source admin-openrc.sh

2. Create the nova user, role, and service

[root@controller ~]# openstack user create --domain default --password=openstack nova

[root@controller ~]# openstack role add --project service --user nova admin

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

3. Register the API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://192.168.0.231:8774/v2/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://192.168.0.231:8774/v2/%\(tenant_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://192.168.0.231:8774/v2/%\(tenant_id\)s

4.3 Install and configure components on the control node 4.3.1 Install the components

# yum install openstack-nova-api openstack-nova-cert \
  openstack-nova-conductor openstack-nova-console \
  openstack-nova-novncproxy openstack-nova-scheduler \
  python-novaclient

4.3.2 Modify the nova configuration

cat /etc/nova/nova.conf | grep -v "^#" | grep -v "^$"

[DEFAULT]
my_ip=192.168.0.231
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
allow_resize_to_same_host=True
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
security_group_api=neutron
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
firewall_driver=nova.virt.firewall.NoopFirewallDriver
verbose=true
rpc_backend=rabbit
[database]
connection=mysql://nova:openstack@192.168.0.231/nova
[glance]
host=192.168.0.231
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=nova
password=openstack
[libvirt]
virt_type=kvm
[neutron]
url=http://192.168.0.231:9696
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
region_name=RegionOne
project_name=service
username=neutron
password=openstack
service_metadata_proxy=true
metadata_proxy_shared_secret=METADATA_SECRET
lock_path=/var/lib/nova/tmp
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_rabbit]
rabbit_host=192.168.0.231
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
[vnc]
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip

4.3.3 Import data

[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

No handlers could be found for logger "oslo_config.cfg"

/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py:450: Warning: Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.

cursor.execute(statement, parameters)

/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py:450: Warning: Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.

cursor.execute(statement, parameters)

# mysql -h 192.168.0.231 -unova -popenstack -e "use nova;show tables;"

4.3.4 Complete the installation

# systemctl enable openstack-nova-api.service \
  openstack-nova-cert.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service \
  openstack-nova-cert.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

[root@controller ~]# openstack host list

+------------+-------------+----------+
| Host Name  | Service     | Zone     |
+------------+-------------+----------+
| controller | consoleauth | internal |   // consoleauth handles console authentication
| controller | conductor   | internal |   // conductor handles database access
| controller | cert        | internal |   // cert handles certificate management
| controller | scheduler   | internal |   // scheduler handles instance scheduling
+------------+-------------+----------+

4.4 Install and configure the compute node (compute)

This section describes the deployment of nova on the compute node (compute).

4.4.1 Install packages on the compute node

[root@compute1 ~]# yum install openstack-nova-compute sysfsutils

4.4.2 Copy nova.conf from the control node and edit /etc/nova/nova.conf

[DEFAULT]
my_ip=192.168.0.232
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.0.231:6080/vnc_auto.html
keymap=en-us
[glance]
host = 192.168.0.231
[libvirt]
virt_type=kvm

Check the effective configuration:

[root@compute1 ~]# cat /etc/nova/nova.conf | grep -v "^#" | grep -v "^$"

[DEFAULT]
my_ip=192.168.0.232
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
allow_resize_to_same_host=True
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
security_group_api=neutron
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
firewall_driver=nova.virt.firewall.NoopFirewallDriver
verbose=true
rpc_backend=rabbit
[database]
connection=mysql://nova:openstack@192.168.0.231/nova
[glance]
host=192.168.0.231
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=nova
password=openstack
[libvirt]
virt_type=kvm
inject_password = true
inject_key = true
[neutron]
url=http://192.168.0.231:9696
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
region_name=RegionOne
project_name=service
username=neutron
password=openstack
service_metadata_proxy=true
metadata_proxy_shared_secret=METADATA_SECRET
lock_path=/var/lib/nova/tmp
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_rabbit]
rabbit_host=192.168.0.231
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
[vnc]
novncproxy_base_url=http://192.168.0.231:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$my_ip
enabled=true

Check that the server supports hardware virtualization:

[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

4

If the number displayed is 0, hardware virtualization is not supported.
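In that case the standard guidance is to fall back to software emulation for libvirt (slower than KVM; a fallback suggestion, not part of the original record):

# vi /etc/nova/nova.conf
[libvirt]
virt_type=qemu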

4.4.3 Complete the installation and start the services

[root@compute1 ~] # systemctl enable libvirtd.service openstack-nova-compute.service

[root@compute1 ~] # systemctl start libvirtd.service openstack-nova-compute.service

4.4.4 Verify; copy over the environment variable scripts

[root@compute1 ~]# scp controller:~/*openrc.sh .

root@controller's password:

admin-openrc.sh 100% 289 0.3KB/s 00:00

demo-openrc.sh 100% 285 0.3KB/s 00:00

[root@compute1 ~] # source admin-openrc.sh

1. Check whether the installation succeeded and whether registration and glance work

[root@compute1 ~]# nova image-list

+--------------------------------------+-----------------+--------+--------+
| ID                                   | Name            | Status | Server |
+--------------------------------------+-----------------+--------+--------+
| 2ac90c0c-b923-43ff-8f99-294195a64ced | CentOS-6-x86_64 | ACTIVE |        |
+--------------------------------------+-----------------+--------+--------+

[root@compute1 ~]# openstack host list

+------------+-------------+----------+
| Host Name  | Service     | Zone     |
+------------+-------------+----------+
| controller | consoleauth | internal |
| controller | conductor   | internal |
| controller | cert        | internal |
| controller | scheduler   | internal |
| compute1   | compute     | nova     |
+------------+-------------+----------+

2. View the nova service components

[root@compute1 ~]# nova service-list

+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | 2017-05-10T09:17:29.000000 | -               |
| 2  | nova-conductor   | controller | internal | enabled | up    | 2017-05-10T09:17:31.000000 | -               |
| 4  | nova-cert        | controller | internal | enabled | up    | 2017-05-10T09:17:29.000000 | -               |
| 5  | nova-scheduler   | controller | internal | enabled | up    | 2017-05-10T09:17:29.000000 | -               |
| 6  | nova-compute     | compute1   | nova     | enabled | up    | 2017-05-10T09:17:33.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

3. View API endpoints (you can ignore WARNING-level information)

[root@compute1 ~] # nova endpoints

Install the Networking service (neutron) 5.1 Install and configure the controller node 1. Use admin credentials

[root@controller ~] # source admin-openrc.sh

[root@controller ~]# openstack user create --domain default --password=openstack neutron

[root@controller ~]# openstack role add --project service --user neutron admin

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

[root@controller ~]# openstack endpoint create --region RegionOne network public http://192.168.0.231:9696

[root@controller ~]# openstack endpoint create --region RegionOne network internal http://192.168.0.231:9696

[root@controller ~]# openstack endpoint create --region RegionOne network admin http://192.168.0.231:9696

2. Configure the network; this example uses a flat network (1) Install the related components

[root@controller ~] # yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset

(2) Configure the neutron server component

The server component configuration covers the database, authentication, the message queue, topology change notifications, and the plug-in.

vi /etc/neutron/neutron.conf

[DEFAULT]
state_path = /var/lib/neutron
core_plugin = ml2
service_plugins = router
rpc_backend=rabbit
auth_strategy=keystone
notify_nova_on_port_status_changes=True
notify_nova_on_port_data_changes=True
nova_url=http://192.168.0.231:8774/v2
verbose=True
[database]
connection = mysql://neutron:openstack@192.168.0.231/neutron
[oslo_messaging_rabbit]
rabbit_host = 192.168.0.231
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
[oslo_concurrency]
lock_path = $state_path/lock
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=neutron
password=openstack
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
[nova]
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
region_name=RegionOne
project_name=service
username=nova
password=openstack

(3) Configure the ML2 plug-in (layer-2 network plug-in)

vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
# Note: once ML2 is enabled, removing values from type_drivers can corrupt the database
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = vlan,gre,vxlan,geneve
mechanism_drivers = openvswitch,linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = physnet1
[securitygroup]
enable_ipset = True

(4) Configure the Linux bridge agent

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = physnet1:eth0
[vxlan]
enable_vxlan = False
[agent]
prevent_arp_spoofing = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

(5) Configure the DHCP agent

vi /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

(6) Configure the metadata agent

vi /etc/neutron/metadata_agent.ini

[DEFAULT]
auth_uri = http://192.168.0.231:5000
auth_url = http://192.168.0.231:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack
nova_metadata_ip = 192.168.0.231
metadata_proxy_shared_secret = METADATA_SECRET
verbose = True
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
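METADATA_SECRET above is a placeholder. It should be replaced with a random string, and the same value must be used for metadata_proxy_shared_secret in the [neutron] section of nova.conf on the controller; for example (a suggestion, not part of the original record):

# openssl rand -hex 10    # generate one value and paste it into both files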

(7) Complete the installation and create the symlink

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Synchronize the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the nova-api service

[root@controller ~] # systemctl restart openstack-nova-api.service

Start the services and enable them at boot

[root@controller ~]# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

[root@controller ~]# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# neutron agent-list

The command can take 60 seconds or more before the agents show up.

+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 5d05a4fc-3a5e-49ef-b9da-28c7f4969532 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| 6e1979c0-c576-42d1-a7d7-5d28cfa74793 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
| f4af7059-0f36-430a-beee-f168ff55fd90 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

5.2 Install and configure the compute node 1. Install the components

# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset

2. Common component configuration

The common networking component configuration covers authentication, the message queue, and the plug-ins; the files are copied directly from the control node.

[root@controller ~]# scp /etc/neutron/neutron.conf 192.168.0.232:/etc/neutron/

[root@controller ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.0.232:/etc/neutron/plugins/ml2/

[root@controller ~]# scp /etc/neutron/plugins/ml2/ml2_conf.ini 192.168.0.232:/etc/neutron/plugins/ml2/

Complete the installation and create the symlink

[root@compute1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

[root@compute1 ~]# vi /etc/neutron/neutron.conf

[database]

# comment out all settings in this section; the compute node does not need to connect to the database directly

Restart the compute service:

[root@compute1 ~]# systemctl restart openstack-nova-compute.service

Start the Linux bridge agent and enable it at boot

[root@compute1 ~] # systemctl enable neutron-linuxbridge-agent.service

[root@compute1 ~] # systemctl start neutron-linuxbridge-agent.service

5.3 Verification

The following commands are executed on the controller node.

[root@controller ~] # source admin-openrc.sh

[root@controller ~] # neutron ext-list

+-----------------------+-----------------------------------------------+
| alias                 | name                                          |
+-----------------------+-----------------------------------------------+
| dns-integration       | DNS Integration                               |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| binding               | Port Binding                                  |
| agent                 | agent                                         |
| subnet_allocation     | Subnet Allocation                             |
| l3_agent_scheduler    | L3 Agent Scheduler                            |
| external-net          | Neutron external network                      |
| flavors               | Neutron Service Flavors                       |
| net-mtu               | Network MTU                                   |
| quotas                | Quota management support                      |
| l3-ha                 | HA Router extension                           |
| provider              | Provider Network                              |
| multi-provider        | Multi Provider Network                        |
| extraroute            | Neutron Extra Route                           |
| router                | Neutron L3 Router                             |
| extra_dhcp_opt        | Neutron Extra DHCP opts                       |
| security-group        | security-group                                |
| dhcp_agent_scheduler  | DHCP Agent Scheduler                          |
| rbac-policies         | RBAC Policies                                 |
| port-security         | Port Security                                 |
| allowed-address-pairs | Allowed Address Pairs                         |
| dvr                   | Distributed Virtual Router                    |
+-----------------------+-----------------------------------------------+

[root@controller ~] # neutron agent-list

+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 5d05a4fc-3a5e-49ef-b9da-28c7f4969532 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| 6e1979c0-c576-42d1-a7d7-5d28cfa74793 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
| f0aa7ff3-01c9-450f-bcc4-63ffee250bd7 | Linux bridge agent | compute1   | :-)   | True           | neutron-linuxbridge-agent |
| f4af7059-0f36-430a-beee-f168ff55fd90 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

The output should show four agents: three on the controller node and one on the compute1 node.

Create a virtual machine instance 6.1 Create a virtual network 6.1.1 Create a shared network

[root@controller ~]# source admin-openrc.sh

[root@controller ~]# neutron net-create public --shared --provider:physical_network physnet1 --provider:network_type flat

Created a new network:

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 6759f3eb-a4c8-4503-b92b-da6daacf0ab4 |
| mtu                       | 0                                    |
| name                      | public                               |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 10952875490e43938d80d921337cb053     |
+---------------------------+--------------------------------------+

--shared means that all projects are allowed to use this network

6.1.2 Create a subnet

[root@controller ~]# neutron subnet-create public 192.168.0.0/24 --name public-subunet --allocation-pool start=192.168.0.200,end=192.168.0.210 \
  --dns-nameserver 202.100.192.68 --gateway 192.168.0.253

Created a new subnet:

+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "192.168.0.200", "end": "192.168.0.210"} |
| cidr              | 192.168.0.0/24                                     |
| dns_nameservers   | 202.100.192.68                                     |
| enable_dhcp       | True                                               |
| gateway_ip        | 192.168.0.253                                      |
| host_routes       |                                                    |
| id                | da75b2db-56f4-45d2-b3f3-2ccf172f8798               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | public-subunet                                     |
| network_id        | 2e098da8-70f9-40bc-a393-868ed9a446cf               |
| subnetpool_id     |                                                    |
| tenant_id         | be420231d13848809da36178cbac4d22                   |
+-------------------+----------------------------------------------------+

6.1.3 View the subnet

[root@controller ~]# neutron net-list

+--------------------------------------+--------+-----------------------------------------------------+
| id                                   | name   | subnets                                             |
+--------------------------------------+--------+-----------------------------------------------------+
| 2e098da8-70f9-40bc-a393-868ed9a446cf | public | da75b2db-56f4-45d2-b3f3-2ccf172f8798 192.168.0.0/24 |
+--------------------------------------+--------+-----------------------------------------------------+

[root@controller ~]# neutron subnet-list

+--------------------------------------+----------------+----------------+-----------------------------------------------------+
| id                                   | name           | cidr           | allocation_pools                                    |
+--------------------------------------+----------------+----------------+-----------------------------------------------------+
| da75b2db-56f4-45d2-b3f3-2ccf172f8798 | public-subunet | 192.168.0.0/24 | {"start": "192.168.0.200", "end": "192.168.0.210"} |
+--------------------------------------+----------------+----------------+-----------------------------------------------------+

6.2 Generate a key pair 6.2.1 Use admin credentials

[root@controller ~]# source admin-openrc.sh

6.2.2 Generate the key pair

If you already have a key, there is no need to regenerate one with ssh-keygen.

[root@controller ~]# ssh-keygen -q -N ""

Enter file in which to save the key (/root/.ssh/id_rsa):

[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey

6.2.3 Check which keys are available

[root@controller ~]# nova keypair-list

+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | bc:ca:8e:bb:61:01:7f:8a:ab:5e:d8:b2:2c:35:b7:83 |
+-------+-------------------------------------------------+

6.3 Add security group rules

By default, the default security group applies to all instances, and its firewall rules deny all remote access. We usually allow at least ICMP and SSH.

[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

[root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
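As an optional check (not in the original record), the rules now in effect can be listed:

[root@controller ~]# nova secgroup-list-rules default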

The dashboard (horizon) 7.1 Access instances through the virtual console. The web-based management interface is generally installed on the controller node.

[root@controller ~]# yum install openstack-dashboard

7.2 Configure the dashboard parameters

[root@controller ~]# vi /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*', ]

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "volume": 2,
}

OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}

# time zone setting
TIME_ZONE = "Asia/Shanghai"

# allow setting the password when creating a virtual machine
OPENSTACK_HYPERVISOR_FEATURES = {
    'can_set_mount_point': True,
    'can_set_password': True,
    'requires_keypair': True,
}

7.3 Enable the services at boot

[root@controller ~]# systemctl enable httpd.service memcached.service

[root@controller ~]# systemctl restart httpd.service memcached.service

7.4 Verify that the installation succeeded

Open a browser at: http://192.168.0.231/dashboard

Domain: default

User: admin or demo; the password is the one you created.

7.5 Create a virtual machine instance 7.5.1 Instance creation process

Note: for the officially downloaded CentOS 7 cloud image, the password is unknown, so you need to specify a password when creating the instance. By default the image allows SSH logins, but root cannot log in over SSH directly; you need to lift that restriction when you create the instance.

If you want to log in over SSH with a password, modify the sshd settings with a script such as the following; testing with the cirros image is not recommended here.

#!/bin/sh

sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config

systemctl restart sshd
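If you would rather bake the password and SSH settings into the image before uploading it to glance, virt-customize from libguestfs-tools can do this (an alternative approach, assuming libguestfs-tools is installed; not part of the original walkthrough):

yum install -y libguestfs-tools
virt-customize -a CentOS-7-x86_64-GenericCloud.qcow2 \
  --root-password password:openstack \
  --run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config'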

7.5.2 virtual machine console login

Test SSH login

7.5.3 Summary

Working through the OpenStack installation helps clarify how each OpenStack component works and how it is implemented, and the deployment can be extended from this base.
