
Install the openstack mitaka version on centos7

2025-01-18 Update From: SLTechnology News&Howtos


OpenStack is a behemoth, and not easy to digest, so once you have a basic understanding of it you should deploy it. Although OpenStack can be installed with one-click tools such as RDO or devstack, it is worth installing it from beginning to end by hand at least once after you have some experience; otherwise you will not stay calm when errors appear and need solving. So, once you have a basic understanding of OpenStack, start installing it.

Note: OpenStack's official documentation is actually very well written, but my English is not great, so I wrote these notes on the basis of the official documentation.

Reference: http://docs.openstack.org/mitaka/install-guide-rdo/

First of all, make a general plan: how many nodes are needed, which operating system to choose, and how to divide the network.

Here is my general plan.

Number of nodes: 2 (control node, compute node)

Operating system: CentOS Linux release 7.2.1511 (Core)

Network configuration:

Control node: 10.0.0.101 192.168.15.101

Compute node: 10.0.0.102 192.168.15.102

Prerequisites:

The following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:

Controller Node: 1 processor, 4 GB memory, and 5 GB storage

Compute Node: 1 processor, 2 GB memory, and 10 GB storage


Reference: http://docs.openstack.org/mitaka/install-guide-rdo/environment.html

Note: if you create the operating systems and configure the network manually, step by step, I have to look down on you ~ ~ study Vagrant: with the configuration file below, a single command generates both virtual machines and configures the network. For a short Vagrant tutorial, see: http://youerning.blog.51cto.com/10513771/1745102

# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "centos7"
  node_servers = {
    :control => ['10.0.0.101', '192.168.15.101'],
    :compute => ['10.0.0.102', '192.168.15.102']
  }
  node_servers.each do |node_name, node_ip|
    config.vm.define node_name do |node_config|
      node_config.vm.host_name = node_name.to_s
      node_config.vm.network :private_network, ip: node_ip[0]
      node_config.vm.network :private_network, ip: node_ip[1], virtualbox__intnet: true
      config.vm.boot_timeout = 300
      node_config.vm.provider "virtualbox" do |v|
        v.memory = 4096
        v.cpus = 1
      end
    end
  end
end

After a single vagrant up command and a short wait, the two virtual machines come up and our environment is ready.

The environment is as follows

Operating system: CentOS Linux release 7.2.1511 (Core)

Network configuration:

Control node: 10.0.0.101 192.168.15.101

Compute node: 10.0.0.102 192.168.15.102

Note: the above config.vm.box = "centos7". You first need to have a box of centos7.

Before we start the deployment, let's straighten out the openstack installation steps.

The first is the preparation of the software environment. We need to configure some general software and source repositories, which are basically as follows.

NTP server

Control node, other node

Openstack installation package repository

Common components:

SQL database ==> MariaDB

NoSQL database ==> MongoDB (not required by the basic components)

Message queue ==> RabbitMQ

Memcached

Then there are the various components under the whole framework of openstack. The basic components are as follows.

Authentication service ==> Keystone

Image service ==> Glance

Computing resource service ==> Nova

Network resource service ==> Neutron

Dashboard ==> Horizon

Block storage service ==> Cinder

Other storage services are as follows

File sharing service ==> Manila

Object storage service ==> Swift

Other components, as follows

Orchestration service ==> Heat

Telemetry service ==> Ceilometer

Database service ==> Trove

Environmental preparation

Domain name resolution:

Edit the hosts file in each node and add the following configuration

10.0.0.101 controller

10.0.0.102 compute
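The two entries above can also be appended from the shell instead of edited by hand. A minimal sketch, run here against a scratch copy so it is safe to replay; on the real nodes point HOSTS at /etc/hosts:

```shell
# Append the name-resolution entries only if they are not already present.
# HOSTS points at a scratch file here; use /etc/hosts on the real nodes.
HOSTS=/tmp/hosts.test
: > "$HOSTS"   # start from an empty scratch file for this demo
for entry in "10.0.0.101 controller" "10.0.0.102 compute"; do
  grep -qxF "$entry" "$HOSTS" || echo "$entry" >> "$HOSTS"
done
cat "$HOSTS"
```

Because of the grep guard, running it twice does not duplicate the lines.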

NTP time server

Control node

1) install the chrony package

# yum install chrony

2) Edit the configuration file /etc/chrony.conf and add the following. 202.108.6.95 can be changed to suit your needs.

server 202.108.6.95 iburst

allow 10.0.0.0/24

3) Enable at boot and start

# systemctl enable chronyd.service

# systemctl start chronyd.service

Other nodes

1) install the chrony package

# yum install chrony

2) Edit the configuration file /etc/chrony.conf and add the following

server controller iburst

allow 10.0.0.0/24

3) Enable at boot and start

# systemctl enable chronyd.service

# systemctl start chronyd.service

Verify:

Control node

# chronyc sources

210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11                    2   7    12   137  -2814us[-3000us] +/-   43ms
^* 192.0.2.12                    2   6   177    46    +17us[  -23us] +/-   68ms

Other nodes

# chronyc sources

210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   9   377   421    +15us[  -87us] +/-   15ms

Openstack installation package repository

Install the yum repository for the corresponding OpenStack release

# yum install centos-release-openstack-mitaka

System update

# yum upgrade

Note: if the system kernel is updated, it needs to be rebooted.

Install python-openstackclient and openstack-selinux

# yum install python-openstackclient
# yum install openstack-selinux

Note: if you hit an error like "Package does not match intended download", run yum clean all, or download the rpm package directly and install it.

Refer to download address: http://ftp.usf.edu/pub/centos/7/cloud/x86_64/openstack-kilo/common/

SQL database

Installation

# yum install mariadb mariadb-server python2-PyMySQL

Create a /etc/my.cnf.d/openstack.cnf configuration file and add the following

# bind IP, set character set, etc.
[mysqld]
bind-address = 10.0.0.101
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
character-set-server = utf8

Configure startup items, startup, etc.

# systemctl enable mariadb.service
# systemctl start mariadb.service

Initialize the database, create a root password, etc., as follows

# mysql_secure_installation

Enter current password for root (enter for none): [Enter]

Set root password? [Y/n] Y

New password: openstack

Re-enter new password:openstack

Remove anonymous users? [Y/n] Y

Disallow root login remotely? [Y/n] n

Remove test database and access to it? [Y/n] Y

Reload privilege tables now? [Y/n] Y

Message queuing rabbitmq

Installation

# yum install rabbitmq-server

Configure startup items, start

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

Add openstack user

# rabbitmqctl add_user openstack RABBIT_PASS

Set the openstack user's permissions (configure, write, and read, respectively).

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

NoSQL Mongodb

Installation

# yum install mongodb-server mongodb

Configure the /etc/mongod.conf file

bind_ip = 10.0.0.101
# optional: use smaller data files
smallfiles = true

Configure startup items, start

# systemctl enable mongod.service
# systemctl start mongod.service

Memcached

Installation

# yum install memcached python-memcached

Configure startup items, start

# systemctl enable memcached.service
# systemctl start memcached.service

At this point, the software environment of the entire openstack framework is basically done, and the following are the components.

Installing the components is interesting: apart from keystone, the steps are basically the same, and the only real difference is the names specified at creation time. The steps are roughly as follows.

1) configure the database

CREATE DATABASE xxxx;
GRANT ALL PRIVILEGES ON xxxx.* TO 'xxxx'@'localhost' IDENTIFIED BY 'XXXX_DBPASS';
GRANT ALL PRIVILEGES ON xxxx.* TO 'xxxx'@'%' IDENTIFIED BY 'XXXX_DBPASS';
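Since only the names change from component to component, the SQL can be generated rather than retyped. gen_db_sql below is a hypothetical helper, not part of OpenStack:

```shell
# Hypothetical helper: expand the per-component SQL template for a given
# service name and database password placeholder.
gen_db_sql() {
  service=$1
  pass=$2
  cat <<EOF
CREATE DATABASE ${service};
GRANT ALL PRIVILEGES ON ${service}.* TO '${service}'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${service}.* TO '${service}'@'%' IDENTIFIED BY '${pass}';
EOF
}

# On the controller you could pipe this into mysql, e.g.:
#   gen_db_sql glance GLANCE_DBPASS | mysql -u root -p
gen_db_sql glance GLANCE_DBPASS
```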

2) installation

# yum install xxxx

3) configuration file

Configure connections to various services, such as database, rabbitmq, etc.

Authentication configuration

Specific configuration

4) Database synchronization

Create the required table

5) Add startup items and start

# systemctl enable openstack-xxxx.service
# systemctl start openstack-xxxx.service

6) Create user, service, endpoint, etc.

openstack user create xxxx
openstack service create xxxx
openstack endpoint create xxxx

7) Verify that the service works

Note: it is recommended to back up each configuration file before editing it. To save space, edits to configuration files are described in the following shorthand.

[DEFAULT]

...

admin_token = ADMIN_TOKEN

The above means that admin_token = ADMIN_TOKEN is added under the [DEFAULT] section.
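That edit-under-a-section convention can also be scripted. set_ini below is a hypothetical helper written in plain awk (so it works on a stock CentOS image); tools like crudini or openstack-config do the same job more robustly:

```shell
# Hypothetical helper: insert "key = value" right under [section] in an
# ini-style file. Assumes the section header already exists exactly once.
set_ini() {
  file=$1 section=$2 kv=$3
  awk -v sec="[$section]" -v kv="$kv" '
    { print }
    $0 == sec { print kv }   # emit the new line right after the header
  ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

# Demo against a scratch file rather than the real keystone.conf:
cat > /tmp/keystone.conf.demo <<'EOF'
[DEFAULT]

[database]
EOF
set_ini /tmp/keystone.conf.demo DEFAULT "admin_token = ADMIN_TOKEN"
grep -A1 '^\[DEFAULT\]' /tmp/keystone.conf.demo
```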

Installation of each component

Authentication Service Keystone

Configuration database

$ mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

Installation

# yum install openstack-keystone httpd mod_wsgi

Configuration file /etc/keystone/keystone.conf

Admin token

[DEFAULT]
...
admin_token = ADMIN_TOKEN

Database

[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

Token generation method

[token]
...
provider = fernet

Note: the ADMIN_TOKEN above can be generated with the openssl rand -hex 10 command, or you can fill in a custom string.
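For example, 10 random bytes come back hex-encoded as a 20-character token:

```shell
# Generate a random admin token: 10 random bytes, hex-encoded (20 characters).
ADMIN_TOKEN=$(openssl rand -hex 10)
echo "$ADMIN_TOKEN"
```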

Database synchronization

# su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the fernet key.

For more information on how tokens are generated, please see: http://blog.csdn.net/miss_yang_cloud/article/details/49633719

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

Configure Apache

Edit /etc/httpd/conf/httpd.conf

Change the content.

ServerName controller

Create a /etc/httpd/conf.d/wsgi-keystone.conf configuration file and add the following

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Configure startup items, start

# systemctl enable httpd.service
# systemctl start httpd.service

Create service,API endpoint

To save space, configure the admin_token and endpoint url as environment variables.

$ export OS_TOKEN=ADMIN_TOKEN
$ export OS_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3

Create service

$ openstack service create \
  --name keystone --description "OpenStack Identity" identity

Create an endpoint, followed by public,internal,admin

$ openstack endpoint create --region RegionOne \
  identity public http://controller:5000/v3
$ openstack endpoint create --region RegionOne \
  identity internal http://controller:5000/v3
$ openstack endpoint create --region RegionOne \
  identity admin http://controller:35357/v3

Create domain, project, user, and role

Create domain

$ openstack domain create --description "Default Domain" default

Create the admin user

$ openstack user create --domain default \
  --password-prompt admin

Create admin role

$ openstack role create admin

Add the admin role to the admin project

$ openstack role add --project admin --user admin admin

Create a service project

$ openstack project create --domain default \
  --description "Service Project" service

Create a demo project

$ openstack project create --domain default \
  --description "Demo Project" demo

Create a demo user

$ openstack user create --domain default \
  --password-prompt demo

Create a user role

$ openstack role create user

Add the user role to the demo project

$ openstack role add --project demo --user demo user

Note: remember the passwords you set when creating the users.

Authenticate admin user

$ unset OS_TOKEN OS_URL
$ openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue

Password:

+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-02-12T20:14:07.056119Z                                     |
| id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
|            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
|            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
| project_id | 343d245e850143a096806dfaefa9afdc                                |
| user_id    | ac3377633149401296f6c0d92d79dc16                                |
+------------+-----------------------------------------------------------------+

Authenticate demo user

$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue

Password:

+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-02-12T20:15:39.014479Z                                     |
| id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
|            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
|            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
| project_id | ed0b60bf607743088218b0a533d5943f                                |
| user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
+------------+-----------------------------------------------------------------+

If the above format is returned, the verification is passed.

Environment variable scripts for the admin and demo users

Normally, parameters such as --os-xxxx are placed in environment variables; create environment scripts so you can switch between the admin and demo users faster.

Create admin-openrc

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Create demo-openrc

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
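A quick sanity check that sourcing such a script really sets the variables; this sketch uses a scratch copy with only a few of the lines above rather than the real demo-openrc:

```shell
# Write a minimal demo-openrc to a scratch path and source it. On the real
# controller you would source the full demo-openrc created above.
cat > /tmp/demo-openrc.test <<'EOF'
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_AUTH_URL=http://controller:5000/v3
EOF
. /tmp/demo-openrc.test
echo "$OS_PROJECT_NAME $OS_USERNAME"
```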

Verify admin here

First, source the script: $ . admin-openrc

$ openstack token issue

+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-02-12T20:44:35.659723Z                                     |
| id         | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h4fIagi5NoEmh31U72SrRv2trl |
|            | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
|            | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E       |
| project_id | 343d245e850143a096806dfaefa9afdc                                |
| user_id    | ac3377633149401296f6c0d92d79dc16                                |
+------------+-----------------------------------------------------------------+

Image service Glance

Configuration database

$ mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

Create service,user,role

$ . admin-openrc
$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin

Create the service and the endpoints, in the order public, internal, admin

$ openstack service create --name glance \
  --description "OpenStack Image" image
$ openstack endpoint create --region RegionOne \
  image public http://controller:9292
$ openstack endpoint create --region RegionOne \
  image internal http://controller:9292
$ openstack endpoint create --region RegionOne \
  image admin http://controller:9292

Installation

# yum install openstack-glance

Configuration file /etc/glance/glance-api.conf

Database

[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

Keystone authentication

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
...
flavor = keystone

Glance storage

[glance_store]
...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Configuration file /etc/glance/glance-registry.conf

Database

[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

Keystone authentication

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]
...
flavor = keystone

Synchronize database

# su -s /bin/sh -c "glance-manage db_sync" glance

Start

# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service

Verification

$ . admin-openrc

Download cirros image

$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Create an image

$ openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

If the following command produces output like this, it succeeded.

$ openstack image list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros |
+--------------------------------------+--------+

Computing Resource Service nova

Control node

Database

$ mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

Create service,user,role

$ . admin-openrc
$ openstack user create --domain default \
  --password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova \
  --description "OpenStack Compute" compute

Create an endpoint, followed by public,internal,admin

$ openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1/%\(tenant_id\)s

Installation

# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler

Configuration file /etc/nova/nova.conf

Enable the APIs and the API database

[DEFAULT]
...
enabled_apis = osapi_compute,metadata

[api_database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

Database

[database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

Rabbitmq queue

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

Keystone authentication

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

Bind ip

[DEFAULT]
...
my_ip = 10.0.0.101

Support for neutron

[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

Vnc configuration

[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

Glance configuration

[glance]
...
api_servers = http://controller:9292

Concurrent lock

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp

Synchronize database

# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova

Start

# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

Computing node

Installation

# yum install openstack-nova-compute

Configuration file /etc/nova/nova.conf

Rabbitmq queue

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

Keystone authentication

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS

Bind ip

[DEFAULT]
...
my_ip = 10.0.0.102

Support for neutron

[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

Configure VNC

[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

Configure Glance

[glance]
...
api_servers = http://controller:9292

Concurrent lock

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp

Virtualization driver

[libvirt]
...
virt_type = qemu

Start

# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

Verification

$ . admin-openrc
$ openstack compute service list

+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
|  2 | nova-scheduler   | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
|  3 | nova-conductor   | controller | internal | enabled | up    | 2016-02-09T23:11:16.000000 |
|  4 | nova-compute     | compute1   | nova     | enabled | up    | 2016-02-09T23:11:20.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

Network Service neutron

Control node

Database

$ mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

Create service,user,role

$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron \
  --description "OpenStack Networking" network

Create an endpoint, followed by public,internal,admin

$ openstack endpoint create --region RegionOne \
  network public http://controller:9696
$ openstack endpoint create --region RegionOne \
  network internal http://controller:9696
$ openstack endpoint create --region RegionOne \
  network admin http://controller:9696

Configure the provider network

Reference: http://docs.openstack.org/mitaka/install-guide-rdo/neutron-controller-install-option1.html

Installation

# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables

Configuration file /etc/neutron/neutron.conf

Database

[database]
...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

Enable the ML2 (layer 2) plug-in and disable other plug-ins

[DEFAULT]
...
core_plugin = ml2
service_plugins =

Rabbitmq queue

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

Keystone authentication

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

Concurrent lock

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp

Configuration file /etc/neutron/plugins/ml2/ml2_conf.ini

Type drivers

[ml2]
...
type_drivers = flat,vlan

Disable self-service networks

[ml2]
...
tenant_network_types =

Enable the Linux bridge mechanism

[ml2]
...
mechanism_drivers = linuxbridge

Enable the port security extension driver

[ml2]
...
extension_drivers = port_security

Flat network

[ml2_type_flat]
...
flat_networks = provider

Enable ipset

[securitygroup]
...
enable_ipset = True

Configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = False

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Note: PROVIDER_INTERFACE_NAME is the name of the provider physical network interface, such as eth1
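To see which interface names actually exist on a node (so you can substitute the right PROVIDER_INTERFACE_NAME), one option on any Linux host:

```shell
# Print the network interface names known to the kernel, one per line.
ls /sys/class/net
```

Pick the interface attached to the provider network from this list.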

Configuration file /etc/neutron/dhcp_agent.ini

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

Configuration file /etc/neutron/metadata_agent.ini

[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

Configuration file /etc/nova/nova.conf

[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET

Symbolic link

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Database synchronization

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart nova-api

# systemctl restart openstack-nova-api.service

Start

# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service

Computing node

Installation

# yum install openstack-neutron-linuxbridge ebtables

Configuration file /etc/neutron/neutron.conf

Rabbitmq queue

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

Keystone authentication

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

Concurrent lock

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp

Configuration file /etc/nova/nova.conf

[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

Restart nova-compute

# systemctl restart openstack-nova-compute.service

Start

# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service

Verification

$ . admin-openrc
$ neutron ext-list

+---------------------------+-----------------------------------------------+
| alias                     | name                                          |
+---------------------------+-----------------------------------------------+
| default-subnetpools       | Default Subnetpools                           |
| network-ip-availability   | Network IP Availability                       |
| network_availability_zone | Network Availability Zone                     |
| auto-allocated-topology   | Auto Allocated Topology Services              |
| ext-gw-mode               | Neutron L3 Configurable external gateway mode |
| binding                   | Port Binding                                  |
...

Dashboard horizon

Note: this must be installed on the control node

Installation

# yum install openstack-dashboard

Configuration file /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"

Start

# systemctl restart httpd.service memcached.service

Verification

Visit http://controller/dashboard

Block storage cinder

Database

$ mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

Create service,user,role

$ . admin-openrc
$ openstack user create --domain default --password-prompt cinder
$ openstack role add --project service --user cinder admin

Notice that two services are created here

$ openstack service create --name cinder \
  --description "OpenStack Block Storage" volume
$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2

Create an endpoint, followed by public,internal,admin

$ openstack endpoint create --region RegionOne \
  volume public http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume internal http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume admin http://controller:8776/v1/%\(tenant_id\)s

Note that each service corresponds to three endpoints

$ openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
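Since every service gets the same three endpoints, the commands can be generated instead of typed. make_endpoint_cmds is a hypothetical helper that only prints them; once the admin credentials are loaded you could pipe its output to sh:

```shell
# Hypothetical helper: print the three endpoint-create commands for a
# service/URL pair (public, internal, admin).
make_endpoint_cmds() {
  service=$1
  url=$2
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne $service $iface $url"
  done
}

make_endpoint_cmds volumev2 'http://controller:8776/v2/%(tenant_id)s'
```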

Installation

Control node

# yum install openstack-cinder

Configuration file /etc/cinder/cinder.conf

Database

[database]
...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

Rabbitmq queue

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

Keystone authentication

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

Bind ip

[DEFAULT]
...
my_ip = 10.0.0.101

Concurrency lock

[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp

Synchronize database

# su -s /bin/sh -c "cinder-manage db sync" cinder

Configuration file / etc/nova/nova.conf

[cinder]
os_region_name = RegionOne

Restart nova-api

# systemctl restart openstack-nova-api.service

Start

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

On the storage node: here, the compute node with an extra hard disk added

Note: an additional hard drive is required

Installation

# yum install lvm2
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service

Create a logical volume

# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
# vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created

Configuration file /etc/lvm/lvm.conf

devices {
...
filter = [ "a/sdb/", "r/.*/" ]

Note: the newly added hard disk is generally sdb. If there are also sdc, sdd, and so on, accept each of them before the reject rule, e.g. filter = [ "a/sdb/", "a/sdc/", "r/.*/" ], and so on.
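That accept-then-reject pattern can be assembled mechanically. build_filter is a hypothetical helper that builds the filter line from a list of accepted disks:

```shell
# Hypothetical helper: build the lvm.conf filter line, accepting each named
# device and rejecting everything else.
build_filter() {
  out='filter = ['
  for d in "$@"; do
    out="$out \"a/$d/\","
  done
  echo "$out \"r/.*/\" ]"
}

build_filter sdb sdc
# → filter = [ "a/sdb/", "a/sdc/", "r/.*/" ]
```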

Installation

# yum install openstack-cinder targetcli

Configuration file /etc/cinder/cinder.conf

Database

[database]
...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

Rabbitmq queue

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

Keystone authentication

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

Bind ip

[DEFAULT]
...
my_ip = 10.0.0.102

Add [lvm] and its content

[lvm]
...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

Enable lvm at the backend

[DEFAULT]
...
enabled_backends = lvm

Configure Glance API

[DEFAULT]
...
glance_api_servers = http://controller:9292

Concurrency lock

[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp

Start

# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service

Verification

$ . admin-openrc
$ cinder service-list

+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up    | 2014-10-18T01:30:54.000000 | None            |
| cinder-volume    | block1@lvm | nova | enabled | up    | 2014-10-18T01:30:57.000000 | None            |
+------------------+------------+------+---------+-------+----------------------------+-----------------+

At this point the installation is basically complete. You can log in to the dashboard as the admin user, create a network, and then launch a new instance.

Postscript: calling this a fully manual installation is a bit of a stretch, since yum is still used here ~ but it is worth doing by hand at least once. After that, use scripts or install tools; all the copy and paste makes me dizzy.

The other components will get their own article. It is worth noting that the official documentation is the best documentation.
