Experimental environment
System: CentOS-7-x86_64-DVD-1804
Virtualization platform: VMware
Hostname            IP              Role
node1.heleicool.cn  172.16.175.11   management node
node2.heleicool.cn  172.16.175.12   compute node
Environment setup
Install the necessary software:
yum install -y vim net-tools wget telnet
Configure the /etc/hosts file on both nodes:
172.16.175.11 node1.heleicool.cn
172.16.175.12 node2.heleicool.cn
Configure the /etc/resolv.conf file on both nodes:
nameserver 8.8.8.8
Turn off the firewall:
systemctl disable firewalld
systemctl stop firewalld
Disable SELinux (optional; this step can be skipped):
setenforce 0
vim /etc/selinux/config
SELINUX=disabled
Install the openstack package
Install the corresponding version of the epel library:
yum install centos-release-openstack-rocky -y
Install the openstack client:
yum install python-openstackclient -y
RHEL and CentOS enable SELinux by default. Install the openstack-selinux package to automatically manage the security policy for the OpenStack service:
yum install openstack-selinux -y
Database installation
Install the package:
yum install mariadb mariadb-server python2-PyMySQL -y
Create and edit the configuration file /etc/my.cnf.d/openstack.cnf:
[mysqld]
bind-address = 172.16.175.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the database:
systemctl enable mariadb.service
systemctl start mariadb.service
Secure the database service by running the mysql_secure_installation script; in particular, choose a suitable password for the database root account:
mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

Enter current password for root (enter for none):   # blank on a fresh install, just press enter
Set root password? [Y/n] y                          # set the root password
New password:                                       # enter the root password twice
Re-enter new password:
Remove anonymous users? [Y/n] y                     # remove the anonymous users
Disallow root login remotely? [Y/n] y               # forbid remote root login
Remove test database and access to it? [Y/n] y      # drop the test database
Reload privilege tables now? [Y/n] y                # reload the privilege tables

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure. Thanks for using MariaDB!
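For reference, the answers above correspond to ordinary SQL statements; a minimal non-interactive sketch (my addition, not part of the original walkthrough) would be:
# roughly what mysql_secure_installation does, for scripted setups
mysql -u root -p <<'SQL'
DELETE FROM mysql.user WHERE User='';
DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost','127.0.0.1','::1');
DROP DATABASE IF EXISTS test;
DELETE FROM mysql.db WHERE Db='test' OR Db='test\\_%';
FLUSH PRIVILEGES;
SQL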
Install message queuing
Install rabbitmq
yum install rabbitmq-server -y
Start rabbitmq:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Add an openstack user:
# the user name I added is openstack, and so is the password
rabbitmqctl add_user openstack openstack
Grant the openstack user configure, write, and read permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
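To double-check the user and its permissions, the standard rabbitmqctl listing commands can be used (not in the original steps, but harmless):
rabbitmqctl list_users
rabbitmqctl list_permissions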
Install Memcached
Install the package:
yum install memcached python-memcached -y
Edit /etc/sysconfig/memcached and modify the configuration:
OPTIONS="-l 127.0.0.1,::1,172.16.175.11"
Start memcached:
systemctl enable memcached.service
systemctl start memcached.service
The listening ports so far (netstat -lntp, abridged):
# rabbitmq (beam):  0.0.0.0:5672, 0.0.0.0:25672
# mariadb (mysqld): 0.0.0.0:3306
# memcached:        127.0.0.1:11211, 172.16.175.11:11211
# sshd:             0.0.0.0:22
# postfix (master): 127.0.0.1:25
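As a quick sanity check (my addition; it uses the telnet client installed earlier), you can talk to memcached directly; type stats and then quit:
telnet 172.16.175.11 11211
stats
quit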
Start installing the openstack service
Keystone service installation
Configure the keystone database:
Use the database access client to connect to the database server as root:
mysql -u root -p
Create a keystone database and grant appropriate access to the keystone database:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
Install and configure keystone
Run the following command to install the package:
yum install openstack-keystone httpd mod_wsgi -y
Edit the /etc/keystone/keystone.conf file and complete the following:
[database]
connection = mysql+pymysql://keystone:keystone@172.16.175.11/keystone
[token]
provider = fernet
Populate the Identity service database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
# verify the database tables
mysql -ukeystone -pkeystone -e "use keystone; show tables;"
Initialize the Fernet key store:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service:
# ADMIN_PASS is the administrative user's password; here it is set to admin
keystone-manage bootstrap --bootstrap-password admin \
  --bootstrap-admin-url http://172.16.175.11:5000/v3/ \
  --bootstrap-internal-url http://172.16.175.11:5000/v3/ \
  --bootstrap-public-url http://172.16.175.11:5000/v3/ \
  --bootstrap-region-id RegionOne
Configure the Apache HTTP service
Edit /etc/httpd/conf/httpd.conf:
ServerName 172.16.175.11
Create a link to the /usr/share/keystone/wsgi-keystone.conf file:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the service
Start the Apache HTTP service and configure it to start at system boot:
systemctl enable httpd.service
systemctl start httpd.service
Configure the administrative account
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
Create domains, projects, users, and roles
Although a default domain already exists from the keystone-manage bootstrap step above, the formal way to create a new domain is:
# openstack domain create --description "An Example Domain" example
Using the default domain, create the service project, which the services will use:
openstack project create --domain default \
  --description "Service Project" service
Create the myproject project; unprivileged projects and users should be used for regular (non-administrative) tasks:
openstack project create --domain default \
  --description "Demo Project" myproject
To create a myuser user:
# a password is required to create a user
openstack user create --domain default \
  --password-prompt myuser
Create a myrole role:
openstack role create myrole
Add the myrole role to the myuser user in the myproject project:
openstack role add --project myproject --user myuser myrole
Authenticate user
Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
unset OS_AUTH_URL OS_PASSWORD
As an admin user, request an authentication token:
# enter the admin password when prompted
openstack --os-auth-url http://172.16.175.11:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
As a myuser user, request an authentication token:
# enter the myuser password when prompted
openstack --os-auth-url http://172.16.175.11:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
Create openstack client environment script
The openstack client can authenticate to the Identity service either through command-line options or through environment variables. To work more efficiently, create environment scripts:
Create an admin user environment script: admin-openstack.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
Create a myuser user environment script: demo-openstack.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://172.16.175.11:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Use a script:
source admin-openstack.sh
openstack token issue
Glance service installation
Configure the glance database:
Log in to the database as root:
mysql -u root -p
Create a glance database and user authorization:
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
Create glance service credentials, using the admin user:
source admin-openstack.sh
To create a glance user:
# you need to enter the glance user's password; mine is glance
openstack user create --domain default --password-prompt glance
Add the glance user to the service project and give it the admin role:
openstack role add --project service --user glance admin
Create a glance service entity:
openstack service create --name glance \
  --description "OpenStack Image" image
Create an Image service API endpoint:
openstack endpoint create --region RegionOne image public http://172.16.175.11:9292
openstack endpoint create --region RegionOne image internal http://172.16.175.11:9292
openstack endpoint create --region RegionOne image admin http://172.16.175.11:9292
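Optionally, confirm that the three image endpoints were registered (standard client command, not in the original steps):
openstack endpoint list --service glance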
Install and configure glance
Install the package:
yum install openstack-glance -y
Edit the /etc/glance/glance-api.conf file and complete the following:
# configure database access:
[database]
connection = mysql+pymysql://glance:glance@172.16.175.11/glance
# configure identity service access:
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
# configure the local file system store and the location of image files:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit the /etc/glance/glance-registry.conf file and complete the following:
# configure database access:
[database]
connection = mysql+pymysql://glance:glance@172.16.175.11/glance
# configure identity service access:
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
Populate the Image service database and verify that:
su -s /bin/sh -c "glance-manage db_sync" glance
mysql -uglance -pglance -e "use glance; show tables;"
Start the service:
systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
Verification service
Source the admin credentials to gain access to admin-only CLI commands:
source admin-openstack.sh
Download the source image:
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it:
# make sure the cirros-0.4.0-x86_64-disk.img file is in the current directory
openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
Confirm the uploaded image and verify the properties:
openstack image list
Nova service installation
Nova control node installation
Create nova database information:
mysql -u root -p
Create nova_api,nova,nova_cell0, and placement databases:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
Access with admin permissions:
source admin-openstack.sh
To create a nova user:
openstack user create --domain default --password-prompt nova
Add the admin role to the nova user:
openstack role add --project service --user nova admin
Create a nova service entity:
openstack service create --name nova --description "OpenStack Compute" compute
Create a Compute API service endpoint:
openstack endpoint create --region RegionOne compute public http://172.16.175.11:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://172.16.175.11:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://172.16.175.11:8774/v2.1
To create a placement user:
# you need to set this user's password; mine is placement
openstack user create --domain default --password-prompt placement
Add the placement user to the service project with the admin role:
openstack role add --project service --user placement admin
Create a placement service entity:
openstack service create --name placement --description "Placement API" placement
Create a Placement API service endpoint:
openstack endpoint create --region RegionOne placement public http://172.16.175.11:8778
openstack endpoint create --region RegionOne placement internal http://172.16.175.11:8778
openstack endpoint create --region RegionOne placement admin http://172.16.175.11:8778
Install nova:
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y
Edit the /etc/nova/nova.conf file and complete the following:
# enable only the compute and metadata APIs
[DEFAULT]
enabled_apis = osapi_compute,metadata
# configure database access
[api_database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova_api
[database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova
[placement_database]
connection = mysql+pymysql://placement:placement@172.16.175.11/placement
# configure RabbitMQ message queue access
[DEFAULT]
transport_url = rabbit://openstack:openstack@172.16.175.11
# configure identity service access
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://172.16.175.11:5000/v3
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
# enable support for the networking service
[DEFAULT]
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
# configure the VNC proxy to use the management interface IP address of the controller node
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = 172.16.175.11
# configure the location of the Image service API
[glance]
api_servers = http://172.16.175.11:9292
# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
# configure the Placement API
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://172.16.175.11:5000/v3
username = placement
password = placement
Enable access to the Placement API by editing /etc/httpd/conf.d/00-nova-placement-api.conf:
Add to the end of the configuration file:
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
Restart the httpd service:
systemctl restart httpd
Populate the nova-api and placement databases:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create a cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify that nova cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Verify the database:
mysql -unova -pnova -e "use nova; show tables;"
mysql -unova -pnova -e "use nova_api; show tables;"
mysql -unova -pnova -e "use nova_cell0; show tables;"
mysql -uplacement -pplacement -e "use placement; show tables;"
Start the nova control node service
systemctl enable openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
Nova Compute Node installation
Install the package:
yum install openstack-nova-compute -y
Edit the /etc/nova/nova.conf file and complete the following:
# start from a copy of the control node's configuration and delete the following sections, which configure database access (the compute node does not access the database directly):
[api_database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova_api
[database]
connection = mysql+pymysql://nova:nova@172.16.175.11/nova
[placement_database]
connection = mysql+pymysql://placement:placement@172.16.175.11/placement
# add the following:
[vnc]
# set this to the compute node's IP
server_proxyclient_address = 172.16.175.12
novncproxy_base_url = http://172.16.175.11:6080/vnc_auto.html
Determine whether your compute node supports hardware acceleration for virtual machines:
egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, the compute node supports hardware acceleration, which usually requires no additional configuration.
If this command returns zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section of /etc/nova/nova.conf as follows:
[libvirt]
# ...
virt_type = qemu
# although egrep returned a value greater than 1 here, setting virt_type to kvm caused instances to fail to start, while qemu worked normally; beware of this pitfall
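If you hit the same pitfall, one plausible cause under VMware is nested virtualization not being passed through to the guest; on an Intel CPU you can check it as follows (my suggestion; assumes the kvm_intel module is loaded):
# prints Y (or 1) when nested virtualization is available
cat /sys/module/kvm_intel/parameters/nested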
Start the nova compute node service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Add the compute node to the cell database (run on the management node)
source admin-openstack.sh
# confirm the compute host is in the database
openstack compute service list --service nova-compute
# discover compute hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Whenever you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, set a discovery interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
Verification operation
source admin-openstack.sh
# list the service components to verify that each process started and registered successfully (State should be up)
openstack compute service list
# list the API endpoints in the Identity service to verify connectivity to the Identity service
openstack catalog list
# list the images in the Image service to verify connectivity to the Image service:
openstack image list
# check whether the cells and the placement API are working successfully:
nova-status upgrade check
One clarification: at the openstack compute service list step, the official document shows one more service running than you have started so far.
That service, nova-consoleauth, authenticates console connections; without it, VNC remote login is not possible:
systemctl enable openstack-nova-consoleauth
systemctl start openstack-nova-consoleauth
Neutron service installation
Neutron control node installation
Create the database for the neutron service:
mysql -uroot -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Create a neutron administrative user:
openstack user create --domain default --password-prompt neutron
Add the neutron user to the service project and give it the admin role:
openstack role add --project service --user neutron admin
Create a neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
Create a network service API endpoint:
openstack endpoint create --region RegionOne network public http://172.16.175.11:9696
openstack endpoint create --region RegionOne network internal http://172.16.175.11:9696
openstack endpoint create --region RegionOne network admin http://172.16.175.11:9696
Configure network options
You can deploy the networking service using one of two architectures: option 1 (provider networks) or option 2 (self-service networks).
Option 1 is the simplest architecture and only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses; only the admin user or other privileged users can manage provider networks.
Provider networks
Install the plug-ins:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure server components
Edit the /etc/neutron/neutron.conf file and complete the following:
[DEFAULT]
# enable the Modular Layer 2 (ML2) plug-in and disable other plug-ins
core_plugin = ml2
service_plugins =
# notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:openstack@172.16.175.11
auth_strategy = keystone
[database]
# configure database access
connection = mysql+pymysql://neutron:neutron@172.16.175.11/neutron
[keystone_authtoken]
# configure identity service access
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
# configure networking to notify Compute of network topology changes
[nova]
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridging mechanism to build a layer 2 (bridging and switching) virtual network infrastructure for the instance.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following:
[ml2]
# enable flat and VLAN networks
type_drivers = flat,vlan
# disable self-service networks
tenant_network_types =
# enable the Linux bridge mechanism
mechanism_drivers = linuxbridge
# enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]
# configure the provider virtual network as a flat network
flat_networks = provider
[securitygroup]
# enable ipset to improve the efficiency of security group rules
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following:
[linux_bridge]
# map the provider virtual network to the provider physical network interface; here eth0 is the mapped NIC
physical_interface_mappings = provider:eth0
[vxlan]
# disable VXLAN overlay networks
enable_vxlan = false
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure that your Linux kernel supports bridge filters and that the sysctl values below are set to 1:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the settings:
sysctl -p
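To confirm the two values took effect (an optional check of my own):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables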
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following:
[DEFAULT]
# configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach metadata over the network:
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Self-service networks
Install the components:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure service components
Edit the /etc/neutron/neutron.conf file and complete the following:
[DEFAULT]
# enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IP addresses
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:openstack@172.16.175.11
auth_strategy = keystone
# notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
# configure database access
connection = mysql+pymysql://neutron:neutron@172.16.175.11/neutron
[keystone_authtoken]
# configure identity service access
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
# configure networking to notify Compute of network topology changes
[nova]
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
# configure the lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridging mechanism to build a layer 2 (bridging and switching) virtual network infrastructure for the instance.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following:
[ml2]
# enable flat, VLAN, and VXLAN networks
type_drivers = flat,vlan,vxlan
# enable VXLAN self-service networks
tenant_network_types = vxlan
# enable the Linux bridge and layer-2 population mechanisms
mechanism_drivers = linuxbridge,l2population
# enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]
# configure the provider virtual network as a flat network
flat_networks = provider
[ml2_type_vxlan]
# configure the VXLAN network identifier range for self-service networks
vni_ranges = 1:1000
[securitygroup]
# enable ipset to improve the efficiency of security group rules
enable_ipset = true
Configure Linux Bridge Agent
The Linux bridge agent builds a layer 2 (bridging and switching) virtual network infrastructure for the instance and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following:
[linux_bridge]
# map the provider virtual network to the provider physical network interface; here eth0 is the mapped NIC
physical_interface_mappings = provider:eth0
[vxlan]
# enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population
enable_vxlan = true
local_ip = 172.16.175.11
l2_population = true
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure that your Linux kernel supports bridge filters and that the sysctl values below are set to 1:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the settings:
sysctl -p
Configure the layer-3 agent
The layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
Edit the /etc/neutron/l3_agent.ini file and complete the following:
[DEFAULT]
# configure the Linux bridge interface driver and the external network bridge
interface_driver = linuxbridge
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and complete the following:
[DEFAULT]
# configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach metadata over the network
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent
The metadata agent provides configuration information, such as credentials, to instances.
Edit the /etc/neutron/metadata_agent.ini file and complete the following:
[DEFAULT]
# configure the metadata host and the shared secret
nova_metadata_host = 172.16.175.11
metadata_proxy_shared_secret = heleicool
# heleicool is the shared secret for communication between neutron and nova
Configure the Compute service (nova) to use the networking service
Edit the /etc/nova/nova.conf file and complete the following:
[neutron]
# configure access parameters, enable the metadata proxy, and configure the shared secret:
url = http://172.16.175.11:9696
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = heleicool
Complete the installation
The networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it with the following command:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database (this requires neutron.conf and ml2_conf.ini):
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova compute API service, because its configuration file has been modified:
systemctl restart openstack-nova-api.service
Start the networking services and configure them to start at system boot:
systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
For networking option 2 (self-service), also enable and start the layer-3 service:
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
Neutron Compute Node installation
Install the components:
yum install openstack-neutron-linuxbridge ebtables ipset -y
Configure common components
The Networking common component configuration includes authentication mechanisms, message queues, and plug-ins.
Edit the /etc/neutron/neutron.conf file and complete the following:
Comment out any connection options, because the compute node does not access the database directly.
[DEFAULT]
# configure RabbitMQ message queue access
transport_url = rabbit://openstack:openstack@172.16.175.11
# configure identity service access
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://172.16.175.11:5000
auth_url = http://172.16.175.11:5000
memcached_servers = 172.16.175.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
# configure the lock path
lock_path = /var/lib/neutron/tmp
Configure network options
Choose the same networking option you chose for the controller node and configure its specific services.
Provider networks
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following:
[linux_bridge]
# map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:eth0
[vxlan]
# disable VXLAN overlay networks
enable_vxlan = false
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure that your Linux kernel supports bridge filters and that the sysctl values below are set to 1:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the settings:
sysctl -p
Self-service networks
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following:
[linux_bridge]
# map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:eth0
[vxlan]
# enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population
enable_vxlan = true
# replace OVERLAY_INTERFACE_IP_ADDRESS with this compute node's underlay IP (in this setup, 172.16.175.12)
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
# enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure that your Linux kernel supports bridge filters and that the sysctl values below are set to 1:
modprobe br_netfilter
ls /proc/sys/net/bridge
Add to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the settings:
sysctl -p
Configure the Compute service (nova) to use the networking service
Edit the /etc/nova/nova.conf file and complete the following:
[neutron]
# ...
url = http://172.16.175.11:9696
auth_url = http://172.16.175.11:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Complete the installation
Restart the Compute service:
systemctl restart openstack-nova-compute.service
Start the Linux bridge agent and configure it to start at system boot:
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verification operation
Provider networks
List the agents to verify that the neutron agents registered successfully:
openstack network agent list
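With the provider option, a healthy listing typically shows a Metadata agent, a DHCP agent, and one Linux bridge agent per node (three agents on the controller plus one on each compute node).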
Self-service networks
List the agents to verify that the neutron agents registered successfully:
# expect four agent types: Metadata agent, Linux bridge agent, L3 agent, and DHCP agent
openstack network agent list
Start the instance
Once all of the above services are working, you can create and start a virtual machine.
Create a virtual network
First, create a virtual network that matches the networking option you chose when configuring Neutron.
Provider networks
Create a network
source admin-openstack.sh
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat public
# --share allows all projects to use the virtual network
# --external defines the virtual network as external; to create an internal network, use --internal (the default is internal)
# --provider-physical-network provider matches the flat_networks value configured in ml2_conf.ini
# --provider-network-type flat is the network type; public is the network name
Create a subnet on the network
openstack subnet create --network public \
  --allocation-pool start=172.16.175.100,end=172.16.175.250 \
  --dns-nameserver 172.16.175.2 --gateway 172.16.175.2 \
  --subnet-range 172.16.175.0/24 public
# --subnet-range gives the subnet that provides IPs, in CIDR notation
# start and end bound the range of IPs allocated to instances
# --dns-nameserver specifies the DNS resolver IP address
# --gateway specifies the gateway address
Self-service networks
Create the network:
source admin-openstack.sh
openstack network create selfservice
Create a subnet on the network:
openstack subnet create --network selfservice \
  --dns-nameserver 8.8.8.8 --gateway 192.168.1.1 \
  --subnet-range 192.168.1.0/24 selfservice
Create a router:
source demo-openstack.sh
openstack router create router
Add the self-service network subnet as an interface on the router:
openstack router add subnet router selfservice
Set a gateway on the provider network on the router:
openstack router set router --external-gateway public
Verification operation
List the network namespaces; you should see one qrouter namespace and two qdhcp namespaces:
source demo-openstack.sh
ip netns
List the ports on the router to determine the gateway IP address on the provider network:
openstack port list --router router
Create an instance flavor
# create a flavor named m1.nano with 1 vCPU, 64 MB of RAM, and a 1 GB disk
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Configure a key pair
# generate the key file
ssh-keygen -q -N ""
# create a key pair named mykey in openstack
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# view the key
openstack keypair list
Add security group rules
By default, the default security group applies to all instances.
# permit ICMP (ping)
openstack security group rule create --proto icmp default
# permit SSH access on port 22
openstack security group rule create --proto tcp --dst-port 22 default
Start the instance
Provider networks
Determine instance options
View available flavors:
source demo-openstack.sh
openstack flavor list
View available images:
openstack image list
View available networks:
openstack network list
View available security groups:
openstack security group list
Start the instance
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=PROVIDER_NET_ID --security-group default \
  --key-name mykey provider-instance
# replace PROVIDER_NET_ID with the ID of the public network; if your environment contains only one network, you can omit the --nic option and OpenStack will choose the only available network automatically
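Rather than pasting the ID by hand, you can capture it into a shell variable first (a small convenience of my own, using the client's standard -f value formatter); the same pattern works for SELFSERVICE_NET_ID below:
PROVIDER_NET_ID=$(openstack network show public -f value -c id)
# then pass --nic net-id=$PROVIDER_NET_ID to openstack server create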
Check the status of the instance
openstack server list
Use the virtual console to access the instance
openstack console url show provider-instance
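Opening the returned URL in a browser gives you the instance's VNC console; for the cirros 0.4.0 image, the default login printed on the console banner is user cirros with password gocubsgo.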
Self-service networks
Determine instance options
View available flavors:
source demo-openstack.sh
openstack flavor list
View available images:
openstack image list
View available networks:
openstack network list
View available security groups:
openstack security group list
Start the instance
# replace SELFSERVICE_NET_ID with the selfservice network's ID
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=SELFSERVICE_NET_ID --security-group default \
  --key-name mykey selfservice-instance
Check the status of the instance
openstack server list
Use the virtual console to access the instance
openstack console url show selfservice-instance
Horizon service installation
Horizon depends on the Apache HTTP service and the Memcached service. I am installing it on the control node, where both are already running; if you deploy Horizon separately, you will need to install those services there as well.
Install and configure components
Install the package:
yum install openstack-dashboard -y
Edit the /etc/openstack-dashboard/local_settings file and complete the following:
# configure the dashboard to use OpenStack services on the controller node
OPENSTACK_HOST = "172.16.175.11"
# configure the list of hosts allowed to access the dashboard ('*' allows all)
ALLOWED_HOSTS = ['*']
# configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '172.16.175.11:11211',
    }
}
# enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
# enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# configure API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
# configure Default as the default domain for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
# configure the default role assigned to users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "myrole"
# if you chose networking option 1, disable support for layer-3 networking services
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
# configure time zone
TIME_ZONE = "Asia/Shanghai"
If /etc/httpd/conf.d/openstack-dashboard.conf does not include the following line, add it:
WSGIApplicationGroup %{GLOBAL}
Installation completed
Restart the Web server and memcached storage service:
systemctl restart httpd.service memcached.service
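The dashboard should now be reachable in a browser at http://172.16.175.11/dashboard; log in with the Default domain and the admin or myuser credentials created earlier.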
Complete