OpenStack introduction
OpenStack is a cloud platform management project that we can use to manage a resource pool; it contains many sub-projects. OpenStack is composed of several modules, and each function corresponds to a different module. Its three core areas are compute, networking, and storage, and the modules interact with one another by calling each other's APIs.
OpenStack versions are released quickly: from the original A release to the current N release, a new version is usually published every six months.
Each service in OpenStack has a corresponding project name, and each project is a module that provides a separate service. The correspondence is as follows:
Horizon (Dashboard): OpenStack's web management service.
Nova (Compute): provides a pool of compute resources through virtualization.
Neutron (Networking): network resource management for virtual machines.
Storage Services (Storage)
Swift (Object Storage): object storage, suitable for "write once, read many" workloads.
Cinder (Block Storage): block storage that provides a pool of storage resources.
Shared Services (Share Service):
Keystone (Identity service): authentication management.
Glance (Image service): registration and storage management of virtual machine images.
Ceilometer (Telemetry): monitoring, data collection, and metering services.
High level Services (Higher-level service):
Heat (Orchestration): a component that automates deployment.
Trove (Database Service): provides database application services.
Basic service installation and configuration
Preparatory work:
Two CentOS 7 machines, named node1 and node2. If there is no internal DNS, bind the hostnames in /etc/hosts.
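For example, a minimal /etc/hosts sketch, assuming node1 is 172.16.10.50 and node2 is 172.16.10.51 (the addresses used later in this guide); run it on both nodes:
cat >> /etc/hosts <<'EOF'
172.16.10.50  node1
172.16.10.51  node2
EOF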
Example of changing the hostname on CentOS 7:
# hostnamectl set-hostname node1
# hostnamectl status
# cat /etc/hostname
node1
Install the EPEL Yum repository:
rpm -ivh http://mirrors.aliyun.com/epel/epel-release-latest-7.noarch.rpm
Install the OpenStack repository:
yum install -y centos-release-openstack-mitaka
Install the management packages and client:
yum install -y python-openstackclient
yum install -y openstack-selinux
The following services are installed on the control node:
yum install -y mariadb mariadb-server python2-PyMySQL
yum install -y rabbitmq-server
yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached
yum install -y openstack-glance
yum install -y openstack-nova-api openstack-nova-cert \
  openstack-nova-conductor openstack-nova-console \
  openstack-nova-novncproxy openstack-nova-scheduler
yum install -y openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Compute nodes install nova and neutron:
yum install -y openstack-nova-compute sysfsutils
yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables
All OpenStack components except Horizon need to connect to the database.
All components except Horizon and Keystone need to connect to RabbitMQ (the message queue, which acts as the communication hub).
OpenStack database configuration
Create /etc/my.cnf.d/openstack.cnf and add the following configuration:
[mysqld]
bind-address = 172.16.10.50
default-storage-engine = innodb
innodb_file_per_table                # independent tablespace per table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the database:
# systemctl enable mariadb.service
# systemctl start mariadb.service
To secure the database service, run the ``mysql_secure_installation`` script. In particular, set an appropriate password for the database root user.
# mysql_secure_installation
Create the databases and grant privileges:
> create database keystone;
> grant all on keystone.* to 'keystone'@'localhost' identified by 'keystone';
> grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
> create database glance;
> grant all on glance.* to 'glance'@'localhost' identified by 'glance';
> grant all on glance.* to 'glance'@'%' identified by 'glance';
> create database nova;
> grant all on nova.* to 'nova'@'localhost' identified by 'nova';
> grant all on nova.* to 'nova'@'%' identified by 'nova';
> create database nova_api;
> grant all on nova_api.* to 'nova'@'localhost' identified by 'nova';
> grant all on nova_api.* to 'nova'@'%' identified by 'nova';
> create database neutron;
> grant all on neutron.* to 'neutron'@'localhost' identified by 'neutron';
> grant all on neutron.* to 'neutron'@'%' identified by 'neutron';
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| nova_api           |
| performance_schema |
+--------------------+
8 rows in set (0.00 sec)
Install RabbitMQ on the control node and create an authorized user:
yum install -y rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Enable the management plug-in:
rabbitmq-plugins enable rabbitmq_management
At this point, you can check whether the service port of RabbitMQ is open:
# netstat -lntup | grep 15672
tcp    0    0 0.0.0.0:15672    0.0.0.0:*    LISTEN    30174/beam
You can visit the web interface directly to view: http://localhost_ip:15672/
The RabbitMQ service itself listens on port 5672:
# netstat -lntup | grep 5672
tcp     0    0 0.0.0.0:15672    0.0.0.0:*    LISTEN    30174/beam
tcp6    0    0 :::5672          :::*         LISTEN    30174/beam
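As an additional sanity check (not part of the original steps), the openstack user and its permissions can be confirmed with rabbitmqctl:
rabbitmqctl list_users          # should list the openstack user
rabbitmqctl list_permissions    # should show ".*" ".*" ".*" for openstack on the default vhost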
Synchronize time on all hosts:
yum install -y ntpdate
ntpdate time1.aliyun.com
timedatectl set-timezone Asia/Shanghai
In production, server clocks must be kept consistent; otherwise virtual machines may fail to be created and all kinds of problems can appear during synchronization.
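One way to keep clocks in sync continuously is a periodic ntpdate run from cron; a minimal sketch for illustration (an assumption, not part of the original steps), to be applied on every node:
echo '*/5 * * * * root /usr/sbin/ntpdate time1.aliyun.com >/dev/null 2>&1' >> /etc/crontab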
OpenStack Authentication Management - Keystone
Keystone mainly provides user authentication and the service catalog. Service authorization in OpenStack is handled by Keystone: it issues an authenticated user a token with a limited validity period, and the user must re-authenticate after the token expires. The service catalog contains all service entries and their associated API endpoints.
User authentication involves User, Project, Token, and Role.
A Role here is like a group of users that share the same permissions; Keystone uses these mechanisms to authenticate and authorize operations.
Service catalog: Service and Endpoint.
Services are components such as Nova, Glance, and Swift. A service can confirm whether the current user has permission to access its resources.
An Endpoint is a URL; each URL corresponds to the access address of an instance of the service and comes in public, internal, and admin variants. The public URL can be accessed globally, the internal URL only from the local network, and the admin URL is kept separate from regular access.
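Once services and endpoints have been registered (done later in this guide), the whole catalog can be inspected from the client; a quick illustration:
openstack catalog list    # lists each service with its public, internal, and admin endpoint URLs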
Deployment of Keystone
Keystone uses memcached to manage issued tokens. Memcached is chosen over MySQL because tokens stored in memcached can be given an expiration time and are cleaned up automatically when they expire, which prevents the token table in MySQL from growing too large and becoming hard to maintain over time.
Generate a random token value:
# openssl rand -hex 10
48d263aed5f11b0bc02f
Modify the keystone configuration file
Configure the following items in the /etc/keystone/keystone.conf file.
In the [DEFAULT] section, define the value of the initial management token.
Replace the ``admin_token`` value with the random number generated in the previous step:
# grep "admin_token" /etc/keystone/keystone.conf
admin_token = 48d263aed5f11b0bc02f
In the [database] section, configure database access:
connection = mysql+pymysql://keystone:keystone@172.16.10.50/keystone
In the ``[token]`` section, configure the Fernet token provider and change the token storage driver to memcache:
provider = fernet
driver = memcache
In the [memcache] section, set the host that provides the memcached service:
servers = 172.16.10.50:11211
Once these changes are made, the keystone configuration is complete:
[root@node1 ~]# grep '^[a-z]' /etc/keystone/keystone.conf
admin_token = 48d263aed5f11b0bc02f
connection = mysql+pymysql://keystone:keystone@172.16.10.50/keystone
servers = 172.16.10.50:11211
provider = fernet
driver = memcache
Initialize the Identity service database (synchronize the database):
# su -s /bin/sh -c "keystone-manage db_sync" keystone
Verify that the synchronization is successful:
# mysql -h 172.16.10.50 -ukeystone -pkeystone -e "use keystone; show tables;"
Initialize Fernet keys. After this command runs, a fernet-keys directory is created under /etc/keystone:
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
Start memcached:
systemctl enable memcached
systemctl start memcached
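To confirm memcached is answering (a suggested check, assuming the nc utility is installed):
echo stats | nc 172.16.10.50 11211 | head -5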
Configure the Apache HTTP server
Edit the ``/etc/httpd/conf/httpd.conf`` file and configure the ``ServerName`` option to point to the control node:
ServerName 172.16.10.50:80
Create the file /etc/httpd/conf.d/wsgi-keystone.conf with the following content:
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Start Apache:
# systemctl enable httpd.service
# systemctl start httpd.service
Check whether the corresponding ports 5000 and 35357 are listening.
At the same time, check whether there are error messages in /var/log/keystone/keystone.log. If there are, enable keystone's debug mode to troubleshoot:
vim /etc/keystone/keystone.conf
# debug = false      # change this to true, then check the log
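For example, a sketch of toggling debug mode with the crudini tool (an assumption: crudini is installed separately, e.g. yum install -y crudini; editing the file by hand works just as well):
crudini --set /etc/keystone/keystone.conf DEFAULT debug true
systemctl restart httpd
tail -f /var/log/keystone/keystone.log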
Set up authentication
Configure the authentication token (execute these directly on the command line):
export OS_TOKEN=48d263aed5f11b0bc02f
Configure the endpoint URL:
export OS_URL=http://172.16.10.50:35357/v3
Configure the Identity API version:
export OS_IDENTITY_API_VERSION=3
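To confirm the variables are set in the current shell (a quick check, not from the original steps):
env | grep '^OS_'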
Create domains, projects, users, and roles
Create the ``default`` domain:
# openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Default Domain                   |
| enabled     | True                             |
| id          | 5ab6cfb424ee4c99b0fea0cbec19e3b3 |
| name        | default                          |
+-------------+----------------------------------+
For administrative operations in the environment, create the admin project, user, and role:
Create an admin project:
openstack project create --domain default \
  --description "Admin Project" admin
Create an admin user and set the password:
openstack user create --domain default \
  --password-prompt admin
Create an admin role:
openstack role create admin
Add the ``admin`` role to the admin project and user:
openstack role add --project admin --user admin admin
The command above adds the admin user to the admin project and grants it the admin role.
Create a project for demo:
Create the ``demo`` project (do not repeat this step when creating additional users for this project):
openstack project create --domain default \
  --description "Demo Project" demo
Create the ``demo`` user and set a password:
openstack user create --domain default --password-prompt demo
Create the user role:
openstack role create user
Add the ``user`` role to the ``demo`` project and user:
openstack role add --project demo --user demo user
Create a service project
Each service that you add to your environment uses a unique user in the ``service`` project.
Create the ``service`` project:
openstack project create --domain default \
  --description "Service Project" service
Create the glance user:
openstack user create --domain default --password-prompt glance
Add the glance user to the service project with the admin role:
openstack role add --project service --user glance admin
Create the nova user:
openstack user create --domain default --password-prompt nova
Add the nova user to the service project with the admin role:
openstack role add --project service --user nova admin
Create the neutron user:
openstack user create --domain default --password-prompt neutron
Add the neutron user to the service project with the admin role:
openstack role add --project service --user neutron admin
Service registration
openstack service create --name keystone --description "OpenStack Identity" identity
Create the public endpoint and specify its URL; note the IP and port:
openstack endpoint create --region RegionOne identity public http://172.16.10.50:5000/v3
Create the internal endpoint:
openstack endpoint create --region RegionOne identity internal http://172.16.10.50:5000/v3
Create the admin endpoint, using the admin management port:
openstack endpoint create --region RegionOne identity admin http://172.16.10.50:35357/v3
Tip: the three endpoints reference one another internally, so if one is created incorrectly you need to delete all three corresponding entries and recreate them.
Deletion method (services, users, projects, and so on can be deleted the same way):
openstack endpoint list            # view the records and their IDs
openstack endpoint delete <ID>     # delete by ID
These records live in the keystone.endpoint table in MySQL, so you can also modify the table directly.
Verification operation
Verify the previous operation before installing other services.
First unset the ``OS_TOKEN`` and ``OS_URL`` environment variables:
unset OS_TOKEN OS_URL
As the admin user, request an authentication token to test:
openstack --os-auth-url http://172.16.10.50:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue
As the demo user of the demo project, request an authentication token to test:
openstack --os-auth-url http://172.16.10.50:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue
If (HTTP 401) appears, the password was entered incorrectly.
Create an environment variable script
In the tests above, the connection was verified by specifying parameters on the command line, which is long to type. In day-to-day operation we can define environment variable scripts to avoid specifying the parameters every time.
Create client environment variable scripts for the admin and ``demo`` projects and users. The following sections reference these scripts to load the appropriate credentials for client operations.
Edit the file admin-openstack.sh and add the following (note that you specify the password and URL):
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://172.16.10.50:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Similarly, add demo's environment file demo-openstack.sh:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://172.16.10.50:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
After adding execute permission to the scripts, remember to source the relevant script before running any openstack command; otherwise the command will fail.
Validate the environment variable script:
source admin-openstack.sh
Try to obtain a token directly to confirm it works:
openstack token issue
OpenStack Image Management - Glance
Glance consists of three components: glance-api, glance-registry, and image storage.
glance-api: accepts requests to create, delete, and read cloud system images. It receives REST API requests and calls the other modules to complete image lookup, retrieval, upload, deletion, and other operations. The default listening port is 9292.
glance-registry: the registration service for cloud images. It interacts with MySQL to store and retrieve image metadata. The Glance database contains two tables: an image table and an image property table. The image table stores the image format, size, and other information, while the image property table mainly stores custom properties of the image. glance-registry listens on port 9191.
Image storage: a storage interface layer. Strictly speaking it does not belong to Glance; it is only an interface that Glance calls, and through it Glance retrieves images. Image storage supports backends such as Amazon S3, OpenStack's own Swift, and distributed storage such as Ceph, Sheepdog, and GlusterFS. Because it is only an interface layer, a concrete implementation requires support from external storage.
Deployment of Glance
http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/glance-install.html#prerequisites
Install the glance service and configure the database
The database work was done earlier, so now edit the configuration file /etc/glance/glance-api.conf and complete the following actions:
In the [database] section, configure database access: (the second glance is the password)
connection = mysql+pymysql://glance:glance@172.16.10.50/glance
The same parameter is also configured under [database] in /etc/glance/glance-registry.conf:
connection = mysql+pymysql://glance:glance@172.16.10.50/glance
Synchronize the database (a warning may appear here):
su -s /bin/sh -c "glance-manage db_sync" glance
Verify that the database synchronized successfully:
# mysql -h 172.16.10.50 -uglance -pglance -e "use glance;show tables;"
Set up keystone
Set the keystone configuration in /etc/glance/glance-api.conf, adding the following information in the two sections below:
[keystone_authtoken]
...
auth_uri = http://172.16.10.50:5000
auth_url = http://172.16.10.50:35357
memcached_servers = 172.16.10.50:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
...
flavor = keystone
Set the same configuration in /etc/glance/glance-registry.conf:
[keystone_authtoken]
...
auth_uri = http://172.16.10.50:5000
auth_url = http://172.16.10.50:35357
memcached_servers = 172.16.10.50:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
...
flavor = keystone
Configure image storage
Modify the configuration of / etc/glance/glance-api.conf:
In the [glance_store] section, configure local file system storage and the image file location:
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
Start the services:
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
Check to see if ports 9292 and 9191 are open after startup.
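For example, a suggested check in the same style used elsewhere in this guide:
# netstat -lntup | grep -E '9292|9191'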
Register the service on keystone
To create the glance service entity, load the admin environment variables first:
# source admin-openstack.sh
# openstack service create --name glance --description "OpenStack Image" image
Create the image service API endpoints:
# openstack endpoint create --region RegionOne image public http://172.16.10.50:9292
# openstack endpoint create --region RegionOne image internal http://172.16.10.50:9292
# openstack endpoint create --region RegionOne image admin http://172.16.10.50:9292
Verify that the configuration succeeded:
# glance image-list
+----+------+
| ID | Name |
+----+------+
+----+------+
Verification operation
Obtain admin credentials to gain access to commands that only administrators can execute:
# source admin-openstack.sh
Download the source image:
# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Upload the image to the image service and make it publicly visible so that all projects can access it:
# openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --public
Verify that the upload succeeded:
# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 82c3ba8f-4930-4e32-bd1b-34881f5eb4cd | cirros | active |
+--------------------------------------+--------+--------+
After a successful upload, the image can be found under /var/lib/glance/images/, named after the image ID:
[root@node1 ~]# cd /var/lib/glance/images/
[root@node1 images]# ll
total 12980
-rw-r-----. 1 glance glance 13287936 Oct 26 16:00 82c3ba8f-4930-4e32-bd1b-34881f5eb4cd
OpenStack Computing Service-Nova
http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/nova-controller-install.html
In an OpenStack deployment, the Nova compute component is generally placed on the hosts that will run virtual machines, while the other Nova components are installed on the control node; compute nodes are only responsible for creating and running virtual machines.
Service components of Nova:
API: responsible for receiving and responding to external requests; requests received by the API are placed on the message queue (RabbitMQ). It is the only entry point for external access to nova.
Cert: responsible for EC2-style identity authentication.
Scheduler: schedules virtual machines, deciding on which host (compute node) a virtual machine is created.
Conductor: middleware through which compute nodes access the database.
Consoleauth: authorization and verification for the console.
Novncproxy: the VNC proxy.
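Once the Nova services are started (later in this section), the state of each of these components can be listed from the control node; a quick illustration (not part of the original steps):
nova service-list    # shows nova-conductor, nova-scheduler, nova-consoleauth and, later, nova-compute with host, state, and status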
Configure the database
Edit the ``/etc/nova/nova.conf`` file and complete the following:
In the ``[api_database]`` and ``[database]`` sections, configure the connection to the database:
[api_database]
...
connection = mysql+pymysql://nova:nova@172.16.10.50/nova_api
[database]
...
connection = mysql+pymysql://nova:nova@172.16.10.50/nova
Synchronize the Compute database:
Su-s / bin/sh-c "nova-manage api_db sync" novasu-s / bin/sh-c "nova-manage db sync" nova to see if the database synchronization is successful:
Mysql-h 172.16.10.50-unova-pnova-e "use nova;show tables;" mysql-h 172.16.10.50-unova-pnova-e "use nova_api;show tables;"
Configure keystone
Edit the ``/etc/nova/nova.conf`` file and complete the following:
Edit the "[keystone_authtoken]" section and add the following:
[keystone_authtoken]
...
auth_uri = http://172.16.10.50:5000
auth_url = http://172.16.10.50:35357
memcached_servers = 172.16.10.50:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
Uncomment the option in [DEFAULT]:
[DEFAULT]
...
auth_strategy = keystone
Configure RabbitMQ
Modify the nova.conf file:
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = 172.16.10.50
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
Configure nova service parameters
Edit the ``/etc/nova/nova.conf`` file.
In the ``[DEFAULT]`` section, enable only the compute and metadata APIs:
enabled_apis = osapi_compute,metadata
In the [DEFAULT] section, enable the Networking service (the firewall driver must be set to the Noop driver so that Neutron handles firewalling):
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In the ``[vnc]`` section, configure the VNC proxy to use the management interface IP address of the control node:
vncserver_listen = 172.16.10.50
vncserver_proxyclient_address = 172.16.10.50
In the [glance] section, configure the location of the image service API:
api_servers = http://172.16.10.50:9292
In the [oslo_concurrency] section, configure the lock path:
lock_path = /var/lib/nova/tmp
Start the Compute services and set them to start with the system:
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
Register for the nova service
Create a nova service entity:
# source admin-openstack.sh
# openstack service create --name nova --description "OpenStack Compute" compute
Create the Compute service API endpoints; the nova API port is 8774:
# openstack endpoint create --region RegionOne \
  compute public http://172.16.10.50:8774/v2.1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne \
  compute internal http://172.16.10.50:8774/v2.1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne \
  compute admin http://172.16.10.50:8774/v2.1/%\(tenant_id\)s
Check whether the registration succeeded:
# openstack host list
+-----------+-------------+----------+
| Host Name | Service     | Zone     |
+-----------+-------------+----------+
| node1     | conductor   | internal |
| node1     | consoleauth | internal |
| node1     | scheduler   | internal |
+-----------+-------------+----------+
Deployment of Nova Compute Node
http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/nova-compute-install.html
A compute node is a node that actually runs virtual machines; its hardware configuration determines how many virtual machines it can run, and these nodes must support CPU virtualization.
Determine whether the CPU supports hardware acceleration for virtual machines (if the result is not 0, it is supported):
egrep -c '(vmx|svm)' /proc/cpuinfo
The compute node here already has the corresponding packages installed from the earlier step.
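If the result is 0, the CPU does not support hardware acceleration, and virt_type should be set to qemu instead of kvm in the [libvirt] section of nova.conf. A small sketch of that check:
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    echo "no hardware acceleration: set virt_type = qemu in the [libvirt] section of nova.conf"
fi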
Modify the nova configuration on the compute node. Because it is similar to most of the control node's configuration, copy the configuration file from the control node with scp, modify it locally, and fix its ownership:
# scp 172.16.10.50:/etc/nova/nova.conf ./nova1.conf
# chown root:nova nova1.conf
# mv nova.conf nova.conf-bak
# mv nova1.conf nova.conf
Modify the configuration file:
Comment out (or delete) the ``connection`` parameters in the [database] and [api_database] sections:
#connection = mysql+pymysql://nova:nova@172.16.10.50/nova
#connection = mysql+pymysql://nova:nova@172.16.10.50/nova_api
In the ``[vnc]`` section, enable and configure remote console access:
Uncomment and set:
enabled = true
novncproxy_base_url = http://172.16.10.50:6080/vnc_auto.html
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 172.16.10.51    # change this to the local host's IP
Enable the default KVM virtualization in the [libvirt] section:
virt_type = kvm
Start the computing service and its dependencies and configure it to start automatically with the system:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Verify that everything is working normally:
Check from the control node:
# source admin-openstack.sh
# openstack host list
+-----------+-------------+----------+
| Host Name | Service     | Zone     |
+-----------+-------------+----------+
| node1     | conductor   | internal |
| node1     | consoleauth | internal |
| node1     | scheduler   | internal |
| node2     | compute     | nova     |
+-----------+-------------+----------+
# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 82c3ba8f-4930-4e32-bd1b-34881f5eb4cd | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
The output above shows that the compute node installation is working correctly.
OpenStack Network Service-Neutron
Neutron is fronted by a Neutron Server and mainly consists of layer-2 plug-ins such as Linux Bridge and Open vSwitch, together with agents such as the DHCP agent, L3 agent, and LBaaS agent. Together these components simulate the services and protocols of a physical network.
Installation and deployment
http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-controller-install-option1.html
Neutron offers two network architectures: a simple single flat network and a complex multi-segment network. Here, a single flat network is used as the example.
Installation and deployment requires the corresponding services to be installed on the control node and compute node, which has been installed before, so skip this step.
Database configuration
Edit the ``/etc/neutron/neutron.conf`` file and perform the following operations on the control node:
In the [database] section, configure database access:
connection = mysql+pymysql://neutron:neutron@172.16.10.50/neutron
Configure Keystone
In the "[DEFAULT]" and "[keystone_authtoken]" sections, configure authentication service access:
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://172.16.10.50:5000
auth_url = http://172.16.10.50:35357
memcached_servers = 172.16.10.50:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
Configure RabbitMQ
In the "[DEFAULT]" and "[oslo_messaging_rabbit]" sections, configure the connection to the "RabbitMQ" message queue:
[DEFAULT]
...
rpc_backend = rabbit          # uncomment and modify
[oslo_messaging_rabbit]
rabbit_host = 172.16.10.50
rabbit_userid = openstack
rabbit_password = openstack
Configure neutron
In the ``[DEFAULT]`` section, enable the ML2 plug-in and disable other plug-ins:
core_plugin = ml2
service_plugins =
Configure nova
In the ``[DEFAULT]`` and ``[nova]`` sections, configure network services to notify computing nodes of network topology changes:
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
auth_url = http://172.16.10.50:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
In the [oslo_concurrency] section, configure the lock path:
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linuxbridge mechanism to create a layer-2 virtual network infrastructure for instances.
Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the following:
In the ``[ml2]`` section, enable flat and VLAN networks:
type_drivers = flat,vlan,gre,vxlan,geneve
In the ``[ml2]`` section, disable private (self-service) networks:
tenant_network_types =
In the ``[ml2]`` section, enable the Linuxbridge mechanism:
mechanism_drivers = linuxbridge,openvswitch
In the ``[ml2]`` section, enable the port security extension driver:
extension_drivers = port_security
In the ``[ml2_type_flat]`` section, configure the public virtual network as a flat network:
[ml2_type_flat]
flat_networks = public
In the ``[securitygroup]`` section, enable ipset to increase the efficiency of security group rules:
enable_ipset = true
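To review the resulting ML2 configuration at a glance (a suggested check, in the same style as the grep used later for the compute node):
grep '^[a-z]' /etc/neutron/plugins/ml2/ml2_conf.ini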
Configure Linuxbridge proxy
The Linuxbridge agent establishes the layer-2 virtual network for the instance and handles the security group rules.
Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and complete the following:
In the ``[linux_bridge]`` section, map the public virtual network to the public physical network interface:
physical_interface_mappings = public:eth0
In the ``[vxlan]`` section, disable VXLAN overlay networking:
enable_vxlan = False
In the ``[securitygroup]`` section, enable security groups and configure the Linux bridge iptables firewall driver:
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure DHCP proxy
Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following:
In the ``[DEFAULT]`` section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on the public network can access metadata over the network:
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configure metadata proxy
Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following:
In the ``[DEFAULT]`` section, configure the metadata host and the shared password:
nova_metadata_ip = 172.16.10.50
metadata_proxy_shared_secret = trying
Configure network services for nova
Edit the ``/etc/nova/nova.conf`` file and complete the following:
In the ``[neutron]`` section, configure the access parameters, enable the metadata proxy and set the password:
[neutron]
...
url = http://172.16.10.50:9696
auth_url = http://172.16.10.50:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = trying
Complete the installation
The Networking service initialization scripts expect a symbolic link ``/etc/neutron/plugin.ini`` pointing to the ML2 plug-in configuration file ``/etc/neutron/plugins/ml2/ml2_conf.ini``:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Synchronize the database:
# su-s / bin/sh-c "neutron-db-manage-- config-file / etc/neutron/neutron.conf\-- config-file / etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the computing API service:
Systemctl restart openstack-nova-api.service
Start the Networking services and configure them to start with the system:
systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
Complete registration on keystone
# source admin-openstack.sh
# openstack service create --name neutron --description "OpenStack Networking" network
# openstack endpoint create --region RegionOne network public http://172.16.10.50:9696
# openstack endpoint create --region RegionOne network internal http://172.16.10.50:9696
# openstack endpoint create --region RegionOne network admin http://172.16.10.50:9696
Verify that neutron is working:
# neutron agent-list
+--------------------------------------+--------------------+-------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host  | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+----------------+---------------------------+
| 172afad4-755b-47a1-81e8-d38056e2441e | Linux bridge agent | node1 |                   | :-)   | True           | neutron-linuxbridge-agent |
| 7f568fdf-192f-45bd-8436-b48ecb5d7480 | Metadata agent     | node1 |                   | :-)   | True           | neutron-metadata-agent    |
| fda9f554-952a-4b7e-8509-f2641a6595c9 | DHCP agent         | node1 | nova              | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------+-------------------+-------+----------------+---------------------------+
Install Neutron on the compute node
The configuration of the compute node is similar to that of the control node. We can copy the file of the control node directly to the compute node for modification.
# scp /etc/neutron/neutron.conf 172.16.10.51:/etc/neutron/
# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 172.16.10.51:/etc/neutron/plugins/ml2/
# chown root.neutron /etc/neutron/plugins/ml2/linuxbridge_agent.ini
In neutron.conf, remove the [database] configuration by deleting (or commenting out) all ``connection`` entries, because the compute node does not access the database directly:
# connection = mysql+pymysql://neutron:neutron@172.16.10.50/neutron
Delete the configuration in the [nova] section at the same time:
[nova]
...
#auth_url = http://172.16.10.50:35357
#auth_type = password
#project_domain_name = default
#user_domain_name = default
#region_name = RegionOne
#project_name = service
#username = nova
#password = nova
Comment out the option for the core plugin:
#core_plugin = ml2
#service_plugins =
Comment out the option for nova port notification:
#notify_nova_on_port_status_changes = true
#notify_nova_on_port_data_changes = true
Review the configuration of /etc/neutron/plugins/ml2/linuxbridge_agent.ini on the compute node:
[root@node2 ~]# grep '^[a-z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = public:eth0
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True
enable_vxlan = False
Modify the nova configuration file /etc/nova/nova.conf on the compute node; the [neutron] section is the same as on the control node:
[neutron]
url = http://172.16.10.50:9696
auth_url = http://172.16.10.50:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart nova-compute on the compute node:
# systemctl restart openstack-nova-compute
Start neutron-linuxbridge-agent:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
Verify success on the control node:
[root@node1 ~]# neutron agent-list
+--------------------------------------+--------------------+-------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host  | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------+-------------------+-------+----------------+---------------------------+
| 172afad4-755b-47a1-81e8-d38056e2441e | Linux bridge agent | node1 |                   | :-)   | True           | neutron-linuxbridge-agent |
| 7f568fdf-192f-45bd-8436-b48ecb5d7480 | Metadata agent     | node1 |                   | :-)   | True           | neutron-metadata-agent    |
| cb3f16cf-c8dd-4a6b-b9e8-71622cde1774 | Linux bridge agent | node2 |                   | :-)   | True           | neutron-linuxbridge-agent |
| fda9f554-952a-4b7e-8509-f2641a6595c9 | DHCP agent         | node1 | nova              | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------+-------------------+-------+----------------+---------------------------+
Node2 has been added, indicating that the configuration is successful.
Tip: if node2's status does not appear correctly during this process, check the neutron configuration file permissions, the firewall, and the SELinux settings.
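A few commands that can help with those checks on the compute node (a suggested sketch, not from the original):
getenforce                                               # SELinux mode
ls -l /etc/neutron/plugins/ml2/linuxbridge_agent.ini     # should be owned root:neutron
systemctl status neutron-linuxbridge-agent               # look for errors in the unit status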