2025-04-09 Update From: SLTechnology News&Howtos
4.4 Computing Service configuration (Compute Service Nova)
Deployment node: Controller Node
nova-api, nova-conductor, nova-consoleauth, nova-novncproxy, and nova-scheduler need to be installed on the Controller node
Create the nova_api and nova databases in MariaDB (MySQL)
mysql -u root -p123456
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'novaapi';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'novaapi';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
Create the compute service credentials and API endpoints
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
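A note on the endpoint URL: the backslashes exist only for the shell, which would otherwise parse the parentheses as syntax. What gets stored is the literal Python-style template %(tenant_id)s, which the Compute API fills in with the project ID at request time. A minimal sketch of what the shell actually passes along:

```shell
# The backslashes in %\(tenant_id\)s only keep the shell from parsing the
# parentheses; the value handed to the openstack client is the literal
# template %(tenant_id)s.
url=http://controller:8774/v2.1/%\(tenant_id\)s
echo "$url"   # -> http://controller:8774/v2.1/%(tenant_id)s
```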
Install the computing service component
① Install the Nova components
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
② Modify the configuration file: sudo vi /etc/nova/nova.conf
Enable only the compute and metadata APIs at [DEFAULT]: change the commented line (around line 267)
# enabled_apis=osapi_compute,metadata
to
enabled_apis=osapi_compute,metadata
Configure the database access connections at [api_database] and [database] (if the [api_database] and [database] sections do not exist, add them manually).
Note: replace NOVA_DBPASS with the actual password chosen earlier
[api_database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
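Because the same NOVA_DBPASS placeholder appears in both connection lines, the substitution can be scripted. A minimal sketch using a temporary file; the password value "s3cret" is an illustrative assumption, not from this guide:

```shell
# Sketch: replace the NOVA_DBPASS placeholder in both connection lines.
# The temp-file path and the password "s3cret" are illustrative only.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
EOF
sed -i 's/NOVA_DBPASS/s3cret/g' "$conf"   # substitute every occurrence
hits=$(grep -c 's3cret' "$conf")          # both lines now carry the password
echo "$hits"   # -> 2
```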
Configure RabbitMQ message queue access at [DEFAULT] and [oslo_messaging_rabbit]
Note: replace RABBIT_PASS with the actual password chosen earlier
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Configure identity service access at [DEFAULT] and [keystone_authtoken]
Note: replace NOVA_PASS with the actual password chosen earlier
Note: comment out or delete any other content in [keystone_authtoken]
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Configure my_ip at [DEFAULT] as the Management Network port address of the Controller node
my_ip = 10.0.0.11
Enable network service support at [DEFAULT]
Note: by default the Compute service uses an internal firewall driver; since the OpenStack Networking service includes its own firewall driver, the Compute firewall driver must be disabled by selecting the Noop driver.
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
At [vnc], configure the VNC agent (VNC proxy) with the Controller node Management Network port address.
[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
Configure the image service API location at [glance]
[glance]
...
api_servers = http://controller:9292
Configure lock_path at [oslo_concurrency]
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
Write configuration information to the computing service database nova
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova
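The su invocation above is worth unpacking: -s /bin/sh overrides the nova account's login shell (normally /sbin/nologin) and -c runs a single command as that user, so nova-manage writes files owned by nova. A runnable stand-in for the -c semantics (real su needs root and the nova account, so this sketch runs as the current user):

```shell
# run_as mimics only the shape of `su -s /bin/sh -c "<cmd>" <user>`:
# the user argument is accepted for illustration and the command runs as
# the current user via sh -c. A sketch, not a su replacement.
run_as() {
  user="$1"            # in the real command, su switches to this account
  sh -c "$2"           # -c semantics: run a single command string
}
out=$(run_as nova 'echo db-sync-done')
echo "$out"   # -> db-sync-done
```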
Restart computing services
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Deployment node: Compute Node
nova-compute needs to be installed on the Compute node.
Note: the following steps are performed on the Compute node
Install and configure the Compute service components
Install nova-compute components
yum install openstack-nova-compute
Modify the configuration file sudo vi /etc/nova/nova.conf
① Configure RabbitMQ message queue access at [DEFAULT] and [oslo_messaging_rabbit]
Note: replace RABBIT_PASS with the actual password chosen earlier
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
② Configure identity service access at [DEFAULT] and [keystone_authtoken]
Note: replace NOVA_PASS with the actual password chosen earlier
Note: comment out or delete any other content in [keystone_authtoken]
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Configure my_ip at [DEFAULT] as the Management Network port address of the Compute node
my_ip = 10.0.0.31
Enable network service support at [DEFAULT]
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Configure remote control access at [vnc]
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
Note: the VNC server listens on all addresses, while the VNC proxy client uses only the Compute node's Management Network address. The base URL is the address at which a browser reaches the remote console of the Compute node (if the browser cannot resolve the hostname controller, replace it with the corresponding IP address).
Configure the image service API at [glance]
api_servers = http://controller:9292
Configure lock_path at [oslo_concurrency]
lock_path = /var/lib/nova/tmp
Complete the installation and restart the computing service
① Check whether the node supports virtual machine hardware acceleration
egrep -c '(vmx|svm)' /proc/cpuinfo
If the returned result is greater than or equal to 1, it is supported. No additional configuration is required.
If the result is 0, hardware acceleration is not supported and additional configuration is required: modify the configuration file (sudo vi /etc/nova/nova-compute.conf) and set the libvirt option to use QEMU instead of KVM.
[libvirt]
virt_type = qemu
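The branch this check drives can be sketched as follows. The cpuinfo text is simulated here so the outcome is deterministic; on a real host the egrep above reads /proc/cpuinfo directly:

```shell
# Simulated flags line with no vmx/svm, i.e. no hardware acceleration.
cpuinfo='flags : fpu vme de pse tsc msr'
count=$(printf '%s\n' "$cpuinfo" | grep -E -c '(vmx|svm)') || true
if [ "$count" -ge 1 ]; then
  virt_type=kvm        # acceleration present: keep KVM
else
  virt_type=qemu       # no acceleration: fall back to QEMU
fi
echo "$virt_type"   # -> qemu
```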
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Verify that the computing service is installed correctly
Note: the following steps need to be performed on the Controller node
① Set the OpenStack admin user environment variables
source admin-openrc
② Print the service component list to verify that each process started and registered successfully.
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2016-09-03T09:29:56.000000 |
|  2 | nova-conductor   | controller | internal | enabled | up    | 2016-09-03T09:29:56.000000 |
|  3 | nova-scheduler   | controller | internal | enabled | up    | 2016-09-03T09:29:56.000000 |
|  7 | nova-compute     | compute    | nova     | enabled | up    | 2016-09-03T09:29:56.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
4.5 Network Service configuration (Networking Service Neutron)
Deployment node: Controller Node
Create a neutron database in MariaDB (MySQL)
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
Create the network service credentials and API endpoints
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Install and configure neutron-server service components
yum install openstack-neutron openstack-neutron-ml2
Modify the configuration file sudo vi /etc/neutron/neutron.conf
vi /etc/neutron/neutron.conf
[database]
connection = mysql://neutron:neutron@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
verbose = True
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the ML2 plug-in, the Linux bridge agent, and the metadata proxy
The ML2 plugin uses the Linux bridge mechanism to build layer-2 virtual network facilities (bridging and switching) for OpenStack instances. Modify the configuration file sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan    # after ML2 is configured, removing this option causes database inconsistency
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security    # enable the port security extension driver
[ml2_type_flat]
flat_networks = public    # the provider virtual network uses flat networks
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = True    # improves the efficiency of security group rules
Write configuration information to the neutron database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Configure the Compute service to use the network
vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = metadata
Create a symbolic link
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
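The link simply exposes the ML2 settings at the generic /etc/neutron/plugin.ini path that the service scripts read. The same ln -s behavior, sketched with temporary files instead of the real /etc paths:

```shell
# Create a stand-in config file and a symlink to it, then confirm the link
# resolves back to the original. Paths here are temporary, for illustration.
src=$(mktemp)
printf '[ml2]\n' > "$src"
link="${src}.plugin.ini"
ln -s "$src" "$link"
target=$(readlink "$link")
echo "$target"
```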
Restart the service
systemctl restart openstack-nova-api.service
systemctl restart neutron-server.service
systemctl start neutron-metadata-agent.service
systemctl enable neutron-server.service
systemctl enable neutron-metadata-agent.service
Deployment node: Network Node
Deploy the components on the Network node:
There are two network service architectures: Provider Networks and Self-Service Networks, both briefly introduced at the beginning of this article. This article deploys the Self-Service Networks mode.
Reference document: Deploying the Networking Service with the Self-Service Networks architecture
yum install openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
Configure common service components
The configuration of common components includes the authentication mechanism and the message queue. Modify the configuration file sudo vi /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
Configure the Linux bridge agent. The Linux bridge agent builds layer-2 virtual network facilities for instances and can manage security groups. Modify the configuration file sudo vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# note the name of the bridged NIC
[linux_bridge]
physical_interface_mappings = public:eth0
[vxlan]
enable_vxlan = True
local_ip = 10.0.0.21    # physical public network interface address of this node
l2_population = True
[agent]
prevent_arp_spoofing = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
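The physical_interface_mappings value pairs a provider network name with a NIC (public:eth0 above). How the name:interface pair splits, sketched in shell:

```shell
# Split "public:eth0" into the provider network name and the NIC,
# the way the linuxbridge agent interprets the mapping.
mapping='public:eth0'
provider=${mapping%%:*}   # text before the first colon
nic=${mapping#*:}         # text after the first colon
echo "$provider -> $nic"   # -> public -> eth0
```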
Configure layer 3 network proxy
The L3 (Layer-3) agent provides routing and NAT services for self-service networks.
Modify the configuration file sudo vi /etc/neutron/l3_agent.ini, and configure the Linux bridge interface driver and the external bridge at [DEFAULT].
vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
# Note: the external_network_bridge value is deliberately left empty, which allows multiple external networks to share a single agent.
verbose = True
Modify the configuration file sudo vi /etc/neutron/dhcp_agent.ini and, at [DEFAULT], configure the Linux bridge interface driver and the Dnsmasq DHCP driver, enabling isolated metadata so that provider-network instances can access virtual network metadata.
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
Configure metadata proxy
The metadata agent provides instances with configuration information such as credentials.
Modify the configuration file sudo vi /etc/neutron/metadata_agent.ini to configure the metadata host and the shared secret at [DEFAULT].
Note: replace METADATA_SECRET with the actual password chosen earlier
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = metadata
systemctl start neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl enable neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
Deployment node: Compute Node
Install the Network Services component
[root@compute ~]# yum install openstack-neutron-linuxbridge
Configure common components
The configuration of common components includes authentication mechanism, message queue and plug-in.
[root@compute ~]# cat /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
Configure network settings
Configure the Linux bridge agent and modify the configuration file sudo vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@compute ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = True
local_ip = 10.0.0.31
l2_population = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the computing service to access the network
Modify the configuration file sudo vi / etc/nova/nova.conf
[root@compute ~]# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the service
systemctl restart openstack-nova-compute.service
systemctl restart neutron-linuxbridge-agent.service
systemctl enable neutron-linuxbridge-agent.service
Verification
[root@controller ~]# neutron ext-list
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 0e1c9f6f-a56b-40d1-b43e-91754cabcf75 | Metadata agent     | network    |                   | :-)   | True           | neutron-metadata-agent    |
| 24c8daec-b495-48ba-b70d-f7d103c8cda1 | Linux bridge agent | compute    |                   | :-)   | True           | neutron-linuxbridge-agent |
| 2e93bf03-e095-444d-8f74-0b832db4a0be | Linux bridge agent | network    |                   | :-)   | True           | neutron-linuxbridge-agent |
| 456c754a-d2c0-4ce5-8d9b-b0089fb77647 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| 8a1c7895-fc44-407f-b74b-55bb1b4519d8 | DHCP agent         | network    | nova              | :-)   | True           | neutron-dhcp-agent        |
| 93ad18bf-d961-4d00-982c-6c617dbc0a5e | L3 agent           | network    | nova              | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
4.6 Dashboard Service configuration (Dashboard Service Horizon)
The dashboard is a Web interface that enables cloud administrators and users to manage a wide variety of OpenStack resources and services. This article deploys the Dashboard service on the Apache Web server.
Deployment node: Controller Node
yum install openstack-dashboard
Modify the configuration file sudo vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*',]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_***': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "TIME_ZONE"  # replace with the actual time zone identifier, e.g. "Asia/Shanghai"
systemctl restart httpd.service memcached.service