OpenStack Deployment (Part 2)


Continuing from the previous article.

Compute service (nova):

Install and configure the controller node:

# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler

Note: one dependency, python-pygments, may be reported as missing here; download and install it manually.

1. Source the admin credentials to gain access to admin-only commands:

# . admin-openrc

2. To create the service credentials, complete these steps:

Create the nova user:

# openstack user create --domain default \
  --password-prompt nova

Add the admin role to the nova user:

# openstack role add --project service --user nova admin

Create the nova service entity:

# openstack service create --name nova \
  --description "OpenStack Compute" compute

Create the Compute service API endpoints:

# openstack endpoint create --region RegionOne \
  compute public http://172.25.33.10:8774/v2.1/%\(tenant_id\)s

# openstack endpoint create --region RegionOne compute internal http://172.25.33.10:8774/v2.1/%\(tenant_id\)s

+--------------+---------------------------------------------+
| Field        | Value                                       |
+--------------+---------------------------------------------+
| enabled      | True                                        |
| id           | 44b3adb6ce2348908abbf4d3f9a52f2b            |
| interface    | internal                                    |
| region       | RegionOne                                   |
| region_id    | RegionOne                                   |
| service_id   | a394a2c40c144d6fb9db567a1105c44a            |
| service_name | nova                                        |
| service_type | compute                                     |
| url          | http://172.25.33.10:8774/v2.1/%(tenant_id)s |
+--------------+---------------------------------------------+

# openstack endpoint create --region RegionOne compute admin http://172.25.33.10:8774/v2.1/%\(tenant_id\)s
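To confirm that all three compute endpoints were registered, it can help to list them again; a quick check along these lines (the grep filter is just a convenience, not part of the official guide):

# openstack endpoint list | grep nova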

Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

1. In the ``[DEFAULT]`` section, enable only the compute and metadata APIs:

[DEFAULT]
enabled_apis = osapi_compute,metadata

In the ``[api_database]`` and ``[database]`` sections, configure database access:

[api_database]
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api

[database]
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
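These connection strings assume the nova and nova_api databases already exist with a nova database user whose password is nova, set up in the style of the previous article. If they do not, a minimal sketch of the usual MariaDB preparation would be:

# mysql -u root -p -e "CREATE DATABASE nova_api; CREATE DATABASE nova;"
# mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';"
# mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';"
# mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';"
# mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';"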

In the "[DEFAULT]" and "[oslo_messaging_rabbit]" sections, configure "RabbitMQ" message queue access:

[DEFAULT]

Rpc_backend = rabbit

[oslo_messaging_rabbit]

Rabbit_host = controller

Rabbit_userid = openstack

Rabbit_password = rabbit

In the "[DEFAULT]" and "[keystone_authtoken]" sections, configure authentication service access

[DEFAULT]

Auth_strategy = keystone

[keystone_authtoken]

Auth_uri = http://172.25.33.10:5000

Auth_url = http://172.25.33.10:35357

Memcached_servers = 172.25.33.10 11211

Auth_type = password

Project_domain_name = default

User_domain_name = default

Project_name = service

Username = nova

Password = nova

In the ``[DEFAULT]`` section, configure ``my_ip`` to use the IP address of the management interface of the controller node (172.25.33.10 in this environment):

[DEFAULT]
my_ip = 172.25.33.10

In the ``[DEFAULT]`` section, enable support for the Networking service:

[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

By default, Compute uses its built-in firewall service. Because Networking provides its own firewall service, you must disable Compute's built-in firewall by setting ``firewall_driver`` to ``nova.virt.firewall.NoopFirewallDriver``.

In the ``[vnc]`` section, configure the VNC proxy to use the management interface IP address of the controller node:

[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

In the "glance" area, configure the location of the image service API:

[glance]

Api_servers = http://controller:9292

In the ``[oslo_concurrency]`` section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Synchronize the Compute databases:

# su -s /bin/sh -c "nova-manage api_db sync" nova

# su -s /bin/sh -c "nova-manage db sync" nova
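If you want to confirm that the sync actually populated the schemas, one informal check (not from the official guide, and assuming the nova database user can connect from this host) is to list the tables directly:

# mysql -u nova -pnova -h 172.25.33.10 -e "SHOW TABLES;" nova | head
# mysql -u nova -pnova -h 172.25.33.10 -e "SHOW TABLES;" nova_api | head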

# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

# grep '^[a-z]' /etc/nova/nova.conf
rpc_backend = rabbit
enabled_apis = osapi_compute,metadata
auth_strategy = keystone
my_ip = 172.25.33.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
debug = true
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
api_servers = http://172.25.33.10:9292
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
lock_path = /var/lib/nova/tmp
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

Install and configure compute nodes:

minion2: 172.25.33.11

Install the package:

# yum install openstack-nova-compute

Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

In the ``[DEFAULT]`` and ``[oslo_messaging_rabbit]`` sections, configure RabbitMQ message queue access:

[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit

In the "[DEFAULT]" and "[keystone_authtoken]" sections, configure authentication service access

[DEFAULT]

Auth_strategy = keystone

[keystone_authtoken]

Auth_uri = http://172.25.33.10:5000

Auth_url = http://172.25.33.10:35357

Memcached_servers = 172.25.33.10 11211

Auth_type = password

Project_domain_name = default

User_domain_name = default

Project_name = service

Username = nova

Password = nova

In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

[DEFAULT]
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on the compute node; here:

my_ip = 172.25.33.11

In the ``[DEFAULT]`` section, enable support for the Networking service:

[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

By default, Compute uses its built-in firewall service. Because Networking provides its own firewall service, disable Compute's built-in firewall by using the nova.virt.firewall.NoopFirewallDriver driver.

In the ``[vnc]`` section, enable and configure remote console access:

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://172.25.33.10:6080/vnc_auto.html

In the "glance" area, configure the location of the image service API:

[glance]

Api_servers = http://172.25.33.10:9292

In the ``[oslo_concurrency]`` section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Configuration missed by the official documentation: without it, nova-compute fails to start with errors such as "oslo_service.service [-] Error starting thread."

or "PlacementNotConfigured: This compute is not configured to talk to the placement service". Add a ``[placement]`` section:

[placement]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
os_region_name = RegionOne
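Note that this only points nova-compute at a placement endpoint; it assumes the Placement API was already registered in Keystone. If it was not, a sketch of the usual registration on the controller would look roughly like this; the placement user name, its password, and port 8778 are assumptions following the standard install guide rather than values taken from this article:

# openstack user create --domain default --password-prompt placement
# openstack role add --project service --user placement admin
# openstack service create --name placement --description "Placement API" placement
# openstack endpoint create --region RegionOne placement public http://172.25.33.10:8778
# openstack endpoint create --region RegionOne placement internal http://172.25.33.10:8778
# openstack endpoint create --region RegionOne placement admin http://172.25.33.10:8778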

Complete the installation

1. Determine whether your compute node supports hardware acceleration for virtual machines:

# egrep -c '(vmx|svm)' /proc/cpuinfo

If this command returns a value of one or greater, the compute node supports hardware acceleration and no additional configuration is needed.

If this command returns zero, the compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM:

# egrep -c '(vmx|svm)' /proc/cpuinfo
0

Edit the ``[libvirt]`` section of the /etc/nova/nova.conf file as follows:

[libvirt]
virt_type = qemu
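As a cross-check (not part of the official procedure), you can also look at whether the KVM kernel modules loaded; on a node without hardware acceleration the vendor-specific module (kvm_intel or kvm_amd) is typically absent:

# lsmod | grep kvm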

2. Start the Compute service and its dependencies, and configure them to start automatically when the system boots:

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

Verify the operation, on the controller node 172.25.33.10.

Source the admin credentials to gain access to admin-only commands:

# . admin-openrc

List the service components to verify that each process was successfully started and registered:

# openstack compute service list

+----+------------------+----------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                 | Zone     | Status  | State | Updated At                 |
+----+------------------+----------------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor   | server10.example     | internal | enabled | up    | 2017-04-04T14:07:49.000000 |
|  2 | nova-scheduler   | server10.example     | internal | enabled | up    | 2017-04-04T14:07:51.000000 |
|  3 | nova-consoleauth | server10.example     | internal | enabled | up    | 2017-04-04T14:07:50.000000 |
|  6 | nova-compute     | server11.example.com | nova     | enabled | up    | 2017-04-04T14:07:51.000000 |
+----+------------------+----------------------+----------+---------+-------+----------------------------+

Network Services:

Control node:

The OpenStack Networking service (neutron) manages all virtual networking infrastructure (VNI) and the access-layer aspects of the physical networking infrastructure (PNI) in an OpenStack environment. It also lets tenants create advanced virtual network topologies that include services such as firewalls, load balancers, and virtual private networks (VPNs).

Configuration:

1. Source the admin credentials to gain access to admin-only commands:

. admin-openrc

2. To create the service credentials, complete these steps:

Create the ``neutron`` user:

# openstack user create --domain default --password-prompt neutron

Add the ``admin`` role to the ``neutron`` user:

# openstack role add --project service --user neutron admin

Create the ``neutron`` service entity:

# openstack service create --name neutron \
  --description "OpenStack Networking" network

Create the Networking service API endpoints:

# openstack endpoint create --region RegionOne \
  network public http://172.25.33.10:9696

+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0092457b66b84d869d710e84c715219c |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a33565b8fdfa4531963fdbb74245d960 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://172.25.33.10:9696         |
+--------------+----------------------------------+

# openstack endpoint create --region RegionOne network internal http://172.25.33.10:9696

# openstack endpoint create --region RegionOne network admin http://172.25.33.10:9696

This example deploys the provider (public) network option:

Option 1 deploys the simplest possible architecture, which only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only the ``admin`` or other privileged users can manage provider networks.

Option 2 augments option 1 with layer-3 services that support attaching instances to self-service networks. The ``demo`` or other unprivileged users can manage their own self-service networks, including routers that connect self-service and provider networks. Additionally, floating IP addresses allow instances on self-service networks to reach external networks such as the Internet.

# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables

Configure service components

The Networking server component configuration includes the database, authentication mechanism, message queue, topology change notifications, and plug-ins.

Edit the ``/etc/neutron/neutron.conf`` file and complete the following actions:

In the ``[database]`` section, configure database access:

[database]
connection = mysql+pymysql://neutron:neutron@172.25.33.10/neutron

In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins (the provider network option does not use a routing service):

[DEFAULT]
core_plugin = ml2
service_plugins =

In the "[DEFAULT]" and "[oslo_messaging_rabbit]" sections, configure the connection to the "RabbitMQ" message queue:

[DEFAULT]

Rpc_backend = rabbit

[oslo_messaging_rabbit]

Rabbit_host = 172.25.33.10

Rabbit_userid = openstack

Rabbit_password = rabbit

In the "[DEFAULT]" and "[keystone_authtoken]" sections, configure authentication service access:

[DEFAULT]

Auth_strategy = keystone

[keystone_authtoken]

Auth_uri = http://172.25.33.10:5000

Auth_url = http://172.25.33.10:35357

Memcached_servers = 172.25.33.10 11211

Auth_type = password

Project_domain_name = default

User_domain_name = default

Project_name = service

Username = neutron

Password = neutron

In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to notify Compute of network topology changes:

[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[nova]
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

In the ``[oslo_concurrency]`` section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.

Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the following actions:

In the ``[ml2]`` section, enable flat and VLAN networks:

[ml2]
type_drivers = flat,vlan

In the ``[ml2]`` section, disable self-service (tenant) networks:

[ml2]
tenant_network_types =

In the ``[ml2]`` section, enable the Linux bridge mechanism:

[ml2]
mechanism_drivers = linuxbridge

In the ``[ml2]`` section, enable the port security extension driver:

[ml2]
extension_drivers = port_security

In the ``[ml2_type_flat]`` section, configure the provider (public) virtual network as a flat network:

[ml2_type_flat]
flat_networks = provider

In the ``[securitygroup]`` section, enable ipset to improve the efficiency of security group rules:

[securitygroup]
enable_ipset = True

Configure the Linux bridge agent

The Linux bridge agent builds the layer-2 virtual networking for instances and handles security group rules.

Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and complete the following actions:

In the ``[linux_bridge]`` section, map the provider (public) virtual network to the physical network interface:

[linux_bridge]
physical_interface_mappings = public:eth0

Replace the interface name (here ``eth0``) with the underlying physical network interface of your node.

In the ``[vxlan]`` section, disable VXLAN overlay networks:

[vxlan]
enable_vxlan = False

In the ``[securitygroup]`` section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the DHCP agent

The DHCP agent provides DHCP services for virtual networks.

Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following actions:

In the ``[DEFAULT]`` section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach metadata over the network:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

Configure the metadata agent

Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following actions:

In the ``[DEFAULT]`` section, configure the metadata host and the shared secret:

[DEFAULT]
nova_metadata_ip = 172.25.33.10
metadata_proxy_shared_secret = redhat

Configure the Compute service (on the controller node) to use the Networking service

Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

In the ``[neutron]`` section, configure the access parameters, enable the metadata proxy, and set the shared secret:

[neutron]
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = redhat

Complete the installation

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the link does not exist, create it with the following command:

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
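A quick way to confirm the link points where it should (just a sanity check, not from the guide):

# ls -l /etc/neutron/plugin.ini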

Synchronize the database:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

The run ends with "OK" when the migration succeeds.

Restart the Compute API service:

# systemctl restart openstack-nova-api.service

Enable the services to start when the system boots, and start them:

# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

For network option 2, also enable the layer-3 service and set it to start with the system

# systemctl enable neutron-l3-agent.service

# systemctl start neutron-l3-agent.service

Compute nodes:

# yum install openstack-neutron-linuxbridge ebtables ipset

The Networking common component configuration includes the authentication mechanism, message queue, and plug-ins.

Edit the ``/etc/neutron/neutron.conf`` file and complete the following actions:

In the ``[database]`` section, comment out any ``connection`` options, because compute nodes do not access the database directly.

In the "[DEFAULT]" and "[oslo_messaging_rabbit]" sections, configure the connection to the "RabbitMQ" message queue:

[DEFAULT]

Rpc_backend = rabbit

[oslo_messaging_rabbit]

Rabbit_host = 172.25.33.10

Rabbit_userid = openstack

Rabbit_password = rabbit

In the "[DEFAULT]" and "[keystone_authtoken]" sections, configure authentication service access:

[DEFAULT]

Auth_strategy = keystone

[keystone_authtoken]

Auth_uri = http://172.25.33.10:5000

Auth_url = http://172.25.33.10:35357

Memcached_servers = 172.25.33.10 11211

Auth_type = password

Project_domain_name = default

User_domain_name = default

Project_name = service

Username = neutron

Password = neturon

In the ``[oslo_concurrency]`` section, configure the lock path:

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Select the public (provider) network option (you can test the configuration on minion1):

Configure the Linux bridge agent

The Linux bridge agent builds the layer-2 virtual networking for instances and handles security group rules.

Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and complete the following actions:

In the ``[linux_bridge]`` section, map the provider (public) virtual network to the physical network interface:

[linux_bridge]
physical_interface_mappings = public:eth0

In the ``[vxlan]`` section, disable VXLAN overlay networks:

[vxlan]
enable_vxlan = False

In the ``[securitygroup]`` section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

In the ``[neutron]`` section, configure the access parameters:

[neutron]
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Restart the Compute service:

# systemctl restart openstack-nova-compute.service

Enable the Linux bridge agent to start at boot and start it:

# systemctl enable neutron-linuxbridge-agent.service

# systemctl start neutron-linuxbridge-agent.service

Verification:

# neutron ext-list

Neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

+---------------------------+--------------------------------------------------------------+
| alias                     | name                                                         |
+---------------------------+--------------------------------------------------------------+
| default-subnetpools       | Default Subnetpools                                          |
| availability_zone         | Availability Zone                                            |
| network_availability_zone | Network Availability Zone                                    |
| binding                   | Port Binding                                                 |
| agent                     | agent                                                        |
| subnet_allocation         | Subnet Allocation                                            |
| dhcp_agent_scheduler      | DHCP Agent Scheduler                                         |
| tag                       | Tag support                                                  |
| external-net              | Neutron external network                                     |
| flavors                   | Neutron Service Flavors                                      |
| net-mtu                   | Network MTU                                                  |
| network-ip-availability   | Network IP Availability                                      |
| quotas                    | Quota management support                                     |
| provider                  | Provider Network                                             |
| multi-provider            | Multi Provider Network                                       |
| address-scope             | Address scope                                                |
| subnet-service-types      | Subnet service types                                         |
| standard-attr-timestamp   | Resource timestamps                                          |
| service-type              | Neutron Service Type Management                              |
| tag-ext                   | Tag support for resources: subnet, subnetpool, port, router  |
| extra_dhcp_opt            | Neutron Extra DHCP opts                                      |
| standard-attr-revisions   | Resource revision numbers                                    |
| pagination                | Pagination support                                           |
| sorting                   | Sorting support                                              |
| security-group            | security-group                                               |
| rbac-policies             | RBAC Policies                                                |
| standard-attr-description | standard-attr-description                                    |
| port-security             | Port Security                                                |
| allowed-address-pairs     | Allowed Address Pairs                                        |
| project-id                | project_id field enabled                                     |
+---------------------------+--------------------------------------------------------------+

List the agents to verify that the neutron agent was started successfully:

# neutron agent-list

Neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

+--------------------------------------+--------------------+----------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                 | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------+-------------------+-------+----------------+---------------------------+
| 0d135b32-f115-4d2f-8296-27c6590ca08c | DHCP agent         | server10.example     | nova              | :-)   | True           | neutron-dhcp-agent        |
| 6c603475-571a-4bde-a414-b65319388508 | Metadata agent     | server10.example     |                   | :-)   | True           | neutron-metadata-agent    |
| b8667984-0d75-47bf-958b-c886244ff1f7 | Linux bridge agent | server11.example.com |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+----------------------+-------------------+-------+----------------+---------------------------+

Configuration files at a glance:

Control node:

# cat /etc/neutron/neutron.conf

[DEFAULT]
rpc_backend = rabbit
core_plugin = ml2
service_plugins =
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[database]
connection = mysql+pymysql://neutron:neutron@172.25.33.10/neutron

[oslo_messaging_rabbit]
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit

[keystone_authtoken]
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

# grep '^[a-z]' /etc/nova/nova.conf
rpc_backend = rabbit
enabled_apis = osapi_compute,metadata
auth_strategy = keystone
my_ip = 172.25.33.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
api_servers = http://172.25.33.10:9292
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = redhat    (this same secret is used in metadata_agent.ini below)
lock_path = /var/lib/nova/tmp
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[root@server10 ~]# grep '^[a-z]' /etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
flat_networks = provider
enable_ipset = True

[root@server10 ~]# grep '^[a-z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = public:eth0
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_vxlan = False


[root@server10 ~]# grep '^[a-z]' /etc/neutron/dhcp_agent.ini
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True


[root@server10 ~]# grep '^[a-z]' /etc/neutron/metadata_agent.ini
nova_metadata_ip = 172.25.33.10
metadata_proxy_shared_secret = redhat    (must match the shared secret set in nova.conf above)

Compute nodes:

# grep '^[a-z]' /etc/neutron/neutron.conf
rpc_backend = rabbit
auth_strategy = keystone
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
lock_path = /var/lib/neutron/tmp

# grep '^[a-z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = public:eth0
enable_vxlan = False
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

# grep '^[a-z]' /etc/nova/nova.conf
rpc_backend = rabbit
enabled_apis = osapi_compute,metadata
auth_strategy = keystone
my_ip = 172.25.33.11
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
connection = mysql+pymysql://nova:nova@172.25.33.10/nova_api
connection = mysql+pymysql://nova:nova@172.25.33.10/nova
api_servers = http://172.25.33.10:9292
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
virt_type = qemu
url = http://172.25.33.10:9696
auth_url = http://172.25.33.10:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
lock_path = /var/lib/nova/tmp
rabbit_host = 172.25.33.10
rabbit_userid = openstack
rabbit_password = rabbit
auth_uri = http://172.25.33.10:5000
auth_url = http://172.25.33.10:35357
memcached_servers = 172.25.33.10:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
os_region_name = RegionOne
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 172.25.33.11
novncproxy_base_url = http://172.25.33.10:6080/vnc_auto.html

Note: in this lab, every service's password is the same as its service name.

172.25.33.10 is the control node

172.25.33.11 is the compute node

At this point, the basic services are complete and you can create an instance.

Create a virtual network

Public network:

Create the public network:

1. On the controller node, source the admin credentials to gain access to admin-only commands:

# source admin-openrc

2. Create the network:

# neutron net-create --shared --provider:physical_network provider \
  --provider:network_type flat public

Neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

Created a new network:

+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2017-04-09T11:35:39Z                 |
| description               |                                      |
| id                        | 876887d3-2cf3-4253-9804-346f180b6077 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | public                               |
| port_security_enabled     | True                                 |
| project_id                | 7f1f3eae73dc439da7f53c15c634c4e7     |
| provider:network_type     | flat                                 |
| provider:physical_network | provider                             |
| provider:segmentation_id  |                                      |
| revision_number           | 3                                    |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 7f1f3eae73dc439da7f53c15c634c4e7     |
| updated_at                | 2017-04-09T11:35:39Z                 |
+---------------------------+--------------------------------------+

The ``--shared`` option allows all projects to use the virtual network.

View the network and its CIDR:

# neutron net-list

Neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

+--------------------------------------+--------+----------------------------------+-----------------------------------------------------+
| id                                   | name   | tenant_id                        | subnets                                             |
+--------------------------------------+--------+----------------------------------+-----------------------------------------------------+
| 876887d3-2cf3-4253-9804-346f180b6077 | public | 7f1f3eae73dc439da7f53c15c634c4e7 | 6428d4dd-e15d-48b0-995e-45df957f4735 172.25.33.0/24 |
+--------------------------------------+--------+----------------------------------+-----------------------------------------------------+

3. Create a subnet on the network:

# neutron subnet-create --name provider --allocation-pool start=172.25.33.100,end=172.25.33.200 --dns-nameserver 114.114.114.114 --gateway 172.25.33.250 public 172.25.33.0/24

Neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

Created a new subnet:

+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "172.25.33.100", "end": "172.25.33.200"} |
| cidr              | 172.25.33.0/24                                     |
| created_at        | 2017-04-09T11:40:38Z                               |
| description       |                                                    |
| dns_nameservers   | 114.114.114.114                                    |
| enable_dhcp       | True                                               |
| gateway_ip        | 172.25.33.250                                      |
| host_routes       |                                                    |
| id                | 6428d4dd-e15d-48b0-995e-45df957f4735               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | provider                                           |
| network_id        | 876887d3-2cf3-4253-9804-346f180b6077               |
| project_id        | 7f1f3eae73dc439da7f53c15c634c4e7                   |
| revision_number   | 2                                                  |
| service_types     |                                                    |
| subnetpool_id     |                                                    |
| tags              |                                                    |
| tenant_id         | 7f1f3eae73dc439da7f53c15c634c4e7                   |
| updated_at        | 2017-04-09T11:40:38Z                               |
+-------------------+----------------------------------------------------+

Replace ``PROVIDER_NETWORK_CIDR`` with the CIDR of the physical provider network subnet, i.e. the subnet listed above.

Replace DNS_RESOLVER with the IP address of a DNS resolver. In most cases, you can pick one from the host's ``/etc/resolv.conf`` file.

Replace ``GATEWAY`` with the gateway of the public network; a gateway IP address usually ends in ".1". You can also use the IP of the host.
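For reference, this is the generic form of the subnet-create command that the placeholders above refer to (a template; the values from this environment were substituted into the concrete command shown earlier):

# neutron subnet-create --name provider \
  --allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS \
  --dns-nameserver DNS_RESOLVER --gateway GATEWAY \
  public PROVIDER_NETWORK_CIDR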

Create the m1.nano flavor

The smallest default flavor requires 512 MB of memory per instance. For environments whose compute nodes have less than 4 GB of memory, we recommend creating the ``m1.nano`` flavor, which needs only 64 MB per instance. Use this flavor only for testing, for example to boot the CirrOS image.

# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+

Generate a key pair

Most cloud images support public key authentication rather than traditional password authentication. Before launching an instance, you must add a public key to the Compute service.

Source the ``demo`` tenant credentials:

$ . demo-openrc

Generate and add a key pair:

$ ssh-keygen -q -N ""

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | 7f:a9:fd:62:e4:2b:87:84:27:f1:ce:d4:c1:89:f3:b8 |
| name        | mykey                                           |
| user_id     | 251ad20a4d754dc4a104a3f5b8159142                |
+-------------+-------------------------------------------------+

Verify the addition of the public key:

# openstack keypair list

+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 7f:a9:fd:62:e4:2b:87:84:27:f1:ce:d4:c1:89:f3:b8 |
+-------+-------------------------------------------------+

Add security group rules

By default, the ``default`` security group applies to all instances and includes firewall rules that deny remote access to instances. For Linux images such as CirrOS, we recommend allowing at least ICMP (ping) and secure shell (SSH).

Add rules to the default security group.

Allow ICMP (ping):

# openstack security group rule create --proto icmp default

+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2017-04-09T11:46:06Z                 |
| description       |                                      |
| direction         | ingress                              |
| ether_type        | IPv4                                 |
| id                | 5a168a4b-7e2a-40ee-8302-d19fbb7dda6d |
| name              | None                                 |
| port_range_max    | None                                 |
| port_range_min    | None                                 |
| project_id        | 45a1b89bc5de479e8d3e04eae314ee88     |
| protocol          | icmp                                 |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 1                                    |
| security_group_id | eb93c9e4-c2fd-45fc-806c-d1640ac3bf2e |
| updated_at        | 2017-04-09T11:46:06Z                 |
+-------------------+--------------------------------------+

Allow secure shell (SSH) access:

[root@server10 ~]# openstack security group rule create --proto tcp --dst-port 22 default

+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2017-04-09T11:46:34Z                 |
| description       |                                      |
| direction         | ingress                              |
| ether_type        | IPv4                                 |
| id                | 26a91aee-5cd7-4c4d-acc6-104b7be0bc59 |
| name              | None                                 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| project_id        | 45a1b89bc5de479e8d3e04eae314ee88     |
| protocol          | tcp                                  |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 1                                    |
| security_group_id | eb93c9e4-c2fd-45fc-806c-d1640ac3bf2e |
| updated_at        | 2017-04-09T11:46:34Z                 |
+-------------------+--------------------------------------+

Create an instance on the public network

A flavor specifies a virtual resource allocation profile for an instance, including processor, memory, and storage.

List the available types:

# openstack flavor list

+----+---------+-----+------+-----------+-------+-----------+
| ID | Name    | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+-----+------+-----------+-------+-----------+
| 0  | m1.nano |  64 |    1 |         0 |     1 | True      |
+----+---------+-----+------+-----------+-------+-----------+

A "cannot allocate memory" error occurred at this point because the virtual machine hosting the node had too little memory.

List available images:

# openstack image list

+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 2ed41322-bbd2-45b0-8560-35af76041798 | cirros | active |
+--------------------------------------+--------+--------+

List the available networks:

# openstack network list

+--------------------------------------+--------+--------------------------------------+
| ID                                   | Name   | Subnets                              |
+--------------------------------------+--------+--------------------------------------+
| 876887d3-2cf3-4253-9804-346f180b6077 | public | 6428d4dd-e15d-48b0-995e-45df957f4735 |
+--------------------------------------+--------+--------------------------------------+

This instance uses the ``public`` provider network. You must reference this network by ID rather than by name.

List the available security groups:

# openstack security group list

+--------------------------------------+---------+------------------------+---------+
| ID                                   | Name    | Description            | Project |
+--------------------------------------+---------+------------------------+---------+
| eb93c9e4-c2fd-45fc-806c-d1640ac3bf2e | default | Default security group |         |
+--------------------------------------+---------+------------------------+---------+

Create an instance

Launch the instance:

Replace ``PUBLIC_NET_ID`` with the ID of the ``public`` provider network.

# openstack server create --flavor m1.nano --image cirros --nic net-id=876887d3-2cf3-4253-9804-346f180b6077 --security-group default --key-name mykey public-instance

+-----------------------------+-----------------------------------------------+
| Field                       | Value                                         |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                        |
| OS-EXT-AZ:availability_zone |                                               |
| OS-EXT-STS:power_state      | NOSTATE                                       |
| OS-EXT-STS:task_state       | scheduling                                    |
| OS-EXT-STS:vm_state         | building                                      |
| OS-SRV-USG:launched_at      | None                                          |
| OS-SRV-USG:terminated_at    | None                                          |
| accessIPv4                  |                                               |
| accessIPv6                  |                                               |
| addresses                   |                                               |
| adminPass                   | nJ5gwMuEG4vN                                  |
| config_drive                |                                               |
| created                     | 2017-04-09T12:11:15Z                          |
| flavor                      | m1.nano (0)                                   |
| hostId                      |                                               |
| id                          | 9ddc6c6b-4847-47ae-91de-8cd7a607c212          |
| image                       | cirros (2ed41322-bbd2-45b0-8560-35af76041798) |
| key_name                    | mykey                                         |
| name                        | public-instance                               |
| progress                    | 0                                             |
| project_id                  | 45a1b89bc5de479e8d3e04eae314ee88              |
| properties                  |                                               |
| security_groups             | name='default'                                |
| status                      | BUILD                                         |
| updated                     | 2017-04-09T12:11:16Z                          |
| user_id                     | 251ad20a4d754dc4a104a3f5b8159142              |
| volumes_attached            |                                               |
+-----------------------------+-----------------------------------------------+

Check the status of the instance

# openstack server list

+--------------------------------------+-----------------+--------+----------+------------+
| ID                                   | Name            | Status | Networks | Image Name |
+--------------------------------------+-----------------+--------+----------+------------+
| 9ddc6c6b-4847-47ae-91de-8cd7a607c212 | public-instance | BUILD  |          | cirros     |
+--------------------------------------+-----------------+--------+----------+------------+

When the build process completes successfully, the status changes from ``BUILD`` to ``ACTIVE``.

Use the virtual console to access the instance

Get the Virtual Network Computing (VNC) session URL of your instance and access it from the web browser:
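A sketch of the usual way to obtain that URL with the openstack client (the console-type flag may vary slightly by client version):

# openstack console url show --novnc public-instance

Opening the returned URL in a web browser gives console access to public-instance.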
