Openstack Network Service Neutron [V]
Time: November 28, 2016
Neutron introduction
Neutron is one of the core components of OpenStack. In the early days there was no Neutron project: networking was provided by nova-network, and Neutron only appeared in later releases.
Openstack Networking
Network:
In a physical environment, we connect multiple computers with switches or hubs to form a network. In the world of Neutron, a network likewise connects many different cloud hosts (virtual machines).
Subnet:
In a physical environment, a network can be divided into logical subnets. In the world of Neutron, subnets likewise belong to a network.
Port:
In a physical environment there are many ports, such as switch ports, through which computers connect to a subnet or network. In the world of Neutron, a port likewise belongs to a subnet, and each NIC of a cloud host corresponds to a port.
Router:
In a physical network, communication between different networks or different logical subnets must pass through a router. A router plays the same role in Neutron: it connects different networks or subnets.
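To make the mapping concrete, here is a minimal sketch of how these four objects are typically created with the Mitaka-era neutron CLI once the service is up; the names flat-net, demo-subnet and demo-router are hypothetical, chosen only for illustration:
[root@linux-node1 ~]# neutron net-create flat-net                             # a network
[root@linux-node1 ~]# neutron subnet-create flat-net 192.168.56.0/24 --name demo-subnet    # a subnet of that network
[root@linux-node1 ~]# neutron router-create demo-router                       # a router
[root@linux-node1 ~]# neutron router-interface-add demo-router demo-subnet    # attaching the subnet creates a port automatically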
Introduction to Neutron Architecture
Neutron, like Nova, is split between the control node and the compute node.
The default OpenStack network is a single flat network (the virtual machines and the host are on the same network segment), which the official documentation calls the provider network.
Installation
We created the neutron database at the very beginning, and we have also created the keystone user for it.
Configure network options
You can deploy the network service using one of two architectures, option 1 or option 2.
Option 1 deploys the simplest possible architecture, which only allows instances to attach to the public (provider) network. There are no private (self-service) networks, routers, or floating IP addresses, and only the admin or other privileged users can manage the public network.
Option 2 adds layer-3 services to option 1 so that instances can attach to private (self-service) networks. The demo or other unprivileged users can manage their own private networks, including the routers that connect them to the public network. Floating IP addresses additionally let instances on a private network reach external networks such as the Internet.
A typical private network is built as an overlay. Overlay protocols such as VXLAN add extra headers, which increase overhead and reduce the space available for payload and user data. Without knowing about the overlay, an instance will try to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. The network service automatically provides instances with the correct MTU value via DHCP; however, some cloud images do not use DHCP or ignore the DHCP MTU option, and so require configuration through metadata or a script.
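For reference, a common way to push a smaller MTU to instances on a VXLAN overlay is through the DHCP agent's dnsmasq options. A sketch, assuming the usual ~50 bytes of VXLAN overhead (hence 1450):
[root@linux-node1 ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
[root@linux-node1 ~]# vim /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1450    # DHCP option 26 is the interface MTU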
Let's configure the public network first.
Install components on the control node
[root@linux-node1 ~]# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Tip: a small difference between neutron and the other components is that you cannot synchronize the database immediately after configuring it; the sync also depends on the other configuration files.
Edit the /etc/neutron/neutron.conf file and complete the following actions.
In the [database] section, configure database access:
[root@linux-node1 ~]# vim /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:neutron@192.168.56.11/neutron    # line 684
Tip: there is no need to synchronize the database yet; 684 is the line number in the file.
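If your copy of the file differs, you can locate the section by line number instead of scrolling; a quick check:
[root@linux-node1 ~]# grep -n '^\[database\]' /etc/neutron/neutron.conf    # prints the matching line number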
In the [DEFAULT] and [keystone_authtoken] sections, configure authentication service access
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
Configure RabbitMQ (message queue) access:
[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = 192.168.56.11
rabbit_userid = openstack
rabbit_password = openstack
Neutron has two core configuration items.
Enable the ML2 plug-in and disable the other plug-ins:
[DEFAULT]
...
core_plugin = ml2
service_plugins =
Tip: leaving nothing after the service_plugins equals sign disables the other plug-ins.
Configure the network service to notify compute nodes of network topology changes (nova-related configuration):
[DEFAULT]
...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
Tip: put simply, nova is notified whenever a port changes.
The [nova] section is effectively another keystone configuration:
[nova]
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
Configure lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Summary of neutron configuration:
[root@linux-node1 ~]# grep '^[a-z]' /etc/neutron/neutron.conf
auth_strategy = keystone                    # authenticate with keystone
core_plugin = ml2                           # use the ML2 plug-in
service_plugins =                           # no other plug-ins
notify_nova_on_port_status_changes = true   # notify nova on port status changes
notify_nova_on_port_data_changes = true     # notify nova on port data changes
rpc_backend = rabbit                        # use rabbitmq
connection = mysql+pymysql://neutron:neutron@192.168.56.11/neutron    # database connection address
auth_uri = http://192.168.56.11:5000        # neutron keystone configuration
auth_url = http://192.168.56.11:35357       # neutron keystone configuration
memcached_servers = 192.168.56.11:11211     # neutron keystone configuration
auth_type = password                        # neutron keystone configuration
project_domain_name = default               # neutron keystone configuration
user_domain_name = default                  # neutron keystone configuration
project_name = service                      # neutron keystone configuration
username = neutron                          # neutron keystone configuration
password = neutron                          # neutron keystone configuration
auth_url = http://192.168.56.11:35357       # neutron nova configuration
auth_type = password                        # neutron nova configuration
project_domain_name = default               # neutron nova configuration
user_domain_name = default                  # neutron nova configuration
region_name = RegionOne                     # neutron nova configuration
project_name = service                      # neutron nova configuration
username = nova                             # neutron nova configuration
password = nova                             # neutron nova configuration
lock_path = /var/lib/neutron/tmp            # lock path
rabbit_host = 192.168.56.11                 # rabbitmq configuration
rabbit_userid = openstack                   # rabbitmq configuration
rabbit_password = openstack                 # rabbitmq configuration
Configure Modular Layer 2 (ML2)
The ML2 plug-in uses the Linux bridge mechanism to build a layer-2 virtual network infrastructure for instances.
Edit the configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.
Choice of driver
[ml2]
type_drivers = flat,vlan,gre,vxlan,geneve
Set which mechanism drivers are used to create networks:
[ml2]
mechanism_drivers = linuxbridge,openvswitch
Tip: more than one driver can be listed here, whether or not it is actually used.
Disable private (self-service) networks:
[ml2]
tenant_network_types =
Enable the port security extension driver:
[ml2]
extension_drivers = port_security
In the [ml2_type_flat] section, configure the public virtual network as a flat network
[ml2_type_flat]
flat_networks = public    # configure a flat network named public
In the [securitygroup] section, enable ipset to increase the efficiency of security group rules:
[securitygroup]
enable_ipset = true
Tip: ml2_conf supports many network types; we only need to configure the type of network we actually use.
Summary of ML2 plug-in configuration:
[root@linux-node1 ~]# grep '^[a-z]' /etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = flat,vlan,gre,vxlan,geneve     # driver types
tenant_network_types =                        # tenant network types (disabled)
mechanism_drivers = linuxbridge,openvswitch   # network creation plug-ins
extension_drivers = port_security             # enable port security
flat_networks = public                        # flat network named public
enable_ipset = true                           # enable ipset
Configure the Linux bridge agent
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions.
In the [linux_bridge] section, map the public virtual network to the public physical network interface:
[linux_bridge]
physical_interface_mappings = public:eth0
Tip: leave the interface name as-is if it is eth0; otherwise substitute the actual interface name.
In the [vxlan] section, disable VXLAN overlay networking:
[vxlan]
enable_vxlan = false
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# enable the security group and set the firewall driver
The Linuxbridge agent is summarized as follows:
[root@linux-node1 ~]# grep '^[a-z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = public:eth0    # network mapping
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver    # firewall driver
enable_security_group = true                 # enable security groups
enable_vxlan = false                         # disable vxlan
Configure the DHCP agent
Edit the /etc/neutron/dhcp_agent.ini file and complete the following:
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on the public network can reach the metadata service over the network:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver    # virtual interface driver, using Linux bridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq    # DHCP driver; the default is Dnsmasq (a small open-source project)
enable_isolated_metadata = false    # used for pushing routes
Summary of DHCP configuration
[root@linux-node1 ~]# grep '^[a-z]' /etc/neutron/dhcp_agent.ini
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver    # underlying Linux bridge plug-in
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq    # DHCP driver
enable_isolated_metadata = false    # used for pushing routes
Configure the metadata agent
The metadata agent provides configuration information to instances, such as the credentials needed to access them.
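As an illustration, once the metadata agent is working, an instance fetches this information over the well-known link-local address. Run from inside a booted instance, not on the host:
$ curl http://169.254.169.254/latest/meta-data/          # list the available metadata keys
$ curl http://169.254.169.254/latest/meta-data/hostname  # e.g. the instance hostname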
Edit the /etc/neutron/metadata_agent.ini file and complete the following:
In the [DEFAULT] section, configure the metadata host and the shared secret:
[DEFAULT]
nova_metadata_ip = 192.168.56.11            # metadata host
metadata_proxy_shared_secret = abcdocker    # shared secret
Tip: the shared secret is just an arbitrary string.
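Rather than a guessable word like abcdocker, the secret can be any random string; one common way to generate one (the output below is only an example):
[root@linux-node1 ~]# openssl rand -hex 10
be364423c8a37542a79c    # put the same value in both the neutron and nova configurations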
Configure the Compute service to use the network service (nova-api)
Edit the /etc/nova/nova.conf file and complete the following:
In the [neutron] section, configure the access parameters, enable the metadata proxy, and set the password:
[neutron]
url = http://192.168.56.11:9696
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Tip: 9696 is the port of neutron-server
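Once neutron-server has been started (a few steps below), a quick sanity check of that port is to request the API root, which returns a JSON version list; the output here is abbreviated:
[root@linux-node1 ~]# curl http://192.168.56.11:9696/
{"versions": [{"status": "CURRENT", "id": "v2.0", ...}]}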
[neutron]
service_metadata_proxy = true
metadata_proxy_shared_secret = abcdocker    # shared secret
The network service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the link does not exist, create it with the following command:
[root@linux-node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Tip: the soft link points at whichever plug-in configuration file is in use.
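To confirm that the link points where intended:
[root@linux-node1 ~]# ls -l /etc/neutron/plugin.ini
lrwxrwxrwx 1 root root 37 Nov 21 15:30 /etc/neutron/plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini    # date and size will vary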
Synchronize database
[root@linux-node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Note: the database is synchronized this late in the Networking setup because the script needs the completed server and plug-in configuration files.
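A quick way to verify that the sync succeeded is to count the tables it created (using the neutron password configured above):
[root@linux-node1 ~]# mysql -h 192.168.56.11 -u neutron -pneutron -e 'use neutron; show tables;' | wc -l    # expect dozens of tables, not 0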
Restart the Compute API service:
[root@linux-node1 ~]# systemctl restart openstack-nova-api.service
Enable the Networking services to start when the system boots, then start them.
For both network options:
[root@linux-node1 ~]# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
[root@linux-node1 ~]# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
Now neutron also needs to be registered with keystone.
Create a neutron service entity:
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# openstack service create --name neutron \
  --description "OpenStack Networking" network
Create the network service API endpoints:
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
  network public http://192.168.56.11:9696
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
  network internal http://192.168.56.11:9696
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
  network admin http://192.168.56.11:9696
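A quick check that all three endpoints were created:
[root@linux-node1 ~]# openstack endpoint list | grep network    # expect one public, one internal and one admin row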
Check whether neutron is installed successfully
[root@linux-node1 ~]# neutron agent-list
+--------------------------------------+--------------------+---------------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                      | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------------------------+-------------------+-------+----------------+---------------------------+
| b41a9731-2bff-4257-a3e9-91b13f568932 | DHCP agent         | linux-node1.abcdocker.com | nova              | :-)   | True           | neutron-dhcp-agent        |
| de108bab-f33a-4319-8caf-dd5fbda74d7e | Linux bridge agent | linux-node1.abcdocker.com |                   | :-)   | True           | neutron-linuxbridge-agent |
| f8286325-19ad-43ae-a25a-c7c2ceca7aed | Metadata agent     | linux-node1.abcdocker.com |                   | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+---------------------------+-------------------+-------+----------------+---------------------------+
Configure the neutron compute node
Install the components; the compute node here is 192.168.56.12 (linux-node2).
[root@linux-node2 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
Because the compute node's configuration is almost the same as the control node's, we copy the control node's file directly and then modify the copy.
[root@linux-node1 ~]# scp /etc/neutron/neutron.conf root@192.168.56.12:/etc/neutron/
Modify the configuration file permissions on the compute node
[root@linux-node2 ~]# chown -R root:neutron /etc/neutron/neutron.conf
[root@linux-node2 ~]# ll /etc/neutron/neutron.conf
-rw-r----- 1 root neutron 53140 Nov 21 15:13 /etc/neutron/neutron.conf
Compute node settings
[root@linux-node2 ~]# vim /etc/neutron/neutron.conf
#connection =                                   # delete the mysql connection path
[nova]                                          # delete all options under the [nova] tag
#notify_nova_on_port_status_changes = true      # comment out
#notify_nova_on_port_data_changes = true        # comment out
#core_plugin = ml2                              # comment out the plug-in
#service_plugins =                              # comment out
Compare the compute node's file with the control node's
[root@linux-node2 ~]# diff /etc/neutron/neutron.conf /tmp/neutron.conf
30c30
< #core_plugin = ml2
---
> core_plugin = ml2
33c33
< #service_plugins =
---
> service_plugins =
137c137
< #notify_nova_on_port_status_changes = true
---
> notify_nova_on_port_status_changes = true
141c141
< #notify_nova_on_port_data_changes = true
---
> notify_nova_on_port_data_changes = true
684c684
< #connection =
---
> connection = mysql+pymysql://neutron:neutron@192.168.56.11/neutron
936a937,944
> auth_url = http://192.168.56.11:35357
> auth_type = password
> project_domain_name = default
> user_domain_name = default
> region_name = RegionOne
> project_name = service
> username = nova
> password = nova
Tip: it is fine not to comment these out, but commenting them keeps the two environments consistent.
Configure network services for compute nodes
We can directly copy the configuration of the control node and modify it.
[root@linux-node2 ~]# vim /etc/nova/nova.conf
...
[neutron]
url = http://192.168.56.11:9696
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Tip: on the control node we configured neutron options in nova's configuration file and nova options in neutron's configuration file; on the compute node we configure neutron options in nova's configuration file.
Configure Linuxbridge on the compute node
Configure network options
Friendly reminder: this configuration is exactly the same as on the control node, so we copy the control node's /etc/neutron/plugins/ml2/linuxbridge_agent.ini directly.
Copy
[root@linux-node1 ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.56.12:/etc/neutron/plugins/ml2/
root@192.168.56.12's password:
linuxbridge_agent.ini
View
[root@linux-node2 ~]# ll /etc/neutron/plugins/ml2/linuxbridge_agent.ini
-rw-r----- 1 root root 7924 Nov 21 16:26 /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[root@linux-node2 ~]# grep '^[a-z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = public:eth0
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = False
[root@linux-node2 ~]# chown -R root:neutron /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Restart the compute node nova-compute
[root@linux-node2 ~]# systemctl restart openstack-nova-compute.service
Enable the Linux bridge agent to start at boot, then start it:
[root@linux-node2 ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@linux-node2 ~]# systemctl start neutron-linuxbridge-agent.service
Go back to the control node and check:
[root@linux-node1 ~]# source admin-openstack.sh
[root@linux-node1 ~]# neutron agent-list
+--------------------------------------+--------------------+---------------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                      | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+---------------------------+-------------------+-------+----------------+---------------------------+
| b41a9731-2bff-4257-a3e9-91b13f568932 | DHCP agent         | linux-node1.abcdocker.com | nova              | :-)   | True           | neutron-dhcp-agent        |
| de108bab-f33a-4319-8caf-dd5fbda74d7e | Linux bridge agent | linux-node1.abcdocker.com |                   | :-)   | True           | neutron-linuxbridge-agent |
| eb879cc3-ca1d-470b-9fe6-b0e5c2fedf2a | Linux bridge agent | linux-node2.abcdocker.com |                   | :-)   | True           | neutron-linuxbridge-agent |
| f8286325-19ad-43ae-a25a-c7c2ceca7aed | Metadata agent     | linux-node1.abcdocker.com |                   | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+---------------------------+-------------------+-------+----------------+---------------------------+
Tip: if the network interface is not eth0, the agent will not start unless the configuration file is modified accordingly.
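A sketch of checking and fixing the mapping when the interface turns out to be, say, ens33 (a hypothetical name; substitute your own):
[root@linux-node2 ~]# ip addr show    # find the actual interface name
[root@linux-node2 ~]# sed -i 's/public:eth0/public:ens33/' /etc/neutron/plugins/ml2/linuxbridge_agent.ini    # assumes ens33
[root@linux-node2 ~]# systemctl restart neutron-linuxbridge-agent.service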
Troubleshooting routine (an example follows the list):
1. netstat -lntup to confirm the port is listening.
2. openstack service list to make sure the service was created; openstack endpoint list to make sure all three endpoints were created correctly.
3. Set debug = true in the configuration file with vim, restart the service, re-run the command, and check the logs.
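For example, step 1 for neutron-server might look like this (the PID shown is only illustrative):
[root@linux-node1 ~]# netstat -lntup | grep 9696
tcp    0    0 0.0.0.0:9696    0.0.0.0:*    LISTEN    12345/python2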
Mitaka (M release) Chinese documentation: http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/keystone-install.html