2025-03-26 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
Part IX Overview of the Network Service neutron
I. neutron architecture
OpenStack's network service, neutron, is the most complex part of OpenStack. Its basic architecture is a central service (neutron-server) plus a variety of plugins and agents, which use different network providers (such as Linux Bridge or Open vSwitch (OVS)) to implement various network architectures. On top of this, neutron provides instances with network resources such as networks, subnets, ports and firewalls.
The following figure shows the basic architecture of neutron
It can be seen that neutron adopts a distributed architecture, and multiple components (sub-services) work together to provide network services.
Neutron server
Provide OpenStack network API, receive the request, and call Plugin to process the request.
The following is the structure diagram of neutron server.
Neutron server consists of two parts: providing the API service and calling the plugins.
Core API
Provide RESTful API to manage network, subnet and port.
Extension API
Provide RESTful API to manage router, load balance, firewall and other resources.
Common Service
Authenticate and validate API requests.
Neutron Core
The core handler of Neutron server handles the request by calling the corresponding Plugin.
Core Plugin API
Defines the abstract function set of Core Plugin; Neutron Core calls the corresponding Core Plugin through this API.
Extension Plugin API
Defines the abstract function set of Service Plugin; Neutron Core calls the corresponding Service Plugin through this API.
Core Plugin
Implements the Core Plugin API: maintains the state of networks, subnets and ports in the database, and calls the corresponding agent to perform the related operations on the network provider, such as creating a network.
Service Plugin
Implements the Extension Plugin API: maintains the state of routers, load balancers, security groups and other resources in the database, and calls the corresponding agent to perform the related operations on the network provider, such as creating a router.
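The division of labour described above (plugins persist state in the database, then notify agents) can be sketched in Python. This is a toy illustration with invented class names, not actual Neutron code:

```python
from abc import ABC, abstractmethod

class CorePluginAPI(ABC):
    """Abstract function set every core plugin must implement (in the
    spirit of Neutron's Core Plugin API; the name is invented here)."""
    @abstractmethod
    def create_network(self, name): ...

class FakeAgent:
    """Toy agent: in real Neutron it would configure the network
    provider (e.g. linux bridge) on its node."""
    def __init__(self):
        self.handled = []
    def handle(self, op, resource):
        self.handled.append((op, resource["name"]))

class FakeCorePlugin(CorePluginAPI):
    """Toy core plugin: persists state, then notifies the agent."""
    def __init__(self, db, agent):
        self.db, self.agent = db, agent
    def create_network(self, name):
        net = {"id": len(self.db) + 1, "name": name}
        self.db[net["id"]] = net                  # maintain state in the DB
        self.agent.handle("create_network", net)  # delegate to the agent
        return net

db, agent = {}, FakeAgent()
plugin = FakeCorePlugin(db, agent)
net = plugin.create_network("demo-net")
print(net, agent.handled)
```

The key point the sketch shows is the split of responsibility: the plugin owns the database record, while the agent owns the node-local configuration.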
Plugin
Processes requests from neutron server, maintains the state of the OpenStack logical network, and calls the Agent to handle requests. The plugin decides what network to provide (the "what").
Plugin is divided into two categories by function: core plugin and service plugin.
Core plugin: maintains the information of neutron's network, subnet and port resources. The agents corresponding to the core plugin include linux bridge, OVS, etc.
Service plugin: provides dhcp, routing, firewall, load balance and other services, as well as corresponding agent.
Agent
Processes requests from the Plugin and implements the various network functions on the network provider. The agent decides how to provide the network (the "how").
Network provider
Virtual or physical network devices that provide network services, such as Linux Bridge, Open vSwitch, etc.
Queue
Neutron Server, Plugin and Agent communicate and invoke one another through the message queue (Messaging Queue).
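The loose coupling through the message queue can be illustrated with a toy in-memory queue. This is a simplified sketch; real Neutron uses an AMQP broker such as RabbitMQ via oslo.messaging, and the message format here is invented:

```python
from collections import deque

class MessageQueue:
    """Toy stand-in for the AMQP queue neutron components talk through."""
    def __init__(self):
        self.q = deque()
    def publish(self, msg):
        self.q.append(msg)
    def consume(self):
        return self.q.popleft() if self.q else None

# neutron-server / plugin side: cast an RPC message onto the queue
mq = MessageQueue()
mq.publish({"method": "network_create", "args": {"name": "demo"}})

# agent side: pick up the message and act on it
msg = mq.consume()
print(msg["method"])  # → network_create
```

Because neither side calls the other directly, servers, plugins and agents can run on different hosts and restart independently.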
Database
Store the network status information of OpenStack, including Network, Subnet, Port, Router, etc.
II. The working principle of neutron
Suppose linux bridge is used as the network provider in neutron to provide network services; how is this implemented?
According to the hierarchical model of neutron server above, using this network provider requires installing two components: the linux bridge core plugin and the linux bridge agent.
Linux bridge core plugin: runs with neutron server to implement core plugin API. Responsible for maintaining database information and informing linux bridge agent to achieve specific network functions.
Linux bridge agent: receives requests from plugin and implements neutron network functions by configuring linux bridge on the host. It is usually located on a computing node or a network node.
By the same token, if the network provider you are using is open vSwitch, you only need to install open vswitch plugin and open vswitch agent.
It can be seen that the plugin, agent and network provider are used together. For example, if the network provider is linux bridge, then linux bridge's plugin and agent must be used; if the network provider is replaced with OVS or a physical switch, the plugin and agent must be replaced as well.
This creates a problem: every network provider's plugin has to implement a very similar set of database access code.
To solve this problem, neutron implements a plugin called ML2 (Modular Layer 2), which abstracts and encapsulates the functions of plugin.
With ML2 plugin, various network provider do not need to develop their own plugin, just need to develop the corresponding driver for ML2.
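The idea can be sketched in Python: the shared database logic lives once in the ML2 plugin, and each provider contributes only a small driver. The `create_network_postcommit` hook name mirrors ML2's real mechanism-driver API, but the classes here are a simplified illustration, not Neutron code:

```python
class MechanismDriver:
    """Provider-specific part; only this differs per network provider."""
    def create_network_postcommit(self, net):
        raise NotImplementedError

class LinuxBridgeDriver(MechanismDriver):
    def create_network_postcommit(self, net):
        return f"linuxbridge: set up bridge for {net['name']}"

class OVSDriver(MechanismDriver):
    def create_network_postcommit(self, net):
        return f"ovs: set up bridge for {net['name']}"

class ML2Plugin:
    """The shared database-access code lives here exactly once."""
    def __init__(self, drivers):
        self.db, self.drivers = {}, drivers
    def create_network(self, name):
        net = {"id": len(self.db) + 1, "name": name}
        self.db[net["id"]] = net          # shared persistence logic
        return [d.create_network_postcommit(net) for d in self.drivers]

ml2 = ML2Plugin([LinuxBridgeDriver(), OVSDriver()])
results = ml2.create_network("net1")
print(results)
```

Note that both drivers can be loaded at once, which is exactly how ML2 lets different nodes use different layer 2 technologies.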
3. ML2 (Modular Layer 2) plugin
Modular Layer 2 (ML2) is a core plugin introduced by Neutron in the Havana (H) release to replace the original linux bridge plugin and open vswitch plugin, and it has been used ever since.
ML2 provides a framework that allows multiple Layer 2 network technologies to be used simultaneously in OpenStack networks, and different nodes can use different network implementation mechanisms.
With ML2 plugin, linux bridge agent, open vswitch agent, hyper-v agent and other third-party agent can be deployed on different nodes.
ML2 abstracts and models the layer 2 network and introduces the type driver and the mechanism driver. These two kinds of driver decouple the network types Neutron supports (type) from the mechanisms that realize those network types (mechanism). As a result, ML2 is very flexible and easy to extend, and can support a variety of types and mechanisms.
Type driver
Each network type supported by Neutron has a corresponding ML2 type driver. Type driver is responsible for maintaining the status of the network type, performing verification, creating the network, and so on.
The network types supported by ML2 include local, flat, vlan, vxlan and gre.
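A typical ML2 configuration file (`/etc/neutron/plugins/ml2/ml2_conf.ini`) selects which type drivers and mechanism drivers to load; the values below are illustrative, including the physical network name `physnet1` and the VLAN range:

```ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge,l2population

[ml2_type_vlan]
# physical network label and the VLAN ID range available to tenants
network_vlan_ranges = physnet1:100:200
```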
Mechanism Driver
Every network mechanism supported by Neutron has a corresponding ML2 mechanism driver. The mechanism driver takes the network state maintained by the type driver and ensures it is correctly realized on the corresponding network device.
There are three types of mechanism driver:
Agent-based: including linux bridge, open vswitch, etc.
Controller-based: including OpenDaylight, VMWare NSX, etc.
Based on physical switches: including Cisco Nexus, Arista, Mellanox, etc.
For example, to create a network vlan100 whose type driver is vlan and whose mechanism driver is linux bridge:
The vlan type driver ensures that the vlan100 information is saved to the neutron database, including the network name, VLAN ID, etc.; the linux bridge mechanism driver ensures that the linux bridge agent on the node creates a VLAN device with ID 100 on the physical network card and a bridge device, and bridges the two.
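The agent's part of the vlan100 example can be sketched via its device-naming logic. The `brq` prefix plus the first 11 characters of the network UUID follows the linux bridge agent's actual naming convention, but this function is an illustration, not agent code:

```python
def vlan_devices(physical_if, vlan_id, network_id):
    """Names of the devices the linux bridge agent would create
    for a VLAN network on this node."""
    vlan_if = f"{physical_if}.{vlan_id}"  # VLAN sub-interface, e.g. eth1.100
    bridge = f"brq{network_id[:11]}"      # bridge named after the network UUID
    return vlan_if, bridge

print(vlan_devices("eth1", 100, "3fcdd7c8-54f0-4b1f"))
```

The agent then attaches the VLAN sub-interface to the bridge, so every instance plugged into that bridge is on VLAN 100.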
The ML2 mechanism driver of linux bridge and open vswitch is used to configure virtual switches on each node.
IV. Service plugin/agent
Service Plugin and its Agent provide richer extended functions, including dhcp, routing, load balance,firewall, etc.
DHCP
The DHCP agent provides DHCP services to instances through dnsmasq.
Routing
The L3 agent can create routers for projects (tenants) and provide routing between neutron subnets. By default, the routing function is implemented through iptables.
Firewall
L3 agent can configure firewall policies on router to provide network security protection.
Security Group
Another security-related feature is the Security Group, which is also implemented through iptables.
The difference between Firewall and Security Group:
Firewall security policies are applied on the router and protect all networks of a project.
Security Group security policies are applied on the instance and protect a single instance.
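To make the iptables connection concrete, here is a hypothetical helper that renders a security-group rule as an iptables command string. Real Neutron manages dedicated per-port chains rather than emitting commands like this; the rule format is invented for the example:

```python
def sg_rule_to_iptables(chain, rule):
    """Render one (hypothetical) security-group rule as an iptables
    command string appended to the given chain."""
    parts = ["iptables", "-A", chain, "-p", rule["protocol"]]
    if "port" in rule:
        parts += ["--dport", str(rule["port"])]
    parts += ["-j", "ACCEPT" if rule["allow"] else "DROP"]
    return " ".join(parts)

# allow inbound SSH to an instance
print(sg_rule_to_iptables("INPUT", {"protocol": "tcp", "port": 22, "allow": True}))
# → iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```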
Load Balance
By default, Neutron provides load balancing for multiple instances in a project through HAProxy.
V. Network resources managed by neutron
Neutron can create, delete, and modify the following network resources:
Network
Network is an isolated layer 2 broadcast domain. Neutron supports many types of network, including local, flat, VLAN, VxLAN and GRE.
Subnet
Subnet is an IPv4 or IPv6 address block. Instance IP addresses are assigned from a subnet. Each subnet needs to define its IP address range and mask.
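The range-and-mask idea can be shown with Python's standard `ipaddress` module, using an illustrative /24 subnet:

```python
import ipaddress

# A subnet defines the address range that instance IPs are drawn from.
subnet = ipaddress.ip_network("10.0.1.0/24")
hosts = list(subnet.hosts())  # usable addresses in the range

print(subnet.netmask, hosts[0], hosts[-1])
# → 255.255.255.0 10.0.1.1 10.0.1.254
```

Neutron additionally reserves some of these addresses (e.g. for the gateway and the DHCP port) before handing the rest out to instances.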
Port
A port can be thought of as a port on a virtual switch. A MAC address and an IP address are defined on the port. When the virtual network card of an instance is connected to a port, the port assigns its IP address and MAC address to the virtual network card.
In OpenStack, a project (tenant) can have multiple networks, a network can have multiple subnets, a subnet can have multiple ports, a port belongs to one instance, and an instance can attach multiple ports (multiple network cards).
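The one-to-many relationships above can be sketched as nested data (all names and addresses are illustrative):

```python
# project → networks → subnets → ports; each port belongs to exactly one
# instance, while one instance may attach several ports (one per NIC).
project = {"networks": [{"name": "net1", "subnets": [{
    "cidr": "10.0.1.0/24",
    "ports": [{"ip": "10.0.1.3", "instance": "vm1"},
              {"ip": "10.0.1.4", "instance": "vm1"},   # vm1 has two NICs
              {"ip": "10.0.1.5", "instance": "vm2"}]}]}]}

ports = [p for n in project["networks"]
           for s in n["subnets"]
           for p in s["ports"]]
print(len(ports))  # → 3
```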
VI. The interaction between neutron and nova
The network service neutron and the computing service nova interact through the neutron-metadata-agent.
When an instance starts, it needs to access the nova-metadata-api service to obtain metadata and userdata. These data carry the instance's customization information, such as hostname, IP and public key.
Through the neutron-metadata-agent, the instance can reach the nova-metadata-api service over the network and obtain this customization information; the agent proxies the instance's requests to nova-metadata-api via the dhcp-agent or the l3-agent.
VII. Physical deployment of neutron
Scheme 1: control node + compute node
Control node
Deployed services include: neutron server, core plugin's agent and service plugin's agent.
Computing node
Deploy agent of core plugin, which is responsible for providing layer 2 network functions.
1. Core plugin and service plugin have been integrated into neutron server, so there is no need to run independent plugin services.
2. Both the control node and the compute node need to deploy the agent of the core plugin, because only when agents run on both can a layer 2 connection be established between the control node and the compute node.
3. Multiple control nodes and compute nodes can be deployed.
Scheme 2: control node + network node + computing node
In this deployment scheme, OpenStack consists of control nodes, network nodes and computing nodes.
Control node
Deploy the neutron server service.
Network node
The deployed services include: agent for core plugin and agent for service plugin.
Computing node
Deploy agent of core plugin, which is responsible for providing layer 2 network functions.
The key point of this scheme is to separate all agent from the control node and deploy them to independent network nodes.
1. The control node is only responsible for responding to API requests through neutron server.
2. Data exchange, routing, load balancing and other advanced network services are provided by the independent network nodes.
3. More load can be borne by increasing the number of network nodes.
4. Multiple control nodes, network nodes, and compute nodes can be deployed.
This scheme is especially suitable for large-scale OpenStack environment.
VIII. Summary of the network service neutron
1. Neutron provides management of networks, subnets, ports, firewalls and so on, with the actual network services delivered by plugins and agents.
2. Plugin is located in neutron server, including core plugin and service plugin.
3. Agent is located in each node and is responsible for the implementation of network services.
4. Core plugin provides L2 function, and ML2 is the recommended plugin.
5. The most widely used L2 agents are linux bridge and open vswitch.
6. Service plugin and agent provide extended functions, including dhcp, routing, load balance, firewall and so on.