
Example Analysis of the Integration of OpenStack and TF


The editor would like to share this analysis of the integration of OpenStack and Tungsten Fabric (TF). Most people are probably not very familiar with the topic, so this article is offered for your reference; I hope you learn a lot from reading it. Let's get started!

Integration of OpenStack and TF

OpenStack is the leading open source orchestration system for virtual machines and containers. Tungsten Fabric provides an implementation of the Neutron networking service and adds many additional capabilities.

In OpenStack, groups of users are assigned to "projects," in which resources such as VMs and networks are private and cannot be seen by users in other projects (unless specifically enabled).

Because each network has its own routing table, implemented as a VRF in the vRouters, project isolation is enforced directly at the network layer: only routes to permitted destinations are distributed into the VRFs on the compute nodes, and flooding is avoided thanks to proxy services performed by the vRouter.

The networking service is Neutron, and the compute service is Nova (the OpenStack compute service).

When both are deployed in an OpenStack environment, Tungsten Fabric can provide seamless networking between VMs and Docker containers.

As can be seen in the following figure, the Tungsten Fabric plug-in for OpenStack maps Neutron networking API calls to Tungsten Fabric API calls, which are executed in the Tungsten Fabric controller.
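As a rough illustration of this mapping, here is a minimal sketch using the openstacksdk Python library to create a Neutron network and subnet; with the Tungsten Fabric plug-in installed, the controller translates these Neutron calls into the corresponding Tungsten Fabric API objects. The cloud name and address ranges are placeholders.

```python
# Minimal sketch: create a Neutron network and subnet with openstacksdk.
# With the Tungsten Fabric Neutron plug-in, these calls are mapped to
# Tungsten Fabric API calls executed in the TF controller.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

# The network is private to the current project
net = conn.network.create_network(name="app-net")

# Attach an IPv4 subnet; the address range is an arbitrary example
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="app-subnet",
    ip_version=4,
    cidr="10.10.1.0/24",
    gateway_ip="10.10.1.1",
)

print(net.id, subnet.cidr)
```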

Tungsten Fabric supports policies for networks and subnets, as well as OpenStack network policies and security groups. These entities can be created in either OpenStack or Tungsten Fabric, and any changes are synchronized between the two systems.
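For example, a security group created through the Neutron API is kept in sync with the equivalent Tungsten Fabric security group. A minimal openstacksdk sketch follows; the group name, port, and cloud name are illustrative.

```python
# Minimal sketch: create a security group and rule via the Neutron API.
# With the Tungsten Fabric plug-in, the change is synchronized into the
# equivalent Tungsten Fabric security group.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

sg = conn.network.create_security_group(
    name="web-sg", description="Allow inbound HTTP")

conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    protocol="tcp",
    port_range_min=80,
    port_range_max=80,
    remote_ip_prefix="0.0.0.0/0",
)
```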

In addition, Tungsten Fabric supports the OpenStack LBaaS v2 API.

However, because Tungsten Fabric provides a rich superset of the networking features available through OpenStack, many features are available only through the Tungsten Fabric API or GUI. These include specifying route targets for connecting to external routers, service chains, BGP routing policies, and application policies.
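For instance, attaching a route target to a virtual network so that its routes can be exchanged with an external gateway router is done through the Tungsten Fabric API rather than Neutron. The sketch below assumes the vnc_api Python bindings shipped with Tungsten Fabric/Contrail; the exact class and method names should be checked against the installed version, and the controller address, credentials, project, and target value are placeholders.

```python
# Sketch (assumed vnc_api bindings): add a route target to a virtual
# network so its routes are exchanged with an external BGP router.
from vnc_api.vnc_api import VncApi, RouteTargetList

# Placeholder controller address and credentials
api = VncApi(username="admin", password="secret", tenant_name="admin",
             api_server_host="tf-controller.example.com")

# Look up an existing virtual network by its fully qualified name
vn = api.virtual_network_read(
    fq_name=["default-domain", "demo-project", "app-net"])

# Import/export route target; the ASN:value pair is illustrative
vn.set_route_target_list(RouteTargetList(route_target=["target:64512:100"]))
api.virtual_network_update(vn)
```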

When OpenStack uses Tungsten Fabric networking, application security is fully supported. Tungsten Fabric tags can be applied at the project, network, host, VM, or interface level, and they apply to all entities contained within the tagged object.

In addition, Tungsten Fabric's network and security resources can be managed using OpenStack Heat templates.
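As an illustration, the sketch below launches a small Heat stack from Python with openstacksdk. It assumes the ContrailV2 Heat plug-in is installed, which provides resource types such as OS::ContrailV2::VirtualNetwork; the exact type names depend on the plug-in version, and the cloud and stack names are placeholders.

```python
# Minimal sketch: drive a Tungsten Fabric resource through a Heat template.
# Assumes the ContrailV2 Heat plug-in provides OS::ContrailV2::VirtualNetwork.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

template = {
    "heat_template_version": "2016-10-14",
    "resources": {
        "app_vn": {
            "type": "OS::ContrailV2::VirtualNetwork",
            "properties": {"name": "app-net"},
        },
    },
}

stack = conn.orchestration.create_stack(
    name="tf-demo-stack",
    template=template,
)
print(stack.id)
```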

Integration of Kubernetes containers and TF

Containers allow multiple processes to run on the same operating system kernel, while each process has access to its own tools, libraries, and configuration files.

Containers require less compute overhead than virtual machines, where each VM runs its own full guest operating system. Applications running in containers generally start faster and perform better than the same applications running in VMs, which is one reason for the growing interest in using containers in data centers and for NFV.

Docker is a software layer that makes containers portable across operating system versions, and Kubernetes provides the orchestration interface for deploying containers, managing their creation and destruction on servers.

As shown in the figure above, Kubernetes manages groups of containers, called pods, that together perform some function. The containers in a pod run on the same server and share an IP address.

A set of identical pods (usually running on different servers) forms a service, and network traffic directed to the service must be steered to a specific pod within it. In the native Kubernetes network implementation, the selection of a specific pod is performed either by the application itself, using the Kubernetes API in the sending pod, or, for non-native applications, by a load-balancing proxy that implements a virtual IP address using Linux iptables on the sending server.

Most applications are non-native, since they are ports of existing code developed without Kubernetes in mind, so the load-balancing proxy is generally what is used.
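To make the pod/service relationship concrete, the sketch below uses the official Kubernetes Python client to create a Service whose label selector picks out the pods that back it; traffic sent to the service's virtual IP is then spread across those pods (by iptables in the native implementation, or by vRouter ECMP when Tungsten Fabric is used). The names, labels, and ports are illustrative.

```python
# Minimal sketch: a Service that selects pods by label.  Traffic sent to
# the service IP is distributed across the selected pods.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},  # matches pods labeled app=web
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=service)
```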

The standard network in a Kubernetes environment is essentially flat, and any pod can communicate with any other pod. If the name of the destination pod or its IP address is known, communication from a pod in one namespace (similar to a project in OpenStack) to a pod in another namespace is not blocked.

While this model is suitable for very large data centers that belong to a single company, it is not suitable for service providers where data centers are shared among many end customers, or for enterprises that must isolate different groups of traffic from each other.

Tungsten Fabric virtual networks can be integrated in a Kubernetes environment to provide a range of multi-tenant network functions in a manner similar to OpenStack.

The Tungsten Fabric configuration with Kubernetes is shown in the following figure.

The Tungsten Fabric architecture with Kubernetes orchestration and Docker containers is similar to that for OpenStack with KVM/QEMU: the vRouter runs in the host Linux OS and contains the VRFs holding the virtual network forwarding tables.

All containers in a pod share a network stack with a single IP address (IP-1 and IP-2 in the figure) but listen on different TCP or UDP ports, and the interface of each network stack is connected to a VRF in the vRouter.

A process named kube-network-manager uses the Kubernetes (K8s) API to listen for network-related events and sends the corresponding configuration to the Tungsten Fabric API.
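Conceptually, kube-network-manager follows the listen-and-translate pattern sketched below with the official Kubernetes Python client. This is only an illustration, not the actual implementation, and the push_to_tf_api function is a hypothetical placeholder for the call into the Tungsten Fabric API.

```python
# Illustrative sketch of the listen-and-translate pattern: watch the K8s
# API for service events and push corresponding configuration to TF.
from kubernetes import client, config, watch

def push_to_tf_api(event_type, service):
    """Hypothetical placeholder for the call into the Tungsten Fabric API."""
    print(f"{event_type}: would configure TF for "
          f"{service.metadata.namespace}/{service.metadata.name}")

config.load_kube_config()  # assumes a local kubeconfig
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_service_for_all_namespaces):
    push_to_tf_api(event["type"], event["object"])
```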

When a pod is created on a server, the local kubelet communicates with the vRouter agent through the Container Network Interface (CNI) to connect the new interface to the correct VRF.

Each pod in a service is assigned a unique IP address in the virtual network, and a floating IP address is also assigned that is shared by all pods in the service. This service address is used to send traffic to the service from pods in other services or from external clients or servers.

When traffic is sent from a pod to a service IP, the vRouter to which that pod is attached performs ECMP load balancing using the routes to the service IP address, which resolve to the interfaces of the pods making up the destination service.

When traffic needs to be sent from outside the Kubernetes cluster to a service IP, Tungsten Fabric can be configured to create a pair of haproxy load balancers (for redundancy) that perform URL-based routing to Kubernetes services, preferably using floating IP addresses to avoid exposing the cluster's internal IP addresses.

These externally visible service addresses resolve to ECMP load-balanced routes to the service's pods.

When Tungsten Fabric virtual networks are used in a Kubernetes cluster, the Kubernetes load-balancing proxy is not required.

Other alternatives for providing external access include using a floating IP address associated with a load balancer object, or a floating IP address associated with the service.

When services and pods are created or deleted in Kubernetes, the kube-network-manager process detects the corresponding events in the K8s API and uses the Tungsten Fabric API to apply network policy according to the network mode configured for the Kubernetes cluster. The various options are summarized in the following table.

Tungsten Fabric brings many powerful network features to the Kubernetes world, similar to those of OpenStack, including:

IP address management

DHCP

DNS

Load balancing

Network address translation (1:1 floating IPs and N:1 SNAT)

Access control list

Application-based security

Integration of TF and vCenter

VMware vCenter is widely used as a virtualization platform, but network gateways need to be manually configured to enable connectivity between virtual machines in different subnets and to destinations outside the vCenter cluster.

Tungsten Fabric virtual networks can be deployed in an existing vCenter environment to provide all the network features previously listed, while retaining workflows that users may rely on to create and manage virtual machines using vCenter GUI and API.

In addition, support for Tungsten Fabric is implemented in vRealize Orchestrator and vRealize Automation so that common tasks in Tungsten Fabric, such as creating virtual networks and network policies, can be included in the workflows implemented in these tools.

The Tungsten Fabric architecture that uses VMware vCenter is shown in the following figure.

Virtual networks and policies can be created directly in Tungsten Fabric or using TF tasks in vRO / vRA workflows.

When vCenter creates a VM through its GUI or via vRO/vRA, Tungsten Fabric's vCenter plug-in sees the corresponding message on the vCenter message bus, which triggers Tungsten Fabric to configure the vRouter on the server where the VM will be created.

Each interface of each VM is connected to a port group corresponding to the virtual network on which the interface sits. Each port group has an associated VLAN, which is set by the Tungsten Fabric controller using the "VLAN override" option in vCenter, and the VLANs of all the port groups are carried to the vRouter through a trunk port group.

The Tungsten Fabric controller maps the VLAN of each interface to the VRF of the virtual network containing the interface's subnet; the vRouter strips the VLAN tag and performs a route lookup in that VRF.

As mentioned earlier in this document, the combination of Tungsten Fabric and vCenter gives users access to all the network and security services that Tungsten Fabric provides, including zero-trust microsegmentation, DHCP and DNS proxying, avoidance of network flooding, service chains, almost unlimited scale, and seamless interconnection with physical networks.

Nested Kubernetes with OpenStack or vCenter

The description above assumed that the KVM hosts running the containers had been provisioned beforehand in some way.

An alternative is to use OpenStack or vCenter to provision the VMs in which the containers run, and to use Tungsten Fabric to manage the virtual networking between the VMs created by OpenStack or vCenter and the containers created by Kubernetes, as shown in the following figure.

The orchestrator (OpenStack or vCenter), the Kubernetes master, and Tungsten Fabric run on a set of servers or VMs.

The orchestrator is configured to use Tungsten Fabric to manage the networking of the compute cluster, so a vRouter runs on each server.

Virtual machines can be started and configured to run kubelet and the Tungsten Fabric CNI plug-in. These VMs can then serve as Kubernetes hosts whose networking is managed by Tungsten Fabric.

Since the same Tungsten Fabric deployment manages the networking for both the orchestrator and Kubernetes, seamless networking can be achieved between VMs, between containers, and between VMs and containers.

In the nested scenario, Tungsten Fabric provides the same level of isolation described previously; multiple Kubernetes masters can coexist, and multiple VMs running kubelet can run on the same host. This makes it possible to offer a multi-tenant Kubernetes container service.

That is all the content of the article "Example Analysis of the Integration of OpenStack and TF". Thank you for reading! I hope the material shared here has been helpful; if you want to learn more, you are welcome to follow the industry information channel!
