Introduction to Tungsten Fabric: Service Chain, BGPaaS, and Others

The Tungsten Fabric primer series, drawn from the hands-on experience of technical experts and compiled and presented for you by the TF Chinese community, aims to help beginners understand the whole TF workflow: operation, installation, integration, debugging, and more. If you have relevant experience or questions, feel free to interact with us and communicate further with the community's geeks. For more TF technical articles, click the button at the bottom of the official account > Learning > Article Collection.

Author: Tatsuya Naganawa | Translator: TF Compilation Team

Service Chain

Although Tungsten Fabric has many use cases, NFVI will be one of the most prominent, because TF has a number of unique features that make it a good foundation for implementing NFVI software.

One of the best-known features is the service chain, which steers traffic without changing VNF IPs, allowing VNFs to be inserted and removed on the fly.

Since a vRouter can hold multiple VRFs internally, it can have a VRF on each interface of a VNF and can steer traffic by rewriting next hops, e.g., to send packets on to the next VNF.

Tungsten Fabric's service chain is implemented this way, so once a service chain is created, you will see many new VRFs created, with next hops inserted into them that send traffic to the next VNF in the chain.

A VRF (a routing-instance in control-node terminology) is named domain-name:project-name:virtual-network-name:routing-instance-name. In most cases virtual-network-name and routing-instance-name are the same, but service chaining is an exception to this rule.
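
For example (names invented for illustration), a virtual network vn1 in the admin project of default-domain would normally yield the VRF

default-domain:admin:vn1:vn1

whereas the additional VRFs created for a service chain carry a routing-instance-name that no longer matches the virtual-network-name.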

To set up an example service chain, follow the steps in the video below:

https://www.youtube.com/watch?v=h6qOqsPtQ7M

After that, you can see that the left virtual-network has the prefixes of the right virtual-network, with the next hop updated to point at the left interface of the VNF, and vice versa for the right virtual-network.
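
As a hypothetical illustration (subnets and interface names invented), after creating a chain between a left-vn (192.168.1.0/24) and a right-vn (192.168.2.0/24), the route leaking looks roughly like this:

left-vn VRF:  192.168.2.0/24 -> VNF left interface   (prefix leaked from right-vn)
right-vn VRF: 192.168.1.0/24 -> VNF right interface  (prefix leaked from left-vn)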

Note: As far as I know, when using service chain v2, only the "left" and "right" interfaces are used in the service-chain calculation, while the "management" and "other" interfaces are ignored.

L2, L3, NAT

There are many VNFs with different sets of traffic types, so SDN for NFVI also needs to support multiple traffic types.

To this end, Tungsten Fabric service chains support three traffic types, namely l2, l3, and nat.

The l2 service chain (also known as a transparent service chain) can be used with transparent VNFs, which behave like bridges and forward frames based on ARP responses.

Although the vRouter always uses the same MAC address (00:01:00:5e:00:00, see https://github.com/Juniper/contrail-controller/wiki/Contrail-VRouter-ARP-Processing#vrouter-mac-address), the l2 service chain is an exception to this rule: the vRouter on the left side of the VNF sends traffic with destination MAC 2:0:0:0:0:2, while the vRouter on the right side sends traffic with destination MAC 1:0:0:0:0:1. A bridge-type VNF therefore forwards that traffic out of the interface on its other side.

Note that even with an l2 VNF, the left and right virtual-networks need to have different subnets. This may seem a bit unusual, but since the vRouter can do l3 routing, vRouter - L2 VNF - vRouter works just as router - L2 VNF - router would.

On the other hand, the l3 service chain (also known as an in-network service chain) sends traffic to the VNF without changing the MAC address, because in this case the VNF routes on the destination IP (similar to the behavior of a router). Apart from the MAC addresses, the behavior is almost identical to l2.

Nat service chaining is similar to l3 service chaining in that it expects the VNF to route on the destination IP. The big difference is that it copies the prefixes of the right virtual-network to the left virtual-network, but does not copy the prefixes of the left virtual-network to the right!

So the left/right interfaces need to be chosen carefully, because in this case the behavior is asymmetric.

A typical use case for this form of service chaining is SNAT for Internet access and the like, where the VNF has a private IP on its left interface and a global IP on its right interface. Since private IPs cannot be advertised to the Internet, the prefixes of the left virtual-network must not be copied to the right virtual-network in this case.
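
To make the asymmetry concrete, here is a hypothetical SNAT layout (all addresses invented):

left virtual-network (private):  10.1.1.0/24
right virtual-network (global):  203.0.113.0/24

After the nat service chain is created, the left VRF gains 203.0.113.0/24 pointing at the VNF's left interface, while the right VRF does not gain 10.1.1.0/24, so the private prefix never leaks toward the Internet.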

ECMP, Multi VNF

The Service Chain feature also supports ECMP settings for deployment at scale.

The configuration is basically the same, but multiple port-tuples need to be assigned to a service instance.

After this, you will find that traffic is load-balanced across the port-tuples based on the 5-tuple of each packet.
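
As a minimal sketch of how 5-tuple load balancing behaves (this is not vRouter's actual code; the hash choice and names are invented for illustration), all packets of one flow map to the same member, while different flows spread across members:

import hashlib

def pick_member(src_ip, dst_ip, proto, src_port, dst_port, n_members):
    """Map a flow's 5-tuple to one of n_members paths (illustrative only)."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % n_members

# Packets of the same flow always take the same path...
assert pick_member("10.1.1.10", "203.0.113.5", 6, 33000, 443, 4) == \
       pick_member("10.1.1.10", "203.0.113.5", 6, 33000, 443, 4)
# ...while flows that differ (e.g., in source port) may land on different members.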

Multiple VNFs can also be set up if multiple service instances are assigned to a network policy.

When using l3 service chains, counterintuitive though it may be, you need to assign the two VNFs to the same virtual network.

Since all the VNFs' packets are kept in the service chain's separate VRFs, the VNFs can share the same subnet.

Simultaneous use of l2 and l3 is also supported, but in this case the l2 VNF needs to be assigned to a different virtual network, which is attached through an additional network policy.

An example setup is described in this blog post: https://tungsten.io/building-and-testing-layer2-service-images-for-opencontrail/

Subinterface

This is also a feature used in NFVI, so I'll mention it here as well.

VNFs send VLAN-tagged packets for various reasons. In this case, the vRouter can map traffic to a different VRF per VLAN tag.

This is similar to the Junos configuration "set routing-instances routing-instance-name interface xxx", as sketched below.
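
For comparison, a hedged sketch of the Junos analogy (interface, unit, and instance names invented) that maps a VLAN subinterface into a routing instance:

set interfaces ge-0/0/0 vlan-tagging
set interfaces ge-0/0/0 unit 100 vlan-id 100
set routing-instances blue instance-type virtual-router
set routing-instances blue interface ge-0/0/0.100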

Here is how to do it: https://www.youtube.com/watch?v=ANhBQe_DS2E

DPDK

vRouter can interact with the physical NIC using DPDK.

It will often be used in NFV-style deployments, because it is still not easy for a pure Linux kernel network stack to reach forwarding performance comparable to typical VNFs (which may themselves use DPDK or similar technologies).

https://blog.cloudflare.com/how-to-receive-a-million-packets/

To enable this feature with ansible-deployer, you need to set these parameters.

bms1:
  roles:
    vrouter:
      AGENT_MODE: dpdk
      CPU_CORE_MASK: "0xe"             ## coremask for the forwarding cores (note: for optimal performance, don't include the first core of the NUMA node)
      SERVICE_CORE_MASK: "0x1"         ## for non-forwarding threads, so isolcpus is not needed for this core
      DPDK_CTRL_THREAD_MASK: "0x1"     ## same as SERVICE_CORE_MASK
      DPDK_UIO_DRIVER: uio_pci_generic ## uio driver name
      HUGE_PAGES: 16000                ## number of 2MB hugepages; it can be smaller
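
As a quick sanity check on the sizing: 16000 hugepages x 2 MB comes to roughly 31 GiB of reserved memory, which has to fit alongside everything else on the host, so scale the number down on smaller machines.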

When AGENT_MODE: dpdk is set, ansible-deployer installs containers such as vrouter-dpdk, a process that runs a PMD (poll mode driver) against the physical NIC. So in this case, forwarding between the vRouter and the physical NIC is based on DPDK.

Note:

1. Since only a limited set of PMDs is linked into vRouter, you may need to rebuild vRouter to use certain NICs.

https://github.com/Juniper/contrail-vrouter/blob/master/SConscript#L321

2. uio_pci_generic cannot be used with some NICs (e.g. XL710). In that case, use vfio-pci instead.

https://doc.dpdk.org/guides-18.05/rel_notes/known_issues.html#uio-pci-generic-module-bind-failed-in-x710-xl710-xxv710

Since the forwarding plane of the vRouter is not in kernel space in this case, a tap device cannot be used to pick up packets from the VM. For this reason, QEMU provides the "vhostuser" feature for sending packets to a DPDK process in user space. When the vRouter is configured with AGENT_MODE: dpdk, the nova-vif-driver automatically creates a vhostuser vif instead of the tap vif used with the kernel vRouter.

From the VM side, the interface still looks like virtio, so the VM can communicate with the DPDK vRouter using regular virtio drivers.
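
For reference, a vhostuser vif in the libvirt domain XML looks roughly like this (the socket path and mode here are assumptions for illustration; the actual values depend on the deployment):

<interface type='vhostuser'>
  <source type='unix' path='/var/run/vrouter/uvh_vif_xxx' mode='client'/>
  <model type='virtio'/>
</interface>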

One caveat is that when QEMU is going to connect to a vhostuser interface, it also needs hugepage backing for the VM's memory. With OpenStack, this is enabled per VM through a flavor setting.
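
The usual knob for this is Nova's hw:mem_page_size flavor extra spec (shown here as an assumed example; the flavor name is invented, and details vary by release):

openstack flavor set m1.vnf --property hw:mem_page_size=large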

Both the kernel and the dpdk process itself have a number of tuning parameters for optimal performance. For me, the following two articles were the most helpful on the kernel side.

https://www.redhat.com/en/blog/tuning-zero-packet-loss-red-hat-openstack-platform-part-1

https://www.redhat.com/en/blog/going-full-deterministic-using-real-time-openstack

cat /proc/sched_debug can also be used to check whether core isolation is working well.
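
As a hedged illustration of the kind of isolation those articles describe (core numbers invented; they should match CPU_CORE_MASK above), the kernel command line might carry:

isolcpus=1-3 nohz_full=1-3 rcu_nocbs=1-3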

On the vRouter side, the following points may need attention.

1. vRouter load-balances across its forwarding cores based on the 5-tuple, so for optimal performance you may need to increase the number of flows in the traffic

https://www.openvswitch.org/support/ovscon2018/6/0940-yang.pptx

Note: Using untagged packets may yield more throughput with vrouter-dpdk (i.e., running vrouter-dpdk without --vlan_tci)

BGPaaS

BGPaaS is also a unique feature of Tungsten Fabric, used to exchange routes between a VNF and the vRouter's VRF.

In a sense, it is somewhat similar to the AWS VPN Gateway, in that it automatically fetches routes from the VPC routing table.

From an operational point of view, the VNF establishes IPv4 BGP peerings with the vRouter's gateway IP and service IP.
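
As a sketch of what this looks like from inside the VNF (FRR-style syntax; the addresses and AS number are invented, and in practice the gateway IP and service IP come from the virtual network's subnet):

! peer with the vRouter's gateway IP and service IP
router bgp 64512
 neighbor 10.1.1.1 remote-as 64512
 neighbor 10.1.1.2 remote-as 64512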

One notable use case is an ipsec VNF that connects to a public cloud through a VPN gateway. In this case, the VPC's routing table is replicated to the VNF, and from there into the vRouter's VRF via BGPaaS, so that when a subnet is added or changed in the public cloud VPC, all prefixes are propagated correctly.

Service Mesh

Istio works well, and multi-clustering is an interesting topic.

https://www.youtube.com/watch?v=VSNc9qd2poA

https://istio.io/docs/setup/kubernetes/install/multicluster/vpn/

Tungsten Fabric Starter Series Articles

Initial Start-up and Operation Guide

Seven "weapons" of TF components

Orchestrator integration

About installation (Part 1)

About installation (Part 2)

Integration of mainstream monitoring system tools

Getting started with Day 2 work

8 Typical Faults and Troubleshooting Tips

What about cluster updates?

L3VPN and EVPN integration

Tungsten Fabric Architecture Analysis Series

Part 1: TF Main Features and Use Cases

Part 2: How TF Works

Part 3: Detailed explanation of vRouter architecture

Part 4: TF's Service Chain

Part 5: Deployment Options for vRouter

Part 6: How does TF collect, analyze, and deploy?

Part 7: How TF is organized

Part 8: TF Support API Overview

Part 9: How TF connects to physical networks

Part 10: TF Application-Based Security Policy
