This article is a practical analysis of Tungsten Fabric installation. It is fairly detailed and should be a useful reference for anyone interested in the topic.
HA behavior of Tungsten Fabric components
If the deployment is planned to carry critical traffic, HA is always required.
Tungsten Fabric has a good HA implementation, which is described in the following documentation.
http://www.opencontrail.org/opencontrail-architecture-documentation/#section2_7
One more thing to note here is that the Cassandra keyspaces use a different replication factor for configdb and analyticsdb.
Configdb:
https://github.com/Juniper/contrail-controller/blob/master/src/config/common/vnc_cassandra.py#L609
Analyticsdb:
https://github.com/Juniper/contrail-analytics/blob/master/contrail-collector/db_handler.cc#L524
Because configdb's data is replicated to all Cassandra nodes, it is unlikely to be lost even if some nodes' disks crash and have to be erased. On the other hand, because analyticsdb's replication factor is always set to 2, data may be lost if two nodes lose their data at the same time.
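If you want to check this on a running cluster, a minimal query is sketched below. The container name, CQL port, and keyspace names are assumptions and may differ in your deployment.

# Sketch: inspect the replication settings of the keyspaces on a config Cassandra node
docker exec config_database_cassandra_1 \
  cqlsh 127.0.0.1 9041 -e "SELECT keyspace_name, replication FROM system_schema.keyspaces;"
# the config keyspaces should report a replication_factor equal to the number of
# config Cassandra nodes, while the analytics keyspace reports 2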
Multi-NIC installation
When installing Tungsten Fabric, a multi-NIC installation is needed in many cases, for example a separate NIC for the management plane and another for the control/data plane.
Bonding is not discussed here, since bond0 can be specified directly by the VROUTER_GATEWAY parameter.
Let me describe some interesting vRouter behavior in this setup.
For the controller/analytics nodes, this is not much different from a typical Linux installation, since Linux works well with multiple NICs and its own routing table, including the use of static routes.
On the vRouter node, on the other hand, note that vRouter does not use the Linux routing table when sending packets; it always sends them to its gateway IP.
This can be set with the gateway parameter in contrail-vrouter-agent.conf, or with the VROUTER_GATEWAY environment variable of the vrouter-agent container.
Therefore, when setting up a multi-NIC installation, you need to be careful about whether VROUTER_GATEWAY must be specified.
If it is not specified, the vrouter-agent container picks the NIC that holds the node's default route, even though that may not be the correct NIC, for example when the default route for Internet access (0.0.0.0/0) goes through the management NIC rather than the data-plane NIC.
In that case, you need to specify the VROUTER_GATEWAY parameter explicitly.
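As a rough sketch of where this is configured (the interface name and addresses below are just examples, not values from this article):

# in the vrouter-agent container environment (e.g. the deployer's common.env / instances.yaml)
VROUTER_GATEWAY=10.0.12.1
# or, equivalently, in /etc/contrail/contrail-vrouter-agent.conf on the vRouter node:
# [VIRTUAL-HOST-INTERFACE]
# name=vhost0
# physical_interface=eth1   # data-plane NIC
# gateway=10.0.12.1         # used instead of the linux default route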
Because of this behavior, you also need to be careful when virtual machines or containers send packets to a NIC of the node other than the one vRouter uses, since vRouter does not check the Linux routing table for those either, and always uses the same NIC as the rest of its traffic.
As far as I know, packets from link-local services, or from setups without a gateway, show similar behavior.
In such cases, you may need to use simple-gateway or SR-IOV.
https://github.com/Juniper/contrail-controller/wiki/Simple-Gateway
Sizing the cluster
For general sizing of Tungsten Fabric clusters, the following table can be used as a reference.
https://github.com/hartmutschroeder/contrailandrhosp10#21sizing-the-controller-nodes-and-vms
If the size of the cluster is very large, a lot of resources are needed to ensure the stability of the control plane.
Note that since version 5.1, the analytics database (and some analytics components) has become optional. Therefore, if you only need the control plane of Tungsten Fabric, I recommend version 5.1.
https://github.com/Juniper/contrail-analytics/blob/master/specs/analytics_optional_components.md
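With the ansible deployer, for example, leaving analytics out of a 5.1 cluster is roughly a matter of omitting the analytics roles in instances.yaml. The excerpt below is only a sketch; the role names follow the deployer's usual layout and the address is an example.

instances:
  controller1:
    provider: bare_metal
    ip: 192.168.10.11
    roles:
      config_database:
      config:
      control:
      webui:
      # analytics and analytics_database are simply not listed in 5.1+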
There is no easy answer for the maximum size of a single cluster, since it depends on many factors, but it is an important consideration.
I once tried deploying a kubernetes cluster with nearly 5,000 nodes (https://kubernetes.io/docs/setup/cluster-large/). It worked well with a controller node with 64 vCPUs and 58 GB of memory, although I did not create many ports, policies, logical routers, and so on.
This Wiki also describes some real-world experiences with massive clusters:
https://wiki.tungsten.io/display/TUN/KubeCon+NA+in+Seattle+2018
Since a large amount of resources can be obtained from a public cloud at any time, the best approach is to simulate a cluster at the actual size and with the actual traffic required, and to see whether it works properly and where the bottlenecks are.
Tungsten Fabric has some good features for dealing with massive scale, such as a multi-cluster setup based on MP-BGP between clusters and a BUM-drop feature for layer 3-only virtual networks, which are probably key to the scalability and stability of its virtual networks.
https://bugs.launchpad.net/juniperopenstack/+bug/1471637
To illustrate the scale-out behavior of the control nodes, I created a cluster with 980 vRouters and 15 control nodes in AWS.
All control nodes have 4 vCPUs and 16 GB of memory.
With all 15 control nodes up, the maximum number of XMPP connections per control node was only 113, so CPU utilization was not very high (up to 5.4%).
However, when 12 of these control nodes were stopped, the number of XMPP connections on each remaining control node rose to as many as 708, and CPU usage became noticeably higher (21.6%).
Therefore, if you need to deploy a large number of nodes, you may need to carefully plan the number of control nodes.
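As a rough sanity check of those numbers (each vRouter keeps XMPP sessions to two control nodes; the figures are from the test above):

# 980 vRouters x 2 XMPP sessions each, spread over the available control nodes
echo $((980 * 2 / 15))   # ~130 sessions per control node with all 15 controls up
echo $((980 * 2 / 3))    # ~653 sessions per control node with only 3 left
# roughly in line with the 113 and 708 connections observed above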
Kubeadm
At the time of this writing, ansible-deployer does not support K8s master HA.
https://bugs.launchpad.net/juniperopenstack/+bug/1761137
Since kubeadm already supports K8s master HA, I'll show you how to integrate a kubeadm-based K8s installation and a YAML-based Tungsten Fabric installation.
https://kubernetes.io/docs/setup/independent/high-availability/
https://github.com/Juniper/contrail-ansible-deployer/wiki/Provision-Contrail-Kubernetes-Cluster-in-Non-nested-Mode
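A minimal sketch of the kubeadm side, assuming a recent kubeadm with a stacked etcd topology (the load-balancer endpoint is a placeholder):

# on the first master; LOAD_BALANCER_DNS:6443 is the address of your API load balancer
kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs
# then run the printed "kubeadm join ... --control-plane" command on the other masters,
# and the plain "kubeadm join ..." command on the worker (vRouter) nodes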
Like other CNIs, Tungsten Fabric can be installed directly with the "kubectl apply" command. However, to achieve this, you need to manually configure some parameters, such as the IP addresses of the controller nodes.
For this example setup, I used five EC2 instances (with the same AMI, ami-3185744e), each with 2 vCPUs, 8 GB of memory, and 20 GB of disk space. The CIDR of the VPC is 172.31.0.0/16.
I will attach some original and modified yaml files for further reference.
https://github.com/tnaganawa/tungstenfabric-docs/blob/master/cni-tungsten-fabric.yaml.orig
https://github.com/tnaganawa/tungstenfabric-docs/blob/master/cni-tungsten-fabric.yaml
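In rough terms, the CNI installation then comes down to filling in the controller addresses in that manifest and applying it on the first master. The ConfigMap key names mentioned below are assumptions based on the single-YAML install and may differ in your copy of the file.

vi cni-tungsten-fabric.yaml        # set the controller IPs, e.g. CONTROLLER_NODES / CONTROL_NODES
kubectl apply -f cni-tungsten-fabric.yaml
kubectl get pods -n kube-system    # wait for the contrail pods to come up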
After that, you will eventually have a (mostly) working kubernetes HA environment with the Tungsten Fabric CNI up and running.
Note: coredns is not active at this point; I will fix that later in this section.
After creating the cirros deployment, as described in the "up and running" section, ping already works between pods on the two vRouter nodes.
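A minimal way to reproduce that check (image and names are just examples):

kubectl create deployment cirros --image=cirros
kubectl scale deployment cirros --replicas=2
kubectl get pods -o wide                          # note the two pod IPs and their nodes
kubectl exec -it <one-cirros-pod> -- ping <ip-of-the-other-pod>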
The output is the same as before, but now MPLS encapsulation is used between the two vRouters!
Note: two changes are needed to make coredns active.
Finally, coredns is also active and the cluster is fully started!
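To confirm, a standard label query is enough (coredns keeps the kube-dns label):

kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide   # the coredns pods should be Running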