
Getting started with Tungsten Fabric: about Multi-Cluster and Multi-Data Center


The Getting Started with Tungsten Fabric series is compiled and presented by the TF Chinese community, based on hands-on experience shared by experienced practitioners. It is designed to help beginners understand the whole process of operating, installing, integrating, and debugging TF. If you have relevant experience or questions, you are welcome to interact and communicate further with the community.

Author: Tatsuya Naganawa / Translator: TF Compilation Group

Multi-cluster

Because MPLS-VPN is used internally, a virtual-network in Tungsten Fabric can be extended to other Tungsten Fabric clusters.

This may be a little surprising, but as far as I know, the Neutron ML2 plugin and most other CNIs do not support this kind of setup.

That said, because the clusters have separate databases, shared resources need to be marked explicitly so that they match between the clusters.

To make this work, I will describe the use of several BGP parameters.

Routing

Because Tungsten Fabric uses L3VPN for inter-VRF routing, it can route packets correctly as long as the route-target is set up properly between the VRFs.

Since you cannot use network-policy / logical-router between multiple clusters, you need to configure route-target directly on each virtual-network.

Note: if L3-only forwarding is specified, L3VPN is used even for forwarding within the VRF, so bridging is not used in this setup.
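For example, the same route-target value (say, target:64512:10001, an arbitrary placeholder) can be attached to the corresponding virtual-network in each cluster. Below is a minimal sketch against the config REST API, assuming the API server is reachable on port 8082 and ignoring authentication (a keystone token may be required depending on the setup); the FQ name and UUID are placeholders:

# look up the virtual-network UUID from its FQ name (placeholders)
curl -s "http://<config-api>:8082/fqname-to-id" \
  -H "Content-Type: application/json" \
  -d '{"type": "virtual-network", "fq_name": ["default-domain", "admin", "vn1"]}'

# attach the shared route-target; repeat with the same value on the other
# cluster so that both VRFs import/export each other's routes
curl -s -X PUT "http://<config-api>:8082/virtual-network/<uuid>" \
  -H "Content-Type: application/json" \
  -d '{"virtual-network": {"route_target_list": {"route_target": ["target:64512:10001"]}}}'

The same value can also be set from the WebUI when editing the virtual-network's route-target options.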

Security Group (security-group)

Tungsten Fabric also uses a BGP extended community to carry the security group ID.

https://github.com/Juniper/contrail-controller/wiki/BGP-Extended-Communities

Since this ID can also be configured manually, you can set the same ID on the security group in each cluster to allow traffic from the corresponding prefixes.
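As a sketch under the same assumptions as the route-target example above (config API on port 8082, no authentication, placeholder UUID), the same security group ID, here an arbitrary 8000100, could be pinned on the security group in each cluster via its configured_security_group_id property:

# pin the security-group ID so that both clusters advertise the same value
# in the BGP extended community (placeholder UUID and ID)
curl -s -X PUT "http://<config-api>:8082/security-group/<uuid>" \
  -H "Content-Type: application/json" \
  -d '{"security-group": {"configured_security_group_id": 8000100}}'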

Note: as far as I know, you cannot manually configure the tag's ID from the Tungsten Fabric WebUI in the R5.1 branch, so you cannot use fw-policy between clusters. This behavior may change in the future.

DNS

DNS is an important topic when dealing with multiple clusters.

Because Tungsten Fabric has a vDNS implementation, with OpenStack's default settings you can resolve VM names within the cluster and also make these names resolvable externally.

https://github.com/Juniper/contrail-controller/wiki/DNS-and-IPAM

The controller node runs a contrail-named process that responds to external DNS queries.

To enable this feature, select Configure > DNS > DNS Server > (create) > External Access in the Tungsten Fabric WebUI.

Therefore, at least when OpenStack (or vCenter) is used as the orchestrator, if the clusters are given different domain names, each cluster can directly resolve the names of the other clusters.

Note that upstream DNS forwarders need to be able to resolve the names of all clusters.
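As a quick check once External Access is enabled, an external host can query the controller directly; the controller IP, vDNS domain, and VM name below are placeholders:

# contrail-named answers standard DNS queries for the vDNS domain on the controller node
dig @<controller-ip> vm1.<vdns-domain>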

When Kubernetes is used, Tungsten Fabric uses coredns, rather than its own vDNS, as the source of name resolution. The coredns IPs and domain names can be modified in the kubeadm settings, as shown below.

Cluster0:

kubeadm init --pod-network-cidr=10.32.0.0/24 --service-cidr=10.96.0.0/24

Cluster1:

kubeadm init --pod-network-cidr=10.32.1.0/24 --service-cidr=10.96.1.0/24 --service-dns-domain=cluster1.local

Cluster1:

# cat /etc/sysconfig/kubelet
-KUBELET_EXTRA_ARGS=
+KUBELET_EXTRA_ARGS="--cluster-dns=10.96.1.10"
# systemctl restart kubelet

Note: after the configuration is complete, the Tungsten Fabric settings also need to be changed (set in the configmap env):

Cluster0:

KUBERNETES_POD_SUBNETS: 10.32.0.0/24

KUBERNETES_IP_FABRIC_SUBNETS: 10.64.0.0/24

KUBERNETES_SERVICE_SUBNETS: 10.96.0.0/24

Cluster1:

KUBERNETES_POD_SUBNETS: 10.32.1.0/24

KUBERNETES_IP_FABRIC_SUBNETS: 10.64.1.0/24

KUBERNETES_SERVICE_SUBNETS: 10.96.1.0/24

Once coredns is set up, it can resolve names from the other cluster (the coredns IPs need to be leaked into each other's VRF, since those IPs must be reachable):

kubectl edit -n kube-system configmap coredns

Cluster0:

# add these lines to resolve cluster1 names
cluster1.local:53 {
    errors
    cache 30
    forward . 10.96.1.10
}

Cluster1:

# add these lines to resolve cluster0 names
cluster.local:53 {
    errors
    cache 30
    forward . 10.96.0.10
}
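Once the configmaps are updated and the coredns IPs are reachable across clusters, cross-cluster resolution can be verified from a pod; the pod name is a placeholder, and the pod image is assumed to include nslookup:

# from a pod in cluster0, resolve a cluster1 service through the forwarded zone
kubectl exec -it <some-pod> -- nslookup kubernetes.default.svc.cluster1.local

# from a pod in cluster1, resolve a cluster0 name
kubectl exec -it <some-pod> -- nslookup kubernetes.default.svc.cluster.local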

Therefore, even if you have several separate Tungsten Fabric clusters, it is not too difficult to stitch virtual-networks together between them.

One reason to do this is to have more nodes than a single orchestrator currently supports, although orchestrators such as Kubernetes, OpenStack, and vCenter can already support a fairly large number of hypervisors.

Multiple Data Centers (Multi-DC)

If the traffic is across multiple data centers, you need to be careful when planning your Tungsten Fabric installation.

There are two options: 1. Single cluster; 2. Multi-cluster.

The single-cluster option is simpler and easier to manage, although the RTT between data centers may become a problem, because many kinds of traffic, such as XMPP, RabbitMQ, and Cassandra, pass through the controller (native support for multiple data centers is not currently available).

The multi-cluster approach brings more operational complexity: since each cluster has its own database, you need to set some parameters manually, such as the route-target and security-group ID.

In addition, implementing vMotion between them will be more difficult.

Even if you use the cross-vCenter vMotion feature, the new vCenter and the new Tungsten Fabric cluster will create a new port, which will get a fixed IP different from that of the original port.

Nova currently does not support live migration across OpenStack deployments, so if you use OpenStack, you cannot live-migrate between clusters either.

Because vCenter requires an RTT of at most 150 ms between data centers (I could not find a similar figure for KVM), careful planning is needed for each particular situation, but there is still a rule of thumb: single cluster < 150 ms RTT < multi-cluster.

https://kb.vmware.com/s/article/2106949

When planning a single-cluster installation with exactly two data centers, there is one more thing to be aware of.

Since Zookeeper / Cassandra in Tungsten Fabric currently use the QUORUM consistency level, when the primary site goes down, the second site cannot continue to work (neither read nor write access is available).

https://github.com/Juniper/contrail-controller/blob/master/src/config/common/vnc_cassandra.py#L659 (used by config-api, schema-transformer, svc-monitor, device-manager)

https://github.com/Juniper/contrail-common/blob/master/config-client-mgr/config_cassandra_client.cc#L458 (used by control, dns)

One possible option to work around this is to change the consistency level to ONE / TWO / THREE, or LOCAL_ONE / LOCAL_QUORUM, although this requires modifying the source code.

Since Zookeeper has no such knob, the only way I know of is to update the weights after the primary site goes down.

https://stackoverflow.com/questions/32189618/hierarchical-quorums-in-zookeeper

Even if Zookeeper is temporarily unavailable, most components continue to work, although the components that use it for HA stop working (schema-transformer, svc-monitor, kube-manager, vcenter-plugin, ...).

When there are more than two data centers, this is no longer a problem.
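For reference, hierarchical quorums in Zookeeper are expressed with group and weight entries in zoo.cfg. The sketch below uses a placeholder ensemble (servers 1-3 in site A, 4-6 in site B) and only illustrates the weight knob mentioned above; it is not a tested recovery procedure:

# zoo.cfg: two groups, one per site
group.1=1:2:3
group.2=4:5:6
weight.1=1
weight.2=1
weight.3=1
weight.4=1
weight.5=1
weight.6=1
# after site A fails, the weights of servers 1-3 would be set to 0 so that
# site B alone can still satisfy the weighted quorum (rolling restart required)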
