There are many benefits to sharing one control plane among multiple orchestrators, including consistent routing/bridging, DNS, security, and so on.
Let me describe the usage and configuration of each case.

K8s+OpenStack
The combination of Kubernetes and OpenStack is already covered and works well:
https://github.com/Juniper/contrail-ansible-deployer/wiki/Deployment-Example:-Contrail-and-Kubernetes-and-Openstack
In addition, Tungsten Fabric supports both nested installation (Kubernetes running inside OpenStack VMs) and non-nested installation, so you can choose either one:
https://github.com/Juniper/contrail-kubernetes-docs
K8s+K8s
Adding multiple Kubernetes clusters to one Tungsten Fabric cluster is also a possible installation option. Since kube-manager supports a cluster_name parameter, which changes the name of the tenant it creates (the default is "k8s"), this should be feasible. However, the last time I tried this setup it did not work well, because some objects were deleted by the other cluster's kube-manager as stale objects.
https://github.com/Juniper/contrail-container-builder/blob/master/containers/kubernetes/kube-manager/entrypoint.sh#L28
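For reference, cluster_name is populated from the KUBERNETES_CLUSTER_NAME environment variable of the kube-manager container (that is what the entrypoint line linked above defaults to "k8s"). A minimal sketch of giving each cluster its own name, assuming contrail-ansible-deployer and the contrail-container-builder file layout (the config file path and container name below are assumptions):

# instances.yaml fragment for the cluster that should become "k8s1":
#   contrail_configuration:
#     KUBERNETES_CLUSTER_NAME: k8s1
# after deployment, verify what kube-manager actually picked up:
docker exec kubemanager_kubemanager_1 \
  grep cluster_name /etc/contrail/contrail-kubernetes.conf
# expected (assumption): cluster_name = k8s1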
This behavior may change in future releases. Note: in R2002 and later, this patch resolves the issue, so custom patches are no longer required.
https://review.opencontrail.org/c/Juniper/contrail-controller/+/55758
Note: with the patches below, it seems possible to add multiple kube-masters to a single Tungsten Fabric cluster. They append cluster_name to the ZooKeeper path prefix, so each cluster's kube-manager holds its own master election, and they temporarily disable the namespace-project sync that was deleting the other cluster's objects as stale.
diff --git a/src/container/kube-manager/kube_manager/kube_manager.py b/src/container/kube-manager/kube_manager/kube_manager.py
index 0f6f7a0..adb20a6 100644
--- a/src/container/kube-manager/kube_manager/kube_manager.py
+++ b/src/container/kube-manager/kube_manager/kube_manager.py
@@ -219,10 +219,10 @@ def main(args_str=None, kube_api_skip=False, event_queue=None,
     if args.cluster_id:
         client_pfx = args.cluster_id + '-'
-        zk_path_pfx = args.cluster_id + '/'
+        zk_path_pfx = args.cluster_id + '/' + args.cluster_name
     else:
         client_pfx = ''
-        zk_path_pfx = ''
+        zk_path_pfx = '' + args.cluster_name

     # randomize collector list
     args.random_collectors = args.collectors
diff --git a/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py b/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py
index 00cce81..f968cae 100644
--- a/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py
+++ b/src/container/kube-manager/kube_manager/vnc/vnc_namespace.py
@@ -594,7 +594,8 @@ class VncNamespace(VncCommon):
             self._queue.put(event)

     def namespace_timer(self):
-        self._sync_namespace_project()
+        # self._sync_namespace_project()  ## temporarily disabled
+        pass

     def _get_namespace_firewall_ingress_rule_name(self, ns_name):
         return "-".join([vnc_kube_config.cluster_name(),
Since the pod-networks created by each kube-master are all on the same Tungsten Fabric controller, route leaking between them is possible :)
Since cluster_name becomes one of the tags in Tungsten Fabric's fw-policy, the same tag can also be used across multiple Kubernetes clusters.
172.31.9.29  Tungsten Fabric controller
172.31.22.24 kube-master1 (KUBERNETES_CLUSTER_NAME=k8s1 is set)
172.31.12.82 kube-node1 (it belongs to kube-master1)
172.31.41.5  kube-master2 (KUBERNETES_CLUSTER_NAME=k8s2 is set)
172.31.4.1   kube-node2 (it belongs to kube-master2)

[root@ip-172-31-22-24] # kubectl get node
NAME                                              STATUS     ROLES    AGE   VERSION
ip-172-31-12-82.ap-northeast-1.compute.internal   Ready      <none>   57m   v1.12.3
ip-172-31-22-24.ap-northeast-1.compute.internal   NotReady   master   58m   v1.12.3
[root@ip-172-31-22-24] #
[root@ip-172-31-41-5] # kubectl get node
NAME                                             STATUS     ROLES    AGE   VERSION
ip-172-31-4-1.ap-northeast-1.compute.internal    Ready      <none>   40m   v1.12.3
ip-172-31-41-5.ap-northeast-1.compute.internal   NotReady   master   40m   v1.12.3
[root@ip-172-31-41-5] #
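The cirros-deployment pods listed next were created on each kube-master with an ordinary two-replica Deployment. A minimal sketch, assuming a plain cirros image (this manifest is my reconstruction; only the name and replica count follow the outputs below):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cirros-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cirros-deployment
  template:
    metadata:
      labels:
        app: cirros-deployment
    spec:
      containers:
      - name: cirros
        image: cirros
EOF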
[root@ip-172-31-22-24] # kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE                                              NOMINATED NODE
cirros-deployment-75c98888b9-7pf82   1/1     Running   0          28m     10.47.255.249   ip-172-31-12-82.ap-northeast-1.compute.internal   <none>
cirros-deployment-75c98888b9-sgrc6   1/1     Running   0          28m     10.47.255.250   ip-172-31-12-82.ap-northeast-1.compute.internal   <none>
cirros-vn1                           1/1     Running   0          7m56s   10.0.1.3        ip-172-31-12-82.ap-northeast-1.compute.internal   <none>
[root@ip-172-31-22-24] #

[root@ip-172-31-41-5] # kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP              NODE                                            NOMINATED NODE
cirros-deployment-75c98888b9-5lqzc   1/1     Running   0          27m     10.47.255.250   ip-172-31-4-1.ap-northeast-1.compute.internal   <none>
cirros-deployment-75c98888b9-dg8bf   1/1     Running   0          27m     10.47.255.249   ip-172-31-4-1.ap-northeast-1.compute.internal   <none>
cirros-vn2                           1/1     Running   0          5m36s   10.0.2.3        ip-172-31-4-1.ap-northeast-1.compute.internal   <none>
[root@ip-172-31-41-5] #
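cirros-vn1 and cirros-vn2 sit on the custom virtual networks vn1 and vn2, created on kube-master1 and kube-master2 respectively. A minimal sketch of one way to create such a pod, assuming the Contrail CNI's NetworkAttachmentDefinition support with the opencontrail.org/cidr annotation (this manifest is my reconstruction; only the names and the 10.0.1.0/24 CIDR follow the outputs here, and annotation handling can differ by release):

cat <<'EOF' | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vn1
  annotations:
    opencontrail.org/cidr: "10.0.1.0/24"
spec:
  config: '{ "cniVersion": "0.3.0", "type": "contrail-k8s-cni" }'
---
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vn1
  annotations:
    k8s.v1.cni.cncf.io/networks: vn1
spec:
  containers:
  - name: cirros-vn1
    image: cirros
EOF

On kube-master2, the same pattern with vn2 and 10.0.2.0/24 produces cirros-vn2.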
The following ping runs inside cirros-vn1 (10.0.1.3, cluster k8s1) and targets cirros-vn2 (10.0.2.3, cluster k8s2):

/ # ping 10.0.2.3
PING 10.0.2.3 (10.0.2.3): 56 data bytes
64 bytes from 10.0.2.3: seq=83 ttl=63 time=1.333 ms
64 bytes from 10.0.2.3: seq=84 ttl=63 time=0.327 ms
64 bytes from 10.0.2.3: seq=85 ttl=63 time=0.319 ms
64 bytes from 10.0.2.3: seq=86 ttl=63 time=0.325 ms
^C
--- 10.0.2.3 ping statistics ---
87 packets transmitted, 4 packets received, 95% packet loss
round-trip min/avg/max = 0.319/0.576/1.333 ms
/ #
/ # ip -o a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000\    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue \    link/ether 02:b9:11:c9:4c:b1 brd ff:ff:ff:ff:ff:ff
18: eth0    inet 10.0.1.3/24 scope global eth0\       valid_lft forever preferred_lft forever
/ #

(Replies only start at seq=83; presumably the earlier packets were dropped until the vn1-to-vn2 network policy described below was attached.)
-> ping works well between pods belonging to different Kubernetes clusters

[root@ip-172-31-9-29] # ./contrail-introspect-cli/ist.py ctr route show -t default-domain:k8s1-default:vn1:vn1.inet.0

default-domain:k8s1-default:vn1:vn1.inet.0: 2 destinations, 2 routes (1 primary, 1 secondary, 0 infeasible)

10.0.1.3/32, age: 0:06:50.001343, last_modified: 2019-Jul-28 18:23:08.243656
    [XMPP (interface)|ip-172-31-12-82.local] age: 0:06:50.005553, localpref: 200, nh: 172.31.12.82, encap: ['gre', 'udp'], label: 50, AS path: None

10.0.2.3/32, age: 0:02:25.188713, last_modified: 2019-Jul-28 18:27:33.056286
    [XMPP (interface)|ip-172-31-4-1.local] age: 0:02:25.193517, localpref: 200, nh: 172.31.4.1, encap: ['gre', 'udp'], label: 50, AS path: None
[root@ip-172-31-9-29] #
[root@ip-172-31-9-29] # ./contrail-introspect-cli/ist.py ctr route show -t default-domain:k8s2-default:vn2:vn2.inet.0

default-domain:k8s2-default:vn2:vn2.inet.0: 2 destinations, 2 routes (1 primary, 1 secondary, 0 infeasible)

10.0.1.3/32, age: 0:02:36.482764, last_modified: 2019-Jul-28 18:27:33.055702
    [XMPP (interface)|ip-172-31-12-82.local] age: 0:02:36.489419, localpref: 200, nh: 172.31.12.82, encap: ['gre', 'udp'], label: 50, AS path: None

10.0.2.3/32, age: 0:04:37.126317, last_modified: 2019-Jul-28 18:25:32.412149
    [XMPP (interface)|ip-172-31-4-1.local] age: 0:04:37.133912, localpref: 200, nh: 172.31.4.1, encap: ['gre', 'udp'], label: 50, AS path: None
[root@ip-172-31-9-29] #
Based on the following network policy, each kube-master's virtual network has routes to the other kube-master's pods:

(venv) [root@ip-172-31-9-29] # contrail-api-cli --host 172.31.9.29 ls -l virtual-network
virtual-network/f9d06d27-8fc1-413d-a6d6-c51c42191ac0   default-domain:k8s2-default:vn2
virtual-network/384fb3ef-247b-42e6-a628-7111fe343f90   default-domain:k8s2-default:k8s2-default-service-network
virtual-network/c3098210-983b-46bc-b750-d06acfc66414   default-domain:k8s1-default:k8s1-default-pod-network
virtual-network/1ff6fdbd-ac2e-4601-b08c-5f7255466312   default-domain:default-project:ip-fabric
virtual-network/d8d95738-0a00-457f-b21e-60304859d1f9   default-domain:k8s2-default:k8s2-default-pod-network
virtual-network/0c075b76-4219-4f79-a4f5-1b4e6729f16e   default-domain:k8s1-default:k8s1-default-service-network
virtual-network/985b3b5f-84b7-4810-a54d-abd09a37f525   default-domain:k8s1-default:vn1
virtual-network/23782ea7-4000-491f-b20d-01c6ab9e2ba8   default-domain:default-project:default-virtual-network
virtual-network/90cce352-ef9b-4358-81b3-ef87a9cb63e8   default-domain:default-project:__link_local__
virtual-network/0292810c-c511-4147-89c0-9fdd571ccce8   default-domain:default-project:dci-network
(venv) [root@ip-172-31-9-29] #
(venv) [root@ip-172-31-9-29] # contrail-api-cli --host 172.31.9.29 ls -l network-policy
network-policy/134d38b2-79e2-4a3e-a2f7-a3d61ceaf5e2   default-domain:k8s1-default:vn1-to-vn2
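For completeness, a policy like vn1-to-vn2 can also be created and attached through the API. A hedged sketch using the vnc_api Python bindings (this is my reconstruction of an equivalent "pass any" policy, not the article's original procedure; only the controller host, project, and VN names come from the outputs above):

python3 <<'EOF'
# sketch: create a 'pass any' policy between vn1 (k8s1) and vn2 (k8s2),
# then attach it to both VNs so their routes leak into each other
from vnc_api.vnc_api import (
    VncApi, NetworkPolicy, PolicyEntriesType, PolicyRuleType, AddressType,
    PortType, ActionListType, VirtualNetworkPolicyType, SequenceType)

vnc = VncApi(api_server_host='172.31.9.29')  # Tungsten Fabric controller

rule = PolicyRuleType(
    direction='<>', protocol='any',
    src_addresses=[AddressType(virtual_network='default-domain:k8s1-default:vn1')],
    dst_addresses=[AddressType(virtual_network='default-domain:k8s2-default:vn2')],
    src_ports=[PortType(-1, -1)], dst_ports=[PortType(-1, -1)],
    action_list=ActionListType(simple_action='pass'))

project = vnc.project_read(fq_name=['default-domain', 'k8s1-default'])
policy = NetworkPolicy('vn1-to-vn2', parent_obj=project,
                       network_policy_entries=PolicyEntriesType([rule]))
vnc.network_policy_create(policy)

# attach the policy to both virtual networks
for fq_name in (['default-domain', 'k8s1-default', 'vn1'],
                ['default-domain', 'k8s2-default', 'vn2']):
    vn = vnc.virtual_network_read(fq_name=fq_name)
    vn.add_network_policy(policy,
                          VirtualNetworkPolicyType(sequence=SequenceType(0, 0)))
    vnc.virtual_network_update(vn)
EOF

Attaching the policy to both VNs is what makes the controller leak each VN's routes into the other, which is exactly what the vn1/vn2 route tables above show.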