2025-04-06 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article explains in detail how to implement multi-cloud networking across Kubernetes and OpenStack. We think it is quite practical and share it here as a reference; we hope you gain something from reading it.
OpenContrail Overview
OpenContrail is an open-source SDN & NFV solution that has been closely tied to OpenStack since the Havana release. Together with Nicira (now VMware NSX-VH), it was among the first production-ready Neutron plug-ins, and the latest Summit survey showed it is also the most frequently deployed vendor-based solution, second overall only to Open vSwitch.
OpenContrail has been integrated with OpenStack, VMware, Docker, and Kubernetes. Kube-network-manager, a Kubernetes network plug-in, was already under development at last year's OpenStack Summit in Vancouver and was first released late last year.
Architecture
We started testing with two separate Contrail deployments and then set up BGP federation between them. The reason for federation is Keystone authentication in kube-network-manager: when contrail-neutron-plugin is enabled, the Contrail API uses Keystone authentication, a feature not yet implemented in the Kubernetes plug-in. Contrail federation is described in more detail below.
The diagram below shows the high-level architecture, with the OpenStack cluster on the left and the Kubernetes cluster on the right. OpenStack and OpenContrail are deployed in a highly available best-practice design that can scale to hundreds or thousands of compute nodes.
The following figure shows the federation of two Contrail clusters. In general, this feature lets you connect Contrail controllers across sites in a multi-site data center without a physical gateway. The control nodes of each site peer with those of the other sites via BGP. This approach can extend L2 and L3 networks across multiple data centers.
Typically, two separate OpenStack clouds or two OpenStack regions would use this design. All Contrail components (including the vRouter) are identical; kube-network-manager and neutron-contrail-plugin simply translate API requests from the different platforms, while the core functionality of the network solution remains unchanged. This brings not only a powerful networking engine but analytics as well.
Application Stack
Overview
Let's look at a typical scenario. Our developers gave us a docker-compose.yml (https://github.com/django-leonardo/block/master/contrib/haproxy/docker-compose.yml), which is also used for development and local testing on laptops. This approach was easy because our developers already knew Docker and the application workload. The application stack consists of the following components:
Database - PostgreSQL or MySQL database cluster
Cache - cache for content
Leonardo - Django CMS Leonardo, used to test the application stack
Nginx - web proxy
Load balancer - HAProxy load balancer for container scaling
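As a rough sketch of such a stack (image names, versions, and service names here are illustrative assumptions, not taken from the linked compose file), the composition might look like:

```yaml
version: "2"
services:
  db:
    image: postgres:9.5        # or a MySQL/Galera image
  cache:
    image: redis:3             # content cache (assumed)
  leonardo:
    image: django-leonardo     # hypothetical image name for the CMS
    depends_on: [db, cache]
  nginx:
    image: nginx:1.9           # web proxy in front of the app servers
    depends_on: [leonardo]
  haproxy:
    image: haproxy:1.6         # load balancer for container scaling
    ports: ["80:80"]
    depends_on: [nginx]
```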
When we want to move this to production, we could transfer everything into Kubernetes replication controllers and services, but as we mentioned at the beginning, not everything is suited to containers. So we split the database cluster out onto OpenStack virtual machines and rewrote the rest as Kubernetes manifests.
Application Deployment
This section describes the workflow for application provisioning on OpenStack and Kubernetes.
OpenStack Side
As a first step, we launched the database stack on OpenStack. This creates three virtual machines with PostgreSQL and a database network. The database network is a private, tenant-isolated network.
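The article does not show the stack definition itself; a minimal Heat sketch of such a stack (resource names, image, and flavor are assumptions) might look like:

```yaml
heat_template_version: 2015-04-30
resources:
  db_net:
    type: OS::Neutron::Net          # private tenant network for the database
  db_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: db_net }
      cidr: 10.0.100.0/24           # matches the 10.0.100.X routes seen later
  db_servers:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3                      # three PostgreSQL virtual machines
      resource_def:
        type: OS::Nova::Server
        properties:
          image: ubuntu-14.04       # assumed image
          flavor: m1.medium         # assumed flavor
          networks:
            - network: { get_resource: db_net }
```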
Kubernetes Side
On the Kubernetes side, we had to create the replication controllers and services for Leonardo and Nginx. The manifests are available here: github.com/pupapaik/scripts/tree/master/kubernetes/leonardo
To make this work smoothly with network isolation, take a look at the following manifests:
leonardo-rc.yaml - replication controller for the Leonardo application with 3 replicas and virtual network leonardo
leonardo-svc.yaml - Leonardo service exposing the application pods on the cluster network with a virtual IP on port 8000
nginx-rc.yaml - NGINX replication controller with 3 replicas, virtual network nginx, and a policy allowing traffic to the leonardo-svc network (this example does not use SSL)
nginx-svc.yaml - service with a cluster VIP and a public virtual IP for accessing the application from outside the cluster
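As a minimal sketch of two of these manifests (the real files are in the linked repo; the label names and images here are assumptions based on how the early Contrail Kubernetes plug-in mapped labels to networks and policies):

```yaml
# leonardo-svc.yaml (sketch) - exposes the app pods on port 8000
apiVersion: v1
kind: Service
metadata:
  name: leonardo
  labels:
    name: leonardo
spec:
  ports:
  - port: 8000
  selector:
    name: leonardo
---
# nginx-rc.yaml (sketch) - the "uses" label expresses the policy
# allowing traffic from the nginx network to the leonardo service
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
        uses: leonardo
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
        ports:
        - containerPort: 80
```

Each file would then be applied with a command such as `kubectl create -f leonardo-rc.yaml`, repeated for the remaining manifests.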
Let's call kubectl to apply all the manifests.
The following pods and services were created in Kubernetes.
Only the Nginx service has a public IP, 185.22.97.188, which is a floating IP used for load balancing. All traffic is now balanced by ECMP on the Juniper MXs.
For the cluster to be fully operational, the database virtual network on OpenStack and the leonardo virtual network on the Kubernetes Contrail must be connected. Go to both Contrail UIs and set the same Route Target on both networks. This can also be automated via Contrail resources.
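Setting the same route target on both sides could also be scripted. The sketch below assumes the Python `vnc_api` client for the Contrail API; the host names, ASN, target number, and network names are illustrative, not taken from the article.

```python
def make_route_target(asn, target_id):
    """Build a route-target string in the format Contrail expects."""
    return f"target:{asn}:{target_id}"


def set_route_target(api_host, fq_name, rt):
    """Attach a route target to a virtual network via the Contrail API.

    fq_name is the network's fully qualified name,
    e.g. ['default-domain', 'demo', 'leonardo'].
    """
    # Imported lazily so the helper above works without the client installed.
    from vnc_api.vnc_api import VncApi, RouteTargetList
    api = VncApi(api_server_host=api_host)
    vn = api.virtual_network_read(fq_name=fq_name)
    vn.set_route_target_list(RouteTargetList(route_target=[rt]))
    api.virtual_network_update(vn)


# Using the *same* target on both clusters stitches the networks together:
# set_route_target("contrail-os",  ["default-domain", "demo", "database"],
#                  make_route_target(64512, 10000))
# set_route_target("contrail-k8s", ["default-domain", "default", "leonardo"],
#                  make_route_target(64512, 10000))
```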
The following diagram shows how the production application stack should look. At the top are two Juniper MXs with a public VRF, where the floating IPs are propagated. Traffic is spread via ECMP over MPLS-over-GRE tunnels to the three nginx pods. Nginx proxies requests to the Leonardo application servers, which store sessions and content in the PostgreSQL database cluster running on OpenStack virtual machines.
The connection between pods and virtual machines is direct, without any intermediate routing hop; the Juniper MXs are only used for outbound connections to the Internet. Because application sessions are stored in the database (or typically in a cache such as Redis), we did not need a dedicated L7 load balancer, and ECMP worked perfectly.
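Why plain ECMP is enough here: the router hashes each flow's 5-tuple to pick one next hop, so all packets of a single TCP connection reach the same pod, and because session state lives in the database rather than in pod memory, it does not matter which pod a new connection lands on. A toy illustration of flow-hash path selection (the pod IPs are hypothetical):

```python
import hashlib


def ecmp_next_hop(src_ip, src_port, dst_ip, dst_port, proto, next_hops):
    """Pick a next hop by hashing the flow 5-tuple (toy model of ECMP).

    Real routers use hardware hash functions, but the property is the
    same: packets of one flow always map to the same path.
    """
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]


# Hypothetical IPs of the three nginx pods behind the floating IP
pods = ["10.150.255.243", "10.150.255.244", "10.150.255.245"]

path = ecmp_next_hop("203.0.113.7", 51912, "185.22.97.188", 80, "tcp", pods)
# The same flow always selects the same pod:
assert path == ecmp_next_hop("203.0.113.7", 51912, "185.22.97.188", 80, "tcp", pods)
```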
Other Outputs
This section shows other interesting outputs from the application stack. The description of the Nginx load-balancer service shows the floating IP and the private cluster IP, followed by the three IP addresses of the nginx pods. Traffic is distributed via vRouter ECMP.
The nginx routing table shows internal routes between the pods, as well as route 10.254.98.15/32, which points to the leonardo service.
Leonardo's routing table is very similar to nginx's, except for the 10.0.100.X/32 routes, which point to the OpenStack virtual machines in the other Contrail cluster.
The last output is from the Juniper MX VRF, showing multiple paths to the nginx pods.
That is all for "how to implement Kubernetes and OpenStack multi-cloud networking". We hope the content above is of some help and lets you learn something new; if you found the article useful, please share it so more people can see it.
© 2024 shulou.com SLNews company. All rights reserved.