After writing so many hand-holding WireGuard tutorials, I am finally getting to Kubernetes; otherwise, how could I live up to the name "cloud native"? If you are still unfamiliar with WireGuard after reading this article, be sure to read the following articles in order:
WireGuard tutorial: how WireGuard works
WireGuard Quick installation tutorial
WireGuard configuration tutorial: using wg-gen-web to manage WireGuard configuration
WireGuard full mesh mode (full mesh) configuration guide
If you encounter something you don't understand, you can refer to the notes in this article:
WireGuard tutorial: detailed explanation of the Construction, use and configuration of WireGuard
The remaining articles are optional; read them if you are interested:
Why don't I advocate WireGuard?
Why not "Why not WireGuard?"
WireGuard tutorial: NAT-to-NAT traversal with DNS-SD
WireGuard has two applications in the cloud-native world: networking and encryption, and both ultimately come down to the CNI. You can layer WireGuard encryption on top of an existing networking scheme, or use WireGuard for the networking itself. At present, the CNIs that use WireGuard directly for networking are Flannel, Wormhole, and Kilo, while the only CNI that uses WireGuard purely for data encryption is Calico. Kilo can also be combined with Flannel, in which case WireGuard is only used for encryption.
What interests me is using WireGuard to build the network itself. Imagine you have one cloud server each on AWS, Azure, GCP, and Alibaba Cloud, and you want to join these four machines into a single K3s cluster, with the Pod IPs and Service IPs of that cluster directly reachable from any of your devices. How do you achieve this gracefully?
It takes two steps: first, connect the container networks across the nodes of the K3s cluster; second, connect your local machine to the container network in the cloud. Let's start with the first step, connecting the container network across clouds. This is mainly the CNI's job. Flannel offers few customization options and Wormhole has not been updated for a long time, so Kilo is the recommended CNI for K3s.
Before deploying Kilo, you need to adjust the startup parameters of K3s to disable the default CNI:
k3s server --flannel-backend none ...
Then restart K3s server:
$ systemctl restart k3s
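The same flag can also be set persistently instead of editing the server command line; a minimal sketch, assuming K3s reads its default configuration file at /etc/rancher/k3s/config.yaml:

$ mkdir -p /etc/rancher/k3s
$ cat > /etc/rancher/k3s/config.yaml <<'EOF'
flannel-backend: "none"
EOF
$ systemctl restart k3s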
For details, see the article on deploying the K3s control plane; if you are deploying K3s from scratch, see the article on deploying a K3s cluster across cloud vendors.
1. Kilo network topology
Kilo supports the following three network topologies:
Logical grouping mode (Logical Groups)
By default, Kilo creates a mesh network between the different logical regions of the cluster, such as data centers or cloud providers. Kilo tries to use the node label topology.kubernetes.io/region to determine a node's logical region. You can point it at a different label with the --topology-label= startup flag, or set the region explicitly by adding the annotation kilo.squat.ai/location to a node.
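To see what Kilo will infer, you can list the region label it uses by default and check for any location annotations already set; a quick check (the node name is a placeholder):

$ kubectl get nodes -L topology.kubernetes.io/region
$ kubectl describe node <node-name> | grep kilo.squat.ai/location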
For example, to put GCP and AWS nodes in the same K3s cluster, you can annotate all GCP nodes with the following command:
$ for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="gcp"; done
This way, all annotated nodes fall into the same logical region, while unannotated nodes fall into the default region, giving two logical regions in total. Each logical region elects a leader, WireGuard tunnels are established between the leaders of the different regions, and nodes within a region connect their container networks in bridge mode.
The network topology diagram can be obtained through kgctl:
$ kgctl graph | circo -Tsvg > cluster.svg
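If kgctl is not yet on your machine, one way to get it is to build it from the Kilo repository; a sketch assuming a Go toolchain and a Debian-style package manager for Graphviz (which provides circo):

$ git clone https://github.com/squat/kilo && cd kilo
$ go build -o /usr/local/bin/kgctl ./cmd/kgctl
$ apt-get install -y graphviz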
Full mesh mode (Full Mesh)
Full mesh mode is really a special case of logical grouping mode in which every node is its own logical region, so every node establishes a WireGuard tunnel with every other node. For more details, see the WireGuard full mesh mode (full mesh) configuration guide. You can enable full mesh mode with Kilo's --mesh-granularity=full startup flag.
The network topology diagram can be obtained through kgctl:
$ kgctl graph | circo -Tsvg > cluster.svg
Mixed mode
Mixed mode combines logical grouping mode and full mesh mode. For example, if the cluster contains both GCP nodes and bare-metal nodes that have no secure private network between them, you can put the GCP nodes into one logical region and connect the bare-metal nodes in full mesh mode; that is the mixed mode. Concretely, give all GCP nodes the same annotation, and give each bare-metal node its own:
$ for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="gcp"; done
$ for node in $(kubectl get nodes | tail -n +2 | grep -v gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="$node"; done
Get the network topology diagram through kgctl:
$ kgctl graph | circo -Tsvg > cluster.svg
If the cluster also contains AWS nodes, you can add annotations as follows:
$ for node in $(kubectl get nodes | grep -i aws | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="aws"; done
$ for node in $(kubectl get nodes | grep -i gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="gcp"; done
$ for node in $(kubectl get nodes | tail -n +2 | grep -v aws | grep -v gcp | awk '{print $1}'); do kubectl annotate node $node kilo.squat.ai/location="$node"; done
The network topology architecture is as follows:
2. Kilo deployment
If you are using servers from a domestic (Chinese) public cloud, the IP address is usually bound to the MAC address and source-address checking cannot be disabled, so bridge mode does not work and Kilo's logical grouping mode cannot be used; only full mesh mode is available. If the cluster also includes a data center, bridge mode can still be used between the data-center nodes: give those nodes the same annotation and give the other nodes distinct annotations.
My nodes are all on domestic public clouds, so logical grouping mode is out and full mesh mode is the only option. This section demonstrates how to deploy Kilo, using full mesh mode as the example.
Kilo needs a kubeconfig, so copy the kubeconfig file from the master to every node in advance:
$ scp -r /etc/rancher/k3s/ nodexxx:/etc/rancher/k3s/
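With more than one worker, a small loop avoids repetition; a minimal sketch that copies just the kubeconfig file (k3s.yaml), assuming passwordless SSH and hypothetical host names:

$ for host in node1 node2 node3; do
    ssh "$host" 'mkdir -p /etc/rancher/k3s'
    scp /etc/rancher/k3s/k3s.yaml "$host":/etc/rancher/k3s/
  done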
Modify the kubeconfig file on each node so that the API server address points to the master's public IP:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ******
    server: https://<master-public-ip>:6443
  name: default
...
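If you prefer to script the edit, the server line can be patched in place; a sketch with a hypothetical public IP (K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml and points it at 127.0.0.1 by default):

$ sed -i 's#https://127.0.0.1:6443#https://203.0.113.10:6443#' /etc/rancher/k3s/k3s.yaml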
Add the relevant annotations to each node:
# Specify the endpoint (public IP:port) on which the WireGuard tunnel is established
$ kubectl annotate nodes xxx kilo.squat.ai/force-endpoint=<public-ip:port>
# Specify the node's private IP; WireGuard adds it to the allowed ips so that every node's private IP is reachable
$ kubectl annotate nodes xxx kilo.squat.ai/force-internal-ip=<private-ip>
Clone the official Kilo repository and enter the manifests directory:
$ git clone https://github.com/squat/kilo
$ cd kilo/manifests
Modify the kilo deployment manifest and adjust the startup parameters:
...
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kilo
  namespace: kube-system
  labels:
    app.kubernetes.io/name: kilo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kilo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kilo
    spec:
      serviceAccountName: kilo
      hostNetwork: true
      containers:
      - name: kilo
        image: squat/kilo
        args:
        - --kubeconfig=/etc/kubernetes/kubeconfig
        - --hostname=$(NODE_NAME)
+       - --encapsulate=never
+       - --mesh-granularity=full
...
--encapsulate=never means traffic between nodes in the same logical region is not encapsulated with the IPIP protocol.
--mesh-granularity=full enables full mesh mode.
Deploy kilo using the deployment manifest:
$ kubectl apply -f kilo-k3s.yaml
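After applying, it is worth confirming that a Kilo pod is running on every node; for example (the label comes from the manifest above):

$ kubectl -n kube-system rollout status daemonset/kilo
$ kubectl -n kube-system get pods -l app.kubernetes.io/name=kilo -o wide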
After successful deployment, two network interfaces are added to each node:
14: kilo0: mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.4.0.1/16 brd 10.4.255.255 scope global kilo0
       valid_lft forever preferred_lft forever
6: kube-bridge: mtu 1420 qdisc noqueue state UP group default qlen 1000
    link/ether 2a:7d:32:71:75:97 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.1/24 scope global kube-bridge
       valid_lft forever preferred_lft forever
    inet6 fe80::287d:32ff:fe71:7597/64 scope link
       valid_lft forever preferred_lft forever
Where kilo0 is the WireGuard virtual network interface:
$ ip -d link show kilo0
14: kilo0: mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/none promiscuity 0
    wireguard addrgenmode none numtxqueues 1 gso_max_size 65536 gso_max_segs 65535

$ wg show kilo0
interface: kilo0
  public key: VLAjOkfb1U3/ftNOVtAjY8P3hafR12qQB05ueUJtLBQ=
  private key: (hidden)
  listening port: 51820

peer: JznFuu9Q7gXcfHFGRLB/LirKi8ttSX22T5f+1cWomzA=
  endpoint: xxxx:51820
  allowed ips: 10.42.1.0/24, 192.168.20.1/32, 10.4.0.2/32
  latest handshake: 51 seconds ago
  transfer: 88.91 MiB received, 76.11 MiB sent

peer: gOvNh2FHJKtfigxV1Az5OFCq2WMq3YEn2F4H4xknVFI=
  endpoint: xxxx:51820
  allowed ips: 10.42.2.0/24, 192.168.30.1/32, 10.4.0.3/32
  latest handshake: 17 seconds ago
  transfer: 40.86 MiB received, 733.03 MiB sent
kube-bridge is the bridge to which the veth pairs of the local container network are attached:
$ bridge link show kube-bridge
7: veth99d2f30b state UP @wg0: mtu 1420 master kube-bridge state forwarding priority 32 cost 2
8: vethfb6d487c state UP @wg0: mtu 1420 master kube-bridge state forwarding priority 32 cost 2
…: veth88ae725c state UP @wg0: mtu 1420 master kube-bridge state forwarding priority 32 cost 2
11: veth5c0d00d8 state UP @wg0: mtu 1420 master kube-bridge state forwarding priority 32 cost 2
12: veth6ae51319 state UP @wg0: mtu 1420 master kube-bridge state forwarding priority 32 cost 2
13: vethe5796697 state UP @wg0: mtu 1420 master kube-bridge state forwarding priority 32 cost 2
15: vethe169cdda state UP @wg0: mtu 1420 master kube-bridge state forwarding priority 32 cost 2
21: vethfe78e116 state UP @wg0: mtu 1420 master kube-bridge state forwarding priority 32 cost 2
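Beyond inspecting interfaces, a quick end-to-end check is to start a throwaway pod and ping a Pod IP scheduled on another node; a sketch (the image, pod name, and target IP are placeholders):

$ kubectl run net-test --image=busybox --restart=Never --rm -it -- ping -c 3 10.42.1.10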
At this point Kilo's full mesh mode is deployed, and containers on the cloud-server nodes of the different public clouds can reach each other. The next step is to connect the local machine to the container network in the cloud.
3. Connect the local and cloud container networks
To make things concrete, assume there are four public cloud nodes, on AWS, Azure, GCP, and Alibaba Cloud, that the Service subnet is 10.43.0.0/16, and that the Pod subnet is 10.42.0.0/16; the per-node Pod subnets are then 10.42.0.0/24, 10.42.1.0/24, 10.42.2.0/24, and 10.42.3.0/24 respectively.
To keep this separate from the Kubernetes cluster network, use a new network interface, wg0. A full mesh is the recommended architecture for this network; see the WireGuard full mesh mode (full mesh) configuration guide for details.
To let a local client reach Pod IPs in the cloud, you can route 10.42.0.0/24 to the AWS node, 10.42.1.0/24 to the Azure node, and so on. You could also route the whole 10.42.0.0/16 through a single cloud node, but I do not recommend that architecture.
Service IPs are different: unlike Pod IPs, they are not carved into per-node subnets; every node allocates from the same large subnet, so the per-node routing above does not work. Instead, pick one node to forward all Service traffic from local clients; assume the AWS node is chosen.
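Whichever node relays Service traffic must have IP forwarding enabled; this is usually already the case on a Kubernetes node, but it is worth confirming:

$ sysctl net.ipv4.ip_forward       # should print net.ipv4.ip_forward = 1
$ sysctl -w net.ipv4.ip_forward=1  # enable it if it does not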
As before, wg-gen-web is used to manage the WireGuard configuration, and it is assumed to be installed on the AWS node.
Note that kilo0 already connects the private subnets of the K3s nodes, so wg0 no longer needs to carry them; simply delete those per-node private subnets from the wg0 configuration:
First, add a new configuration for the local client, with 10.42.0.0/24 and 10.43.0.0/16 added to its Allowed IPs so that the local client can reach the Pod IPs on the AWS node and the Service IPs of the whole cluster:
At this point, you will find that the wg0.conf in the AWS node already contains the configuration of the local client:
$ cat /etc/wireguard/wg0.conf
...
# macOS / / Updated: 2021-03-01 05:52:20.355083356 +0000 UTC / Created: 2021-03-01 05:52:20.355083356 +0000 UTC
[Peer]
PublicKey = pSAxmHb6xXRMl9667pFMLg/1cRBFDRjcVdD7PKtMP1M=
AllowedIPs = 10.0.0.5/32
Modify the WireGuard configuration file of the Azure node to add the configuration of the local client:
$ cat Azure.conf
[Interface]
Address = 10.0.0.2/32
PrivateKey = IFhAyIWY7sZmabsqDDESj9fqoniE/uZFNIvAfYHjN2o=
PostUp = iptables -I FORWARD -i wg0 -j ACCEPT; iptables -I FORWARD -o wg0 -j ACCEPT; iptables -I INPUT -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -D FORWARD -o wg0 -j ACCEPT; iptables -D INPUT -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
PublicKey = JgvmQFmhUtUoS3xFMFwEgP3L1Wnd8hJc3laJ90Gwzko=
PresharedKey = ...
AllowedIPs = 10.0.0.1/32
Endpoint = aws.com:51820

# Aliyun / / Updated: 2021-02-24 07:57:45.941019829 +0000 UTC / Created: 2021-02-24 07:57:45.941019829 +0000 UTC
[Peer]
PublicKey = kVq2ATMTckCKEJFF4TM3QYibxzlh+b9CV4GZ4meQYAo=
AllowedIPs = 10.0.0.4/32
Endpoint = aliyun.com:51820

# GCP / / Updated: 2021-02-24 07:57:27.3555646 +0000 UTC / Created: 2021-02-24 07:57:27.3555646 +0000 UTC
[Peer]
PublicKey = qn0Xfyzs6bLKgKcfXwcSt91DUxSbtATDIfe4xwsnsGg=
AllowedIPs = 10.0.0.3/32
Endpoint = gcp.com:51820

# macOS / / Updated: 2021-03-01 05:... +0000 UTC
[Peer]
PublicKey = CEN+s+jpMX1qzQRwbfkfYtHoJ+Hqq4APfISUkxmQ0hQ=
AllowedIPs = 10.0.0.5/32
Similarly, the GCP and Aliyun nodes also need the new local client added to their WireGuard configuration.
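On each of those nodes this just means appending a [Peer] block for the local client and reloading WireGuard without dropping existing sessions; a sketch with a placeholder for the client's public key:

$ cat >> /etc/wireguard/wg0.conf <<'EOF'

# macOS
[Peer]
PublicKey = <local-client-public-key>
AllowedIPs = 10.0.0.5/32
EOF
$ wg syncconf wg0 <(wg-quick strip wg0)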
Download the configuration file for the local client:
Copy the Aliyun, GCP, and Azure peers from the AWS node's wg0.conf into the local client's configuration, delete their PresharedKey lines, and then add the Endpoint and the Pod subnet of the corresponding node to each peer:
[Interface]
Address = 10.0.0.5/32
PrivateKey = wD595KeTPKBDneKWOTUjJQjxZ5RrlxsbeEsWL0gbyn8=

[Peer]
PublicKey = JgvmQFmhUtUoS3xFMFwEgP3L1Wnd8hJc3laJ90Gwzko=
PresharedKey = 5htJA/UoIulrgAn9tDdUxt1WYmOriCXIujBVVaz/uZI=
AllowedIPs = 10.0.0.1/32, 10.42.0.0/24, 10.43.0.0/16
Endpoint = aws.com:51820

# Aliyun / / Updated: 2021-02-24 07:57:45.941019829 +0000 UTC / Created: 2021-02-24 07:57:45.941019829 +0000 UTC
[Peer]
PublicKey = kVq2ATMTckCKEJFF4TM3QYibxzlh+b9CV4GZ4meQYAo=
AllowedIPs = 10.0.0.4/32, 10.42.3.0/24
Endpoint = aliyun.com:51820

# GCP / / Updated: 2021-02-24 07:57:27.3555646 +0000 UTC / Created: 2021-02-24 07:57:27.3555646 +0000 UTC
[Peer]
PublicKey = qn0Xfyzs6bLKgKcfXwcSt91DUxSbtATDIfe4xwsnsGg=
AllowedIPs = 10.0.0.3/32, 10.42.2.0/24
Endpoint = gcp.com:51820

# Azure / / Updated: 2021-02-24 07:57:00.751653134 +0000 UTC / Created: 2021-02-24 07:43:52.717385042 +0000 UTC
[Peer]
PublicKey = OzdH42suuOpVY5wxPrxM+rEAyEPFg2eL0ZI29N7eSTY=
AllowedIPs = 10.0.0.2/32, 10.42.1.0/24
Endpoint = azure.com:51820
Finally, start WireGuard on the local machine and you can access the Kubernetes cluster running in the cloud.
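As a final check from the local machine, bring up the tunnel and hit a Pod IP and a Service IP directly; a sketch (the Pod IP is a placeholder; 10.43.0.1 is the default kubernetes Service address under a 10.43.0.0/16 Service CIDR):

$ wg-quick up wg0
$ ping -c 3 10.42.0.10        # a Pod IP on the AWS node
$ curl -k https://10.43.0.1   # the kubernetes Service; any HTTP response from the apiserver shows routing works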
If you want to go further and access Services in the K3s cluster by name from any device, you will have to do some work on CoreDNS; those who are interested can explore it themselves.
With this hole finally filled, the WireGuard series is finished for now. If I find more interesting ways to play with it later, I will share them as soon as possible.