
How a Kubernetes Pod Gets an IP Address in Linux

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains how a Kubernetes Pod obtains its IP address on Linux. The content is straightforward, and I hope it helps resolve your doubts. Let me lead you through "How a Kubernetes Pod Gets an IP Address in Linux".

One of the core requirements of the Kubernetes network model is that each Pod has its own IP address and can use that IP address to communicate. When many people first start using Kubernetes, it is not clear how an IP address is assigned to each Pod. They understand how the various components work independently, but not how those components are used together. For example, they know what CNI plug-ins are, but not how they are invoked. This article describes how the various network components in a Kubernetes cluster interact and how they help each Pod obtain an IP address.

There are a variety of ways to set up networking in Kubernetes, as well as various options for the container runtime. This article uses Flannel as the network provider and containerd as the container runtime.

Background concept

Container network

Containers on the same host

One way containers running on the same host can communicate with each other via IP addresses is through a Linux bridge; in the Kubernetes (and Docker) world, a veth (virtual Ethernet) device pair is created for this. One end of the veth pair sits in the container's network namespace, and the other end is connected to the Linux bridge on the host network. All containers on the host attach one end of their veth pair to this bridge and can reach each other by IP address through it. The Linux bridge is also assigned an IP address and acts as the gateway for outbound Pod traffic destined for other nodes.

Containers on different hosts

One way containers running on different hosts can communicate with each other via their IP addresses is packet encapsulation. Flannel implements this through vxlan, which wraps the original packet in a UDP packet and sends it to the destination.

In a Kubernetes cluster, Flannel creates a vxlan device and some routing-table entries on each node. Every packet destined for a container on a different host goes through the vxlan device and is encapsulated in a UDP packet. At the destination, the encapsulated packet is extracted and routed to the target Pod.
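As a rough sketch of the idea (this is not Flannel's code; the header layout follows RFC 7348, and the helper names are made up), encapsulation can be modeled as prepending a VXLAN header carrying a VNI to the original frame before it travels inside a UDP packet:

```python
import struct

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    # VXLAN header (RFC 7348): 8 bytes -- flags with the I bit set,
    # then a 24-bit VNI followed by 8 reserved bits.
    return struct.pack("!II", 0x08000000, vni << 8) + inner_frame

def vxlan_decap(udp_payload: bytes) -> tuple[int, bytes]:
    # Reverse step performed on the destination node: strip the header
    # and recover the VNI and the original frame.
    _flags, vni_field = struct.unpack("!II", udp_payload[:8])
    return vni_field >> 8, udp_payload[8:]

frame = b"packet from pod 10.244.1.2 to pod 10.244.2.3"
wrapped = vxlan_encap(frame, vni=1)    # would be sent inside a UDP packet
vni, unwrapped = vxlan_decap(wrapped)  # done by the receiving node
print(vni, unwrapped == frame)
```

The real vxlan device does this in the kernel; the sketch only shows why the receiver can recover the original Pod-to-Pod frame intact.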

Note: this is just one way to configure the network between containers.

CRI

CRI (Container Runtime Interface) is a plug-in interface that allows kubelet to use different container runtimes. Multiple container runtimes implement the CRI API, which lets users choose the runtime they want in their Kubernetes installation.

CNI

The CNI (Container Network Interface) project defines a specification for a common, plug-in-based networking solution for Linux containers. It consists of plug-ins that perform different functions when configuring the Pod network. A CNI plug-in is an executable file that follows the CNI specification.
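To make the contract concrete, here is a toy model in Python. In the real interface, the runtime executes the plug-in binary with CNI_COMMAND (ADD/DEL/CHECK/VERSION) and other CNI_* variables in the environment and the network configuration as JSON on stdin, and the plug-in prints a JSON result; everything below is a simulation, and the returned address is made up.

```python
import json

def toy_plugin(command: str, net_config: dict) -> dict:
    # Simulates a CNI plug-in's dispatch on CNI_COMMAND.
    if command == "ADD":
        # A real plug-in would create interfaces and allocate an IP here.
        return {"cniVersion": net_config["cniVersion"],
                "ips": [{"address": "10.244.0.5/24"}]}
    if command == "DEL":
        # A real plug-in would tear down the interfaces it created.
        return {}
    raise ValueError(f"unsupported CNI_COMMAND: {command}")

config = {"cniVersion": "0.3.1", "name": "mynet", "type": "toy"}
print(json.dumps(toy_plugin("ADD", config)))
```

The key point is simply that a CNI plug-in is an executable with a well-defined input (config plus command) and output (a JSON result describing the interfaces and IPs it set up).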

Assigning Pod IP addresses from node subnets

Kubernetes requires that every Pod have an IP address and that all Pod IP addresses across the cluster be unique. This is achieved by assigning a unique subnet to each node: Pods on a node receive their IP addresses from that node's subnet.

Node IPAM controller

When nodeipam is included in the --controllers command-line flag of kube-controller-manager, it assigns each node a dedicated subnet (podCIDR) from the cluster CIDR (the IP range of the cluster network). Because these podCIDRs are disjoint subnets, each Pod can be assigned a unique IP address.
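The idea can be sketched with Python's ipaddress module (the cluster CIDR and per-node subnet size below are common defaults, used here as assumptions):

```python
import ipaddress

# Conceptual version of what the nodeipam controller does: carve the
# cluster CIDR into disjoint per-node podCIDRs.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_subnets = list(cluster_cidr.subnets(new_prefix=24))

nodes = ["node-1", "node-2", "node-3"]
pod_cidrs = {node: str(subnet) for node, subnet in zip(nodes, node_subnets)}
print(pod_cidrs)  # e.g. node-1 -> 10.244.0.0/24, node-2 -> 10.244.1.0/24, ...

# Disjoint subnets are what guarantee cluster-wide unique Pod IPs.
assert not node_subnets[0].overlaps(node_subnets[1])
```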

A Kubernetes node is assigned a podCIDR when it first registers with the cluster. To change the podCIDR assigned to nodes in the cluster, you need to unregister the nodes first and then re-register them after applying the configuration change to the Kubernetes control plane. You can list the podCIDR of each node with the following command:

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

Kubelet, Container Runtime and CNI plug-in interaction

When a Pod is scheduled on a node, many things happen to start it. Here we focus only on the parts related to configuring the Pod's network. Once the Pod is scheduled on the node, the network is configured and the application containers are started.

(Figure: containerd CRI plug-in architecture)

Interaction between Container Runtime and CNI plug-in

Each network provider has a CNI plug-in, which the container runtime invokes to configure the network when a Pod starts. With containerd as the container runtime, containerd's CRI plug-in invokes the CNI plug-in. Each network provider also installs an agent on every Kubernetes node to configure the Pod network. After the network provider agent is installed, it either ships with a CNI configuration or creates one on the node, which the CRI plug-in uses to determine which CNI plug-in to call.

The location of the CNI configuration file is configurable, with a default of /etc/cni/net.d/. The cluster administrator needs to install the CNI plug-ins on each node. The location of the CNI plug-ins is also configurable, with a default of /opt/cni/bin.

If you use containerd as the container runtime, you can specify the paths to the CNI configuration and the CNI plug-ins in the containerd config section [plugins."io.containerd.grpc.v1.cri".cni].
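For example, a containerd configuration pointing at the default locations looks like this (the bin_dir and conf_dir keys exist in containerd's CRI plug-in config; the values shown are the defaults):

```toml
[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "/opt/cni/bin"
  conf_dir = "/etc/cni/net.d"
```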

In this article, we use Flannel as the network provider; here is a brief introduction to its setup. Flanneld is the Flannel daemon, typically installed on a Kubernetes cluster as a DaemonSet with install-cni as an init container. The install-cni container creates the CNI configuration file /etc/cni/net.d/10-flannel.conflist on each node. Flanneld creates a vxlan device, obtains network metadata from the apiserver, and monitors updates to Pods. As Pods are created, it distributes routes for all Pods across the cluster, and these routes allow Pods to connect to each other by IP address.
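A 10-flannel.conflist as installed by the upstream flannel manifest typically looks like this (details vary by flannel version):

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
```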

The interaction between the containerd CRI plug-in and the CNI plug-ins works as follows: kubelet calls the containerd CRI plug-in to create the container, and the CRI plug-in then invokes the CNI plug-in to configure the container's network. The network provider's CNI plug-in in turn invokes other basic CNI plug-ins to configure the network. The interaction between CNI plug-ins is described below.

Interaction between CNI plug-ins

There are a variety of CNI plug-ins that help configure networking between containers on a host. This article focuses on the following three.

Flannel CNI plug-in

When using Flannel as the network provider, the containerd CRI plug-in uses the CNI configuration file /etc/cni/net.d/10-flannel.conflist to invoke the Flannel CNI plug-in.

The Flannel CNI plug-in works in conjunction with Flanneld: when Flanneld starts, it fetches the podCIDR and other network-related details from the apiserver and stores them in the file /run/flannel/subnet.env.

The Flannel CNI plug-in uses the information in /run/flannel/subnet.env to configure and invoke the Bridge CNI plug-in.
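A sketch of that step, assuming representative subnet.env contents and a simplified delegate config (the real Flannel plug-in emits more fields):

```python
# Toy model of what the Flannel CNI plug-in does with
# /run/flannel/subnet.env: parse the KEY=VALUE file and use it to fill
# in the configuration it delegates to the bridge plug-in.
SUBNET_ENV = """\
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
"""

def parse_subnet_env(text: str) -> dict:
    # subnet.env is a flat KEY=VALUE file, one entry per line.
    return dict(line.split("=", 1) for line in text.splitlines() if line)

def bridge_delegate(env: dict) -> dict:
    # Mirrors the shape of the config handed to the bridge plug-in
    # (simplified; field values come from subnet.env).
    return {
        "name": "cbr0",
        "type": "bridge",
        "mtu": int(env["FLANNEL_MTU"]),
        "ipam": {"type": "host-local", "subnet": env["FLANNEL_SUBNET"]},
    }

print(bridge_delegate(parse_subnet_env(SUBNET_ENV)))
```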

Bridge CNI plug-in

The Flannel CNI plug-in invokes the Bridge CNI plug-in with the following configuration:
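A representative bridge configuration looks roughly like this (values such as the MTU and subnet vary per node and are illustrative):

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "type": "bridge",
  "bridge": "cni0",
  "isDefaultGateway": true,
  "ipMasq": false,
  "hairpinMode": true,
  "mtu": 1450,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24"
  }
}
```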

When the Bridge CNI plug-in is first called, it creates a Linux bridge with the "name": "cni0" specified in the configuration file. It then creates a veth pair for each Pod, with one end in the container's network namespace and the other connected to the Linux bridge on the host network. Through the Bridge CNI plug-in, all containers on the host are connected to the Linux bridge on the host network.

After the veth pair is configured, the Bridge plug-in invokes the host-local IPAM CNI plug-in. Which IPAM plug-in to use can be configured in the CNI config that the CRI plug-in used to invoke the Flannel CNI plug-in.

Host local IPAM CNI plug-in

The Bridge CNI plug-in invokes the host local IPAM CNI plug-in with the following configuration:
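A representative host-local IPAM configuration looks roughly like this (the subnet and dataDir values are illustrative; dataDir shown is the plug-in's default):

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "dataDir": "/var/lib/cni/networks"
  }
}
```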

The host-local IPAM (IP address management) plug-in returns the container's IP address and stores the allocated IPs locally on the host, under the directory /var/lib/cni/networks// specified by dataDir. Each file in that directory is named for an allocated IP and contains the ID of the container to which that IP is assigned.
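That bookkeeping can be sketched as a toy in Python (this is not the real host-local implementation; it only mimics the one-file-per-IP layout, and ignores details such as reserved gateway addresses):

```python
import os
import tempfile
import ipaddress

def allocate_ip(data_dir: str, subnet: str, container_id: str) -> str:
    # Files in data_dir are named for already-allocated IPs; pick the
    # first host address in the subnet with no file, and record the
    # container ID in it.
    taken = set(os.listdir(data_dir))
    for host in ipaddress.ip_network(subnet).hosts():
        ip = str(host)
        if ip not in taken:
            with open(os.path.join(data_dir, ip), "w") as f:
                f.write(container_id)
            return ip
    raise RuntimeError("subnet exhausted")

with tempfile.TemporaryDirectory() as d:
    first = allocate_ip(d, "10.244.0.0/28", "containerA")
    second = allocate_ip(d, "10.244.0.0/28", "containerB")
    print(first, second)  # two distinct IPs from the subnet
```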

When called, the host local IPAM plug-in returns the following payload:
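A representative result payload (the exact fields depend on the CNI version, and the addresses here are illustrative):

```json
{
  "cniVersion": "0.3.1",
  "ips": [
    {
      "version": "4",
      "address": "10.244.0.4/24",
      "gateway": "10.244.0.1"
    }
  ],
  "dns": {}
}
```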

kube-controller-manager assigns a podCIDR to each node, and Pods on the node are assigned IP addresses from that subnet. Because the podCIDRs of all nodes are disjoint subnets, each Pod can be assigned a unique IP address.

The Kubernetes cluster administrator configures and installs kubelet, the container runtime, and the network provider agent, and distributes the CNI plug-ins to each node. When the network provider agent starts, it generates the CNI configuration. After a Pod is scheduled on a node, kubelet invokes the CRI plug-in to create the Pod. With containerd, its CRI plug-in invokes the CNI plug-in specified in the CNI configuration to configure the Pod network. All of this results in the Pod getting an IP address.

That is all the content of "How a Kubernetes Pod Gets an IP Address in Linux". Thank you for reading! I hope the shared content has been helpful; if you want to learn more, welcome to follow the industry information channel!
