How to Understand the Kubernetes Network


In this article, the editor takes you through how to understand the Kubernetes network. The content is rich and analyzed from a professional point of view; I hope you get something out of reading it.

Containers running on different hosts cannot reach each other by IP, so how does Kubernetes make Pods on different nodes communicate with each other? A Pod has a life cycle, and its IP changes as Pods are dynamically created and destroyed, so how does Kubernetes provide a stable service to the outside world? This article answers these questions one by one.

Docker network

Let's first take a look at networking in Docker. After the Docker service starts, a docker0 bridge (with a docker0 internal interface) is created by default; at the kernel level it connects to other physical or virtual network cards, which puts all containers and the local host on the same physical network.

By default, Docker assigns an IP address and subnet mask to the docker0 interface so that the host and containers can communicate with each other through the bridge. It also sets the MTU (the maximum transmission unit the interface will accept), usually 1500 bytes, or whatever default the host's network routes support; both can be configured when the service starts.

root@ubuntu:/root# ifconfig
...
docker0: flags=4099  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:d2:00:10:6c  txqueuelen 0  (Ethernet)
...
root@ubuntu:/root# docker inspect busybox
"IPAddress": "172.17.0.2",

To achieve all of this, Docker mainly relies on three Linux features: Bridge, Network Namespace, and VETH.

Bridge is equivalent to a virtual switch and works on the layer-2 network; it can also be assigned an IP to work at layer 3. The docker0 gateway is implemented with a Bridge.

Network Namespace is a network namespace; completely isolated network stacks can be created through Network Namespaces. For example, when you run docker network create xxx, you are building a Network Namespace.
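For instance, a minimal sketch with the standard Docker CLI (the network and container names here are made up for illustration):

# Create a user-defined bridge network; each container attached to it runs in its own Network Namespace
$ docker network create my-net
# Start a container attached to that network
$ docker run -d --name web --network my-net nginx
# Inspect the network to see its bridge settings and the container's IP
$ docker network inspect my-net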

VETH is a pair of virtual network interfaces; its two ends can be placed in two different Network Namespaces, connecting two otherwise isolated Network Namespaces so that they can communicate.

To sum up: Network Namespace isolates the container from the host, Bridge provides a gateway between the container and the host, and VETH wires the container to the host. But all of this happens within a single host's network; for networking across multiple hosts, look at the Kubernetes network described below.
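As an illustration of the same mechanism outside of Docker, the following shell sketch wires a namespace to a bridge by hand with iproute2 (all names and addresses below are invented for the example):

# Create an isolated network namespace (the "container" side)
$ ip netns add ns1
# Create a VETH pair: one end stays on the host, the other is moved into ns1
$ ip link add veth-host type veth peer name veth-ns
$ ip link set veth-ns netns ns1
# Create a bridge playing the role of docker0 and attach the host end to it
$ ip link add br0 type bridge
$ ip link set veth-host master br0
$ ip addr add 172.17.0.1/16 dev br0
$ ip link set br0 up
$ ip link set veth-host up
# Give the namespace end an IP on the bridge's subnet and bring it up
$ ip netns exec ns1 ip addr add 172.17.0.2/16 dev veth-ns
$ ip netns exec ns1 ip link set veth-ns up
# The namespace can now reach the host through the bridge
$ ip netns exec ns1 ping -c 1 172.17.0.1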

Kubernetes network

In order to solve the problem of cross-host communication between containers, many solutions have been proposed for Kubernetes. There are two common ideas:

Establish routing rules on the hosts themselves that point directly at the subnets located on other hosts.

Encapsulate the layer-2 data frame with a special network device, match the destination IP address to its subnet to find the corresponding host IP, and forward the IP packet there; on the destination host, the same kind of special network device decapsulates the packet and forwards it according to the local routing table.

Flannel

The well-known Flannel project is a container network solution launched by CoreOS. Flannel itself is only a framework; it is Flannel's backends that actually provide the container network functionality. At present there are three backend implementations:

UDP

VXLAN

Host-gw

The following three-layer network refers to the bottom three layers of the seven-layer network model: the network layer, the data link layer, and the physical layer.

UDP mode is the earliest supported, worst-performing, but easiest to understand and implement cross-host container networking solution. Flannel's UDP mode provides a three-layer overlay network: the sending end encapsulates the IP packet in UDP, the receiving end decapsulates it to recover the original IP packet, and the packet is then forwarded to the target container. It is equivalent to opening a "tunnel" between the two containers, so that they can communicate directly by IP regardless of which hosts the containers are distributed across.

Because Flannel's UDP encapsulation and decapsulation are done in user space, and context switches and user-space operations are very expensive on Linux, this is the main reason for UDP mode's poor performance.

VXLAN, namely Virtual Extensible LAN, is a network virtualization technology supported by the Linux kernel itself. In VXLAN mode the kernel performs the encapsulation and decapsulation described above, and builds an overlay network (Overlay Network) through a "tunnel" mechanism similar to UDP mode, so that the "hosts" connected to this VXLAN layer-2 network can communicate freely as if they were on the same local area network.
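As an illustration only (the device name, VNI, and addresses below are placeholders, not Flannel's actual configuration), a VXLAN tunnel device of this kind can be created by hand with iproute2; Flannel's VXLAN backend sets up a similar device named flannel.1 on each node:

# Create a VXLAN device with VNI 1 that encapsulates traffic over eth0
$ ip link add vxlan0 type vxlan id 1 dev eth0 dstport 4789
$ ip addr add 10.244.0.0/32 dev vxlan0
$ ip link set vxlan0 up
# Tell the device which remote VTEP (the other host) should receive encapsulated frames
$ bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.168.0.102
# Frames sent through vxlan0 are wrapped in UDP, routed to the remote host,
# and unwrapped there by its VXLAN device -- this is the "tunnel"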

Host-gw mode works by setting the next hop of each Flannel subnet to the host IP address corresponding to that subnet.

In other words, the hosts act as the "gateway" on the container communication path, which is where the name host-gw comes from. Flannel's host-gw mode requires layer-2 connectivity between the cluster hosts.
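A hedged illustration of what this looks like on a node (the subnet and node addresses are made up): host-gw simply installs an ordinary route whose next hop is the other node's IP.

# On node 192.168.0.101, the Pod subnet that lives on node 192.168.0.102
# is reached directly through that node, with no encapsulation at all
$ ip route
...
10.244.1.0/24 via 192.168.0.102 dev eth0
# Equivalent to adding the rule by hand:
$ ip route add 10.244.1.0/24 via 192.168.0.102 dev eth0

Because there is no packet encapsulation, host-gw performs better than the tunnel-based modes, at the cost of requiring the hosts to share a layer-2 network.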

Calico

The network solution provided by the Calico project is similar to Flannel's host-gw mode. However, unlike Flannel, which maintains routing information through Etcd and the flanneld process on each host, the Calico project uses BGP (Border Gateway Protocol) to automatically distribute routing information throughout the cluster. It consists of three parts:

The CNI plug-in for Calico: this is the part where Calico interfaces with Kubernetes.

Felix: a DaemonSet responsible for inserting routing rules on the host and maintaining the network devices that Calico requires.

BIRD: the BGP client, responsible for distributing routing information within the cluster.

Besides the way routing information is maintained, another difference between the Calico project and Flannel's host-gw mode is that Calico does not create any bridge device on the host.
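As a hedged illustration (the addresses and interface names are invented), a Calico node's routing table typically contains a per-Pod route pointing at the Pod's veth device for local Pods, and a per-subnet route learned over BGP (marked proto bird) for Pods on other nodes:

$ ip route
...
# Local Pod: routed straight to its veth device, with no bridge in between
10.233.1.3 dev cali1a2b3c4d5e6 scope link
# Another node's Pod subnet: the next hop was distributed by BIRD over BGP
10.233.2.0/24 via 192.168.0.102 dev eth0 proto bird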

CNI (Container Network Interface)

CNI is a project under the CNCF that consists of a specification and libraries for configuring the network interfaces of Linux containers, together with a set of plug-ins. CNI is only concerned with allocating the network when a container is created and releasing the network resources when the container is deleted. The basic idea is that after Kubernetes starts the Infra container, it can directly call the CNI network plug-in to configure the Infra container's Network Namespace according to the expected network stack.

Kubernetes uses the CNI interface to maintain a separate bridge in place of docker0. This bridge is called the CNI bridge, and its default name on the host is cni0. Take Flannel's VXLAN mode as an example: in a Kubernetes environment its way of working is unchanged, except that the docker0 bridge is replaced by the CNI bridge. The CNI bridge only takes over the containers managed by the CNI plug-ins, that is, the Pods created by Kubernetes.
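For example, on a node running Flannel in VXLAN mode you can usually see the CNI bridge and the veth devices of the local Pods attached to it (a rough sketch; the addresses and device names vary per cluster):

$ ip addr show cni0
... inet 10.244.0.1/24 scope global cni0 ...
# Each local Pod's host-side veth is enslaved to the cni0 bridge
$ bridge link show
... veth1a2b3c4d ... master cni0 state forwarding ...
... veth5e6f7a8b ... master cni0 state forwarding ...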

Service

A Pod in Kubernetes has a life cycle, and its IP changes dynamically as Pods are created and destroyed, so a Pod alone cannot provide a stable service. A Kubernetes Service defines exactly such an abstraction: a logical grouping of Pods, and a policy for accessing them. Developers can reach the set of Pods behind a Service through a single Service entry address. Once the Service is created, Kubernetes automatically assigns it an available Cluster IP, and this Cluster IP remains unchanged throughout the Service's life cycle. This solves the problem of service discovery in a distributed cluster.

A typical Service definition is as follows:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: default
    protocol: TCP
    port: 8000
    targetPort: 80

In this Service example, the selector field declares that the Service only proxies Pods carrying the app=nginx label. Port 8000 of this Service proxies port 80 of the Pods.

Then define the application's Deployment as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          protocol: TCP

The Pods selected by the selector are called the Endpoints of the Service, and you can view them with kubectl get ep, as shown below:

$ kubectl get endpoints nginx
NAME    ENDPOINTS                                       AGE
nginx   172.20.1.16:80,172.20.2.22:80,172.20.2.23:80   1m

Through the Service's VIP address 10.68.57.93, you can access the Pods it proxies:

$ kubectl get svc nginx
NAME    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.68.57.93   <none>        80/TCP    1m
$ curl 10.68.57.93
...
Welcome to nginx!
...

This VIP address is automatically assigned to the Service by Kubernetes. Accessing the Service's VIP address on the proxied port returns the default nginx page; this is the Service's Cluster IP mode.

Accessing a Service from outside the cluster

The access information of a Service is not valid outside the Kubernetes cluster, because the so-called access entry of a Service is really just the iptables rules generated by kube-proxy and the DNS records generated by kube-dns on each host.
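For example, for the nginx Service above, the NAT rules installed by kube-proxy (in iptables mode) on each node look roughly like the following sketch; the hash-suffixed chain names and the probability value are placeholders:

$ iptables-save -t nat | grep nginx
-A KUBE-SERVICES -d 10.68.57.93/32 -p tcp -m comment --comment "default/nginx: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XXXXXXXXXXXXXXXX
-A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.33333 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SEP-AAAAAAAAAAAAAAAA -p tcp -m tcp -j DNAT --to-destination 172.20.1.16:80
# KUBE-SERVICES matches the Cluster IP, KUBE-SVC picks one backend at random,
# and KUBE-SEP DNATs the packet to that Pod -- none of which exists outside the cluster's nodes.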

There are several ways to access, from outside the cluster, a Service created inside a Kubernetes cluster:

NodePort

LoadBalancer

NodePort method

Here is an example of NodePort:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - name: http
    nodePort: 30080
    port: 8080
    targetPort: 80
    protocol: TCP

In this Service definition, its type is declared as type=NodePort. The ports field then declares that port 8080 of the Service proxies port 80 of the Pods.

If you do not explicitly declare the nodePort field, Kubernetes randomly assigns an available port for the proxy. The default range for this port is 30000-32767. Here it is set to 30080.

The service can then be accessed from outside the cluster at:

<node IP>:30080

LoadBalancer

This approach applies to Kubernetes services on the public cloud by specifying a Service of type LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  ports:
  - port: 8765
    targetPort: 9379
  selector:
    app: example
  type: LoadBalancer

When such a Service is created, you can have a cloud network load balancer created automatically. This provides an externally accessible IP address that sends traffic to the correct port on the cluster nodes, as long as your cluster runs in a supported cloud environment.
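Once the cloud provider has provisioned the load balancer, its address appears in the EXTERNAL-IP column and can be used directly from outside the cluster (a hedged illustration; the addresses are made up):

$ kubectl get svc example-service
NAME              TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
example-service   LoadBalancer   10.68.12.34   203.0.113.10   8765:31234/TCP   1m
$ curl 203.0.113.10:8765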

Ingress

Ingress in Kubernetes is the set of routing rules used to proxy requests to different backend Services.

For example, suppose there is an ordering system whose domain name is https://www.example.com: https://www.example.com/book is the book ordering system and https://www.example.com/food is the food ordering system. The two systems are served by the book and food Deployments respectively.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: example-secret
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /book
        backend:
          serviceName: book-svc
          servicePort: 80
      - path: /food
        backend:
          serviceName: food-svc
          servicePort: 80

The noteworthy field in this yaml file is rules, whose entries are called IngressRules.

The Key of IngressRule is host, which must be a string in the format of a standard domain name, not an IP address.

The value defined in the host field is the entry point of the Ingress: when a user accesses www.example.com, they are actually accessing this Ingress object, and Kubernetes then forwards the request according to the IngressRules. Two paths are defined here, corresponding to the Services of the two Deployments, book and food.
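With an ingress controller (for example, NGINX Ingress) deployed in the cluster, the two rules can be exercised roughly like this (a sketch; 203.0.113.20 stands in for the controller's address):

# Point the domain at the ingress controller for this request and hit the two paths
$ curl --resolve www.example.com:443:203.0.113.20 https://www.example.com/book -k
# -> forwarded to book-svc:80
$ curl --resolve www.example.com:443:203.0.113.20 https://www.example.com/food -k
# -> forwarded to food-svc:80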

It is not difficult to see that the Ingress object is actually an abstraction of the "reverse proxy" of the Kubernetes project.

The above is what the editor has shared on how to understand the Kubernetes network. If you happen to have similar doubts, you might as well refer to the above analysis to understand it. If you want to learn more, you are welcome to follow the industry information channel.
