

How to expose service access in Kubernetes

2025-02-24 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

Overview of Kubernetes

Over the past year, the development of Kubernetes has been brilliant, and it is being adopted by more and more companies in production environments. The growth curve of the number of K8s questions (2015.5-2016.5) on StackOverflow, the best-known developer Q&A community, tells the same story: developers are voting with their feet, which undoubtedly proves the popularity of K8s.

K8s grew out of Google's production practice, and community activity is very high: the project has more than 17k stars on GitHub, and the CNCF foundation, led by Google, is also strongly promoting K8s. Just a few months ago, the OpenStack community announced that it fully embraces K8s, which means the world's largest open source IaaS cloud community has chosen K8s as its solution for containers.

Whenever K8s comes up, no matter how the topic starts, it helps to first introduce the overall architecture of K8s (as shown in the following figure):

etcd serves as the configuration center and storage service, holding the definitions and state of all components; interaction between the K8s components also goes mainly through etcd.

kube-apiserver provides the interface for external interaction as well as the security mechanisms; most of its interfaces read and write data in etcd directly.

kube-scheduler, the scheduler, mainly does one thing: it watches the pod directory in etcd for changes, allocates a node through the scheduling algorithm, and finally calls the apiserver's bind interface to associate the assigned node with the pod.

kube-controller-manager carries the main functions of the master, such as interacting with the CloudProvider (IaaS) and managing nodes, pods, replication, services, namespaces, and so on. Its basic mechanism is to listen to the corresponding events under etcd /registry/events and handle them.

kubelet mainly covers container management, image management, volume management, and so on.

kube-proxy mainly implements the service mechanism of K8s, providing some SDN functions as well as an intelligent LoadBalancer within the cluster.

The content shared in this article focuses on pods and services on minion nodes. A Pod is the concrete instance abstraction of a K8s application, and a Service is a collection of such instances.

ClusterIP & NodePort & Loadbalancer

Going back to the topic of this article: in K8s, Service access (whether internal or external) is exposed through kube-proxy. For example, if we define a Service as in the following figure, accessing port 80 of the Service forwards to port 9376 of the Pod.
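A Service like the one in the figure can be sketched as a minimal manifest; the name my-service and the app: MyApp selector are illustrative, only the 80 → 9376 port mapping comes from the article:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # illustrative name
spec:
  selector:
    app: MyApp            # label selecting the Pods backing this Service
  ports:
    - protocol: TCP
      port: 80            # port exposed on the Service's ClusterIP
      targetPort: 9376    # port the Pod's container listens on
```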

kube-proxy has two main forwarding modes: Userspace and Iptables. In the figure below, Userspace mode is on the left; it is also kube-proxy's default method, and all forwarding goes through the kube-proxy process itself. On the right is Iptables mode, in which all forwarding is done by the iptables kernel module and kube-proxy is only responsible for generating the corresponding iptables rules. Iptables mode is more efficient, but it was only released in K8s 1.2 and requires iptables >= 1.4.11, so whether to turn it on needs to be considered.
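Assuming a kube-proxy new enough to support it, the mode can be selected at startup with the --proxy-mode flag (the apiserver address is illustrative, and the exact flag set depends on your K8s version):

```
# Run kube-proxy with the iptables backend instead of the default
# userspace proxy (K8s >= 1.2 and iptables >= 1.4.11 required).
kube-proxy --master=http://<apiserver>:8080 --proxy-mode=iptables
```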

From the perspective of Service itself, there are three ways to expose access:

ClusterIP: use a private IP within the cluster; this is the default.

NodePort: in addition to using the cluster IP, the Service's port is also mapped to a specified port on every node, and the mapped port is the same on each node.

LoadBalancer: builds on ClusterIP & NodePort, but additionally requests from the cloud provider a load balancer that maps to the Service itself.
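The three modes differ only in the Service's type field. A sketch of a NodePort variant of the earlier example follows; the nodePort value is illustrative and must fall within the cluster's configured node-port range (30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort          # or LoadBalancer to also request a cloud LB
  selector:
    app: MyApp
  ports:
    - port: 80            # port on the ClusterIP
      targetPort: 9376    # container port
      nodePort: 30080     # same port opened on every node
```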

LoadBalancer providers are mainly supplied by cloud platforms such as AWS, Azure, OpenStack, GCE, and so on. The related implementations can be seen in the K8s source code, as shown in the following figure:

Ingress

Ingress is also a separately defined object in K8s (as shown in the following figure). Its function is to load-balance exposed access. How does it differ from a LoadBalancer Service? Ingress supports both L4 and L7 load balancing, while LoadBalancer only supports L4. Ingress is deployed based on a Pod, with the Pod network exposed to the external network, and the Ingress controller implementations include Nginx, Haproxy and GCE-L7, which can meet internal enterprise use.
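A minimal sketch of such an Ingress object, routing a hostname to the Service from the earlier example; the name, host, and exact apiVersion are illustrative and depend on your K8s release:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress            # illustrative name
spec:
  rules:
    - host: app.example.com   # illustrative external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service  # Service to route traffic to
                port:
                  number: 80
```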

In actual use, the architecture of Ingress is shown in the following figure:

In practice, however, Pods may drift, and because the Ingress Controller is itself deployed as a Pod, the external IP of the Ingress can change. The Service's access IP is set in the firewall within the enterprise, and a changing IP is fatal to this mechanism, because enterprises cannot keep modifying firewall rules manually.

So we need a VIP function that can also guarantee the HA of the Ingress. We can consider attaching keepalived to the Ingress Controller and use the keepalived+haproxy mechanism to provide the VIP. To implement this mechanism, you can refer to and modify the contrib keepalived-vip mechanism in the K8s community.
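A sketch of the keepalived side of such a setup; the interface name, VIP, and router ID are all illustrative, and the community keepalived-vip controller generates an equivalent configuration from the cluster state:

```
vrrp_instance ingress_vip {
    state MASTER            # BACKUP on the standby node
    interface eth0          # illustrative NIC carrying the VIP
    virtual_router_id 51    # must match on all keepalived peers
    priority 100            # highest priority holds the VIP
    virtual_ipaddress {
        10.0.0.100          # illustrative VIP, the stable IP set on the firewall
    }
}
```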

Besides the exposed-service mechanisms described above, there is hpcloud-service-loadbalancer, which supports keepalived+nginx, F5, and OpenStack LBaaS, with L4 & L7 load balancing, but it is not compatible with the K8s community's own development mechanism, so it has not been merged into the community. There is also contrib service-loadbalancer, which is being developed within the community; its idea is more ambitious, aiming to support load balancing across namespaces and across clusters. It also designed a plug-in mechanism, and currently supports Haproxy as well as L4 & L7 load balancing.

Expose service access in Rancher K8s

Rancher implements its own rancher-ingress-controller, which essentially wraps k8s-ingress-controller and calls the Rancher Cattle API to create Rancher's own LB when the actual load balancer is created.

The relevant code is also open source at https://github.com/rancher/lb-controller. lb-controller specifies provider as rancher at startup, and the corresponding implementation can be seen in the package provider/rancher.

After creating the Ingress, it can also be displayed on the Rancher UI.

For the creation process, you can watch this recorded video tutorial: http://v.youku.com/v_show/id_XMTc2MDAzNjQ4OA==.html

Original source: Rancher Labs
