
How to understand Service exposure in Kubernetes


This article focuses on how Services are exposed in Kubernetes. The methods introduced are simple, fast, and practical; interested readers may wish to follow along.

While recently reorganizing and studying material on cloud-native solutions, one area I had not understood well was how networks and services are exposed in Kubernetes, so I looked up the material and studied it further.

Business scenario description

As mentioned earlier when discussing DevOps solutions, a complete DevOps continuous integration and delivery pipeline needs to integrate with containers to achieve automated deployment, dynamic auto-scaling, environment migration, and other capabilities.

A DevOps supporting platform is inseparable from integration with a containerized PaaS platform: the final compiled and built artifact is packaged as an image and pushed to an image repository, and subsequent deployment, environment migration, and resource scaling are based on that repository for rapid replication. For Docker containers, this is generally combined with Kubernetes (K8s) to achieve dynamic resource scheduling and cluster management.

In the original discussion I only mentioned that, after deployment and dynamic scaling through K8s, the application module would be reached through a VIP (virtual IP) address, without expanding on how. Today I will explain it in the context of a concrete scenario.

Scenario description:

Suppose the whole application consists of two microservice modules: a UserMgr user-management microservice and an OrderMgr order-management microservice, both automatically deployed to the container cloud environment through K8s. Assume further that each microservice dynamically scales out by two replica Pods, so that each forms three Pod nodes.

In this case we cannot access the Pod IPs directly: first, a Pod IP changes dynamically; second, after the cluster scales out, multiple replica Pods with different IPs exist for the same microservice.

So we need to access them through a Service.

A Kubernetes Service defines an abstraction: a logical grouping of Pods plus a policy for accessing them, often referred to as a microservice. The set of Pods reachable through a Service is usually selected by a Label Selector. For the UserMgr microservice above, for example, we can tag its Pods with a UserMgr label, and Pods carrying the same label are automatically aggregated into one logical Service grouping.
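As a minimal sketch (the names, labels, and ports here are illustrative assumptions, not taken from the article), a Service that groups the UserMgr Pods by label might look like this:

```yaml
# Hypothetical Service grouping Pods labeled app=usermgr.
apiVersion: v1
kind: Service
metadata:
  name: usermgr
spec:
  selector:
    app: usermgr        # selects all Pods carrying the label app=usermgr
  ports:
    - port: 80          # port the Service exposes inside the cluster
      targetPort: 8080  # container port on the backend Pods
```

With no type specified, this defaults to ClusterIP, the cluster-internal access mode discussed next.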

Internal inter-module service access - ClusterIP

As mentioned, there are two microservices, UserMgr and OrderMgr, in this business scenario, so access between them is access within the Kubernetes cluster.

In this cluster-internal access scenario, the microservices can reach each other through the Service's ClusterIP.

Note that the ClusterIP is a virtual IP and cannot be pinged. A request to this IP is matched by iptables rules maintained by kube-proxy and finally routed to a specific Pod instance. That is:

Request → ClusterIP → iptables + kube-proxy → Pod instance


In iptables proxy mode, kube-proxy installs iptables rules for each Service that capture requests to the Service's clusterIP (virtual IP) and port and redirect them to one of the Service's backends. For each Endpoints object it also installs iptables rules that select a backend Pod; the default policy is to pick a backend at random.

Provide services to the outside world - NodePort

To provide services to the outside world, there are several options: NodePort, LoadBalancer, and Ingress. Let's look at each in turn.

The NodePort method exposes the service as IP plus port on every node, so any node IP can be used for access (provided no node scheduling policy restricts it). The allowed port range is set in the apiserver configuration via the service-node-port-range option (30000-32767 by default).

For example, after the two microservice modules above are deployed, port 8001 can be configured to reach the UserMgr microservice, i.e. 10.0.0.1:8001, 10.0.0.2:8001, 10.0.0.3:8001.

With NodePort, the request is still forwarded to the Service and routed by the Service to a specific Pod instance; the only difference is that the node IP is externally reachable.

The user-management microservice can thus be reached at all three addresses. Note that each port maps to one microservice: for example, 8001 to the UserMgr microservice and 8002 to the OrderMgr microservice. All three addresses are externally accessible; if clients need a single unified entry point, the nodes can be fronted by something like an Nginx reverse proxy.
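A minimal NodePort sketch (names and ports are illustrative assumptions; the article's port 8001 would only work if service-node-port-range were widened, so this uses a port in the default range):

```yaml
# Hypothetical NodePort Service for the UserMgr microservice.
apiVersion: v1
kind: Service
metadata:
  name: usermgr-nodeport
spec:
  type: NodePort
  selector:
    app: usermgr
  ports:
    - port: 80          # cluster-internal Service port
      targetPort: 8080  # container port on the Pods
      nodePort: 30001   # exposed on every node; must lie in service-node-port-range
```

Once applied, the service is reachable at any-node-IP:30001.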

However, this method has problems. First, if a new Node is added, we must add its address to the client configuration or load balancer. Second, a Node is usually a virtual machine; if the VM's IP address in the IaaS environment can change after a restart, the configuration must be updated manually.

Provide services to the outside world - LoadBalancer

This approach exposes the service through a third-party load balancer, such as Alibaba Cloud's or Amazon's LB offerings. Note that each microservice consumes one IP this way, which can drive up public cloud fees; it is also hard to form a single unified service entry point.

With this approach, traffic from the external load balancer goes directly to the backend Pods, but how this actually works depends on the cloud provider. In such cases the load balancer is created based on the loadBalancerIP set by the user.
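A minimal LoadBalancer sketch (the IP and names are illustrative assumptions; whether loadBalancerIP is honored depends on the cloud provider):

```yaml
# Hypothetical LoadBalancer Service; the cloud provider provisions the LB.
apiVersion: v1
kind: Service
metadata:
  name: usermgr-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10  # requested external IP; support is provider-specific
  selector:
    app: usermgr
  ports:
    - port: 80
      targetPort: 8080
```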

Provide services to the outside world - Ingress

The Ingress resource object forwards access requests for different URLs to different backend Services, implementing HTTP-layer service routing. Kubernetes combines an Ingress policy definition with a concrete Ingress Controller to implement a complete Ingress load balancer.

Based on the Ingress rules, the Ingress Controller forwards client requests directly to the backend Endpoints of the Service, skipping kube-proxy's forwarding; kube-proxy no longer takes part.

Ingress can be understood as the gateway or proxy exit of the whole Kubernetes cluster; there is no harm in thinking of it as an external API gateway. Each microservice can be registered with and accessed through the Ingress at a single IP address, with different paths and URLs routing to different microservices.
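A minimal Ingress sketch routing the two microservices by URL path (the host, paths, Service names, and ports are illustrative assumptions):

```yaml
# Hypothetical Ingress: one entry point, path-based routing to two Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /usermgr           # routes to the user-management Service
            pathType: Prefix
            backend:
              service:
                name: usermgr
                port:
                  number: 80
          - path: /ordermgr          # routes to the order-management Service
            pathType: Prefix
            backend:
              service:
                name: ordermgr
                port:
                  number: 80
```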

Selecting an Ingress gateway

[Figure: comparison of Ingress gateway options]

As you can see, now that the Kong API gateway has a Kubernetes plug-in, the resulting Kong Ingress not only exposes cluster nodes externally but also carries core Kong gateway capabilities, including service registration and discovery, rate limiting and circuit breaking, and security, which covers day-to-day API management needs.

Simply put, if you are exposing the API interfaces of internal microservices to a front-end app, Kong Ingress should be a good choice. It also has a major advantage: it abstracts API and service definitions into K8s CRDs, so they can easily be configured through K8s Ingress and synchronized to the Kong cluster.
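As a sketch of that CRD-based configuration (the plugin choice and all names are illustrative assumptions), a Kong rate-limiting plugin can be declared as a K8s resource and attached to an Ingress by annotation:

```yaml
# Hypothetical Kong plugin declared as a CRD and bound to an Ingress.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-by-minute
plugin: rate-limiting        # built-in Kong plugin
config:
  minute: 60                 # allow 60 requests per minute
  policy: local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: usermgr-api
  annotations:
    konghq.com/plugins: rate-limit-by-minute  # attach the plugin above
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /usermgr
            pathType: Prefix
            backend:
              service:
                name: usermgr
                port:
                  number: 80
```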

What can be done in DevOps integration?

On the collaboration between the API gateway and DevOps, my thinking so far is as follows.

First, when does the API gateway come into play? In our initial conception, a business application needs the gateway when it publishes API services externally, whether they are consumed by outside partners or by our own app front end. Wherever that scenario exists, an API gateway is usually involved.

In a large project with multiple cooperating teams using a microservice architecture, what we actually recommend is that each team run its own independent microservice registry, responsible for API calls among the microservice modules within the team; those calls go through the registry. For cross-team collaborative API services, however, the APIs need to be registered with the API gateway for unified management.

Simply put, publishing APIs externally or making cross-team API calls requires registering the APIs with the gateway for management.

The collaboration between a microservice module and the API gateway covers both registering API services with the gateway and consuming API services through it, so coordination has to be discussed from two sides: API registration/access and API consumption.

API registration and access

In the overall DevOps process, the bottom layer is Docker containers plus K8s resource scheduling, and the pipeline orchestrates compile, build, package, and deploy actions. After automated deployment, the interface service is exposed at a dynamic IP access address provided by K8s; what we need to do is register the interface exposed at that address with the gateway.

Having thought the whole process through, API registration can actually be handled in two ways.

First, in the deployment node, add a custom script that completes the registration of the API interface service as part of the run.

Second, add a dedicated interface-registration node to the pipeline, placed after the deployment node, and define the registration content in that node.

Since the design and use of the DevOps pipeline leans toward developers, the first approach is often more flexible; the only caveat is that when defining the pipeline, the interfaces to be registered must be planned in advance. Either way, the registration itself boils down to calling the gateway's admin interface after deployment, as sketched below.
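A hypothetical GitLab-CI-style sketch of a dedicated registration node (the second approach), assuming a Kong-style Admin API; the variable, service name, URL, and path are invented for illustration:

```yaml
# Hypothetical pipeline stage: register the deployed API with the gateway.
register-api:
  stage: register
  needs: [deploy]            # runs only after the deployment node succeeds
  script:
    # Create (or update) the gateway service entry pointing at the K8s Service.
    - |
      curl -sf -X POST "$GATEWAY_ADMIN_URL/services" \
        --data "name=usermgr" \
        --data "url=http://usermgr.default.svc.cluster.local:80"
    # Expose it under a route path on the gateway.
    - |
      curl -sf -X POST "$GATEWAY_ADMIN_URL/services/usermgr/routes" \
        --data "paths[]=/usermgr"
```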

Although the DevOps supporting platform does not need the complete API gateway management function, it is best to add one capability: the ability to query, on the DevOps platform, which interface services have been registered, what proxy addresses they were given after registration, and which microservice module registered them; in short, basic service-catalog information.

Building on this thinking, the follow-up plan is to integrate Kong Ingress with the K8s cluster: for each interface service that needs registration, a configuration file is written first, and then, when the microservice is deployed or dynamically scaled in K8s, the interface service is automatically registered with the API gateway through an API call, making it externally accessible.

API consumption call

Note that one benefit of adopting an API gateway is that the IP of the access addresses it provides is fixed; it does not change with each automated build and deployment of a microservice module. The gateway is deployed to the test and production environments in advance, and continuous integration and deployment of the microservice modules starts only after the gateway is in place.

Therefore, when a microservice module needs to call interfaces of other microservice modules, one option is to query the service registry for the concrete access address on every call; another is to store the access address in a local configuration file. A better way combines the two (see the configuration sketch after this list):

First, call the service registry to obtain the service access address and save it in the local configuration file.

Afterwards, once the local configuration file already holds the service access address, stop querying the registry unless an address-change notification is received.
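A hypothetical local configuration file illustrating this caching scheme (every key and address is invented for illustration):

```yaml
# Hypothetical app config: gateway address is fixed; routes are cached locally.
gateway:
  baseUrl: http://api-gateway.example.com  # fixed address provided by the gateway
services:
  usermgr:
    path: /usermgr    # cached route, resolved once from the registry/gateway
  ordermgr:
    path: /ordermgr
refresh:
  onChangeNotification: true  # re-query the registry only when notified of a change
```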

With this settled, the build, packaging, and deployment of the microservice module itself is exactly the same as without the API gateway, except that the access addresses in its configuration file are fixed to the addresses provided by the gateway. Which addresses the gateway provides can be looked up either on the gateway's management console or through the service-catalog query function of the DevOps platform.

At this point you should have a deeper understanding of how Services are exposed in Kubernetes; the best way to consolidate it is to try these approaches in practice.
