How to easily get started with Kubernetes Ingress

2025-02-24 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/01 Report--

Many newcomers are not clear on how to get started with Kubernetes Ingress. To help, this article explains the concepts in detail; if that's what you're after, read on, and I hope you come away with something useful.

You may have noticed a curious phenomenon: although the Kubernetes Ingress API is still in beta, many companies already use it to expose their Kubernetes services, and engineers working on the project say the Ingress API is increasingly likely to shed its beta tag. In fact, the API has been in beta for years; it entered that stage in the fall of 2015, to be precise. The long beta phase has given Kubernetes contributors time to refine the specification and align it with the implementations already built on it (HAProxy, NGINX, Traefik, etc.), standardizing the API to reflect the most common and most needed features.

With GA approaching, now is the right time to help beginners understand how Ingress works. In short, an Ingress is a set of rules that map how services inside the cluster bridge the gap to the outside world, where clients can consume them. Meanwhile, an agent called an Ingress controller listens at the edge of the cluster network, watching for rules as they are added, and maps each service to a specific URL path or domain name for public consumption. While the Kubernetes maintainers develop the API, other open source projects implement Ingress controllers, adding their own unique capabilities to their proxies.

I'll introduce these concepts and help you understand the driving forces behind the Ingress pattern.

The routing problem

When you create Pods in Kubernetes, you assign labels to them so that selectors can match them, as shown in the following snippet of a Deployment manifest:
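A minimal sketch of such a Deployment (the my-app image name and app=foo label come from the description below; the image tag and container port are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # three copies of the Pod
  selector:
    matchLabels:
      app: foo               # the Deployment manages Pods carrying this label
  template:
    metadata:
      labels:
        app: foo             # each Pod replica is tagged app=foo
    spec:
      containers:
      - name: my-app
        image: my-app:1.0    # illustrative tag
        ports:
        - containerPort: 80
```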

This Deployment creates three replicas of a Pod running the my-app Docker image and assigns the label app=foo to each of them. Rather than being accessed directly, the Pods are usually grouped behind a Service, which makes them available at a single cluster IP address (reachable only from within the cluster). The Service acts as an abstraction layer, hiding the ephemeral nature of the Pods, which can be added, removed, or replaced at any time. It also performs basic round-robin load balancing.

For example, the following Service definition collects all Pods carrying the selector label app=foo and routes traffic evenly among them:
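A sketch of such a Service (the name foo-service matches the backend referenced later in this article; the ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  selector:
    app: foo        # route to any Pod labeled app=foo
  ports:
  - protocol: TCP
    port: 80        # port exposed on the cluster IP
    targetPort: 80  # port the Pods listen on
```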

However, this Service can only be reached from inside the cluster, by other Pods running alongside it. How to provide access to clients outside the cluster is a problem Kubernetes operators have wrestled with since the early days, and two mechanisms were built directly into the Service specification to deal with it: when you write a Service manifest, you can set a field called type to either NodePort or LoadBalancer. Here is an example that sets the type to NodePort:
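A sketch of the same Service with its type set to NodePort (names and ports are illustrative; Kubernetes picks the node port at random unless you pin one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: NodePort    # expose a random port (30000-32767) on every node
  selector:
    app: foo
  ports:
  - port: 80
    targetPort: 80
```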

Services of type NodePort are easy to use. In essence, such a Service asks the Kubernetes API to assign it a random TCP port and expose that port outside the cluster. The convenience is that clients can target any node in the cluster on that port, and their messages will be relayed to the right place. It's like being able to dial any phone number in the United States and having whoever answers make sure you reach the right person.

The downside is that the port must fall between 30000 and 32767. While that range safely avoids the well-known ports, it is clearly non-standard compared to the familiar ports 80 for HTTP and 443 for HTTPS. The randomness itself is also an obstacle: you don't know the value in advance, which makes configuring NAT and firewall rules more challenging, especially when each service gets its own random port.

The other option is to set the type to LoadBalancer. This comes with prerequisites, however: it only works if you are running in a cloud-managed environment such as GKE or EKS, where the cloud provider's load-balancer technology can be automatically selected and configured. The downside is cost, because a Service of this type spins up a managed load balancer and a new public IP address for each service, and both cost extra.
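A Service of type LoadBalancer differs only in the type field; a minimal sketch (foo-service, the app=foo selector, and the ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo-service
spec:
  type: LoadBalancer  # the cloud provider provisions a load balancer and public IP
  selector:
    app: foo
  ports:
  - port: 80
    targetPort: 80
```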

Ingress routing

Assigning a random port or an external load balancer is easy to do, but it brings unique challenges. Defining many NodePort Services creates a tangle of random ports, while defining many LoadBalancer Services means paying for more cloud resources than you actually need. These mechanisms can't be avoided entirely, but their scope can be reduced, so that you only need one random port or one load balancer to expose many internal services. The platform therefore needed a new abstraction layer, one that could consolidate many services behind a single entrypoint.

That's when the Kubernetes API introduced a new type of manifest, called Ingress, which offered a fresh take on the routing problem. It works like this: you write an Ingress manifest that declares how you want clients to be routed to your services. The manifest doesn't actually do anything on its own; you must also deploy an Ingress controller into your cluster to watch for these declarations and act on them.

Like any other application, Ingress controllers are Pods, so they are part of the cluster and can see the other Pods. They are built on reverse proxies that have matured in the market for years, so you can choose among the HAProxy Ingress Controller, the NGINX Ingress Controller, and others. The underlying proxy provides layer 7 routing and load balancing, and each proxy brings its own feature set to the table. For example, the HAProxy Ingress Controller does not need to reload as often as the NGINX Ingress Controller, because it pre-allocates server slots and fills them at run time through its Runtime API, which gives it better performance.

The Ingress controller itself lives inside the cluster and, like any other Kubernetes Pod, is confined to the same "jail": it must be exposed to the outside world through a Service of type NodePort or LoadBalancer. Now, however, you have a single entrypoint that all traffic passes through: one Service connects to the Ingress controller, and the Ingress controller in turn connects to many internal Pods. The controller can inspect HTTP requests and direct clients to the right Pod based on characteristics it finds there, such as the URL path or the domain name.

Consider this Ingress example, which defines that the URL path /foo should connect to the backend service named foo-service, while the URL path /bar is directed to the service named bar-service:
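A sketch of such an Ingress, using the beta-era networking.k8s.io/v1beta1 schema this article describes (the resource name my-ingress and host example.com are illustrative; the host matches the example domain used below):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /foo             # example.com/foo -> foo-service
        backend:
          serviceName: foo-service
          servicePort: 80
      - path: /bar             # example.com/bar -> bar-service
        backend:
          serviceName: bar-service
          servicePort: 80
```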

As shown above, you still need Services for your Pods, but you no longer set the type field on them, because routing and load balancing are handled by the Ingress layer. The Services' role is reduced to grouping Pods under a common name. The two paths, /foo and /bar, are ultimately served from a single public IP address and domain name, such as example.com/foo and example.com/bar. In essence, this is the API gateway pattern, in which a single address routes requests to multiple backend applications.

Adding an Ingress Controller

The beauty of the declarative Ingress manifest is that you specify what you want without needing to know how it will be implemented. Execution is the Ingress controller's job: it watches for new Ingress rules and configures its underlying proxy to create the corresponding routes.

You can install the HAProxy Ingress Controller with Helm, the Kubernetes package manager. First, install Helm by downloading the Helm binary and copying it to a folder on your PATH (for example, /usr/local/bin/). Next, add the HAProxy Technologies Helm repository and deploy the Ingress controller using the helm install command.
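The commands might look like the following (the release name haproxy-ingress is an illustrative choice; chart and repository names should be verified against the HAProxy Technologies documentation for your Helm version):

```shell
# Add the HAProxy Technologies chart repository and refresh the local index
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update

# Deploy the HAProxy Ingress Controller under the release name "haproxy-ingress"
helm install haproxy-ingress haproxytech/kubernetes-ingress
```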

Verify that the Ingress controller was created by running kubectl get service, which lists all running Services.
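The exact output depends on your cluster; an illustrative listing (the Service name follows the haproxytech chart's naming convention, and the node ports shown are the ones referenced below) might look like:

```shell
$ kubectl get service
NAME                                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                     AGE
haproxy-ingress-kubernetes-ingress   NodePort   10.101.232.155   <none>        80:31704/TCP,443:32255/TCP,1024:30347/TCP   21h
kubernetes                           ClusterIP  10.96.0.1        <none>        443/TCP                                     21h
```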

The HAProxy Ingress Controller runs in a Pod in the cluster and uses a Service of type NodePort to publish access to external clients. In the output shown above, you can see that port 31704 was chosen for HTTP and port 32255 for HTTPS. You can also reach the HAProxy Stats page on port 30347; the controller exposes detailed metrics about the traffic flowing through it, giving you better visibility into what enters the cluster.

When the controller creates a Service of type NodePort, a random, typically high-numbered port is assigned, but now there are only a few such ports to manage: the ones that connect to the Ingress controller, rather than one per service. You can also configure the controller to use a LoadBalancer Service instead, as long as you are running in the cloud. It looks like this:
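One way to do this is to override the Service type when installing the chart (controller.service.type is the haproxytech chart's value for this; verify it against your chart version, and the release name haproxy-ingress is illustrative):

```shell
# Install (or upgrade) the controller with a LoadBalancer Service instead of NodePort
helm upgrade --install haproxy-ingress haproxytech/kubernetes-ingress \
  --set controller.service.type=LoadBalancer
```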

Overall, there isn't much to manage on the Ingress controller itself. Once installed, it does its work in the background: you only need to define Ingress manifests, and the controller wires them up immediately. Because Ingress manifests are defined separately from the Services they reference, you control when a service is exposed.

Ingress resources consolidate how external clients access services inside a Kubernetes cluster by enabling API gateway-style traffic routing. Services are proxied through a shared public entrypoint, and intent-driven, declarative YAML lets you control when and how each service is exposed.

Once the Ingress API reaches GA, you will certainly see this pattern grow more popular. There may be minor changes, mainly to align the API with functionality already implemented in existing controllers, and further improvements may guide how controllers continue to evolve toward the Kubernetes maintainers' vision. All in all, now is a good time to start using this feature!
