2025-01-18 Update From: SLTechnology News & Howtos
This article explains in detail how to expose Kubernetes services simply and elegantly.

How to use port mapping to expose Kubernetes workloads in Rancher 2.0

Below we introduce the options Kubernetes offers for exposing public ports for workloads, along with their advantages and disadvantages.
When deploying an application using a container, you often need to route external traffic to the application container.
The standard way to provide external access is to expose the public port on the node where the application is deployed, or to place a load balancer in front of the application container.
Users of Cattle in Rancher 1.6 are likely familiar with exposing services through port mapping. In this article, we explore how to use port mapping to expose Kubernetes workloads in Rancher 2.0. Load balancing is a larger topic that we will cover in a dedicated article in the future.
Port Mapping in Rancher 1.6
In Rancher 1.6, users can deploy containerized applications and expose them through port mapping.
Users can select a specific port on the host, or have Rancher assign a random one, and open that port to allow external access. The public port then routes traffic to the private port of the service container running on the host.
Port Mapping in Rancher 2.0
Rancher 2.0 also supports the addition of port mappings to workloads deployed on Kubernetes clusters. The options in Kubernetes to expose public ports for workloads are:
HostPort
NodePort
The port-mapping UI in Rancher 2.0 is very similar to the 1.6 experience. When Rancher creates a deployment on a Kubernetes cluster, it internally adds the necessary Kubernetes HostPort or NodePort specification.
Let's take a look at HostPort and NodePort in more detail.
What is HostPort?
To use a HostPort when creating a workload in Kubernetes, you specify the hostPort setting in the "containers" section of the Kubernetes YAML specification. When you select HostPort mapping in the UI, Rancher adds this setting internally.
When a HostPort is specified, that port is opened for external access on the host where the pod's container is deployed. Traffic arriving at &lt;HostIP&gt;:&lt;HostPort&gt; is routed to the private port of the pod's container.
Here is how the Kubernetes YAML of our Nginx workload specifies the HostPort settings under the 'ports' section:
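The original screenshot is not reproduced here; a minimal sketch of what such a deployment spec might look like (image name, port numbers, and labels are assumptions, not taken from the original):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80   # private port inside the container
          hostPort: 9890      # public port opened on the node where the pod lands
```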
Using a HostPort for a Kubernetes pod is the equivalent of exposing a public port for a Docker container in Rancher 1.6.
Advantages of HostPort:
With the HostPort setting, you can request that any available port on the host be exposed.
The configuration is simple, and the HostPort settings are placed directly in the Kubernetes pod specification. Compared to NodePort, there is no need to create other objects to expose the application.
Disadvantages of HostPort:
Using HostPort restricts pod scheduling, because only hosts on which the specified port is still free are available for deployment.
If the workload's replica count is greater than the number of nodes in the Kubernetes cluster, the deployment fails.
Any two workloads that specify the same HostPort cannot be scheduled on the same node.
If the host running a pod fails, Kubernetes reschedules the pod onto a different node. The IP address at which the workload is reachable then changes, breaking the application's external clients. The same thing happens when pods restart and Kubernetes reschedules them on different nodes.
What is NodePort?
Before we delve into how to create a NodePort to expose Kubernetes workloads, let's take a look at some background on Kubernetes services.
Kubernetes service
A Kubernetes service is a REST object that abstracts access to Kubernetes pods. The IP address of a pod cannot be used as a reliable endpoint for public access to a workload, because pods can be destroyed and recreated dynamically, changing their IP addresses.
A Kubernetes service provides a static endpoint for the pods behind it. Therefore, through the service interface, external clients that depend on the workload can keep accessing it without interruption even as pods change IP addresses, remaining oblivious to the recreation of the backend pods.
By default, a service is reachable only on an internal IP within the Kubernetes cluster. This scope is defined by the type parameter of the service specification: by default, a service's YAML has type: ClusterIP.
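For example, a service spec that sets (or simply omits) the default type might look like this sketch, where the name and selector are assumed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal
spec:
  type: ClusterIP   # the default; reachable only from inside the cluster
  selector:
    app: nginx
  ports:
  - port: 80        # port the service listens on
    targetPort: 80  # private port of the backing pods
```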
If you want to expose a service outside the Kubernetes cluster, see the ServiceType options in Kubernetes:
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
One of these ServiceTypes is NodePort, which provides external access to the Kubernetes service created for the workload's pods.
How to define NodePort
Recall the workload running the Nginx image. For this workload, we need to expose the private container port 80.
To do this, we can create a NodePort service for the workload. The NodePort service specification is as follows:
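The original spec was shown as a screenshot; a minimal sketch of an equivalent NodePort service (the name, selector, and chosen port are assumptions) might be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 80    # private container port
    nodePort: 30216   # optional; if omitted, Kubernetes picks one from 30000-32767
```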
When we create a NodePort service, Kubernetes allocates a port on every node. The selected NodePort is visible in the service specification after creation. Alternatively, we can pin a specific port as the NodePort in the specification when creating the service. If no specific NodePort is given, a port is chosen at random from the range configured on the Kubernetes cluster (default: 30000-32767).
From outside the Kubernetes cluster, traffic arriving at &lt;NodeIP&gt;:&lt;NodePort&gt; is directed to the workload (this routing is performed by the kube-proxy component). NodeIP can be the IP address of any node in the Kubernetes cluster.
Advantages of NodePort:
Creating a NodePort service provides a static public endpoint for the workload's pods. Even if pods are destroyed dynamically, Kubernetes can deploy the workload anywhere in the cluster without changing the public endpoint.
The number of pods is not limited by the number of nodes in the cluster: NodePort decouples public access from the number and location of pods.
Disadvantages of NodePort:
When using NodePort, the chosen port is reserved on every node in the Kubernetes cluster, even on nodes where the workload is never deployed.
You can only specify ports from the configured range, not any random ports.
An additional Kubernetes object (a service of type NodePort) is required to expose your workload, which makes it less obvious at a glance how your application is exposed.
From Docker Compose to Kubernetes YAML
The above shows how Cattle users can add port mappings in the Rancher 2.0 UI, much as in 1.6. Now let's see how to do the same thing with compose files and the Rancher CLI.
We can use the Kompose tool to convert the docker-compose.yml file from Rancher 1.6 to Kubernetes YAML, and then use Rancher CLI to deploy the application in the Kubernetes cluster.
This is the docker-compose.yml configuration of the above Nginx service running on 1.6:
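The original compose file was shown as an image; a minimal sketch consistent with the port mapping discussed above (the service name and host port are assumptions) could be:

```yaml
version: '2'
services:
  nginx:
    image: nginx
    ports:
    - "9890:80"   # host port 9890 mapped to private container port 80
```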
Kompose generates YAML files for the Kubernetes deployment and service objects required to deploy Nginx workloads in Rancher 2.0. The Kubernetes deployment specification defines the pod and container specifications, while the service specification defines public access to pod.
Add HostPort through Kompose and Rancher CLI
Even though docker-compose.yml specifies the exposed ports, Kompose does not add the required hostPort construct to our deployment specification. Therefore, to replicate the port mapping in a Rancher 2.0 cluster, we can manually add the hostPort construct to the pod container specification in nginx-deployment.yaml and deploy it using the Rancher CLI.
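A hypothetical snippet of the edited nginx-deployment.yaml, showing only the container ports section that is changed by hand (port numbers assumed):

```yaml
# fragment of nginx-deployment.yaml generated by Kompose, then edited manually
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 9890   # added by hand; Kompose does not emit hostPort
```

The edited file can then be deployed with the Rancher CLI, e.g. via its kubectl passthrough (`rancher kubectl apply -f nginx-deployment.yaml`); the exact command shape is an assumption.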
Add NodePort through Kompose and Rancher CLI
To add a NodePort service to your deployment through Kompose, add the label kompose.service.type to the docker-compose.yml file, as described in the Kompose documentation:
https://github.com/kubernetes/kompose/blob/master/docs/user-guide.md#labels
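A sketch of the labeled compose file (service name assumed; the label value follows the Kompose user guide):

```yaml
version: '2'
services:
  nginx:
    image: nginx
    ports:
    - "80"   # expose private port 80; NodePort is chosen by the cluster
    labels:
      kompose.service.type: nodeport   # tells Kompose to emit a NodePort service
```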
Now that docker-compose.yml contains the required label, running Kompose on it produces both the NodePort service and the deployment specifications. With the Rancher CLI we can then deploy and successfully expose the workload through the NodePort.
This article discussed how to use port mapping in Rancher 2.0 to expose application workloads for public access. The port-mapping workflow from Rancher 1.6 carries over easily to the Kubernetes platform, and the Rancher 2.0 UI provides the same intuitive experience for mapping ports when creating or upgrading workloads.