

Kubernetes Nodes: a Complete Guide

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Tonight, the fourth session of the online training series "How to Build a CI/CD Pipeline for Enterprises" will be broadcast. Go to http://live.vhall.com/729465809 to register for free!

Introduction

Kubernetes has more than 48,000 stars and more than 75,000 commits on GitHub, with technology giants such as Google among its main contributors. It is fair to say that Kubernetes has quickly come to dominate the container ecosystem and become the real leader among container orchestration platforms.

Kubernetes provides powerful features such as rolling updates and rollbacks of deployments, container health checks, automatic container recovery, metrics-based autoscaling of containers, service load balancing, and service discovery (well suited to microservice architectures). In this article, we will review the fundamental concepts of Kubernetes and the master node architecture, and then focus on the node components.

Understanding Kubernetes and its abstractions

Kubernetes is an open source orchestration engine for automatically deploying, scaling, and managing containerized applications, and for provisioning the infrastructure that hosts them. At the infrastructure level, a Kubernetes cluster consists of a set of physical or virtual machines, each running in a specific role.

The master machines act as the brain of the cluster, orchestrating all the containers that run on the node machines. Each node is equipped with a container runtime. A node receives instructions from the master and then acts on them to create pods, delete pods, or adjust network rules.

The master components are responsible for managing the Kubernetes cluster. They manage the life cycle of pods, the basic unit of deployment within a Kubernetes cluster. The master server runs the following components:

kube-apiserver - the main component, exposing the API consumed by the other master components.

etcd - a distributed key/value store that Kubernetes uses to persist all cluster state.

kube-scheduler - decides which node a pod should run on, based on the information in the pod specification.

kube-controller-manager - responsible for node management (detecting node failures), pod replication, and endpoint creation.

cloud-controller-manager - a daemon that acts as an abstraction layer between the API and the tooling of the different cloud providers (storage volumes, load balancers, etc.).

The node components run on the worker machines in Kubernetes and are managed by the master. A node can be a virtual machine (VM) or a physical machine; Kubernetes works well on both. Each node contains the components necessary to run pods:

kubelet - watches the API server for pods scheduled to its node and makes sure they are running and healthy

cAdvisor - collects metrics about the pods running on its node

kube-proxy - watches the API server for real-time changes to pods and services, keeping the node's network rules up to date

container runtime - responsible for managing container images and running containers on the node

Detailed explanation of Kubernetes Node components

In summary, the two most important components running on a node are kubelet and kube-proxy, alongside a container engine responsible for running the containerized applications.

Kubelet

Kubelet handles all communication between the master and the node it runs on. It receives commands from the master in the form of manifests, which define the workload and its operating parameters. It interacts with the container runtime, which is responsible for creating, starting, and monitoring pods.

Kubelet also periodically executes any configured liveness and readiness probes. It constantly monitors the status of pods and starts new instances if a problem occurs. Kubelet also has an internal HTTP server that exposes a read-only view on port 10255. There is a health-check endpoint at /healthz, along with a few other status endpoints: for example, we can get the list of running pods at /pods, and details of the machine kubelet is running on at /spec.
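As a rough sketch, these endpoints can be queried with curl from the node itself, assuming the read-only port is enabled (10255 is the historical default; many recent clusters disable it):

```shell
# Query kubelet's read-only HTTP server on a node.
# Port 10255 is the historical default and may be disabled on newer clusters.
curl http://localhost:10255/healthz   # health check
curl http://localhost:10255/pods      # JSON list of pods on this node
curl http://localhost:10255/spec      # machine specification
```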

Kube-proxy

The kube-proxy component runs on each node and is responsible for proxying UDP, TCP, and SCTP packets (it is not aware of HTTP). It maintains network rules on the host and handles packet forwarding between pods, the host, and the outside world. It acts as a network proxy and load balancer for the pods running on its node, implementing east/west load balancing with iptables NAT rules.

The kube-proxy process sits between the network Kubernetes is attached to and the pods running on that particular node. It is essentially the core networking component of Kubernetes, responsible for ensuring efficient communication across all elements of the cluster. When a user creates a Kubernetes Service object, kube-proxy translates that object into rules in the local iptables rule set on the worker node. iptables is used to translate the virtual IP assigned to the Service object into the IPs of all the pods the Service maps to.
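As an illustration, on a worker node you can inspect the NAT rules kube-proxy maintains; the chain names below are the ones kube-proxy creates in iptables mode, though the exact rule set varies by cluster:

```shell
# List the Service entry chain kube-proxy installs in the nat table.
sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20
# Each Service gets its own KUBE-SVC-* chain whose rules DNAT traffic
# to individual pod IPs (the hash suffix is cluster-specific).
sudo iptables -t nat -L -n | grep KUBE-SVC | head -n 5
```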

Container runtime

The container runtime is responsible for pulling images from public or private registries and running containers from those images. The most popular container engine today is undoubtedly Docker, but Kubernetes also supports other container runtimes such as rkt and runc. As mentioned above, kubelet interacts directly with the container runtime to start, stop, or delete containers.

CAdvisor

CAdvisor is an open source agent that monitors resource usage and analyzes the performance of containers. CAdvisor was originally created by Google and is now integrated with kubelet.

The cAdvisor instance on each node collects, aggregates, processes, and exports metrics for all running containers, such as CPU, memory, file, and network usage. All of this data is sent to the scheduler so that it understands the performance and resource usage within each node. This information is used to perform various orchestration tasks such as scheduling, horizontal pod autoscaling, and managing container resource limits.

Understanding the node component endpoints hands-on

Next, we will install a Kubernetes cluster (with the help of Rancher) and start exploring some of the APIs exposed by the node components. To follow along, we need:

Google Cloud Platform account (any public cloud is the same)

A host on which Rancher will run (either a personal PC/Mac or a VM in the public cloud)

On the same host, kubectl and the Google Cloud SDK installed. Authenticate your credentials (gcloud init and gcloud auth login) to ensure that gcloud can access your Google Cloud account

A Kubernetes cluster running on GKE (the steps are similar for EKS or AKS)

Start the Rancher instance

First, start a Rancher instance. The process is very simple; just follow the quick start guide:

https://rancher.com/quick-start/

Deploy a GKE cluster using Rancher

To set up and configure a Kubernetes cluster with Rancher, follow the instructions here:

https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters/gke/

After deploying the cluster, we can quickly deploy Nginx for testing:
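A minimal test deployment might look like this (the deployment name and replica count are illustrative, assuming kubectl is configured against the new cluster):

```shell
# Create a test Nginx deployment and scale it to three replicas.
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=3
kubectl get pods -l app=nginx
```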

To interact with the Kubernetes API, we need to start a proxy server on the local machine:
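For example (8001 is kubectl's default proxy port):

```shell
# Start a local proxy to the cluster's API server (default 127.0.0.1:8001).
kubectl proxy &
```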

Let's check the process to see whether it is running and listening on the default port:
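One way to verify this, assuming the default port 8001:

```shell
# Confirm the proxy process is alive and answering on port 8001.
ps aux | grep "[k]ubectl proxy"
curl -s http://localhost:8001/version
```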

Now, in the browser, check the various endpoints exposed by kubelet:
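Through the API server proxy, each node's kubelet endpoints are reachable under /api/v1/nodes/<node-name>/proxy/; for example (substitute a real node name from your cluster):

```shell
# Kubelet endpoints via the API server proxy; replace <node-name>
# with a name from `kubectl get nodes`.
curl http://localhost:8001/api/v1/nodes/<node-name>/proxy/healthz
curl http://localhost:8001/api/v1/nodes/<node-name>/proxy/pods
```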

Next, display the list of nodes available to the cluster:
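For instance:

```shell
# List the cluster's nodes and their basic details.
kubectl get nodes -o wide
```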

We can check the specs of every listed node through the API. In this article's example, we created a 3-node cluster using the n1-standard-1 machine type (1 vCPU, 3.75 GB RAM, 10 GB root disk). We can confirm these specifications by querying the dedicated endpoint:
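A sketch of that query, via the API server proxy started earlier:

```shell
# Fetch a node's machine spec through its kubelet /spec endpoint
# (replace <node-name> with a name from `kubectl get nodes`).
curl http://localhost:8001/api/v1/nodes/<node-name>/proxy/spec
```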

Using the same kubelet API at a different endpoint, we can examine the Nginx pods we created and see which nodes they are running on.

First, list the running pods:
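For example:

```shell
# List running pods together with the node each was scheduled to.
kubectl get pods -o wide
```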

Now, curl each node's /proxy/pods endpoint to view the list of pods it is running:
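One way to loop over all nodes, assuming the local proxy on port 8001 is still running:

```shell
# For every node, dump the pods it is running via kubelet's /pods endpoint.
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  echo "== $node =="
  curl -s "http://localhost:8001/api/v1/nodes/$node/proxy/pods" | head -n 5
done
```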

We can also check the cAdvisor endpoint, which outputs a large amount of data in Prometheus format. By default, it is available at the /metrics HTTP endpoint:
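A sketch of that query, through the same API server proxy:

```shell
# cAdvisor metrics in Prometheus format, via the kubelet /metrics endpoint
# (replace <node-name> with a name from `kubectl get nodes`).
curl -s "http://localhost:8001/api/v1/nodes/<node-name>/proxy/metrics" | head -n 20
```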

If you SSH into a node and call the kubelet port directly, you can get the same cAdvisor and pod information:
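For example, on GKE (the instance name is illustrative, and port 10255 must be enabled on the node):

```shell
# SSH to a node and query the kubelet read-only port directly.
gcloud compute ssh <instance-name>
curl http://localhost:10255/pods
curl http://localhost:10255/metrics
```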

Cleanup

To clean up the resources used in this article, simply delete the Kubernetes cluster from the Rancher UI (select the cluster and click the Delete button). This deletes all the nodes used by our cluster and their associated IP addresses. If you used a VM in the public cloud to run Rancher, you need to clean that up as well. Find your instance name and delete it:
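On Google Cloud, a sketch of that cleanup (the instance name and zone below are illustrative):

```shell
# Find and delete the VM that hosted Rancher.
gcloud compute instances list
gcloud compute instances delete rancher-host --zone us-central1-a
```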

Conclusion

In this article, we discussed the key components of the Kubernetes node machines. We then deployed a Kubernetes cluster with Rancher and completed a small deployment to help us explore the kubelet API.

For more information about Kubernetes and its architecture, the official Kubernetes documentation is a good place to start: https://kubernetes.io/docs/concepts/overview/components/

Meanwhile, the free online training series recently organized by Rancher Labs, Kubernetes Master Class, is also an excellent way to get started with Kubernetes. Tomorrow night (April 24) at 20:30, the fourth course of this season, "How to Build a CI/CD Pipeline", will be broadcast. You can book the course at http://live.vhall.com/729465809 and use the same link to watch the live broadcast!





© 2024 shulou.com SLNews company. All rights reserved.
