2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
In this issue, the editor looks at how to understand Kubernetes on device clusters. The article analyzes the topic from a professional point of view; I hope you get something out of it.
Kubernetes is a container orchestration tool that originated at Google and has surged in popularity over the past several years. Having outlasted its competitors, Kubernetes now unquestionably dominates the cloud computing landscape. After acquiring influential startups such as Heptio and Bitnami, VMware became a significant contributor to the global Kubernetes community. In March 2020, VMware released Cloud Foundation 4 with the Tanzu platform, which fully supports more efficient and transparent operation and management of Kubernetes in the cloud.
At the same time, a considerable number of users and vendors keep trying to apply Kubernetes to edge computing environments. Edge computing, however, differs from cloud computing: many of the basic assumptions taken for granted in the cloud do not hold at the edge, or hold only at a cost too high to be realistic.
This article analyzes the reasons and compares the advantages and disadvantages of different technical approaches. The focus here is on the device layer rather than on the cloud edge or Mobile Edge Computing (MEC); as far as Kubernetes is concerned, the technical environment of the latter two differs little from the cloud, and workloads can basically migrate seamlessly.
Kubernetes on the device cluster
Basic assumptions of native Kubernetes
Kubernetes was originally designed to run in a cloud computing environment, so its basic assumptions are those of cloud computing resources and infrastructure as a service (IaaS), including:
- compute is sufficient and schedulable
- the network is stable and bidirectionally connected
- storage is either ephemeral and local, or persistent and network-attached
- management is remote, automated, and self-service
- security is guaranteed, controllable, and programmable
Kubernetes's architectural design takes full advantage of these characteristics, for example:
- multi-instance master nodes, multiple levels of abstraction for worker nodes, and distributed deployment
- bidirectional connectivity and high-frequency synchronization between master and worker nodes
- persistent, network-attached metadata storage; stateful applications can persist data
- remote, cross-cloud management
- automated security policy
Limitations of the device layer
However, the design assumptions of Kubernetes do not fully apply at the device layer, where resources generally look like this:
- compute is limited
- northbound networks are unstable, narrowband, and expensive
- storage is mostly local and easily lost
- management is traditionally local and manual
- security is not fully controllable
How to solve these problems is the focus of the different technical solutions that apply Kubernetes to the device layer.
Hyper-converged persistent storage
The hyper-converged device-cluster scheme introduced in the previous part can largely solve the problem of losing local storage. There are also some open source bare-metal (Bare Metal) persistent storage solutions in the industry to choose from, which I will not repeat here.
The steps to deploy Kubernetes applications on a virtualized device cluster are as follows:
- install the open source tools govmomi and govc, and configure the virtual machines according to the vSphere storage for Kubernetes guidelines; in particular, set disk.EnableUUID to true on every virtual machine so that each VMDK's ID is constant and unique
- use kubernetes.io/vsphere-volume as the provisioner for persistent volumes (PersistentVolume) and declare a StorageClass on vsanDatastore
- create PersistentVolume and PersistentVolumeClaim objects as usual
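The last two steps might look like the following manifests (a sketch: the class name, claim name, and size are placeholders, while the provisioner and datastore values come from the guideline above):

```yaml
# Illustrative manifests; "vsan-gold", "app-data", and "10Gi" are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-gold
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: vsanDatastore
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: vsan-gold
  resources:
    requests:
      storage: 10Gi
```

A Pod that mounts the claim then gets a VMDK-backed volume that survives Pod and node rebuilds.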
This makes it possible to achieve high availability in a three-tier structure:
- if a device fails, the device-cluster agent/manager can rebuild the virtual machine node on another device
- if a virtual machine node fails, the device-cluster agent/manager can detect it and restart the node
- if a Pod or container fails, Kubernetes rebuilds it
In any of the above cases, the data saved under the persistent storage path is not lost.
Security issues will be covered as a whole in a later section. The discussion below focuses on computing, networking, and management and operations.
Early exploration
Target
Target is a well-known supermarket chain in the United States. In early 2017, Target began using Kubernetes and Spinnaker to build its own Unimatrix platform for remote cluster management. Target adopted a fleet management (Fleet Management) model, deploying complete clusters containing both master and worker nodes to 1,850 stores. Each cluster consists of three devices that serve as master and worker nodes at the same time, and the clusters in different stores are independent of each other. Unimatrix supports the Kubernetes API, connecting to the rich application ecosystem of the cloud Kubernetes community, and deploys an agent to each store to relay communication between the in-store Kubernetes cluster and the cloud.
[https://tech.target.com/infrastructure/2018/06/20/enter-unimatrix.html](https://tech.target.com/infrastructure/2018/06/20/enter-unimatrix.html)
[http://eshepickett.com/achieving-enterprise-agility-at-the-retail-edge/](http://eshepickett.com/achieving-enterprise-agility-at-the-retail-edge/)
Chick-fil-A
Chick-fil-A is a very well-known restaurant chain in the United States. In 2018 it moved 2,000 stores from Docker to the Kubernetes platform. Each store runs a three-node cluster built from a group of Intel NUC devices, combining Kubernetes with a large amount of open source software for fleet management (Fleet Management).
https://qconnewyork.com/system/files/presentation-slides/caopia-chickfilamilkingthemostoutof1000sofk8sclusters_0.pdf
https://medium.com/@cfatechblog/bare-metal-k8s-clustering-at-chick-fil-a-scale-7b0607bd3541
On the whole, Chick-fil-A's scheme is similar to Target's: the full cluster is deployed on edge devices, and the fleet is managed through separate channels that supplement Kubernetes to form a multi-tier management structure.
Existing solutions
The two examples above are solutions that Kubernetes users built themselves to solve the problems they encountered. Below are several technical solutions currently being promoted in the industry.
KubeEdge
https://kubeedge.io/
KubeEdge became a sandbox project of the Cloud Native Computing Foundation (CNCF) in 2019 and is designed specifically for the device layer in Internet of Things and edge computing scenarios. In its architecture, CloudCore runs in the cloud alongside the Kubernetes master node, the EdgeCore part runs on the device, and the network between them is reachable in one direction only, from the edge out to the cloud.
The interaction between the EdgeController in CloudCore and the Kubernetes API server is mainly done by calling the kubernetes.Clientset API in the downstream.go and upstream.go programs under https://github.com/kubeedge/kubeedge/blob/master/cloud/pkg/edgecontroller/controller/. For example:
```go
// DownstreamController watches the Kubernetes API server and sends changes to the edge
type DownstreamController struct {
	kubeClient *kubernetes.Clientset
	// …
}

// UpstreamController subscribes to messages from the edge and syncs them to the Kubernetes API server
type UpstreamController struct {
	kubeClient *kubernetes.Clientset
	// message channel
	// …
}
```
In the EdgeCore part, apart from the existing container runtime (CRI) tooling, the programs that interact with CloudCore are built from scratch to provide functionality similar to Kubelet. The interface between EdgeHub and CloudHub is not Kubelet-compatible; it is implemented over the WebSocket or QUIC protocols.
https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/quic-design.md
The most distinctive part is the MQTT broker that communicates with EdgeCore on the edge: Internet of Things protocols are mapped into it and then forwarded through EventBus to DeviceTwin. So far only Modbus and Bluetooth mappers have appeared, and this architecture, which combines an edge container platform with edge application logic, is quite rare.
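The device-twin idea itself is simple to illustrate. The following sketch is not KubeEdge's actual code (all names are invented): a twin keeps a desired state pushed from the cloud and a reported state sent up by the device, and reconciliation computes the property updates that still need to be pushed down:

```python
# Illustrative sketch of the device-twin pattern; not KubeEdge's implementation.

def reconcile(desired: dict, reported: dict) -> dict:
    """Return the property updates to push to the device so that its
    reported state converges to the desired state."""
    return {k: v for k, v in desired.items() if reported.get(k) != v}

# Example: the cloud wants the valve open at a 10 Hz sample rate;
# the device last reported the valve closed and the correct sample rate.
desired = {"valve": "open", "sample_rate_hz": 10}
reported = {"valve": "closed", "sample_rate_hz": 10}
print(reconcile(desired, reported))  # only the valve property needs updating
```

In KubeEdge the desired side arrives over the cloud channel and the reported side over MQTT via EventBus; the sketch only shows the convergence logic.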
To support a full set of Kubelet mechanisms with self-built code while staying compatible with the Kubernetes API server, KubeEdge has to chase Kubernetes releases one by one: KubeEdge 1.2, released in February 2020, is compatible with Kubernetes 1.17, released in December 2019.
K3s
https://k3s.io/
K3s is an open source project released by Rancher. Its main design idea is to put a miniaturized, lightweight Kubernetes cluster on the edge side and cooperate with the cloud through other management channels. This is very similar to the approach of Target and Chick-fil-A introduced in the previous section.
Built on the Kubernetes code base and wrapped with its own code, K3s packages all the programs into a single binary, which makes installation very convenient. Its command line must be invoked through k3s, for example sudo k3s kubectl get node. Third-party code in K3s is mainly concentrated in the k3s/pkg/generated directory. K3s can roughly be regarded as an unofficial, lightweight, API-compatible distribution of Kubernetes. The latest K3s release as of May 2020 supports Kubernetes 1.17.5.
The miniaturized K3s embeds the lightweight database SQLite and also supports pluggable databases. K3s's dependence on external software beyond the OS is very low. Supplemented by a load-balancing node, K3s can achieve a highly available deployment model.
Virtual Kubelet
https://virtual-kubelet.io/
Virtual Kubelet, a CNCF sandbox project, is an API-compatible implementation of kubelet that allows nodes implemented by other services, in the cloud or at the edge, to communicate with the Kubernetes master just like a kubelet. Although Virtual Kubelet was originally intended to support serverless container platforms, it also supports other types of services. Virtual Kubelet offers a pluggable Provider interface, and developers implement the following functions required of a kubelet:
- the necessary backend plumbing to manage the lifecycle of pods, containers, and supporting resources in Kubernetes terms
- conformance to the API provided by Virtual Kubelet
- no direct access to the Kubernetes API server, with a well-defined callback mechanism for obtaining data such as Secrets and ConfigMaps
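The pluggable-provider pattern described above can be sketched as follows. The real Provider interface is defined in Go with different method names and signatures; everything here is invented for illustration: a generic shim talks to the master, while a backend only implements pod lifecycle hooks.

```python
# Illustrative sketch of a pluggable pod-lifecycle provider, in the spirit of
# Virtual Kubelet's Provider interface. The real interface is in Go; all
# names and signatures below are invented for this sketch.
from abc import ABC, abstractmethod

class Provider(ABC):
    """What a backend must implement for a generic kubelet shim to drive it."""

    @abstractmethod
    def create_pod(self, name: str, spec: dict) -> None: ...

    @abstractmethod
    def delete_pod(self, name: str) -> None: ...

    @abstractmethod
    def get_pod_status(self, name: str) -> str: ...

class InMemoryProvider(Provider):
    """Toy backend that 'runs' pods in a dict instead of real containers."""

    def __init__(self):
        self.pods = {}

    def create_pod(self, name, spec):
        self.pods[name] = {"spec": spec, "phase": "Running"}

    def delete_pod(self, name):
        self.pods.pop(name, None)

    def get_pod_status(self, name):
        pod = self.pods.get(name)
        return pod["phase"] if pod else "NotFound"

provider = InMemoryProvider()
provider.create_pod("edge-app", {"image": "busybox"})
print(provider.get_pod_status("edge-app"))  # prints "Running"
```

A serverless service, an IoT deployment channel, or any other backend slots in the same way, by replacing the toy dict with calls into that service.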
For example, the Azure IoT Edge Connector is an implementation based on the Virtual Kubelet Provider interface. Through the interaction between the encapsulated IoT Edge provider and Virtual Kubelet, edge applications can be deployed to devices through the standard Kubernetes API; of course, the deployment happens asynchronously, unlike for cloud applications.
https://github.com/Azure/iot-edge-virtual-kubelet-provider
Similarly, it is technically feasible to implement providers that support deploying other kinds of edge applications, with the master and worker nodes actually sitting on the cloud side and management going through the channels of an existing edge computing platform.
MicroK8s
MicroK8s is open source software released by Canonical, the company behind Ubuntu. It packages Kubernetes and a collection of applications as a snap. Its characteristics:
- small: the necessary programs are packaged together for easy deployment
- simple: a single snap installation with no external dependencies, simplifying management and operations
- secure: security updates from upstream arrive quickly
- current: upstream code is tracked closely, and the latest or any chosen version can be installed
- comprehensive: a curated collection of common manifests is included
There is very little self-built code in MicroK8s; it mainly implements the snap packaging, making MicroK8s a very streamlined Kubernetes distribution. The snap package format is used mainly on Ubuntu-like systems but is also supported on other Linux distributions.
The MicroK8s command line must be invoked through microk8s, for example sudo microk8s kubectl get node. Its master and worker nodes all have to be deployed on the edge side and then managed remotely from the cloud through other channels.
Option comparison
The sections above introduced several mainstream open source projects and technical solutions for deploying Kubernetes at the edge, each with its own advantages and disadvantages.
From the point of view of ordinary developers and users, an ideal scenario for deploying Kubernetes at the edge would have the following characteristics:
- low resource consumption
- full compatibility
- code in sync with upstream Kubernetes
- a certified distribution
- easy management
- master node on the cloud side
- worker nodes at the edge
However, as the comparison above shows, no existing solution meets all of these idealized requirements at once; they may be satisfiable only after a refactoring of the Kubernetes project itself.
No silver bullet
Under existing conditions, if all the idealized requirements cannot be met, can we take a step back and see which ones can be abandoned or compromised? For example:
Is it necessary to put the master node on the cloud side?
The main value of a master on the cloud side with workers at the edge is unified, simplified management. If a multi-tier management mechanism is acceptable and some extra resource consumption at the edge is affordable, this point can be conceded.
Is it necessary to use Kubernetes at all? What does it bring to edge computing?
For many users, the main value of Kubernetes is its rich software ecosystem and a unified API for managing the edge and the cloud alike. If the user's cloud environment does not already run a large number of Kubernetes applications, the direct value of such edge-cloud synergy is not obvious.
Is it necessary to deploy distributed edge applications?
Kubernetes is the mainstream tool for orchestrating distributed container applications, but many edge applications do not require distributed deployment. In that case, Kubernetes may not be as easy to use as Docker Compose.
In short, under existing conditions, users who do need Kubernetes must choose their own deployment tools according to their actual situation and requirements. There is no one-size-fits-all solution; in other words, "no silver bullet".
This article has introduced and compared the existing technical schemes for applying Kubernetes to the edge. Incidentally, EdgeX Foundry, hosted by the Linux Foundation's LF Edge project, is an edge computing application framework that does require distributed deployment.