Today I will talk about the architecture and principles of SuperEdge edge containers. Many people may not know much about it, so I have summarized the following in the hope that you will get something out of this article.
Preface
SuperEdge is a Kubernetes-native edge computing management framework launched by Tencent. Compared with OpenYurt and KubeEdge, SuperEdge not only offers zero intrusion into Kubernetes and edge autonomy, but also supports unique advanced features such as distributed health checking and edge service access control. This greatly reduces the impact of unstable cloud-edge networks on services, and considerably simplifies the release and governance of services in edge clusters.
Characteristics
Kubernetes-native: SuperEdge extends native Kubernetes by adding a set of components for edge computing, with zero intrusion into Kubernetes itself. A native Kubernetes cluster can be given edge computing capabilities simply by deploying the SuperEdge core components. Zero intrusion also means that any native Kubernetes workload (Deployment, StatefulSet, DaemonSet, etc.) can be deployed on the edge cluster.
Edge autonomy: SuperEdge provides L3-level edge autonomy. When an edge node is unstable or disconnected from the cloud network, the edge node can still operate normally without affecting the services already deployed on it.
Distributed health check: SuperEdge provides a distributed health check capability on the edge side. Each edge node runs edge-health; edge nodes in the same edge cluster check each other's health and vote on node status. In this way, even if the cloud-edge network has problems, nodes will not be evicted as long as the connections between edge nodes are normal. Distributed health checking also supports grouping: cluster nodes are divided into multiple groups (nodes in the same machine room go into the same group), and the nodes within each group check each other. This avoids the large volume of data exchanged between nodes as the cluster grows, which would make it hard to reach agreement, and it also matches the way edge nodes are naturally grouped in the network topology. The whole design avoids the mass Pod migration and rebuilding that an unstable cloud-edge network would otherwise cause, and keeps services stable.
Service access control: SuperEdge developed ServiceGroup to implement service access control for edge computing. With this feature, you only need to create two kinds of custom resources, DeploymentGrid and ServiceGrid, to conveniently deploy a set of services in different machine rooms or regions belonging to the same cluster, with requests between services completed within the local machine room or region (closed loop), thus avoiding cross-region service access. This feature greatly simplifies the release and governance of edge cluster services.
Cloud-edge tunnel: SuperEdge supports self-built tunnels (currently TCP, HTTP, and HTTPS) to solve cloud-edge connectivity problems in different network environments, enabling unified operation and maintenance of edge nodes that have no public IP.
Overall architecture
The functions of the components are summarized as follows:
Cloud components
In addition to the native Kubernetes master components (cloud-kube-apiserver, cloud-kube-controller, and cloud-kube-scheduler) deployed for the edge cluster, the main management components in the cloud include:
tunnel-cloud: maintains the network tunnel with tunnel-edge on the edge nodes; currently supports the TCP/HTTP/HTTPS protocols
application-grid controller: the Kubernetes controller corresponding to ServiceGroup, responsible for managing the DeploymentGrid and ServiceGrid CRDs and generating the corresponding Kubernetes Deployments and Services from these two kinds of CRs; together with the self-developed service topology awareness, it enables closed-loop service access
edge-admission: determines the health of an edge node from the status reports of the edge-side distributed health check, and assists cloud-kube-controller in taking the corresponding actions (such as applying taints)
Edge components
In addition to kubelet and kube-proxy, which a native Kubernetes worker node must run, the following edge computing components are added on the edge side:
lite-apiserver: the core component of edge autonomy. It is a proxy for cloud-kube-apiserver that caches certain requests from edge node components to the apiserver; when such requests arrive while the network to cloud-kube-apiserver is broken, it returns the cached results directly to the client.
edge-health: the distributed health check service on the edge side, responsible for performing the actual monitoring and probing operations and voting on whether a node is healthy
tunnel-edge: establishes the network tunnel to tunnel-cloud in the cloud, and accepts API requests from the cloud and forwards them to edge node components (kubelet)
application-grid wrapper: works with application-grid controller to provide closed-loop service access within a ServiceGrid (service topology awareness)
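If you want to see these components in a running cluster, a quick check is sketched below; the edge-system namespace is an assumption based on SuperEdge's default deployment and may differ in your installation:
# List the cloud-side management components (tunnel-cloud,
# application-grid-controller, edge-admission) and the edge-side
# components (lite-apiserver, edge-health, tunnel-edge,
# application-grid-wrapper) described above.
# The edge-system namespace is an assumption; adjust it to your install.
$ kubectl get pods -n edge-system -o wide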
Functional overview
Application deployment & service access control
SuperEdge supports application deployment with all native Kubernetes workloads (a minimal example follows the list), including:
Deployment
StatefulSet
DaemonSet
Job
CronJob
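Because the extension is non-intrusive, such a workload is applied to a SuperEdge cluster exactly as to any Kubernetes cluster. A minimal sketch, where the nginx-edge name and image are illustrative rather than taken from this article:
# Zero intrusion: a plain Deployment applied to a SuperEdge cluster as-is.
# The name and image below are illustrative.
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-edge
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-edge
  template:
    metadata:
      labels:
        app: nginx-edge
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
EOF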
Edge computing applications have the following distinctive characteristics:
In edge computing scenarios, multiple edge sites are often managed in the same cluster, and each edge site has one or more computing nodes.
At the same time, we want each site to run a set of services with business-logic connections between them, and the services in each site to form a complete, self-sufficient set of functions that can serve users.
Due to network restrictions, services with business connections should not, or cannot, access each other across sites
To solve these problems, SuperEdge creatively introduces the concept of a ServiceGroup, which lets users conveniently deploy a set of services in different machine rooms or regions belonging to the same cluster, with requests between services completed within the local machine room or region (closed loop), thus avoiding cross-region service access.
Several key concepts are involved in ServiceGroup:
NodeUnit
A NodeUnit is usually one or more compute nodes located in the same edge site; the private networks of nodes within the same NodeUnit must be interconnected.
The services in a ServiceGroup run within a single NodeUnit
ServiceGroup lets users set the number of Pods (belonging to a Deployment) that a service runs within one NodeUnit
ServiceGroup can restrict calls between services to within the NodeUnit
NodeGroup
A NodeGroup contains one or more NodeUnits
It ensures that the services in a ServiceGroup are deployed on every NodeUnit in the group
When a NodeUnit is added to the cluster, the services in the ServiceGroup are automatically deployed to the new NodeUnit
ServiceGroup
A ServiceGroup contains one or more business services
Applicable scenarios:
The business services need to be packaged and deployed together
They need to run in every NodeUnit with a guaranteed number of Pods
Calls between services need to stay within the same NodeUnit, with traffic never forwarded to other NodeUnits
Note: ServiceGroup is an abstract resource concept; multiple ServiceGroups can be created in one cluster.
The following is a specific example to illustrate the ServiceGroup function:
# step1: label edge nodes
$ kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node0   Ready    <none>   16d   v1.16.7
node1   Ready    <none>   16d   v1.16.7
node2   Ready    <none>   16d   v1.16.7
# nodeunit1 (nodegroup and servicegroup zone1)
$ kubectl --kubeconfig config label nodes node0 zone1=nodeunit1
# nodeunit2 (nodegroup and servicegroup zone1)
$ kubectl --kubeconfig config label nodes node1 zone1=nodeunit2
$ kubectl --kubeconfig config label nodes node2 zone1=nodeunit2
# step2: deploy echo DeploymentGrid
$ cat
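The manifest piped to kubectl in step 2 is cut off in the original. Based on the SuperEdge ServiceGroup design, the DeploymentGrid and the matching ServiceGrid for this example would look roughly like the sketch below; the gridUniqKey field ties both CRs to the zone1 label key used in step 1, while the apiVersion, resource names, and echo image are assumptions drawn from SuperEdge's public examples rather than from this article:
apiVersion: superedge.io/v1
kind: DeploymentGrid
metadata:
  name: deploymentgrid-demo
  namespace: default
spec:
  gridUniqKey: zone1          # NodeUnits are identified by this label key
  template:                   # an ordinary Deployment spec, stamped out per NodeUnit
    replicas: 2
    selector:
      matchLabels:
        appGrid: echo
    template:
      metadata:
        labels:
          appGrid: echo
      spec:
        containers:
        - name: echo
          image: superedge/echoserver:2.2
          ports:
          - containerPort: 8080
---
apiVersion: superedge.io/v1
kind: ServiceGrid
metadata:
  name: servicegrid-demo
  namespace: default
spec:
  gridUniqKey: zone1          # same key, so access stays inside each NodeUnit
  template:                   # an ordinary Service spec
    selector:
      appGrid: echo
    ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Once applied, application-grid controller generates one Deployment per NodeUnit (named per NodeUnit, e.g. something like deploymentgrid-demo-nodeunit1), and application-grid wrapper keeps requests to the generated Service inside the caller's own NodeUnit.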