2025-02-22 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 05/31 Report --
This article introduces common container application management solutions for Kubernetes edge scenarios, illustrating each approach with practical cases. The methods shown are simple, fast, and practical; I hope they help you solve similar problems.
1. Edge simple service scenario
Among the edge requirements the author has encountered, some user business scenarios are relatively simple, for example a dial-testing (network probing) service. The characteristic of this scenario is that users want to deploy the same service on every node, with one pod started per node. In this scenario, users are generally advised to deploy directly with a DaemonSet. Readers can refer to the official Kubernetes documentation for the characteristics and usage of DaemonSet.
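As a minimal sketch of this approach (the names and image below are illustrative, not from the original article), a DaemonSet manifest for such a per-node service might look like:

```yaml
# Minimal DaemonSet: one probe pod per node (names and image are hypothetical)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dial-test
  namespace: default
spec:
  selector:
    matchLabels:
      app: dial-test
  template:
    metadata:
      labels:
        app: dial-test
    spec:
      containers:
      - name: probe
        image: nginx:1.7.9   # placeholder probe image
        ports:
        - containerPort: 80
```

The DaemonSet controller then guarantees exactly one such pod on every schedulable node, including nodes added to the cluster later.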
2. Edge single-site microservice deployment scenario
The second scenario is deploying an edge SaaS service. Because customer trade secrets are involved, no concrete example is given here. Users deploy a complete set of microservices in an edge machine room, including account services, access services, business services, storage, and message queues, with Kubernetes DNS used for service registration and discovery between them. In this case, customers can use Deployment and Service directly; the main difficulty is not service management but edge autonomy.
Readers can consult the official Kubernetes documentation on how to use Deployment and Service; articles on the edge autonomy of TKE@edge will follow in the future.
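As a sketch of this pattern (service name and image are illustrative, not from the original article), one microservice in the set would pair a Deployment with a Service so that peers can reach it via Kubernetes DNS:

```yaml
# One microservice exposed via Kubernetes DNS (account-svc is a hypothetical name)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: account-svc
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: account-svc
  template:
    metadata:
      labels:
        app: account-svc
    spec:
      containers:
      - name: account
        image: nginx:1.7.9   # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: account-svc   # peers resolve it as account-svc.default.svc
  namespace: default
spec:
  selector:
    app: account-svc
  ports:
  - port: 80
    targetPort: 80
```

Other services in the same machine room can then call it simply by the name account-svc.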
3. Edge multi-site microservice deployment scenario
Scenario characteristics
In edge computing scenarios, multiple edge sites are often managed in the same cluster, and each edge site contains one or more compute nodes.
Users want to run a set of logically connected services in each site; the services in each site form a complete set of microservices that can serve users on their own.
Due to network restrictions, services with business connections to each other should not, or cannot, be accessed across sites.
Conventional solutions
1. Restrict services to one node
The characteristics of the scheme are as follows:
Services are deployed as DaemonSets, so every edge node runs one pod for each service. As shown in the figure above, there are two services, A and B, in the cluster; after deployment with DaemonSets, each edge node has one Pod-A and one Pod-B.
Services are accessed via localhost, locking the call chain within the same node. As shown in the figure above, Pod-A and Pod-B access each other via localhost.
The disadvantages of the scheme are:
Due to the working mechanism of DaemonSet, each service can have only one pod on a given node, which is extremely inconvenient for services that need to run multiple pods on the same node.
Pods must use hostNetwork mode so that they can be reached via localhost plus a port. This requires users to manage the services' resource usage carefully to avoid conflicts such as port collisions.
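A sketch of the hostNetwork pod template fragment this scheme implies (names, image, and port are illustrative, not from the original article):

```yaml
# DaemonSet pod template fragment using hostNetwork: the container binds the node's
# network namespace directly, so localhost:8080 on the node reaches it; port
# collisions between services must be managed by hand
spec:
  template:
    spec:
      hostNetwork: true
      containers:
      - name: svc-a              # hypothetical service name
        image: nginx:1.7.9       # placeholder image
        ports:
        - containerPort: 8080    # occupies port 8080 on every node it runs on
```

This is exactly the resource-competition drawback described above: every service permanently claims its ports on every node.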
2. Services have different names at different sites
The characteristics of the scheme are as follows:
The same service is given a different name at each site, locking service-to-service access within a site. As shown in the figure above, there are two services, A and B, in the cluster; they are named Svc-A-1 and Svc-B-1 in site-1, and Svc-A-2 and Svc-B-2 in site-2.
The disadvantages of the scheme are:
Because services have different names at different sites, they cannot simply be called via the service names A and B; instead, Svc-A-1 and Svc-B-1 must be used in site-1, and Svc-A-2 and Svc-B-2 in site-2. This is extremely unfriendly to businesses that implement microservices on top of Kubernetes DNS.
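To illustrate what this naming scheme amounts to (names and labels are illustrative, not from the original article), each logical service must be duplicated per site with a site-suffixed name and a site-specific selector:

```yaml
# Same logical service A, duplicated per site with different names and selectors
apiVersion: v1
kind: Service
metadata:
  name: svc-a-1            # used only by callers in site-1
spec:
  selector:
    app: svc-a
    site: site-1           # hypothetical site label carried by the pods
  ports:
  - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-a-2            # used only by callers in site-2
spec:
  selector:
    app: svc-a
    site: site-2
  ports:
  - port: 80
```

Application code must then know which site it runs in to pick the right DNS name, which is what breaks portability.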
Scenario pain points
1. Kubernetes itself does not directly provide a solution for the following scenarios.
First, there is the problem of multi-region deployment. An edge cluster usually manages many edge sites, each with one or more compute nodes. While a central-cloud deployment typically involves a handful of large regional machine rooms, edge deployments span far more regions: even a small city may have its own edge machine room, so the number of regions can be very large. In native Kubernetes it is hard to control where a Deployment's pods are created, short of writing a separate Deployment with node affinity for each region. With a dozen or even dozens of regions, and multiple services to deploy per region, each Deployment needs its own name, selector, pod labels, and affinity rules; dozens of regions therefore mean hundreds of Deployment YAMLs with different names and selectors. Just writing these YAMLs is an enormous amount of work.
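As a sketch of this native-Kubernetes workaround (region names and the node label key are illustrative, not from the original article), each region needs its own affinity-pinned copy of every Deployment:

```yaml
# Deployment pinned to one region via nodeAffinity; it must be duplicated, with a
# different name, selector, and region value, for every region in the cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: svc-a-region-1       # hypothetical per-region name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: svc-a
      region: region-1
  template:
    metadata:
      labels:
        app: svc-a
        region: region-1
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: region          # hypothetical node label
                operator: In
                values: ["region-1"]
      containers:
      - name: svc-a
        image: nginx:1.7.9   # placeholder image
```

Multiplying this boilerplate across every service and every region is the management burden the text describes.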
Second, Services need to be associated with regions. For example, the transcoding and composition services in an audio/video product must access the audio/video services of their own region; users want calls between services to stay within the local region rather than crossing regions. This likewise requires users to prepare hundreds of region-specific Service YAMLs with different selectors and names.
A more complicated problem is that if the user's programs call each other by service name, then, because the service name now varies from region to region, the original application will not even work without being adapted separately for each region, which is far too complex.
2. In addition, to keep the scheduling of containerized services consistent with how they previously ran on VMs or physical machines, users naturally want to assign a public IP to each pod, but the number of available public IPs is clearly insufficient.
In summary, although native Kubernetes can meet requirement 1) in a roundabout way, the resulting solution is very complex, and there is no good solution at all for requirement 2).
To solve these pain points, TKE@edge proposed and implemented the ServiceGroup feature: with just two YAML files, services can be deployed to even hundreds of regions, with no application adaptation or modification required.
Introduction to the ServiceGroup feature
ServiceGroup makes it easy to deploy a set of services in different machine rooms or regions belonging to the same cluster, and ensures that requests between those services are completed within the same machine room or region, avoiding cross-region service access.
Native Kubernetes cannot control on which specific nodes a Deployment's pods are created; this can only be done indirectly through careful overall planning of node affinity. When the number of edge sites and the number of services to deploy are both large, management and deployment become extremely complex, perhaps only theoretically possible. At the same time, to limit calls between services to a certain scope, the business side must create a separate Service for each Deployment; the management workload is huge, error-prone, and can cause online business anomalies.
ServiceGroup is designed precisely for this scenario. Using only the two TKE@edge custom Kubernetes resources it provides, DeploymentGrid and ServiceGrid, customers can easily deploy services to these node groups and keep service traffic within them, while also guaranteeing the number of pods and disaster recovery for the services in each region.
Key concepts of ServiceGroup
1. Overall architecture
NodeUnit
A NodeUnit is usually one or more compute-node instances located in the same edge site; the private networks of the nodes within a NodeUnit must be interconnected.
The services in a ServiceGroup run within a single NodeUnit
TKE@edge allows users to set the number of pods a service runs within a NodeUnit
TKE@edge can limit calls between services to within the same NodeUnit
NodeGroup
A NodeGroup contains one or more NodeUnits
It ensures that the services in the ServiceGroup are deployed on every NodeUnit in the group
When a NodeUnit is added to the cluster, the services in the ServiceGroup are automatically deployed to the new NodeUnit
ServiceGroup
A ServiceGroup contains one or more business services
Applicable scenarios: 1) the business needs to be deployed as a package; 2) it needs to run in every NodeUnit with a guaranteed number of pods; 3) calls between services need to be confined to the same NodeUnit, with no traffic forwarded to other NodeUnits
Note: ServiceGroup is an abstract resource; multiple ServiceGroups can be created in one cluster.
2. Resource types involved
DeploymentGrid
The format of DeploymentGrid is similar to that of Deployment: its template field carries the spec of an ordinary Deployment, and the special gridUniqKey field specifies the key of the label used to group nodes.
apiVersion: tkeedge.io/v1
kind: DeploymentGrid
metadata:
  name:
  namespace:
spec:
  gridUniqKey:
ServiceGrid
The format of ServiceGrid is similar to that of Service: its template field carries the spec of an ordinary Service, and the special gridUniqKey field specifies the key of the label used to group nodes.
apiVersion: tkeedge.io/v1
kind: ServiceGrid
metadata:
  name:
  namespace:
spec:
  gridUniqKey:
3. Usage example
Taking an edge nginx deployment as an example, to deploy nginx services in multiple node groups we need to do the following:
1) determine the unique identity of the ServiceGroup
This step is logical planning and requires no actual operation. We set the UniqKey of the ServiceGroup being created to: zone.
2) Group the edge nodes
This is done by applying labels to the edge nodes, using either the TKE@edge console or kubectl. The console operation is shown below:
3) Label the nodes: on the node list page of the cluster, click "Edit label" to apply labels to the nodes.
Following the diagram in the "Overall architecture" section, we select Node12 and Node14 and apply the label zone=NodeUnit1, and select Node21 and Node23 and apply the label zone=NodeUnit2.
Note: the key of the label in the previous step is the same as the UniqKey of the ServiceGroup, and the value is the unique identifier of the NodeUnit; nodes with the same key/value pair belong to the same NodeUnit. A node can carry multiple labels, so NodeUnits can be divided along multiple dimensions, for example by also adding the label test=a1 to Node12.
If there are multiple ServiceGroups in the same cluster, assign a different UniqKey to each ServiceGroup.
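With kubectl, the labeling in this step might look as follows (a sketch, assuming the nodes from the diagram are registered under the names node12, node14, node21, and node23):

```shell
# Assign Node12/Node14 to NodeUnit1 and Node21/Node23 to NodeUnit2
kubectl label node node12 node14 zone=NodeUnit1
kubectl label node node21 node23 zone=NodeUnit2
# A node may carry additional labels for other grouping dimensions
kubectl label node node12 test=a1
```

The key (zone) matches the ServiceGroup UniqKey chosen in step 1.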
4) Deploy the DeploymentGrid
apiVersion: tkeedge.io/v1
kind: DeploymentGrid
metadata:
  name: deploymentgrid-demo
  namespace: default
spec:
  gridUniqKey: zone
  template:
    selector:
      matchLabels:
        appGrid: nginx
    replicas: 2
    template:
      metadata:
        labels:
          appGrid: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
          - containerPort: 80

5) Deploy the ServiceGrid

apiVersion: tkeedge.io/v1
kind: ServiceGrid
metadata:
  name: servicegrid-demo
  namespace: default
spec:
  gridUniqKey: zone
  template:
    selector:
      appGrid: nginx
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80
    sessionAffinity: ClientIP
As you can see, the gridUniqKey field is set to zone, so the label key used when grouping nodes is zone. If there are three groups of nodes, add the labels zone: zone-0, zone: zone-1, and zone: zone-2 to them respectively. Each group of nodes then gets its own nginx Deployment and corresponding pods, and accessing the unified service name from within a node sends requests only to the nodes of that group.
In addition, for node groups added to the cluster after the DeploymentGrid and ServiceGrid are deployed, this feature automatically creates the specified Deployment and Service in the new node group.
This concludes the overview of common container application management solutions in Kubernetes edge scenarios. Thank you for reading.