This article analyzes the relevant knowledge points of the SuperEdge topology algorithm. The content is detailed, easy to understand, and has practical reference value; follow along to learn how SuperEdge implements topology-aware service access.
Preface

SuperEdge is an edge container management system based on native Kubernetes. It extends cloud-native capability to the edge side and realizes management and control of edge nodes from the cloud. On top of that, SuperEdge developed service group to implement service access control for edge computing, which greatly simplifies deploying applications from the cloud to the edge.

Topology awareness characteristics of the SuperEdge service group
The SuperEdge service group uses application-grid-wrapper to realize topology awareness, so that service access forms a closed loop within the same NodeUnit.
Before delving into application-grid-wrapper, here is a brief introduction to the topology awareness feature natively supported by community Kubernetes [1].
The Kubernetes service topology awareness feature was released as alpha in v1.17 to implement topology-based routing and nearest-endpoint access. Users add a topologyKeys field to the service to indicate the type of topology key, and only endpoints within the same topology domain are accessed. Currently there are three topologyKeys to choose from:
"kubernetes.io/hostname": access endpoint within this node (same kubernetes.io/hostname label value). If not, service access failed "topology.kubernetes.io/zone": access to endpoint within the same zone domain (same topology.kubernetes.io/zone label value); if not, service access failure "topology.kubernetes.io/region": access to endpoint within the same region domain (same topology.kubernetes.io/region label value); if not, service access failed
Besides filling in a single topology key as above, you can also construct a list of keys, for example ["kubernetes.io/hostname", "topology.kubernetes.io/zone", "topology.kubernetes.io/region"], which means: if no endpoint exists on this node, access endpoints in the same zone; if there are none either, access endpoints in the same region; if none of them exist, the access fails.
In addition, you can append "*" to the end of the list (it may only be the last item) to indicate that if none of the preceding topology domains match, any valid endpoint may be accessed, that is, there is no topology restriction. An example is as follows (a simplified sketch of the fallback filtering follows the manifest below):
# A Service that prefers node local, zonal, then regional endpoints but falls back to cluster wide endpoints.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"
    - "*"
By comparison, the topology awareness implemented by service group differs from the community implementation in the following ways:
The service group topology key (gridUniqKey) can be customized, which is more flexible; the community implementation currently offers only the three options "kubernetes.io/hostname", "topology.kubernetes.io/zone", and "topology.kubernetes.io/region".
Service group can only fill in one topology key, that is, it can only access valid endpoints within this topology domain and cannot fall back to endpoints of other topology domains, while the community implementation can fall back to other topology domains through the topologyKeys list and "*".
For the topology awareness implemented by service group, the service is configured as follows (a sketch of the corresponding endpoint filtering follows the manifest):
# A Service that only prefers endpoints within the same zone1 topology domain.
apiVersion: v1
kind: Service
metadata:
  annotations:
    topologyKeys: '["zone1"]'
  labels:
    superedge.io/grid-selector: servicegrid-demo
  name: servicegrid-demo-svc
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    appGrid: echo
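In contrast to the community fallback list, the wrapper only has a single gridUniqKey to honor: the topologyKeys annotation carries a JSON-encoded list whose element is the gridUniqKey (here "zone1"), and endpoints are kept only when their node carries the same label value for that key as the local node. The sketch below illustrates this idea; it is not the actual application-grid-wrapper code, and the endpointInfo type and filterByGridKey function are illustrative stand-ins.

// Illustrative sketch of service-group topology filtering, not the actual
// application-grid-wrapper code: endpoints are kept only if their node has the
// same gridUniqKey label value (for example zone1=nodeunit1) as the local node.
package wrappersketch

import "encoding/json"

type endpointInfo struct {
	addr     string
	nodeName string
}

// filterByGridKey keeps only endpoints whose node shares the local node's
// value for the service's gridUniqKey; there is no fallback to other domains.
func filterByGridKey(annotations map[string]string, localNodeLabels map[string]string,
	nodeLabels map[string]map[string]string, eps []endpointInfo) []endpointInfo {

	raw, ok := annotations["topologyKeys"]
	if !ok {
		return eps // service is not topology-aware: keep everything
	}
	var keys []string
	if err := json.Unmarshal([]byte(raw), &keys); err != nil || len(keys) == 0 {
		return eps
	}
	gridKey := keys[0] // service group uses a single key, e.g. "zone1"
	localValue := localNodeLabels[gridKey]

	var kept []endpointInfo
	for _, ep := range eps {
		if nodeLabels[ep.nodeName][gridKey] == localValue {
			kept = append(kept, ep)
		}
	}
	return kept
}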
Having introduced the topology awareness implemented by service group, we now dig into the source code and analyze the implementation details. As before, we start from a usage example:
# step1: labels edge nodes
$ kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node0   Ready    <none>   16d   v1.16.7
node1   Ready    <none>   16d   v1.16.7
node2   Ready    <none>   16d   v1.16.7
# nodeunit1 (nodegroup and servicegroup zone1)
$ kubectl --kubeconfig config label nodes node0 zone1=nodeunit1
# nodeunit2 (nodegroup and servicegroup zone1)
$ kubectl --kubeconfig config label nodes node1 zone1=nodeunit2
$ kubectl --kubeconfig config label nodes node2 zone1=nodeunit2
...
# step3: deploy echo ServiceGrid
$ cat ...

On the edge node, kube-proxy accesses the apiserver through the chain: kube-proxy -> application-grid-wrapper -> lite-apiserver -> kube-apiserver
Therefore, application-grid-wrapper starts an HTTP service and accepts requests from kube-proxy, as follows:
func (s *interceptorServer) Run(debug bool, bindAddress string, insecure bool, caFile, certFile, keyFile string) error {
	...
	klog.Infof("Start to run interceptor server")
	/* filter */
	server := &http.Server{Addr: bindAddress, Handler: s.buildFilterChains(debug)}
	if insecure {
		return server.ListenAndServe()
	}
	...
	server.TLSConfig = tlsConfig
	return server.ListenAndServeTLS("", "")
}

func (s *interceptorServer) buildFilterChains(debug bool) http.Handler {
	handler := http.Handler(http.NewServeMux())

	handler = s.interceptEndpointsRequest(handler)
	handler = s.interceptServiceRequest(handler)
	handler = s.interceptEventRequest(handler)
	handler = s.interceptNodeRequest(handler)
	handler = s.logger(handler)

	if debug {
		handler = s.debugger(handler)
	}

	return handler
}

Here, the interceptorServer is created first, and then the handler functions are registered, from outside to inside as follows:
debug: accepts debug requests and returns the wrapper's pprof runtime information
logger: prints request logs
node: accepts the kube-proxy node GET request (/api/v1/nodes/{node}) and returns the node information
event: accepts the kube-proxy events POST request (/events) and forwards it to lite-apiserver
func (s *interceptorServer) interceptEventRequest(handler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost || !strings.HasSuffix(r.URL.Path, "/events") {
			handler.ServeHTTP(w, r)
			return
		}

		targetURL, _ := url.Parse(s.restConfig.Host)
		reverseProxy := httputil.NewSingleHostReverseProxy(targetURL)
		reverseProxy.Transport, _ = rest.TransportFor(s.restConfig)
		reverseProxy.ServeHTTP(w, r)
	})
}
service: accepts the kube-proxy service List&Watch request (/api/v1/services) and answers it from the storageCache content (GetServices); a simplified sketch of this branch follows below
endpoint: accepts the kube-proxy endpoints List&Watch request (/api/v1/endpoints) and answers it from the storageCache content (GetEndpoints)
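To make the service handler concrete, here is a hedged sketch of the List branch only: if the request is a GET of /api/v1/services, the response is assembled from the cached services, otherwise the request falls through to the next handler. GetServices is the cache accessor mentioned above, but the serviceLister interface, the function name, and the response assembly are illustrative; the real handler additionally deals with Watch and content negotiation.

// Hedged sketch, not the actual SuperEdge handler: serve a service List
// from the in-memory cache and let every other request fall through.
package wrappersketch

import (
	"encoding/json"
	"net/http"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// serviceLister stands in for the storageCache accessor (GetServices).
type serviceLister interface {
	GetServices() []*v1.Service
}

func interceptServiceList(cache serviceLister, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// only hijack GET /api/v1/services; everything else passes through
		if r.Method != http.MethodGet || r.URL.Path != "/api/v1/services" {
			next.ServeHTTP(w, r)
			return
		}
		list := &v1.ServiceList{TypeMeta: metav1.TypeMeta{Kind: "ServiceList", APIVersion: "v1"}}
		for _, svc := range cache.GetServices() {
			list.Items = append(list.Items, *svc)
		}
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(list)
	})
}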
Let's first focus on the logic of the cache part, and then go back to the specific http handler List&Watch processing logic.
In order to achieve topology awareness, the wrapper maintains a cache of three resource types: node, service, and endpoint. The handlers for these three resource types are registered in setupInformers; a sketch of that registration pattern follows the struct below:
type storageCache struct {
	// hostName is the nodeName of the node on which application-grid-wrapper is deployed
	hostName         string
	wrapperInCluster bool

	// mu protects the following map structures
	mu sync.RWMutex

	servicesMap  map[types.NamespacedName]*serviceContainer
	endpointsMap map[types.NamespacedName]*endpointsContainer
	nodesMap     map[types.NamespacedName]*nodeContainer

	// service watch channel
	serviceChan chan
	...
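Although the setupInformers code itself is not reproduced here, its shape with client-go is conventional: a shared informer factory is created, and add/update/delete handlers for services, endpoints, and nodes push changes into the storageCache. The sketch below shows the service informer only and illustrates that pattern rather than the actual SuperEdge code; the serviceCache interface and its method names are assumptions.

// Hedged sketch of informer registration with client-go; handler and cache
// method names are illustrative stand-ins, not the actual SuperEdge code.
package wrappersketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

type serviceCache interface {
	AddService(svc *v1.Service)
	UpdateService(svc *v1.Service)
	DeleteService(svc *v1.Service)
}

func setupServiceInformer(client kubernetes.Interface, sc serviceCache, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(client, 0)

	// push every service add/update/delete into the wrapper cache;
	// nodes and endpoints follow the same pattern with their own handlers
	factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { sc.AddService(obj.(*v1.Service)) },
		UpdateFunc: func(oldObj, newObj interface{}) { sc.UpdateService(newObj.(*v1.Service)) },
		DeleteFunc: func(obj interface{}) {
			// tombstone (DeletedFinalStateUnknown) handling is omitted for brevity
			if svc, ok := obj.(*v1.Service); ok {
				sc.DeleteService(svc)
			}
		},
	})

	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
}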