
What is the structural definition of Kubernetes Replication Controller


This article explains the structural definition of the Kubernetes ReplicationController controller (the ReplicationManager) and walks through how kube-controller-manager creates, configures, and starts it.

ReplicationManager

ReplicationManager is the controller object for ReplicationController; the distinct name makes it easy to tell apart from the ReplicationController resource (API object) in the code. The following code is the structural definition of ReplicationManager.

pkg/controller/replication/replication_controller.go:75

// ReplicationManager is responsible for synchronizing ReplicationController objects stored
// in the system with actual running pods.
type ReplicationManager struct {
    kubeClient clientset.Interface
    podControl controller.PodControlInterface

    // internalPodInformer is used to hold a personal informer. If we're using
    // a normal shared informer, then the informer will be started for us. If
    // we have a personal informer, we must start it ourselves. If you start
    // the controller using NewReplicationManager (passing SharedInformer), this
    // will be null
    internalPodInformer cache.SharedIndexInformer

    // An rc is temporarily suspended after creating/deleting these many replicas.
    // It resumes normal action after observing the watch events for them.
    burstReplicas int
    // To allow injection of syncReplicationController for testing.
    syncHandler func(rcKey string) error

    // A TTLCache of pod creates/deletes each rc expects to see.
    expectations *controller.UIDTrackingControllerExpectations

    // A store of replication controllers, populated by the rcController
    rcStore cache.StoreToReplicationControllerLister
    // Watches changes to all replication controllers
    rcController *cache.Controller
    // A store of pods, populated by the podController
    podStore cache.StoreToPodLister
    // Watches changes to all pods
    podController cache.ControllerInterface
    // podStoreSynced returns true if the pod store has been synced at least once.
    // Added as a member to the struct to allow injection for testing.
    podStoreSynced func() bool

    lookupCache *controller.MatchingCache

    // Controllers that need to be synced
    queue workqueue.RateLimitingInterface

    // garbageCollectorEnabled denotes if the garbage collector is enabled. RC
    // manager behaves differently if GC is enabled.
    garbageCollectorEnabled bool
}

Focus on the following fields:

podControl: provides the interface for creating and deleting Pods (see the sketch after this list).

burstReplicas: the maximum number of Pods that may be created or deleted concurrently in one batch.

syncHandler: the function that actually performs the replica sync.

expectations: maintains a UID cache of the Pods the controller expects to see in its desired state, and provides an interface for modifying that cache.

rcStore: the Indexer of ReplicationController resource objects; its data is provided and maintained by rcController.

rcController: watches all ReplicationController resources and writes the observed changes into rcStore.

podStore: the Indexer of Pods; its data is provided and maintained by podController.

podController: watches all Pod resources and writes the observed changes into podStore.

queue: holds the ReplicationControllers that need to be synced; it is a rate-limiting work queue.

lookupCache: a cache of Pod-to-RC matches, used to speed up lookups.
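
To make podControl more concrete, here is a rough sketch of the Pod create/delete abstraction it refers to. This is an illustration only; the exact method set and signatures of controller.PodControlInterface in the Kubernetes source may differ from this outline.

// Rough sketch (not the verbatim Kubernetes definition) of the pod create/delete
// abstraction behind the podControl field; the real controller.PodControlInterface
// may declare more methods and slightly different signatures.
type PodControlInterface interface {
    // CreatePods creates new pods from the given template on behalf of object (the RC).
    CreatePods(namespace string, template *v1.PodTemplateSpec, object runtime.Object) error
    // CreatePodsWithControllerRef additionally stamps an OwnerReference so the
    // garbage collector knows which RC owns the pod.
    CreatePodsWithControllerRef(namespace string, template *v1.PodTemplateSpec, object runtime.Object, controllerRef *metav1.OwnerReference) error
    // DeletePod deletes the named pod on behalf of object (the RC).
    DeletePod(namespace string, podID string, object runtime.Object) error
}

Hiding the API calls behind an interface like this is what lets the manager be tested with a fake pod control instead of a real API server.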

Where is ReplicationController started?

If you have read my earlier post, "Kubernetes ResourceQuotaController internal implementation principles and source code analysis", this will look familiar: it describes how controller manager starts ResourceQuotaController, and ReplicationController is started the same way. When kube-controller-manager calls newControllerInitializers to initialize its controllers, startReplicationController is registered as the function that starts the ReplicationController controller.

cmd/kube-controller-manager/app/controllermanager.go:224

func newControllerInitializers() map[string]InitFunc {
    controllers := map[string]InitFunc{}
    controllers["endpoint"] = startEndpointController
    controllers["replicationcontroller"] = startReplicationController
    controllers["podgc"] = startPodGCController
    controllers["resourcequota"] = startResourceQuotaController
    controllers["namespace"] = startNamespaceController
    controllers["serviceaccount"] = startServiceAccountController
    controllers["garbagecollector"] = startGarbageCollectorController
    controllers["daemonset"] = startDaemonSetController
    controllers["job"] = startJobController
    controllers["deployment"] = startDeploymentController
    controllers["replicaset"] = startReplicaSetController
    controllers["horizontalpodautoscaling"] = startHPAController
    controllers["disruption"] = startDisruptionController
    controllers["statefuleset"] = startStatefulSetController
    controllers["cronjob"] = startCronJobController
    controllers["certificatesigningrequests"] = startCSRController
    return controllers
}
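
For context, the controller manager then simply iterates this map and calls each InitFunc with a shared ControllerContext. Below is a trimmed-down sketch of that loop (not the verbatim kube-controller-manager code, which also handles enable/disable flags and cloud-provider specifics); the function name startControllersSketch is hypothetical.

// Trimmed-down sketch of how a map[string]InitFunc like the one above is consumed.
// ControllerContext and InitFunc follow the shapes used in controllermanager.go;
// flag handling and cloud-provider logic are omitted here.
func startControllersSketch(ctx ControllerContext, controllers map[string]InitFunc) error {
    for name, initFn := range controllers {
        started, err := initFn(ctx)
        if err != nil {
            return fmt.Errorf("error starting %q: %v", name, err)
        }
        if !started {
            glog.Warningf("skipping %q", name)
            continue
        }
        glog.Infof("started %q", name)
    }
    return nil
}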

Following the code into startReplicationController, it is simple: it starts a goroutine that calls replicationcontroller.NewReplicationManager to create a ReplicationManager and then executes its Run method to get to work.

cmd/kube-controller-manager/app/core.go:55

func startReplicationController(ctx ControllerContext) (bool, error) {
    go replicationcontroller.NewReplicationManager(
        ctx.InformerFactory.Pods().Informer(),
        ctx.ClientBuilder.ClientOrDie("replication-controller"),
        ResyncPeriod(&ctx.Options),
        replicationcontroller.BurstReplicas,
        int(ctx.Options.LookupCacheSizeForRC),
        ctx.Options.EnableGarbageCollector,
    ).Run(int(ctx.Options.ConcurrentRCSyncs), ctx.Stop)
    return true, nil
}

Create ReplicationManager

As analyzed above, controller-manager creates a ReplicationManager object through NewReplicationManager, which is actually a ReplicationController controller.

pkg/controller/replication/replication_controller.go:122

// NewReplicationManager creates a replication manager
func NewReplicationManager(podInformer cache.SharedIndexInformer, kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, burstReplicas int, lookupCacheSize int, garbageCollectorEnabled bool) *ReplicationManager {
    eventBroadcaster := record.NewBroadcaster()
    eventBroadcaster.StartLogging(glog.Infof)
    eventBroadcaster.StartRecordingToSink(&v1core.EventSinkImpl{Interface: kubeClient.Core().Events("")})
    return newReplicationManager(
        eventBroadcaster.NewRecorder(v1.EventSource{Component: "replication-controller"}),
        podInformer, kubeClient, resyncPeriod, burstReplicas, lookupCacheSize, garbageCollectorEnabled)
}

pkg/controller/replication/replication_controller.go:132

// newReplicationManager configures a replication manager with the specified event recorder
func newReplicationManager(eventRecorder record.EventRecorder, podInformer cache.SharedIndexInformer, kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, burstReplicas int, lookupCacheSize int, garbageCollectorEnabled bool) *ReplicationManager {
    if kubeClient != nil && kubeClient.Core().RESTClient().GetRateLimiter() != nil {
        metrics.RegisterMetricAndTrackRateLimiterUsage("replication_controller", kubeClient.Core().RESTClient().GetRateLimiter())
    }

    rm := &ReplicationManager{
        kubeClient: kubeClient,
        podControl: controller.RealPodControl{
            KubeClient: kubeClient,
            Recorder:   eventRecorder,
        },
        burstReplicas: burstReplicas,
        expectations:  controller.NewUIDTrackingControllerExpectations(controller.NewControllerExpectations()),
        queue:         workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "replicationmanager"),
        garbageCollectorEnabled: garbageCollectorEnabled,
    }

    rm.rcStore.Indexer, rm.rcController = cache.NewIndexerInformer(
        &cache.ListWatch{
            ListFunc: func(options v1.ListOptions) (runtime.Object, error) {
                return rm.kubeClient.Core().ReplicationControllers(v1.NamespaceAll).List(options)
            },
            WatchFunc: func(options v1.ListOptions) (watch.Interface, error) {
                return rm.kubeClient.Core().ReplicationControllers(v1.NamespaceAll).Watch(options)
            },
        },
        &v1.ReplicationController{},
        // TODO: Can we have much longer period here?
        FullControllerResyncPeriod,
        cache.ResourceEventHandlerFuncs{
            AddFunc:    rm.enqueueController,
            UpdateFunc: rm.updateRC,
            // This will enter the sync loop and no-op, because the controller has been deleted from the store.
            // Note that deleting a controller immediately after scaling it to 0 will not work. The recommended
            // way of achieving this is by performing a `stop` operation on the controller.
            DeleteFunc: rm.enqueueController,
        },
        cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc},
    )

    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        AddFunc: rm.addPod,
        // This invokes the rc for every pod change, eg: host assignment. Though this might seem like overkill
        // the most frequent pod update is status, and the associated rc will only list from local storage, so
        // it should be ok.
        UpdateFunc: rm.updatePod,
        DeleteFunc: rm.deletePod,
    })
    rm.podStore.Indexer = podInformer.GetIndexer()
    rm.podController = podInformer.GetController()

    rm.syncHandler = rm.syncReplicationController
    rm.podStoreSynced = rm.podController.HasSynced
    rm.lookupCache = controller.NewMatchingCache(lookupCacheSize)
    return rm
}

The ReplicationManager is configured mainly in newReplicationManager, for example:

Configure queue through workqueue.NewNamedRateLimitingQueue.

Configure expectations through controller.NewUIDTrackingControllerExpectations.

Configure rcStore, podStore, rcController, podController.

Setting syncHandler to rm.syncReplicationController is important enough to call out separately. As we will see later, syncReplicationController does the core work: it is what keeps the replica count automatically maintained.
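
To see how queue and syncHandler fit together, here is a simplified sketch of the usual controller worker loop (an illustration of the pattern, not the verbatim ReplicationManager worker; the method name workerSketch is hypothetical): event handlers enqueue the RC's namespace/name key, and each worker goroutine repeatedly takes a key from the rate-limiting queue and hands it to syncHandler.

// Simplified sketch of the queue + syncHandler pattern.
// rm.queue is the workqueue.RateLimitingInterface and rm.syncHandler is the
// function configured above (rm.syncReplicationController).
func (rm *ReplicationManager) workerSketch() {
    for {
        key, quit := rm.queue.Get() // blocks until an RC key is enqueued
        if quit {
            return
        }
        err := rm.syncHandler(key.(string)) // key is "namespace/name", e.g. "default/frontend"
        if err != nil {
            // Requeue with backoff so a failing RC is retried, but not in a hot loop.
            rm.queue.AddRateLimited(key)
        } else {
            // Clear the rate limiter's failure history for this key.
            rm.queue.Forget(key)
        }
        rm.queue.Done(key)
    }
}

Because the queue is rate limited, an RC whose sync keeps failing is retried with increasing backoff instead of spinning.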

Execute ReplicationManager.Run to start work

Now that the ReplicationManager has been created, it has to be put to work. The Run method is the entry point: it begins watching and syncing.

pkg/controller/replication/replication_controller.go:217

// Run begins watching and syncing.
func (rm *ReplicationManager) Run(workers int, stopCh <-chan struct{}) {
    defer utilruntime.HandleCrash()
    glog.Infof("Starting RC Manager")
    go rm.rcController.Run(stopCh)
    go rm.podController.Run(stopCh)
    for i := 0; i < workers; i++ {
        go wait.Until(rm.worker, time.Second, stopCh)
    }

    if rm.internalPodInformer != nil {
        go rm.internalPodInformer.Run(stopCh)
    }

    <-stopCh
    glog.Infof("Shutting down RC Manager")
    rm.queue.ShutDown()
}

Run starts the rcController and podController watches and then launches the worker goroutines. Each worker pulls an RC key off the queue and calls syncHandler (rm.syncReplicationController), which compares the desired replica count with the Pods found in podStore and calls manageReplicas to converge them. Below is the pod-creation path of manageReplicas, which runs when the RC has fewer Pods than rc.Spec.Replicas:

    if diff > rm.burstReplicas {
        diff = rm.burstReplicas
    }
    // TODO: Track UIDs of creates just like deletes. The problem currently
    // is we'd need to wait on the result of a create to record the pod's
    // UID, which would require locking *across* the create, which will turn
    // into a performance bottleneck. We should generate a UID for the pod
    // beforehand and store it via ExpectCreations.
    errCh := make(chan error, diff)
    rm.expectations.ExpectCreations(rcKey, diff)
    var wg sync.WaitGroup
    wg.Add(diff)
    glog.V(2).Infof("Too few %q/%q replicas, need %d, creating %d", rc.Namespace, rc.Name, *(rc.Spec.Replicas), diff)
    for i := 0; i < diff; i++ {
        go func() {
            defer wg.Done()
            var err error
            if rm.garbageCollectorEnabled {
                var trueVar = true
                controllerRef := &metav1.OwnerReference{
                    APIVersion: getRCKind().GroupVersion().String(),
                    Kind:       getRCKind().Kind,
                    Name:       rc.Name,
                    UID:        rc.UID,
                    Controller: &trueVar,
                }
                err = rm.podControl.CreatePodsWithControllerRef(rc.Namespace, rc.Spec.Template, rc, controllerRef)
            } else {
                err = rm.podControl.CreatePods(rc.Namespace, rc.Spec.Template, rc)
            }
            if err != nil {
                // Decrement the expected number of creates because the informer won't observe this pod
                glog.V(2).Infof("Failed creation, decrementing expectations for controller %q/%q", rc.Namespace, rc.Name)
                rm.expectations.CreationObserved(rcKey)
                errCh <- err
            }
        }()
    }
    wg.Wait()
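
The creation path above leans heavily on the expectations cache: the controller records how many creations it has requested and holds off on another full sync until the watch has reported them, so a slow informer cannot trick it into over-creating Pods. The following is a self-contained toy sketch of that counting idea; it is not the Kubernetes UIDTrackingControllerExpectations implementation, just an illustration of the ExpectCreations / CreationObserved / SatisfiedExpectations bookkeeping.

package main

import (
    "fmt"
    "sync"
)

// toyExpectations is a deliberately simplified stand-in for the expectations
// cache: per-RC counters of how many pod creations the controller still
// expects to observe from the watch.
type toyExpectations struct {
    mu      sync.Mutex
    pending map[string]int // rcKey -> outstanding creations
}

func newToyExpectations() *toyExpectations {
    return &toyExpectations{pending: map[string]int{}}
}

// ExpectCreations records that `adds` pods were just requested for rcKey.
func (e *toyExpectations) ExpectCreations(rcKey string, adds int) {
    e.mu.Lock()
    defer e.mu.Unlock()
    e.pending[rcKey] += adds
}

// CreationObserved is called when a pod add event arrives (or a create request
// fails) and decrements the outstanding count.
func (e *toyExpectations) CreationObserved(rcKey string) {
    e.mu.Lock()
    defer e.mu.Unlock()
    if e.pending[rcKey] > 0 {
        e.pending[rcKey]--
    }
}

// SatisfiedExpectations reports whether the RC may run another full sync;
// while creations are still outstanding, the sync loop holds off so it does
// not over-create replicas.
func (e *toyExpectations) SatisfiedExpectations(rcKey string) bool {
    e.mu.Lock()
    defer e.mu.Unlock()
    return e.pending[rcKey] == 0
}

func main() {
    exp := newToyExpectations()
    exp.ExpectCreations("default/frontend", 3)
    fmt.Println(exp.SatisfiedExpectations("default/frontend")) // false: 3 creates still in flight
    for i := 0; i < 3; i++ {
        exp.CreationObserved("default/frontend")
    }
    fmt.Println(exp.SatisfiedExpectations("default/frontend")) // true: safe to sync again
}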
