How to scale down stateful applications in Kubernetes

This article explains how to scale down stateful applications in Kubernetes: why data cannot safely be redistributed while a Pod is shutting down, and how a dedicated controller can handle the redistribution instead.
Each Pod created through a StatefulSet gets its own PersistentVolumeClaim (PVC) and PersistentVolume (PV). When the StatefulSet is scaled down by one replica, one of its Pods is terminated, but the PersistentVolumeClaim associated with it and the PersistentVolume bound to that claim are left intact. When the StatefulSet is later scaled back up, the new Pod is reattached to the same claim and volume.
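To make this concrete, here is a minimal StatefulSet sketch showing the volumeClaimTemplates field that gives every replica its own PVC; the name datastore, the image, and the storage size are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: datastore
spec:
  serviceName: datastore
  replicas: 3
  selector:
    matchLabels:
      app: datastore
  template:
    metadata:
      labels:
        app: datastore
    spec:
      containers:
      - name: datastore
        image: my-datastore:1.0       # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/data
  # Each replica gets its own claim (data-datastore-0, data-datastore-1, ...).
  # Scaling down deletes a Pod but leaves its PVC and PV behind.
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```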
Scaling a StatefulSet
Now imagine using a StatefulSet to deploy a stateful application whose data is partitioned across its Pods, so that each instance stores and serves only part of the data. When you scale such an application down, one of the instances is terminated, and its data should be redistributed to the remaining Pods. If you do not redistribute the data, it remains inaccessible until you scale back up again.
Redistributing data on scale-down
Redistributing data during graceful shutdown
You might think: since Kubernetes supports graceful Pod shutdown, couldn't the Pod simply redistribute its data to the other instances as part of the shutdown procedure? In fact, it can't, for two reasons:
First, a Pod (or rather, its containers) may receive a termination signal for reasons other than a scale-down. The application running in the container has no way of knowing why it is being terminated, so it cannot tell whether it should drain its data.
Second, even if the application could tell that it is being terminated because of a scale-down, it would need a guarantee that the shutdown procedure can run to completion, even if that takes hours or days. Kubernetes provides no such guarantee. If the application process dies during shutdown, it is not restarted, so it never gets a chance to finish redistributing its data.
Trusting a Pod to redistribute (or otherwise process) all of its data during graceful shutdown is therefore a bad idea and makes the system very fragile.
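To illustrate the second point: any shutdown work, such as a preStop hook, is bounded by the Pod's termination grace period, after which the kubelet force-kills the container. The following is a minimal sketch; the drain script and the timeout value are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: datastore-0
spec:
  # After 300 seconds, the container is killed no matter how far
  # the preStop hook or the application's own shutdown code got.
  terminationGracePeriodSeconds: 300
  containers:
  - name: datastore
    image: my-datastore:1.0           # illustrative image
    lifecycle:
      preStop:
        exec:
          # Hypothetical drain script; if it needs more time than the
          # grace period allows, it is cut off mid-redistribution.
          command: ["/bin/sh", "-c", "/scripts/drain-data.sh"]
```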
Use tear-down containers?
If you are not new to Kubernetes, you probably know what init containers are. They run before a Pod's main containers, and all of them must run to completion before the main containers are allowed to start.
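For reference, a minimal init container sketch (all names and commands here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  initContainers:
  - name: prepare                 # must complete before "app" starts
    image: busybox:1.36
    command: ["sh", "-c", "echo preparing volume...; sleep 5"]
  containers:
  - name: app
    image: my-app:1.0             # illustrative image
```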
What if we had tear-down containers, similar to init containers, but run after the Pod's main containers terminate? Could they perform the data redistribution in our stateful Pods?
Suppose the tear-down container could determine whether the Pod is being terminated because of a scale-down. And suppose Kubernetes (or, more specifically, the Kubelet) guaranteed that the tear-down container runs to successful completion (by restarting it every time it returns a non-zero exit code). If both assumptions held, we would have a mechanism that ensures a stateful Pod can always redistribute its data on scale-down.
But?
Sadly, a tear-down container mechanism like the one above would only handle the cases where a transient error occurs in the tear-down container itself and restarting it one or more times lets it complete successfully. But what about those unfortunate moments when the cluster node hosting the Pod dies in the middle of the tear-down process? Obviously, the process can never complete, so the data remains inaccessible.
It should now be clear that we should not perform data redistribution while the Pod is shutting down. Instead, we should create a new Pod, possibly scheduled onto a completely different cluster node, to perform the redistribution.
This brings us to the following solution:
When a StatefulSet is scaled down, a new Pod must be created and bound to the now-orphaned PersistentVolumeClaim. We call it the drain Pod, because its job is to drain the data elsewhere (or process it in some other way). The Pod has access to the orphaned data and can do with it whatever it needs to. Because the drain procedure differs widely from application to application, the new Pod should be fully configurable: users should be able to run any container they want inside the drain Pod.
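As a sketch of the idea, a drain Pod is just an ordinary Pod whose volume points at the claim the deleted replica left behind. The claim name data-datastore-2, the image, and the mount path below are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: datastore-2-drain
spec:
  restartPolicy: OnFailure          # retried until the drain succeeds
  containers:
  - name: drain
    image: my-drain-container:1.0   # illustrative image that redistributes the data
    volumeMounts:
    - name: data
      mountPath: /var/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-datastore-2   # the PVC orphaned by the scale-down
```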
StatefulSet Drain Controller
Since the StatefulSet controller does not currently provide this functionality, we can implement an additional controller whose sole purpose is to handle StatefulSet scale-downs. I recently implemented a proof of concept of such a controller. You can find the source code on GitHub:
https://github.com/luksa/statefulset-scaledown-controller
Let me explain how it works.
After deploying the controller to your Kubernetes cluster, you can add a drain Pod template to any StatefulSet simply by adding an annotation to the StatefulSet manifest. Here is an example:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: datastore
  annotations:
    statefulsets.kubernetes.io/drainer-pod-template: |
      {
        "metadata": {
          "labels": {
            "app": "datastore-drainer"
          }
        },
        ...
      }
```
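The annotation value is a JSON Pod template. The example above is cut off after the label, so the following continuation is only a plausible sketch; the container name, image, and mount path are illustrative assumptions rather than the article's original values:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: datastore
  annotations:
    statefulsets.kubernetes.io/drainer-pod-template: |
      {
        "metadata": {
          "labels": {
            "app": "datastore-drainer"
          }
        },
        "spec": {
          "containers": [
            {
              "name": "drain",
              "image": "my-drain-container",
              "volumeMounts": [
                {"name": "data", "mountPath": "/var/data"}
              ]
            }
          ]
        }
      }
spec:
  # ... the regular StatefulSet spec (selector, template,
  # volumeClaimTemplates) follows here.
```

On scale-down, the controller can then create a drain Pod from this template and bind it to the PersistentVolumeClaim that the deleted replica left behind, along the lines of the drain Pod sketch shown earlier.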