This article explains how to resolve the 502 errors that can occur when upgrading a Spring Boot application deployed as a K8s service. It walks through common release strategies and the K8s parameters that make a zero-downtime upgrade possible. I hope you find it useful.
As the development model of rapid, small-step iteration is adopted by more and more Internet companies, applications are changed and upgraded ever more frequently. To meet different upgrade needs and ensure the process goes smoothly, a series of deployment and release strategies have emerged.
Downtime release - completely stop the old application instances, then release the new version. This mode solves the problem of new and old versions being incompatible and unable to coexist; its drawback is that the service is completely unavailable for a period of time.
Blue-green release - deploy the same number of new and old application instances online at the same time. After the new version passes testing, traffic is switched to the new instances all at once. This mode avoids the total unavailability of a downtime release, but it consumes a large amount of additional resources.
Rolling release - gradually replace application instances in batches. This mode neither interrupts the service nor consumes many extra resources, but because new and old versions are online simultaneously, requests from the same client may alternate between versions and hit compatibility problems.
Canary release - gradually shift traffic from the old version to the new one. If no problems are found after a period of observation, traffic to the new version is increased further while traffic to the old version is reduced.
A/B testing - launch two or more versions at the same time, collect user feedback on them, and evaluate which version is best for formal adoption.
K8s application upgrade
In K8s, the pod is the basic unit of deployment and upgrade. Generally speaking, one pod represents one application instance, and pods are deployed and run in the form of a Deployment, StatefulSet, DaemonSet, Job, and so on. The following sections describe how pods are upgraded under each of these deployment forms.
Deployment
Deployment is the most common deployment form for pods. Here we take a Spring Boot based Java application as an example. The application is a simplified version abstracted from a real one, is quite representative, and has the following characteristics:
After the application starts, it takes a certain amount of time to load configuration, during which it cannot serve requests.
The fact that the application has started does not mean that it can serve requests normally.
The application may not exit automatically when it can no longer provide service.
During an upgrade, an instance that is about to go offline must not receive new requests and must have enough time to finish processing in-flight requests.
Parameter configuration
To upgrade an application with these characteristics with zero downtime and no interruption to production traffic, the relevant parameters in the Deployment must be configured carefully. The upgrade-related configuration is as follows (see spring-boot-probes-v1.yaml for the complete file).
kind: Deployment
...
spec:
  replicas: 8
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
  minReadySeconds: 120
  ...
  template:
    ...
    spec:
      containers:
      - name: spring-boot-probes
        image: registry.cn-hangzhou.aliyuncs.com/log-service/spring-boot-probes:1.0.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 1
        livenessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 40
          periodSeconds: 20
          successThreshold: 1
          failureThreshold: 3
      terminationGracePeriodSeconds: 60

Configure strategy
The pod replacement strategy can be configured through strategy. The main parameters are as follows.
.spec.strategy.type - specifies the type of strategy used to replace pods. It can be Recreate or RollingUpdate; the default is RollingUpdate.
Recreate - K8s deletes all existing pods before creating new ones. This is suitable for scenarios where the new and old versions are incompatible and cannot coexist. However, because it makes the service completely unavailable for a period of time, it should be used with caution outside such scenarios.
RollingUpdate - K8s replaces pods gradually in batches and can be used to upgrade a service without interruption.
.spec.strategy.rollingUpdate.maxSurge - specifies the maximum number of extra pods that can be created during a rolling update, as either a number or a percentage. The larger the value, the faster the upgrade, but the more system resources are consumed.
.spec.strategy.rollingUpdate.maxUnavailable - specifies the maximum number of pods that may be unavailable during a rolling update, as either a number or a percentage. The larger the value, the faster the upgrade, but the less stable the service.
By adjusting maxSurge and maxUnavailable, you can meet the upgrade requirements in different scenarios.
If you want to upgrade as quickly as possible while keeping the system available and stable, set maxUnavailable to 0 and give maxSurge a larger value.
If system resources are tight and pod load is low, you can speed up the upgrade by setting maxSurge to 0 and giving maxUnavailable a larger value. Note that if maxSurge is 0 and maxUnavailable equals DESIRED, the entire service may become unavailable, and the RollingUpdate degenerates into a downtime release.
The sample takes a compromise, setting maxSurge to 3 and maxUnavailable to 2, to balance stability, resource consumption, and upgrade speed. The two extremes above are sketched below.
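As an illustration (a sketch, not part of the sample configuration; the concrete values are assumptions), the two extremes might look like this:

# fastest upgrade with no capacity loss: create replacements before removing old pods
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 4
    maxUnavailable: 0

# resource-constrained variant: create no extra pods, tolerate reduced capacity
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0
    maxUnavailable: 2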
Configure probes
K8s provides the following two types of probes:
ReadinessProbe - by default, once all the containers in a pod have started, K8s considers the pod ready and sends traffic to it. However, some applications still need to finish loading data or configuration files after starting before they can serve requests, so judging readiness merely by whether the containers have started is not rigorous. By configuring a readiness probe for a container, K8s can judge far more accurately whether the container is ready, and thus build a more robust application. K8s sends Service traffic to a pod only when all of its containers pass the readiness probe; once the readiness probe fails, K8s stops sending traffic to that pod.
LivenessProbe - by default, K8s considers a container in the running state to be available. This judgment can be wrong when the application cannot exit on its own after something goes wrong (for example, a serious deadlock). By configuring a liveness probe for a container, K8s can judge more accurately whether the container is actually running normally. If a container fails its liveness probe, the kubelet stops it and decides what to do next according to the restart policy.
Probe configuration is very flexible: you can specify the probe frequency, success threshold, failure threshold, and so on. For the meaning and configuration of each parameter, refer to the document Configure Liveness and Readiness Probes.
The sample configures a readiness probe and a liveness probe for the target container (an application-side sketch follows the list below):
initialDelaySeconds of the readiness probe is set to 30 because the application takes about 30 seconds on average to complete initialization.
When configuring the liveness probe, make sure the container has enough time to reach the ready state. If initialDelaySeconds, periodSeconds, and failureThreshold are set too small, the container may be restarted before it is ready and thus never become ready. The sample configuration ensures that a container will not be restarted as long as it becomes ready within 80 seconds of starting (initialDelaySeconds 40 + 2 x periodSeconds 20), a comfortable buffer over the 30-second average initialization time.
periodSeconds of the readiness probe is set to 10 and failureThreshold to 1, so when a container misbehaves, traffic stops being sent to it after roughly 10 seconds.
periodSeconds of the liveness probe is set to 20 and failureThreshold to 3, so when a container misbehaves, it is restarted after roughly 60 seconds.
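The probes above target the Spring Boot Actuator health endpoint. A minimal application-side sketch, assuming Spring Boot 2.x Actuator defaults (these property names are an assumption; the original only shows the K8s side):

# application.properties -- assumes Spring Boot 2.x with spring-boot-starter-actuator on the classpath
server.port=8080
# expose the health endpoint at /actuator/health for the probes to call
management.endpoints.web.exposure.include=health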
Configure minReadySeconds
By default, as soon as a newly created pod becomes ready, K8s considers it available and removes an old pod. But some problems only surface when a new pod handles real user requests, so a more robust approach is to observe the new pod for a while before deleting the old one.
The parameter minReadySeconds controls how long a pod is observed after it becomes ready. If the containers in the pod run normally during this period, K8s considers the new pod available and deletes an old pod. The value needs to be weighed carefully: set too small, the observation may be insufficient; set too large, the upgrade is slowed down. The sample sets minReadySeconds to 120 seconds, which guarantees that a ready pod goes through at least one complete liveness probe cycle (initialDelaySeconds 40 + failureThreshold 3 x periodSeconds 20 = 100 seconds < 120 seconds).
Configure terminationGracePeriodSeconds
When K8s decides to delete a pod, it sends a TERM signal to the containers in the pod and removes the pod from the Service's endpoint list. If a container has not terminated within the grace period (30 seconds by default), K8s sends a SIGKILL signal to force the process to exit. The detailed pod termination flow is described in the document Termination of Pods.
Since the application takes up to 40 seconds to process a request, the sample sets a graceful shutdown period of 60 seconds so that requests that have already reached the server can finish before shutdown. Adjust terminationGracePeriodSeconds to suit your own application.
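One complementary technique worth noting here (it is not part of the sample configuration, so treat it as an assumption): endpoint removal and the TERM signal are delivered asynchronously, so a brief preStop sleep gives load balancers and ingresses time to stop routing to the pod, which helps avoid 502s during the switchover:

lifecycle:
  preStop:
    exec:
      # sketch only: keep serving briefly while endpoints converge
      # (assumes a shell is available in the container image)
      command: ["sh", "-c", "sleep 10"]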
Observe upgrade behavior
The configuration above ensures a smooth upgrade of the target application. We can trigger a pod upgrade by changing any field of the Deployment's PodTemplateSpec and observe the upgrade behavior by running kubectl get rs -w (example commands follow the list below). The observed changes in the replica counts of the old and new pods are as follows:
Create maxSurge new pods. The total number of pods now reaches the upper limit, DESIRED + maxSurge.
Before the new pods become ready or available, the deletion process starts immediately for maxUnavailable old pods. The number of available pods is DESIRED - maxUnavailable.
When an old pod is completely deleted, a new pod is created immediately.
When a new pod passes the readiness probe and becomes ready, K8s starts sending traffic to it; however, until the required observation period has elapsed, it is not yet considered available.
A ready pod that runs normally through the observation period is considered available, and the deletion process can then start for another old pod.
Steps 3, 4, and 5 repeat until all old pods are deleted and the number of available new pods reaches the target replica count.
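For example (a sketch; the Deployment and container names follow the sample configuration, but the new image tag 1.0.1 is an assumption):

# trigger the rolling update by changing the container image
kubectl set image deployment/spring-boot-probes spring-boot-probes=registry.cn-hangzhou.aliyuncs.com/log-service/spring-boot-probes:1.0.1
# watch the old and new ReplicaSets adjust their replica counts
kubectl get rs -w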
Rolling back a failed upgrade
An upgrade does not always go smoothly. During or after an upgrade, you may find that the new version does not behave as expected and needs to be rolled back to a stable version. K8s records every change to the PodTemplateSpec (for example, updating the template labels or the container image), so if the new version has problems, you can easily roll back to a stable version by its revision number. The detailed steps for rolling back a Deployment can be found in the document Rolling Back a Deployment.
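For instance (a sketch assuming the sample's Deployment name; revision numbers depend on your own history):

# list the recorded revisions
kubectl rollout history deployment/spring-boot-probes
# roll back to the previous revision, or to a specific one
kubectl rollout undo deployment/spring-boot-probes
kubectl rollout undo deployment/spring-boot-probes --to-revision=1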
StatefulSet
StatefulSet is the common deployment form for stateful pods. K8s likewise provides many parameters for flexibly controlling their upgrade behavior. Most of them are the same as those for upgrading pods in a Deployment, so this article focuses on the differences.
Policy type
In K8s 1.7 and later versions, StatefulSet supports two strategy types: OnDelete and RollingUpdate.
OnDelete - when the PodTemplateSpec in a StatefulSet is updated, a new version of a pod is created only after the old pod is manually deleted. This is the default update strategy, partly for compatibility with K8s 1.6 and earlier versions, and partly to support scenarios where the new and old pod versions are incompatible and cannot coexist during the upgrade.
RollingUpdate - K8s gradually replaces the pods managed by the StatefulSet in batches. It differs from a Deployment's RollingUpdate in that the replacement is ordered: a StatefulSet with N pods assigns each pod an ordinal, monotonically increasing from 0, at deployment time, and during a rolling update the pods are replaced in reverse ordinal order.
Partition
You can upgrade only some of the pods via the parameter .spec.updateStrategy.rollingUpdate.partition. When partition is configured, only pods with an ordinal greater than or equal to partition are upgraded; the remaining pods stay unchanged.
Another use of partition is to implement a canary release by gradually reducing its value. For the specific steps, refer to the document Rolling Out a Canary. A minimal sketch of the configuration follows.
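For example (a sketch; the partition value is an assumption), with the following configuration only pods whose ordinal is 2 or higher are replaced during a rolling update:

updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    partition: 2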
DaemonSet
DaemonSet ensures that a copy of a pod runs on all (or some) K8s worker nodes, and is often used to run monitoring agents or log collectors. For pods in a DaemonSet, the parameters controlling upgrade behavior are almost the same as for a Deployment; the only slight difference is in the strategy types. DaemonSet supports both the OnDelete and the RollingUpdate strategy type.
OnDelete - when the PodTemplateSpec in a DaemonSet is updated, a new version of a pod is created only after the old pod is manually deleted. This is the default update strategy, partly for compatibility with K8s 1.5 and earlier versions, and partly to support scenarios where the new and old pod versions are incompatible and cannot coexist during the upgrade.
RollingUpdate - its meaning and configurable parameters are the same as in a Deployment's RollingUpdate.
The specific steps for a rolling update of a DaemonSet can be found in the document Perform a Rolling Update on a DaemonSet. A minimal sketch of the strategy configuration follows.
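For example (a sketch; the maxUnavailable value is an assumption), a DaemonSet that replaces pods while keeping at most one unavailable at a time:

updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1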
Job
Deployment, StatefulSet, and DaemonSet are generally used to deploy and run long-lived processes, whereas the pods in a Job exit once a specific task completes, so there is no concept of a rolling update for them. When you change the PodTemplateSpec in a Job, you need to manually delete the old Job and its pods and rerun the job with the new configuration, for example:
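A sketch of the manual steps (the Job and file names are hypothetical):

# delete the old Job and its pods, then recreate it from the updated manifest
kubectl delete job data-migration
kubectl apply -f data-migration-v2.yaml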
Summary
The features K8s provides are enough to upgrade most applications with zero downtime and no interruption to production, but some problems remain unsolved, including the following:
K8s natively supports only two upgrade strategies: downtime release and rolling release. If an application needs blue-green release, canary release, A/B testing, and so on, you must do secondary development or use third-party tools.
Although K8s provides a rollback feature, the rollback must be performed manually; it cannot be triggered automatically based on conditions.
Some applications also need to scale out or in step by step in batches, and K8s does not yet provide such a feature.
The configuration of the actual instance:

livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /user/service/test
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 40
  periodSeconds: 20
  successThreshold: 1
  timeoutSeconds: 1
name: dataline-dev
ports:
- containerPort: 8080
  protocol: TCP
readinessProbe:
  failureThreshold: 1
  httpGet:
    path: /user/service/test
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
After testing, no 502 errors occurred during access when updating the Spring Boot application with this configuration.