Kubernetes Pitfall: a Case Analysis

This post records how we tracked down intermittent site-wide 503 errors in a production Kubernetes cluster; the culprit turned out to be a misused readiness probe.
1. Things go off the rails
Over the past month or two, our production K8s cluster intermittently returned 503 Service Temporarily Unavailable for short stretches. The failure could not be reproduced on demand, which was frustrating and stressful.
HTTP 5xx status codes indicate server-side errors:

500 Internal Server Error: the server hit an unexpected condition that prevented it from fulfilling the request. This usually affects a single request while the rest of the site keeps serving.
502 Bad Gateway: a server somewhere along the connection chain is offline or unavailable.
503 Service Unavailable: the web server actually hosting your application is in trouble.

2. Troubleshooting record

The outage recurred roughly every 2-3 days and lasted 2-3 minutes each time, during which the entire site returned 503. Since it could not be reproduced on demand, we searched the EFK logs for the affected time windows and found impala connection failures: our big data operations colleagues determined that the impala requests issued by the webapp failed because the webapp host's clock was not aligned with the impala cluster's, so the impala ODBC driver could not connect to the impala cluster.
Logging into the K8s nodes confirmed it: the clock synchronization service was not running on some nodes, which would drift as much as two minutes behind Beijing time from time to time. Clock skew of that size can indeed explain the impala connection authentication failures.
We synchronized the clocks of all K8s nodes on August 26th, and for nearly a week there was no problem. Then on September 3rd another short burst of 503s hit, and the EFK logs again showed an impala connection problem. This time the big data colleagues could not pin down a specific cause and tentatively filed it as occasional jitter.

3. Thinking and deduction
The only evidence at the fault site was an impala connection problem, and I could not see why an impala connection failure should take the whole webapp offline.

Our webapp carries both toB and toC business. The site depends strongly on mongodb and only weakly on impala: if impala cannot be reached, at worst the impala-backed queries fail, while the SSO and order-related write operations should still be available.

Then I recalled the K8s probes we had reviewed a few days earlier. Too bad: our readiness probe apparently checks impala.
// Readiness checks exposed by ASP.NET Core: impala && mongodb
services.AddHealthChecks()
    .AddCheck<ImpalaHealthCheck>(nameof(ImpalaHealthCheck), tags: new[] { "readyz" })
    .AddCheck<MongoHealthCheck>(nameof(MongoHealthCheck), tags: new[] { "readyz" });

app.UseHealthChecks("/readyz", new HealthCheckOptions
{
    // only run the checks tagged "readyz" on this endpoint
    Predicate = check => check.Tags.Contains("readyz")
});
The strong suspicion: when the readiness probe fails the impala check three times in a row, the Pod is marked Unready and removed from the webapp Service's load balancer, so it no longer receives traffic. With every Pod unready, nginx has no backend to forward to and the whole site returns 503.
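For context, this is roughly how such a probe is wired up on the Kubernetes side. A minimal sketch, assuming an HTTP readiness probe against the /readyz endpoint above; the container name, image, port, and timing values are illustrative, not our actual manifest. After failureThreshold consecutive failures, kubelet marks the container Unready and the endpoints controller drops the Pod from the Service:

# Hypothetical Deployment excerpt; names, port, and timings are assumptions.
containers:
  - name: webapp
    image: webapp:latest
    readinessProbe:
      httpGet:
        path: /readyz        # the ASP.NET Core endpoint shown above
        port: 80
      periodSeconds: 10      # probe every 10 seconds
      timeoutSeconds: 5
      failureThreshold: 3    # 3 consecutive failures => Pod marked Unready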
We quickly set up a beta environment, cut impala off, and verified the conjecture.
4. Problem review
To be honest, this fix was not reached by rigorous inference; it was deduced from experience, and the reasoning is not airtight. Consider it a pit we stepped into ahead of everyone else.
A Docker health check can only detect problems; Kubernetes liveness and readiness probes can both detect and act on them (restarting the container, or taking the Pod out of service).
The problem with our K8s readiness probe strategy: a failure of the weak dependency impala took the entire webapp service offline. Only a failed strong dependency should mark the container not-ready; that is what the readiness probe is for.

It is strongly recommended to set probes and probe parameters that match your webapp's actual dependency structure, to avoid frequent restarts or service outages caused by ill-suited health check failures. A sketch of our fix follows.
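One way to repair the registration, as a minimal sketch reusing the ImpalaHealthCheck and MongoHealthCheck classes from above (the separate /healthz endpoint and tag names are our illustration, not the original code): only the strong dependency mongodb gates /readyz, while impala is still checked but its failures are downgraded to Degraded and surfaced on a diagnostics endpoint instead:

// Only mongodb, the strong dependency, can fail readiness.
services.AddHealthChecks()
    .AddCheck<MongoHealthCheck>(nameof(MongoHealthCheck), tags: new[] { "readyz" })
    // impala, the weak dependency, is still checked, but a failure is
    // reported as Degraded (assuming the check honors the registered
    // failureStatus) and is not wired into the readiness endpoint.
    .AddCheck<ImpalaHealthCheck>(nameof(ImpalaHealthCheck),
        failureStatus: HealthStatus.Degraded,
        tags: new[] { "healthz" });

// /readyz drives the Kubernetes readiness probe: mongodb only.
app.UseHealthChecks("/readyz", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("readyz")
});

// /healthz is a separate diagnostics endpoint that includes impala.
app.UseHealthChecks("/healthz", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("healthz")
});

With this split, an impala outage degrades the reporting features but never takes Pods out of rotation.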