2025-04-12 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains how to handle database schema changes correctly during a Kubernetes deployment. It is quite detailed and should serve as a useful reference; if the topic interests you, read on.
Architecture
Let's consider the following "two-tier" scenario.
An "application layer" with multiple stateless microservice replicas
A "database layer" with one database (in production there may be several replicas for redundancy)
There is an API for creating and reading users
The database layer is responsible for data persistence
Implementation: top-down
Services
The application and database tiers described above are each fronted by a Service, which provides:
A DNS name, so that requests can easily be sent to the tier
Transparent routing of those requests to the underlying Pods
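A minimal sketch of such a Service, assuming the application tier's Pods carry the label app: users-service (the names and ports here are illustrative, not from the original listings):

```yaml
# Hypothetical Service for the application tier; name, label, and ports
# are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: users-service        # gives the tier a stable DNS name in the cluster
spec:
  selector:
    app: users-service       # requests are routed to Pods carrying this label
  ports:
    - port: 80               # port the Service exposes
      targetPort: 8080       # port the container listens on
```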
Pods
Both the microservice and the database run in their own containers for resource isolation. The containers themselves are wrapped in Pods, the smallest deployable unit in Kubernetes.
Deployments
A Deployment configures the number of Pod replicas and manages their lifecycle; this is the piece we need to pay attention to when solving our problem.
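A sketch of a Deployment for the application tier with three replicas (the image name and labels are assumptions):

```yaml
# Hypothetical Deployment for the application tier.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-service
spec:
  replicas: 3                    # desired number of Pod replicas
  selector:
    matchLabels:
      app: users-service
  template:
    metadata:
      labels:
        app: users-service       # must match the selector above
    spec:
      containers:
        - name: users-service
          image: example/users-service:v1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
```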
Database Migrations
When you change the microservice code, you usually need to change the database schema along with it. A simple way to do this is with database migrations. The steps are as follows:
1. Version your schema
2. Write each schema change as a dedicated script (a "migration"), identified by a version number
3. Package all the migration scripts together with your code
4. At startup, check the current schema version and, if it is out of date, apply the migrations needed to bring the schema up to date.
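The steps above can be sketched as follows. This is a minimal, file-based illustration of the version check in step 4: in a real service the current version would be stored in the database itself, and each script's SQL would actually be executed.

```shell
#!/bin/sh
# Sketch of the startup schema-version check. Migration scripts are named
# by zero-padded version (001.sql, 002.sql, ...) and the current version
# is recorded in state/schema_version. All names here are illustrative.
mkdir -p migrations state
echo "CREATE TABLE users (id INT, name TEXT);"  > migrations/001.sql
echo "ALTER TABLE users ADD COLUMN email TEXT;" > migrations/002.sql
[ -f state/schema_version ] || echo 0 > state/schema_version

current=$(cat state/schema_version)
for f in migrations/*.sql; do
  v=$(basename "$f" .sql)
  if [ "$v" -gt "$current" ]; then
    echo "applying migration $v"    # a real service would run the SQL here
    echo "$v" > state/schema_version
  fi
done
echo "schema now at version $(cat state/schema_version)"
```

Because migrations are applied in version order and the recorded version is advanced after each one, restarting the service is safe: already-applied migrations are skipped.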
The advantages of this approach are:
Simplicity: no new moving parts are introduced at runtime
Easy deployment: you always have the correct schema version across development, test, and production.
The result so far
Kubernetes makes the setup above easy to implement: it takes care of a great deal of complexity for us and simplifies the whole process considerably.
In fact, the entire "application layer" described above boils down to just two YAML listings.
However, the above does not provide all the features you need, and you may want to ask the following questions:
What happens while a new version is being deployed? Will there be downtime? How does capacity change?
What if a mistake is made and the new version crashes on deployment?
What happens if the microservice crashes after a period of time?
What should I do if I need a rollback?
Now let's answer these questions:
Implementation: details determine success or failure!
Rolling Out
By default, Kubernetes deploys Pods with the "rolling update" strategy: it deletes one old Pod at a time (maxUnavailable: 1) while adding one new Pod (maxSurge: 1). With three replicas, this means that during a rollout you temporarily lose a third of your capacity to serve end users.
Let's fix this by changing maxUnavailable to 0. Now Kubernetes deploys a new Pod first, and deletes an old one only once the new one is up. The drawback is that the cluster needs spare capacity to run the extra replica temporarily, so if you are running at full capacity you may need to add a node.
The advantage is that, in theory, the rollout causes zero downtime for end users.
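With that change, the strategy section of the Deployment would look like this:

```yaml
# Rolling-update settings: never take an old Pod away before its
# replacement is up, at the cost of one extra Pod of spare capacity.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # allow one extra Pod during the rollout
```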
Readiness
Kubernetes adds a Pod to the Service's load balancing only once it considers the Pod "ready". By default, "ready" merely means that all of the Pod's containers have started. However, if our service needs to connect to the database and run schema migrations at startup, that can take a while, so clearly we need a better definition of "ready".
From a business perspective, "ready" should mean that our microservice can respond to end-user requests. We can tell Kubernetes exactly that by configuring an HTTP readinessProbe. In addition, we make sure the database connection is established and the migrations have run before the HTTP server starts.
Generally speaking, it does no harm to wait a little while after each Pod comes up.
Now, if the service crashes at startup or cannot connect to the database, the failed new Pod is never added to the application tier's load balancing, and the rollout stops right there. This means that even if something goes wrong at this stage, end users are unaffected.
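A sketch of such a probe for the container spec, assuming the service exposes a (hypothetical) /healthz endpoint on port 8080:

```yaml
# readinessProbe: the Pod only receives traffic once this check passes.
# The /healthz path and the timings are assumptions for illustration.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5   # give the Pod a moment to connect and migrate
  periodSeconds: 10        # then re-check every 10 seconds
```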
Liveness
Kubernetes also periodically checks whether each Pod is still alive, again with a default behavior. In our example, if the database client breaks in some way, we want Kubernetes to take the affected Pod out of service and replace it with a fresh one. We can achieve this by adding a health check that ideally reflects the overall health of the system, and exposing it to Kubernetes through a livenessProbe.
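A sketch of the corresponding probe; when it fails repeatedly, the kubelet restarts the container. The endpoint and timings are again assumptions:

```yaml
# livenessProbe: repeated failures cause the container to be restarted.
# The /healthz path and the numbers are illustrative.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
  failureThreshold: 3      # restart after 3 consecutive failures
```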
Rolling Back
If a rollout fails, you will want to roll back to the last working version. Good engineering practice makes this possible; in our scenario, the most important practice is keeping the microservice's database schema backward compatible.
For example, if we only ever add columns (and always select columns explicitly), the previous version of the microservice can still run against the newest schema, allowing a smooth rollback from v1.1.0 to v1.0.0 without any schema change.
Renaming a column, by contrast, is not backward compatible. In that case you may want "down migrations" that revert the schema to the previous version. Note, however, that rolling back and forth this way breaks "zero downtime": end users will hit a variety of transient errors, depending on the phase of the deployment and the replica they happen to reach. If such errors are unacceptable, you may need to first roll out a microservice version that supports both the old and the new schema (by having two clients and choosing the right one, or by trying both), and only then a second version with the migration that actually changes the schema.
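On the Kubernetes side, rolling a Deployment back is straightforward; the schema-compatibility caveats above still apply. The deployment name here is an assumption:

```yaml
# kubectl commands for rolling back (shown as comments, since they need
# a live cluster; the deployment name users-service is hypothetical):
#
#   kubectl rollout history deployment/users-service      # inspect revisions
#   kubectl rollout undo    deployment/users-service      # back to previous
#   kubectl rollout undo    deployment/users-service --to-revision=1
```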
Such schemes can fail in many ways, so test them carefully.
For large systems, you may need to study "blue-green deployments", but this can be difficult to implement.
Less talk, more action!
To implement these settings, you can refer to the following suggestions.
We recommend Weave Cloud, because it makes rolling backward and forward easy and gives you full observability of the system while you change it.
For example, Weave Cloud can visualize the setup: when a new version of the microservice is rolled out, you can verify that the new replicas are actually handling traffic.
You can also query any of the collected metrics and clearly see the impact of the new version (marked in the chart by a blue vertical line).
© 2024 shulou.com SLNews company. All rights reserved.