2025-01-17 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/01 Report
Today I will talk about how to understand the high availability and elastic scaling of containers in Kubernetes. Many readers may not be familiar with the topic, so the key points are summarized below; I hope you get something out of this article.
I will show you how to run our Docker image in Kubernetes.
What are the benefits of running our application on Kubernetes? Based on Jerry's study of Kubernetes over the past ten-odd days, my understanding is this: Kubernetes helps application developers ensure that their applications are deployed and run in a highly available, scalable and fault-tolerant manner, without the developers having to spend a lot of time and effort learning the underlying details of Kubernetes.
In other words, building the Kubernetes environment and configuring the system can be left entirely to the Kubernetes administrators. Application developers, as consumers of Kubernetes, only need to remember a few simple kubectl commands to deploy their applications to Kubernetes, and can enjoy the non-functional improvements mentioned above, provided by Kubernetes as a platform, at almost no additional cost.
Let's continue with the UI5 application we used in the previous article.
Jerry is poor and has no money to buy additional servers to build his own Kubernetes cluster. Fortunately, Kubernetes has not abandoned us poor programmers: we can choose a Kubernetes Clusters as a Service solution instead.
I created a Google Cloud Platform-based Kubernetes cluster on Gardener inside SAP, named jerry1204:
You can see that the Kubernetes version on the newly created cluster is fairly recent: 1.12.3, only one minor version behind 1.13, which had just been released on December 3rd.
Click the Dashboard hyperlink in the Access tab above to operate the Kubernetes cluster. Of course, veterans, such as the participants of the Kubernetes training course at SAP Shanghai Research Institute, prefer to use the command line.
Because it is a free cluster, only one worker node is assigned:
The Kubernetes cluster worker node information shown in the console is the same as that returned on the command line by kubectl get node -o wide:
Two important concepts in Kubernetes: pod and deployment
Let's use the command line to consume the image i042416/ui5-nginx that we uploaded to Docker Hub in the previous article:
kubectl run jerry-ui5 --image=i042416/ui5-nginx
There's a lot going on behind this command line.
First of all, what is the mechanism of Kubernetes to ensure the high availability, scalability and fault tolerance of applications running on Kubernetes?
The answer is the pod. Zoom into the Kubernetes architecture diagram above: a node contains multiple pods, and each pod can contain multiple containers. As the English word implies (a pod is a detachable, self-contained capsule of a larger craft), the pod is the carrier in which an application runs and the basic operating unit of Kubernetes. The whole Kubernetes system is designed around the pod: how pods are deployed and run, how to keep the number of running pods equal to a desired count, how to expose the services provided by the applications inside pods to external access, and so on.
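The article creates everything imperatively with kubectl, but the same pod can be described declaratively. Below is a minimal sketch of what a pod manifest for this scenario could look like; the metadata name and the containerPort are illustrative assumptions based on the image used in the article, not taken from an actual manifest:

```yaml
# Minimal pod sketch: one pod wrapping one nginx-based UI5 container.
apiVersion: v1
kind: Pod
metadata:
  name: jerry-ui5          # assumed name, matching the kubectl run argument
spec:
  containers:
  - name: ui5-nginx
    image: i042416/ui5-nginx
    ports:
    - containerPort: 80    # nginx serves the UI5 app on port 80
```

In practice you would rarely create a bare pod like this; as the article explains next, a deployment manages pods for you.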
Going back to our command line: executing another command, kubectl get pod, shows that a pod was created 40 seconds ago (Age = 40s).
Use the describe command to view the details of this pod:
kubectl describe pod jerry-ui5-6ffd46bb95-6bgpg
The docker:// prefix of the Container ID below shows that this is a Docker container. This does not mean that Kubernetes supports only Docker as its container technology; it also supports, for example, rkt from CoreOS.
The Events area of the describe command output reveals the lifecycle state transitions of a pod from creation to serving traffic:
Scheduled -> Pulling -> Pulled -> Created -> Started
The From field of each event also reveals a lot of information.
The Scheduled event comes from default-scheduler. The Scheduler is one of the core Kubernetes components; it is responsible for assigning pods to suitable nodes. What counts as a suitable node depends on the scheduling algorithm implemented by the Kubernetes Scheduler, which Jerry has not studied in depth. Treated as a black box, its input is a pod plus a list of candidate nodes, and its output is a binding of the pod to a matching node. In the event message "Successfully assigned XXX to shoot--jerrytest-jerry1204-worker-yamer-z1-XXX", the string beginning with shoot--jerrytest is the name of the node to which the pod was assigned.
The From field of the subsequent events is kubelet, shoot--jerrytest-jerry1204-worker-yamer-z1-XXX. The kubelet is an important agent running on every Kubernetes node; it maintains and manages all containers running on that node and ensures that the actual state of the pods matches the user's desired state.
You can also see the running pod in the Kubernetes web console:
In addition to the pod, the second important Kubernetes concept is the deployment.
Executing another command, kubectl get deploy, shows that the kubectl run command generated not only a pod but also a deployment. Looking at the numbers under the Desired, Current, Up-to-Date and Available columns of this command's output, and recalling the guiding idea mentioned earlier, that almost everything in Kubernetes is designed around pods, it is not hard to guess that this generated deployment also exists to serve pods.
In fact, beginners can understand the main responsibility of a deployment as ensuring the desired number and the health of its pods.
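To make "ensuring the quantity of pods" concrete: a deployment carries a replicas field, and its controller continuously reconciles the actual pod count toward that number. A minimal sketch of the deployment that kubectl run effectively creates could look like the following; the label key run and the exact names are assumptions for illustration (kubectl run of that era labeled pods this way, but the article does not show the generated manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jerry-ui5
spec:
  replicas: 1                # desired pod count; the controller keeps enforcing it
  selector:
    matchLabels:
      run: jerry-ui5         # assumed label linking deployment to its pods
  template:                  # pod template stamped out for every replica
    metadata:
      labels:
        run: jerry-ui5
    spec:
      containers:
      - name: jerry-ui5
        image: i042416/ui5-nginx
```

Changing replicas here (or via kubectl scale, as shown later in the article) is all it takes to scale horizontally.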
In the previous article we already learned how to start a Docker image using docker run. In the world of Kubernetes, however, we do not deal directly with the Docker containers running inside pods; instead, we expose the services provided by the applications in the pods to outside consumers through a Kubernetes service.
So far, no application-related service exists on the Kubernetes cluster: the command kubectl get svc returns only the one standard Kubernetes service.
So we use the command kubectl expose to create a service based on the deployment just generated using kubectl run:
kubectl expose deployment jerry-ui5 --type=LoadBalancer --port=80 --target-port=80
Once the expose command has been executed, the get svc command returns a service with the same name as the deployment, exposed to the outside world at IP address 35.205.230.209:
So, using the URL 35.205.230.209/webapp, we can access the SAP UI5 application running in a Kubernetes pod:
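The kubectl expose command above has a declarative equivalent. A sketch of the service it generates might look like the following; as before, the run label selector is an assumption about how kubectl labeled the pods, not something shown in the article:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jerry-ui5          # same name as the deployment, as the article observes
spec:
  type: LoadBalancer       # asks the cloud provider for an external IP
  selector:
    run: jerry-ui5         # assumed label; routes traffic to matching pods
  ports:
  - port: 80               # port exposed by the service
    targetPort: 80         # port the container actually listens on
```

Because the service selects pods by label rather than by name, it keeps working unchanged when pods are scaled, replaced, or upgraded.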
Hands-on experience: how Kubernetes ensures high availability and scalability of applications
So far, our SAP UI5 application runs in only a single pod on a single worker node of the Kubernetes cluster, and we have not yet noticed any difference between its behavior here and when it ran in a plain Docker container.
Let's now try the horizontal scaling feature of a Kubernetes deployment. Select the deployment in the Kubernetes console and execute the Scale command from its menu.
Set the number of pods to 3, as in the screenshot below. This tells the deployment that, after the command is executed, it must strive to keep the number of pods it controls and monitors equal to 3 at all times.
Of course, every graphical menu entry in this console has a corresponding command-line equivalent, which we will use later.
After clicking OK, execute kubectl get pod again. You can observe that two new pods were generated by the horizontal scaling, so this time the get pod command returns a total of 3 pods, of which the last two, judging by their Age, were created just after the scaling was executed.
Use the kubectl describe command to view the details of the deployment; the Replicas field shows the runtime state of the pods it controls:
3 desired | 3 updated | 3 total | 3 available | 0 unavailable
Now we intentionally delete one pod with kubectl delete and check again: a replacement pod is generated automatically almost instantly, and the total number of running pods is still 3.
Experiencing the Kubernetes rolling update feature
Rolling update is a major Kubernetes feature. As the name implies, it is a smooth upgrade method: by replacing pods one by one, the service is upgraded without interruption. The figure below shows the output of the describe command for the deployment; the StrategyType field indicates that the default upgrade strategy of a deployment created by kubectl run is a rolling update.
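The rolling-update behavior the article describes is tunable on the deployment. A sketch of the relevant strategy stanza is shown below; the 25% values are Kubernetes' documented defaults, not settings taken from the article's cluster:

```yaml
spec:
  strategy:
    type: RollingUpdate       # the default StrategyType the describe output shows
    rollingUpdate:
      maxUnavailable: 25%     # at most a quarter of desired pods may be down at once
      maxSurge: 25%           # at most a quarter extra pods may exist during the update
```

With these bounds, the controller terminates old-version pods and starts new-version pods in overlapping waves, which is exactly the "X old replicas are pending termination / X of Y updated replicas are available" progress the rollout status command reports later in the article.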
I designed a simple upgrade scenario: version 1.0 of my SAP UI5 application runs simultaneously in 10 pods on a Kubernetes node, and we upgrade it to version 2.0 via rolling update without interrupting the application.
Since the image I uploaded to Docker Hub in the previous article carries the default tag latest, I first need to create two images on Docker Hub tagged v1.0 and v2.0, respectively.
The following command line pushes the image tagged v1.0 to Docker Hub:
Then modify the title of the UI5 application's details page, appending "(v2.0)" to the text to mark this version as 2.0:
Push this 2.0 image to Docker Hub as well:
The two image versions are now ready on Docker Hub:
First, use the v1.0 image to launch a single pod running the SAP UI5 application:
kubectl run jerry-ui5 --image=i042416/ui5-nginx:v1.0
Use the scale command to scale the single pod horizontally to 10 pods:
kubectl scale --replicas=10 deployment/jerry-ui5
The figure above shows that kubectl get pod now returns 10 running pods.
Use the following command to trigger a rolling update that upgrades the image of the deployment named jerry-ui5 from v1.0 to v2.0:
kubectl set image deployment/jerry-ui5 i042416/ui5-nginx=i042416/ui5-nginx:v2.0
Use kubectl rollout status deployment/jerry-ui5 to watch the real-time progress of the rolling update. The log shown above reports, at each point in time, how many old-version pods are waiting to be terminated and how many new-version pods are already available:
X old replicas are pending termination
X of Y updated replicas are available
Open any pod to view its details, and you will find that the Docker image it uses is already v2.0:
Finally, the title shown in the browser's details page now carries the "(v2.0)" suffix, confirming once again that the rolling update completed successfully.
This article covers only the tip of the iceberg of Kubernetes features; there are many more details to learn. After all, SAP Cloud Platform will soon support a Kubernetes environment. Thank you for reading.