
Challenges and Solutions of Scaling Kubernetes in a Hybrid Cloud Environment


Introduction

Suppose your business is online: you have your own data center and a private cloud to run your website, and you need to deploy many servers to run the application and store its data.

Most of the time, the overall traffic to your site may be quite stable. But occasionally, traffic will spike suddenly. How should you deal with that?

First, you need to be able to scale your application to cope with increased traffic. If you don't want to spend extra money on new hardware that is used only a few times a year, you can consider switching to a hybrid cloud model.

Moving from a private cloud to a hybrid cloud can save a lot of time and cost. After extending part of the application to the public cloud, you pay only for the resources you use, when you use them.

But how do you choose a public cloud? Can you select multiple public clouds?

In short, the answer is yes, and you will most likely need to select multiple public cloud providers. You may have different teams, different applications, and different requirements, so one cloud provider may not be able to meet all your needs. In addition, many organizations must comply with laws, regulations, and policies that require their data to physically reside in specific locations. A multi-cloud strategy can help organizations meet these stringent and diverse requirements. They can also choose from multiple data center regions or availability zones to be as close to end users as possible, giving them the best performance and lowest latency.

The challenges of scaling across clouds

You've decided to use the cloud, so let's return to the original question. Your application has a microservices architecture and runs in containers that need to be scaled. This is where Kubernetes comes in. Kubernetes is a solution that helps you manage and orchestrate containerized applications across clusters of nodes. While Kubernetes helps you manage and scale deployments, nodes, and clusters, it does not make it easy to manage and scale them across cloud providers. We will discuss this point in more detail later.

A Kubernetes cluster is a set of machines (physical or virtual) whose resources Kubernetes uses to run applications. The basic Kubernetes concepts you need to understand first are:

A Pod is a unit that groups one or more containers and is scheduled as a single application. In general, you should create a separate Pod for each application so that you can scale and control them individually.

A node is a worker machine in Kubernetes. Nodes can be virtual machines (VMs) or physical machines, depending on the cluster. Each node contains the services needed to run Pods and is managed by the master components.

The master components manage the lifecycle of Pods. If a Pod dies, a Controller creates a new one; if you scale the number of Pods up or down, the Controller creates or destroys Pods accordingly. For more information about Controller types, see:

https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/

Together, these components schedule and scale containers: the master components issue the scheduling and scaling commands, and the nodes arrange Pods accordingly.
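To make these concepts concrete, here is a minimal sketch of a Deployment manifest (illustrative only; the name and image are hypothetical, not from the exercise below). Applying it asks the master components to keep three replicas of a Pod running across the cluster's nodes:

```bash
# Minimal illustration: the master's controllers keep 3 replicas of
# this Pod template running, scheduling them onto available nodes.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical name, for illustration only
spec:
  replicas: 3                # desired Pod count; the controller reconciles toward it
  selector:
    matchLabels:
      app: hello-web
  template:                  # the Pod template: one container per Pod here
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any container image would do
        ports:
        - containerPort: 80
EOF

# If one of these Pods dies, the controller creates a replacement
# so that the actual state matches the desired state.
kubectl get pods -l app=hello-web
```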

The above covers only the most basic Kubernetes concepts; the article "Zero Basics: Understanding Kubernetes" walks through Kubernetes in more detail.

When you try to use Kubernetes to scale across clouds, you will encounter some key challenges:

Difficulty managing multiple clouds and clusters and setting up users and policies

Complexity of installation and configuration

Users or teams in different environments will have different experiences

Rancher can help you solve these challenges. Rancher is an open source container management platform for running Kubernetes in production. The following Rancher features help us manage and scale our applications, regardless of whether the computing resources are hosted on-premises or across multiple clouds:

Common infrastructure management across multiple clusters and clouds

Easy-to-use Kubernetes configuration and deployment interface

Easily scale Pods and clusters with one click

Access control and user management (LDAP, AD)

Workload, RBAC, policy and project management

Across multiple clouds, and potentially any infrastructure that can run Kubernetes, Rancher can serve as a single point of control for your multiple Kubernetes clusters.

Let's take a look at how we manage multiple Kubernetes clusters in two different regions.

Start an instance of Rancher 2.0

First, start a Rancher 2.0 instance. For details, refer to the quick start guide: https://rancher.com/quick-start/

Get started with Rancher and Kubernetes

Let's create two managed Kubernetes clusters in GCP in two different regions. To do this, you need a service account key.
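Rancher uses that service account to call the GCP API and provision the clusters for you. As a rough sketch of what happens behind the scenes, the equivalent gcloud commands would look something like this (the cluster names and sizes are hypothetical):

```bash
# Roughly what Rancher asks GCP to do; names and node counts are illustrative.
gcloud container clusters create us-cluster \
  --zone us-east1-b --num-nodes 3

gcloud container clusters create eu-cluster \
  --zone europe-west4-a --num-nodes 3
```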

In the Global tab, we can see all available clusters and their status. A cluster starts in the Provisioning state; when it is ready, its state changes to Active.

At this point, a number of Pods have been deployed on each node of the Kubernetes clusters. These Pods are used by the internal systems of Kubernetes and Rancher.

Let's continue by deploying a workload on each of the two clusters. Select the Default project in each; this opens the Workloads tab. Click Deploy and set the name and Docker image: httpd for the first cluster and nginx for the second. Because we want to expose our web servers to internet traffic, select a layer-4 load balancer in the Port Mapping area.

If you click the nginx/httpd workload, you will see that Rancher actually created a Deployment that manages a ReplicaSet, as recommended by Kubernetes. You will also see the Pods created by that ReplicaSet.
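For reference, what Rancher sets up here corresponds roughly to the following kubectl commands; this is a sketch using the nginx workload as the example:

```bash
# Roughly what Rancher creates behind the scenes for the nginx workload.
kubectl create deployment nginx --image=nginx   # Deployment -> ReplicaSet -> Pods
kubectl expose deployment nginx \
  --type=LoadBalancer --port=80                 # layer-4 load balancer

# Inspect the objects that were created.
kubectl get deployment,replicaset,pods
```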

Scaling Pods and clusters

The Rancher instance is managing two clusters:

us-east1-b cluster, running 5 httpd Pods

europe-west4-a cluster, running 1 nginx Pod

Click the "-" (minus icon) in the Scale column to scale down the httpd Pods. The number of Pods soon decreases.

To scale Pods up, click the "+" (plus icon) in the Scale column. You can immediately see Pods being created and the ReplicaSet scaling events. Use a Pod's right-hand menu to delete one of the Pods and watch how the ReplicaSet recreates it to match the desired state.
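The same operations can be sketched in kubectl terms; the Pod name below is a hypothetical placeholder:

```bash
# Scale the httpd deployment (e.g. from 5 down to 2 replicas).
kubectl scale deployment httpd --replicas=2

# Delete one Pod and watch the ReplicaSet recreate it to match
# the desired state. (The Pod name is a placeholder.)
kubectl delete pod httpd-7d8c9b5f4-abcde
kubectl get pods --watch
```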

As a result, the number of httpd Pods in the first cluster has gone from 5 to 2, and the number of nginx Pods in the second cluster from 1 to 7. The second cluster now appears to be running low on resources.

With Rancher, we can also scale the cluster itself by adding nodes. Let's try this and edit the number of nodes to 5.

Although Rancher shows us that it is provisioning the cluster, it is the cloud provider (here GKE) that upgrades the cluster master and resizes the node pool behind the scenes.
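As a rough equivalent, resizing the node pool directly through the cloud provider would look something like this (the cluster name is hypothetical):

```bash
# Resize the managed cluster's node pool to 5 nodes;
# this mirrors the edit made in Rancher's UI.
gcloud container clusters resize eu-cluster \
  --zone europe-west4-a --num-nodes 5
```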

Wait a few minutes, and eventually you should see five nodes up and running.

Let's check the Global tab to get an overview of all the clusters Rancher is managing.

Now we can add more Pods if we want, because new resources are available. Let's change the Pod count to 13.

Most importantly, all of these operations complete without downtime. While scaling Pods up or down or resizing the cluster, requests to the public IPs of the httpd/nginx deployments always return HTTP status code 200.
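A simple way to verify this yourself is to probe the load balancer's public IP in a loop while scaling and watch the status codes (the IP below is a placeholder):

```bash
# Print the HTTP status code once per second; it should stay 200
# throughout scaling and resizing. Replace the placeholder IP with
# your service's external IP.
EXTERNAL_IP=203.0.113.10
while true; do
  curl -s -o /dev/null -w "%{http_code}\n" "http://$EXTERNAL_IP/"
  sleep 1
done
```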

Summary

Let's review our Kubernetes cluster scaling exercise:

We created two clusters using Rancher

We deployed workloads with 1 nginx Pod and 5 httpd Pods

We scaled both deployments up and down

We resized the cluster

All of this was done with a few simple clicks in Rancher's friendly, intuitive UI. Of course, you can also do it all through the API.

In either case, you have a central point from which you can manage all your Kubernetes clusters, observe their status, and scale your deployments as needed. If you are looking for a tool to help with infrastructure management and container orchestration across hybrid-cloud or multi-cloud, multi-region clusters, the open source Rancher Kubernetes platform may be perfect for you.
