Portworx demo: migrating stateful applications and data between K8S clusters

2025-01-20 Update | SLTechnology News&Howtos > Servers


Shulou (Shulou.com) 06/02 Report --

More and more enterprises are choosing Kubernetes as their infrastructure. It can shorten a software project's time to market, reduce infrastructure costs, and improve software quality. Because Kubernetes is relatively new, IT teams are still learning how to run and maintain applications on it in production. This article explores how to migrate a Kubernetes application to a new cluster when additional computing capacity is needed.

Portworx demo video link

Some reasons why your current Kubernetes cluster may need to be expanded

1. A cluster is about to be fully occupied, and you need to move your workload somewhere with more computing resources.

2. The services you want to use are in another region or cloud, and to use them you need to move your applications and data there.

3. The hardware has reached end of life and must be upgraded to the next generation, and the compute specifications, requirements, and memory capacity of the new hardware have changed, making a migration necessary.

4. Cluster resources are constrained and the cost of scaling up instances keeps rising, so you want to adopt a new cluster architecture that uses network-attached block storage instead of local disks, allowing storage to scale independently of compute.

5. Developers want to move workloads to a cluster with different hardware, networking, operating systems, or other configurations for testing or staging.

The list above is not exhaustive, but it illustrates that under many conditions it is necessary to expand a Kubernetes environment and migrate workloads from one cluster to another. The problem is relatively simple for stateless applications, but for stateful services such as databases, queues, key-value stores, big data, and machine learning applications, the data must be moved into the new, expanded environment before the application can start up there.

Solving the data mobility problem: PX-Motion, a new feature of PX-Enterprise™

PX-Motion can not only move data across environments, it can also move application configuration and related stateful resources such as PVs (persistent volumes), making it very easy for operations teams to move a volume, a Kubernetes namespace, or an entire Kubernetes cluster between environments, even when persistent data is involved.

This article discusses the features and capabilities of PX-Motion, and demonstrates how to move a Kubernetes namespace and all the applications running in it to a new Kubernetes cluster with spare capacity. In this demonstration, Cluster 1 represents an over-utilized, inflexible cluster that can no longer meet our growing application needs. Cluster 2 represents a more flexible and scalable cluster, to which we will move the workload.

In addition to moving an entire Kubernetes namespace between clusters, we will also show how applications that are configured in Cluster 1 to use local storage can be migrated to Cluster 2, which uses network-attached block storage.

In other words, we move the actual data, rather than simply remapping block devices.

In general, when moving a stateful Kubernetes application to another cluster, you need to:

1. Pair the two clusters, designating a source cluster and a destination cluster.
2. Start the migration with PX-Motion, moving both data volumes and configuration.
3. Once the data and configuration migration completes, Kubernetes automatically deploys the applications in the new environment.
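The three steps above map onto roughly the following commands, which are all detailed later in this article. This is only a sketch: the context names source and dest, the file names, and the namespace demo are placeholders, and exact flags may vary by Stork/Portworx version.

```
# 1. Pair the clusters: generate a ClusterPair spec from the destination
#    cluster, edit it, then apply it on the source cluster.
$ storkctl generate clusterpair --context dest > clusterpair.yaml
$ kubectl --context source create -f clusterpair.yaml

# 2. Start the migration (migration.yaml references the cluster pair).
$ kubectl --context source create -f migration.yaml

# 3. Watch until the migration reaches the Final/Successful stage,
#    then verify the pods on the destination cluster.
$ storkctl --context source get migration
$ kubectl --context dest get po -n demo
```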

Let's get started!

Configuration and setup

For the demonstration we use Google Kubernetes Engine (GKE) for both Kubernetes clusters, but you can follow the same steps on any Kubernetes cluster. Install Portworx on each cluster using a DaemonSet spec obtained from the Portworx online spec generator. We will not walk through the Portworx installation step by step here; see the documentation on portworx.com to learn how to install Portworx on Kubernetes. A diagram of the environment's architecture is shown below. Note the following aspects of Cluster 1 and Cluster 2:

Cluster                   Machine Type    Storage Type
Cluster 1 (Source)        n1-standard-2   local-ssd
Cluster 2 (Destination)   n1-standard-8   provisioned PDs

In this case, the first cluster (Cluster 1) is resource-constrained, and the operations team wants to move from local SSDs to automatically provisioned persistent disks (PDs) on larger instances.

Why? Local SSDs are more efficient for certain workloads, but they come with limitations of their own, which is why we are moving the applications from one cluster to another. According to Google, the limitations of local SSDs include:

"Because local SSDs are physically attached to the node's virtual machine instance, any data stored there exists only on that node. Since the data is stored locally, your application must be able to cope with that data being unavailable."

"Data stored on local SSDs is ephemeral. A Pod that writes to a local SSD loses access to the data stored on the disk when it is rescheduled away from that node. Additionally, if the node is terminated, upgraded, or repaired, the data will be erased."

"You cannot add local SSDs to an existing node pool."

Portworx overcomes some of these limitations by replicating data to other hosts in the cluster for high availability. But if we want to keep adding storage to the cluster without also scaling compute, local storage still imposes restrictions. The second GKE cluster described above instead uses a Portworx disk template, which lets Portworx automatically manage disks from Google Cloud; this is somewhat more flexible than local disks.
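For reference, a disk template is passed to Portworx through the storage argument in its DaemonSet spec. The fragment below is only an illustration of the idea, not output from the spec generator: the image tag, cluster name, and the disk type/size values are placeholders, and the exact argument format should be checked against the Portworx documentation for your version.

```yaml
# Fragment of a Portworx DaemonSet spec (normally produced by the
# online spec generator). The "-s" disk template asks Portworx to
# provision GCP persistent disks on demand instead of using local SSDs.
containers:
- name: portworx
  image: portworx/oci-monitor:2.x        # version placeholder
  args: ["-c", "px-cluster-2",           # cluster name placeholder
         "-s", "type=pd-standard,size=150"]
```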

The second cluster is already up and running with automatically provisioned PDs, ready to receive the migrated workload.

Running a large number of applications requires more computing power

The source cluster looks like this. It runs a large number of applications in a single namespace: Cassandra, Postgres, WordPress, and MySQL. All of these applications generate a very high load on the cluster. Below are the applications running in the demo namespace. Note that running multiple namespaces on a single Kubernetes cluster is both feasible and common; in this demonstration we move only one namespace and leave the rest running unchanged.

$ kubectl config use-context <source-cluster-context>
$ kubectl get po -n demo
NAME                                 READY   STATUS    RESTARTS   AGE
cassandra-1-0                        1/1     Running   0          17m
cassandra-1-1                        1/1     Running   0          16m
cassandra-1-2                        1/1     Running   0          14m
cassandra-2-0                        1/1     Running   0          17m
cassandra-2-1                        1/1     Running   0          16m
cassandra-2-2                        1/1     Running   0          14m
mysql-1-7f58cf8c7c-srnth             1/1     Running   0          2m
mysql-2-8498757465-gqs5h             1/1     Running   0          2m
postgres-2-68c5d6b845-c4gbw          1/1     Running   0          26m
postgres-77bf94ccb5-hs7dh            1/1     Running   0          26m
wordpress-mysql-2-5fdffbdbb4-dpqm9   1/1     Running   0          17m

At some point, as more applications (such as additional MySQL databases) are added, the cluster runs into its memory limits, and errors such as "OutOfmemory" appear, as shown below. To solve this problem, we migrate the demo namespace to a new cluster, making new resources available to it.

$ kubectl get po -n demo
NAME                                 READY   STATUS        RESTARTS   AGE
cassandra-1-0                        1/1     Running       0          16m
cassandra-1-1                        1/1     Running       0          14m
cassandra-1-2                        1/1     Running       0          13m
cassandra-2-0                        1/1     Running       0          16m
cassandra-2-1                        1/1     Running       0          14m
cassandra-2-2                        1/1     Running       0          13m
mysql-1-7f58cf8c7c-srnth             1/1     Running       0          1m
mysql-2-8498757465-gqs5h             1/1     Running       0          25s
mysql-3-859c5dc68f-2gcdj             0/1     OutOfmemory   0          9s
mysql-3-859c5dc68f-4wzmd             0/1     OutOfmemory   0          10s
mysql-3-859c5dc68f-57npr             0/1     OutOfmemory   0          9s
mysql-3-859c5dc68f-6t8fn             0/1     OutOfmemory   0          11s
mysql-3-859c5dc68f-7hcf6             0/1     Terminating   0          16s
mysql-3-859c5dc68f-7zbkh             0/1     OutOfmemory   0          6s
mysql-3-859c5dc68f-8s5k6             0/1     OutOfmemory   0          5s
mysql-3-859c5dc68f-d49nv             0/1     OutOfmemory   0          9s
mysql-3-859c5dc68f-dbtd7             0/1     OutOfmemory   0          10s
mysql-3-859c5dc68f-hwhxw             0/1     OutOfmemory   0          15s
mysql-3-859c5dc68f-rc8tg             0/1     OutOfmemory   0          19s
mysql-3-859c5dc68f-vnp9x             0/1     OutOfmemory   0          12s
mysql-3-859c5dc68f-xhgbx             0/1     OutOfmemory   0          18s
mysql-3-859c5dc68f-zj945             0/1     OutOfmemory   0          14s
postgres-2-68c5d6b845-c4gbw          1/1     Running       0          24m
postgres-77bf94ccb5-hs7dh            1/1     Running       0          24m
wordpress-mysql-2-5fdffbdbb4-dpqm9   1/1     Running       0          16m

In addition to PX-Motion, the newly released PX-Enterprise also includes PX-Central™, an interface for monitoring, data analysis, and management built on Grafana, Prometheus, and Alertmanager. Its dashboards monitor volumes, clusters, etcd, and more. In the case discussed here, the cluster-level dashboard reveals the resource problem.

The PX-Central screenshot below shows the memory and CPU used by the cluster. The cluster's high CPU and memory utilization makes expansion problematic, and the overload is the likely cause of the "OutOfMemory" errors mentioned above.

Using PX-Motion to migrate a Kubernetes namespace, including its data

Now that we have identified the problem, let's use PX-Motion to migrate the data to the new cluster. First, we pair the two GKE clusters to establish a migration connection between the source cluster and the destination cluster. Pairing clusters is similar to pairing a Bluetooth speaker with a phone: the pairing process connects two distinct devices.

Prerequisites

If you are trying PX-Motion yourself, please make sure that all of the prerequisites are met.

To migrate the workload from Cluster 1 to Cluster 2, we first need to configure PX-Motion, starting with the destination cluster. To do so, set up access to pxctl ("pixie-cuttle"), the Portworx CLI. Here is how to do that from a workstation with kubectl access.

$ kubectl config use-context <destination-cluster-context>
$ PX_POD_DEST_CLUSTER=$(kubectl get pods --context <destination-cluster-context> \
    -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
$ alias pxctl_dst="kubectl exec $PX_POD_DEST_CLUSTER --context <destination-cluster-context> \
    -n kube-system -- /opt/pwx/bin/pxctl"

Next, set up the destination cluster so that it is ready to pair with the source cluster. The destination cluster should first run a Portworx objectstore: we create an object storage endpoint on the destination cluster to stage the data during the migration. Then, create a token for the source cluster to use during the pairing process.

$ pxctl_dst volume create --size 100 objectstore
$ pxctl_dst objectstore create -v objectstore
$ pxctl_dst cluster token show
Token is <token>

You can now create a cluster pairing YAML configuration to apply to the source Kubernetes cluster. This clusterpair.yaml document contains the information needed to authenticate with the destination cluster's scheduler and Portworx storage. Run the following command, then edit the resulting YAML document, to establish the cluster pair:

$ storkctl generate clusterpair --context <destination-cluster-context> > clusterpair.yaml

Note: you can replace "metadata.name" with a name of your own choosing.

Note: in the following example, options.token uses the token generated by the "cluster token show" command above.

Note: in the following example, options.ip needs a reachable IP or DNS name of a load balancer or a Portworx node, used to access ports 9001 and 9010.
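The notes above refer to an example clusterpair.yaml. The generated file did not survive formatting here, so the sketch below only illustrates the shape of a hand-edited ClusterPair; the name, the token, the address, and the contents of spec.config are all placeholders rather than output from a real cluster.

```yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ClusterPair
metadata:
  name: new-cluster            # replaceable, see the first note above
  namespace: demo
spec:
  config:
    # kubeconfig-style clusters/users/contexts entries for the
    # destination cluster go here (filled in by storkctl generate).
  options:
    token: "<output of 'pxctl cluster token show'>"
    ip: "<load balancer or Portworx node address>"
    port: "9001"
```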

When using GKE, before applying the spec to the cluster we need to grant Stork additional permissions. Stork is the OSS intelligent scheduler extension and migration tool for Kubernetes used by PX-Enterprise, and it needs to be able to authenticate with the new cluster in order to migrate applications there. First, generate a service account using Google Cloud commands. Then, edit the Stork deployment and its credentials to ensure that it has access to the destination cluster.
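The GKE-specific setup just described typically looks something like the following. This is a sketch with placeholder names; the service-account name, the IAM role, and the way the Stork deployment consumes the key all depend on your project and your Portworx/Stork version, so check the Stork documentation before running these commands.

```
# Create a service account for Stork and download a key for it.
$ gcloud iam service-accounts create stork-account --display-name "stork"
$ gcloud iam service-accounts keys create stork-key.json \
    --iam-account stork-account@<project-id>.iam.gserviceaccount.com

# Grant it permission to read cluster information (role is illustrative).
$ gcloud projects add-iam-policy-binding <project-id> \
    --member serviceAccount:stork-account@<project-id>.iam.gserviceaccount.com \
    --role roles/container.viewer

# Make the key available to Stork on the source cluster, then edit the
# Stork deployment to mount and use it.
$ kubectl -n kube-system create secret generic stork-gcloud-key \
    --from-file=stork-key.json
```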

Next, apply the cluster pair on the source cluster using kubectl.

$ kubectl config use-context <source-cluster-context>
$ kubectl create -f clusterpair.yaml

Once applied, check the status of the cluster pair using storkctl.

$ storkctl get clusterpair

The cluster pair can also be inspected with kubectl and pxctl.

$ kubectl describe clusterpair new-cluster | grep paired
  Normal  Ready  2m  stork  Storage successfully paired
  Normal  Ready  2m  stork  Scheduler successfully paired
$ pxctl cluster pair list
CLUSTER-ID  NAME          ENDPOINT                  CREDENTIAL-ID
c604c669    px-cluster-2  http://35.185.59.99:9001  a821b2e2-788f

Start the migration

Next, there are two ways to start the migration: generate a migration with the storkctl CLI, or write a spec file describing the migration. We use the second method, shown below, to migrate the demo namespace's resources and volumes.

apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: demo-ns-migration
spec:
  clusterPair: new-cluster
  includeResources: true
  startApplications: true
  namespaces:
  - demo

Use kubectl to create the migration from the spec above.

$ kubectl create -f migration.yaml

Check the migration status. A successful migration passes through the following stages: Volumes → Application → Final.

$ storkctl get migration
NAME                CLUSTERPAIR   STAGE         STATUS       VOLUMES   RESOURCES   CREATED
demo-ns-migration   new-cluster   Volumes       InProgress   0/12      0/37        08 Nov 18 15:12 EST
$ storkctl get migration
NAME                CLUSTERPAIR   STAGE         STATUS       VOLUMES   RESOURCES   CREATED
demo-ns-migration   new-cluster   Application   InProgress   12/12     30/37       08 Nov 18 15:25 EST
$ storkctl get migration
NAME                CLUSTERPAIR   STAGE         STATUS       VOLUMES   RESOURCES   CREATED
demo-ns-migration   new-cluster   Final         Successful   12/12     37/37       08 Nov 18 15:27 EST

To see which resources (volumes, PVCs, statefulsets, replicasets) are still in progress or already completed, use the "kubectl describe" command.

$ kubectl describe migration demo-ns-migration

The migration status can also be viewed with pxctl on the source Portworx cluster.

$ pxctl cloudmigrate status
CLUSTER UUID: c604c669-c935-4ca4-a0bc-550b236b2d7b
TASK-ID                                           VOLUME-ID            VOLUME-NAME                                STAGE   STATUS
6cb407e0-e38e-demo-cassandra-data-1-cassandra-1-0 673298860130267347   pvc-2c2604f4-e381-11e8-a985-42010a8e0017   Done    Complete
6cb407e0-e38e-demo-cassandra-data-1-cassandra-1-1 782119893844254444   pvc-7ef22f64-e382-11e8-a985-42010a8e0017   Done    Complete
6cb407e0-e38e-demo-cassandra-data-1-cassandra-1-2 486611245472798631   pvc-b8af3c05-e382-11e8-a985-42010a8e0017   Done    Complete

At this point, according to the migration status, the migration is complete; the figure below illustrates the process. The namespace, applications, configuration, and data have all been migrated from Cluster 1 to Cluster 2.

Then, check the destination cluster to make sure the applications really did migrate and are running well; since we used the "startApplications: true" attribute, they should already have been started there.

$ kubectl config use-context <destination-cluster-context>
$ kubectl get po -n demo
NAME                                 READY   STATUS    RESTARTS   AGE
cassandra-1-0                        1/1     Running   0          7m
cassandra-1-1                        1/1     Running   0          5m
cassandra-1-2                        1/1     Running   0          4m
cassandra-2-0                        1/1     Running   0          7m
cassandra-2-1                        1/1     Running   0          5m
cassandra-2-2                        1/1     Running   0          4m
mysql-1-7f58cf8c7c-gs8p4             1/1     Running   0          7m
mysql-2-8498757465-4gkr2             1/1     Running   0          7m
postgres-2-68c5d6b845-cs9zb          1/1     Running   0          7m
postgres-77bf94ccb5-njhx4            1/1     Running   0          7m
wordpress-mysql-2-5fdffbdbb4-ppgsl   1/1     Running   0          7m

Perfect! All the applications are running. Returning to the PX-Central Grafana dashboard, we can see that the cluster is now using less memory and CPU. This screenshot shows the worker nodes' CPU and memory usage after the workload migration.

This is exactly what we hoped to achieve. Below is the CPU and memory available in Clusters 1 and 2 as shown on the GKE dashboard, confirming the results above.

With the extra computing power, we can now create an additional MySQL database, which will have sufficient resources to run on the new cluster.

$ kubectl create -f specs-common/mysql-3.yaml
storageclass.storage.k8s.io "mysql-tester-class-3" created
persistentvolumeclaim "mysql-data-3" created
deployment.extensions "mysql-3" created
$ kubectl get po -n demo
NAME                                 READY   STATUS    RESTARTS   AGE
cassandra-1-0                        1/1     Running   0          22m
cassandra-1-1                        1/1     Running   0          20m
cassandra-1-2                        1/1     Running   0          18m
cassandra-2-0                        1/1     Running   0          22m
cassandra-2-1                        1/1     Running   0          20m
cassandra-2-2                        1/1     Running   0          18m
mysql-1-7f58cf8c7c-gs8p4             1/1     Running   0          22m
mysql-2-8498757465-4gkr2             1/1     Running   0          22m
mysql-3-859c5dc68f-6mcc5             1/1     Running   0          12s
postgres-2-68c5d6b845-cs9zb          1/1     Running   0          22m
postgres-77bf94ccb5-njhx4            1/1     Running   0          22m
wordpress-mysql-2-5fdffbdbb4-ppgsl   1/1     Running   0          22m

Success!

The benefits of expanding the cluster are clear. Users and operators can delete the old namespace or applications from the source cluster, or delete the entire source cluster, to reclaim those resources. Because the new cluster uses automatically provisioned PDs instead of local SSDs, its storage and compute can each be expanded according to the IT team's needs.

Conclusion

PX-Motion can migrate Portworx volumes and Kubernetes resources between clusters. The case above uses PX-Motion to let a team seamlessly expand its Kubernetes environment: namespaces, volumes, and entire applications are easy to migrate between environments. Scaling out is not PX-Motion's only capability; see the rest of our PX-Motion series for more.
