
How to Do Blue-Green Deployments for Stateful Workloads Across Different Versions of Kubernetes


The container ecosystem is exploding! Not only are applications changing rapidly, but Kubernetes, the platform that manages those applications, is changing rapidly too. This creates a problem the Ops team must solve: how can an IT team ensure that an application works well across different versions of Kubernetes?

PX-Motion demo video: blue-green deployments for stateful workloads across different versions of Kubernetes

Blue-green deployment is a technique designed to solve exactly this problem; it reduces the risk of downtime or errors when deploying to production. In a blue-green deployment, users maintain two identical production environments (called blue and green), which differ only in the new changes being rolled out. Only one environment is live at a time, and moving data between the two environments is part of the deployment process. The technique works very well for stateless applications that hold no data, but it is harder for stateful applications such as databases, because two copies of the production data must be kept. Getting the data from one environment to the other may require backup-and-restore scripts for Postgres, MySQL, and other databases, or custom runbooks and automation that move data from one data source to another by hand, all of which can be complex and time-consuming.

Portworx solves the data-management problem of blue-green deployments for stateful applications with PX-Motion. PX-Motion lets IT teams easily migrate data and application configuration between environments, greatly simplifying blue-green deployments of stateful applications.

This blog post discusses the functions and capabilities of PX-Motion. Specifically, we will walk through a blue-green deployment of a stateful LAMP stack running on two different versions of Kubernetes.

To sum up, we will:

Pair two Kubernetes clusters (called the source and target clusters) so that data, configuration, and Pods can be moved between them as part of the blue-green deployment.

Use Kubernetes to deploy a LAMP stack to the source cluster and verify that the application runs.

Use PX-Motion to migrate the Kubernetes deployments, secrets, replica sets, services, persistent volumes, persistent volume claims, and data from the source cluster to the target cluster for testing and verification. All Pods continue to run on the source cluster after the migration completes; at that point we have two clusters running, blue and green.

Use Kubernetes to verify that our application and its data run correctly on the target cluster.

Once deployment verification on the new cluster is complete, update the load-balancer settings so that all traffic is directed to the new cluster. The blue-green deployment is then complete.

Let's get started!

Install PX-Motion

Prerequisites

If you are trying out PX-Motion, please make sure that all of the prerequisites (https://docs.portworx.com/cloud-references/migration/migration-stork/#overview) are met.

Pairing Kubernetes clusters to prepare for data migration

Before migrating workloads from the source cluster (Kubernetes 1.10.3) to the target cluster (Kubernetes 1.12.0), we need to pair the two clusters. Pairing here is the same idea as pairing a mobile phone with a Bluetooth speaker so that two distinct devices can work together.

The first step in cluster pairing is to configure the target cluster. First, set up access to pxctl ("pixie-cuttle"), the Portworx CLI. Here is how to use pxctl from a workstation that has kubectl access.

$ kubectl config use-context <target-cluster-context>
$ PX_POD_DEST_CLUSTER=$(kubectl get pods --context <target-cluster-context> -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
$ alias pxctl_dst="kubectl exec $PX_POD_DEST_CLUSTER --context <target-cluster-context> -n kube-system -- /opt/pwx/bin/pxctl"
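Later steps in this walkthrough also use a pxctl_src alias pointing at the source cluster, which is not defined explicitly anywhere in the article; a minimal sketch, assuming the same pattern as pxctl_dst above, along with a quick sanity check that both aliases respond:

# Hypothetical mirror of the alias above, for the source cluster
$ PX_POD_SRC_CLUSTER=$(kubectl get pods --context <source-cluster-context> -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
$ alias pxctl_src="kubectl exec $PX_POD_SRC_CLUSTER --context <source-cluster-context> -n kube-system -- /opt/pwx/bin/pxctl"
# Sanity check: both aliases should print Portworx cluster status
$ pxctl_dst status
$ pxctl_src status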

Next, set up the target cluster's object store so that it is ready to pair with the source cluster. We need an object-store endpoint on the target cluster to act as the staging area for data during the migration.

$ pxctl_dst volume create --size 100 objectstore
$ pxctl_dst objectstore create -v objectstore
$ pxctl_dst cluster token show
Token is <token>
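To confirm the backing volume was created before pairing, you can list the volumes with the alias defined earlier (a quick sanity check; volume list is a standard pxctl command):

$ pxctl_dst volume list
# Expect a 100 GiB volume named "objectstore" in the output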

Next, create a cluster pair YAML configuration that targets the source Kubernetes cluster. This clusterpair.yaml (https://docs.portworx.com/cloud-references/migration/migration-stork/#overview) file contains the information needed to authenticate with the target cluster's scheduler and Portworx storage. Run the following command, then edit the resulting YAML to establish the pairing:

$ storkctl generate clusterpair green --context <target-cluster-context> > clusterpair.yaml

Note: you can replace metadata.name with a name of your own choosing.

Note: in the following example, options.token uses the token generated by the cluster token show command above.

Note: in the following example, options.ip needs the IP or DNS name of a load balancer or Portworx node that is reachable on ports 9001 and 9010.
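For illustration only, the edited clusterpair.yaml might end up looking like the sketch below. The token and address values are placeholders, the pair name green matches the name used later in this article, and the config section with scheduler credentials that storkctl generates is omitted:

apiVersion: stork.libopenstorage.org/v1alpha1
kind: ClusterPair
metadata:
  name: green
spec:
  options:
    token: "<output of: pxctl_dst cluster token show>"
    ip: "<IP or DNS of a Portworx node or load balancer>"
    port: "9001"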

Next, using kubectl, apply this cluster pairing to the source cluster.

$ kubectl config use-context <source-cluster-context>
$ kubectl create -f clusterpair.yaml

In this architecture, the cluster pair connects over the Internet (VPC to VPC). We therefore need to make sure the target object store is reachable from the source cluster. Please refer to the instructions here: https://docs.portworx.com/cloud-references/migration/

Note: these steps are a temporary measure; an upcoming release will replace them with an automated process.

Note: similar steps are needed for cloud-to-cloud, on-premises-to-cloud, and cloud-to-on-premises pairings.
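As a quick connectivity check (a generic sketch, not a Portworx tool), you can test from a machine in the source VPC that the required ports on the target side are reachable:

# Ports 9001 and 9010 must be reachable on the target's Portworx nodes
# or load balancer (placeholder hostname below)
$ nc -zv <target-portworx-node-or-lb> 9001
$ nc -zv <target-portworx-node-or-lb> 9010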

If every step succeeded, listing the cluster pairs with storkctl shows the storage and scheduler in the Ready state. If Error is displayed, use kubectl describe clusterpair for more information.

$ storkctl get clusterpair
NAME    STORAGE-STATUS   SCHEDULER-STATUS   CREATED
green   Ready            Ready              19 Nov 18 11:43 EST

$ kubectl describe clusterpair new-cluster | grep paired
  Normal  Ready  2m  stork  Storage successfully paired
  Normal  Ready  2m  stork  Scheduler successfully paired

Pxctl can also be used to list cluster pairs.

$ pxctl_src cluster pair list
CLUSTER-ID   NAME                 ENDPOINT                       CREDENTIAL-ID
c604c669     px-cluster-testing   http://portworx-api.com:9001   a821b2e2-788f

Our clusters are now successfully paired.

Test the workload on Kubernetes 1.12.0

Now that the Kubernetes 1.10.3 source cluster is paired with the 1.12.0 target cluster, we can migrate the running workload, configuration, and data from one cluster to the other and test whether the applications run properly on the 1.12.0 target cluster. During and after the migration, all Pods keep running on the source cluster. We end up with two clusters, blue and green, that differ only in the Kubernetes version they run.

$ kubectl config use-context <source-cluster-context>

To check which version of Kubernetes you are currently using, run the kubectl version command. It outputs the current client and server versions; as shown below, the server version is 1.10.3.

$ kubectl version --short | awk -Fv '/Server Version: /{print "Kubernetes Version: " $3}'
Kubernetes Version: 1.10.3-eks

Deploy the application on 1.10.3

To migrate a workload, we first need a workload running on the source cluster. For the demonstration we will create Heptio's sample LAMP stack (http://docs.heptio.com/content/tutorials/lamp.html) on the source cluster, using Portworx for the MySQL volumes. The stack contains a storage class that uses Portworx, a secret, a PHP web front end, and a MySQL database backed by a replicated Portworx volume.
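For reference, a storage class like the portworx-sc-repl3 seen in the output below would presumably look something like this minimal sketch (a reconstruction, not Heptio's exact manifest), with the replication factor of 3 supplied through the repl parameter:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-sc-repl3
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"    # keep three replicas of each volume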

$ kubectl create ns lamp
$ kubectl create -f . -n lamp
job.batch "mysql-data-loader-with-timeout" created
storageclass.storage.k8s.io "portworx-sc-repl3" created
persistentvolumeclaim "database" created
deployment.extensions "mysql" created
service "mysql" created
deployment.extensions "php-dbconnect" created
service "web" created
secret "mysql-credentials" created

Use kubectl to list the Pods and make sure they are in the Running state.

$ kubectl get po -n lamp
NAME                                   READY   STATUS    RESTARTS   AGE
mysql-6f95f464b8-2sq4v                 1/1     Running   0          1m
mysql-data-loader-with-timeout-f2nwg   1/1     Running   0          1m
php-dbconnect-6599c648-8wnvf           1/1     Running   0          1m
php-dbconnect-6599c648-ckjqb           1/1     Running   0          1m
php-dbconnect-6599c648-qv2dj           1/1     Running   0          1m

Look up the web service and record its CLUSTER-IP and EXTERNAL-IP. After the migration both values will change, because the service will be running on the new cluster.

$ kubectl get svc web -n lamp -o wide
NAME   TYPE           CLUSTER-IP       EXTERNAL-IP              PORT(S)        AGE   SELECTOR
web    LoadBalancer   172.20.219.134   abe7c37c.amazonaws.com   80:31958/TCP   15m   app=php-dbconnect

Access the endpoint, or use curl, to confirm that the application is installed, operational, and connected to MySQL.

MySQL connection

$ curl -s abe7c37c.amazonaws.com/mysql-connect.php | jq
{
  "outcome": true
}

Verify that the PVC for the MySQL container was also created. Below we see one PVC, database, provisioned with a replication factor of 3. This volume is the ReadWriteOnce block volume used by MySQL.

$ kubectl get pvc -n lamp
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
database   Bound    pvc-c572277d-ec15-11e8-9f4d-12563b3068d4   2Gi        RWO            portworx-sc-repl3   28m

Volume information can also be displayed using pxctl. The output of the Volume list command is as follows.

$ pxctl_src volume list
ID                   NAME                                       SIZE    HA   STATUS
618216855805459265   pvc-c572277d-ec15-11e8-9f4d-12563b3068d4   2 GiB   3    attached on 10.0.3.145

Migrate the application to Kubernetes 1.12.0

Configure the local kubectl client to use the target cluster that is running 1.12.0.

$ kubectl config use-context <target-cluster-context>

Run the kubectl version command; it outputs the current client and server versions, which below show the server running 1.12.0.

$ kubectl version --short | awk -Fv '/Server Version: /{print "Kubernetes Version: " $3}'
Kubernetes Version: 1.12.0

Verify that no LAMP stack Pods are running on this cluster yet. As shown below, the namespace contains no resources, which means the migration has not yet taken place.

$ kubectl get po
No resources found.

Next, use the Stork client, storkctl, to create a migration that moves the LAMP stack resources and volumes from the 1.10.3 cluster to the 1.12.0 cluster. The inputs to the storkctl create migration command include clusterPair, namespaces, and, optionally, includeResources and startApplications, which pull in the associated resources and start the applications once the migration completes. For more information about this command, see https://docs.portworx.com/cloud-references/migration/migration-stork/.

$ storkctl --context <source-cluster-context> \
    create migration test-app-on-1-12 \
    --clusterPair green \
    --namespaces lamp \
    --includeResources \
    --startApplications
Migration test-app-on-1-12 created successfully
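For reference, the same migration can also be expressed declaratively as a Stork Migration custom resource; a minimal sketch, assuming the field names from the Stork CRD:

apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: test-app-on-1-12
spec:
  clusterPair: green          # the cluster pair created earlier
  namespaces:
  - lamp                      # migrate everything in the lamp namespace
  includeResources: true      # copy Kubernetes objects, not just volumes
  startApplications: true     # scale up the apps on the target when done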

After the migration is created, use storkctl to get the migration status.

$ storkctl --context <source-cluster-context> get migration
NAME               CLUSTERPAIR   STAGE     STATUS       VOLUMES   RESOURCES   CREATED
test-app-on-1-12   green         Volumes   InProgress   0         ...
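The migration takes a while; one simple way to poll its progress (a generic sketch using the standard watch utility) is:

$ watch -n 10 "storkctl --context <source-cluster-context> get migration"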

Pxctl can also be used to view the migration status. The volume will show the STAGE and STATUS related to the migration.

$ pxctl_src cloudmigrate status
CLUSTER UUID: 33293033-063c-4512-8394-d85d834b3716
TASK-ID              VOLUME-ID            VOLUME-NAME                                STAGE   STATUS
85d3-lamp-database   618216855805459265   pvc-c572277d-ec15-11e8-9f4d-12563b3068d4   Done    Initialized

When complete, the migration displays STAGE → Final and STATUS → Successful.

$ storkctl --context <source-cluster-context> get migration
NAME               CLUSTERPAIR   STAGE   STATUS       VOLUMES   RESOURCES   CREATED
test-app-on-1-12   green         Final   Successful   1         ...

Now get the Pods from the target cluster. As shown below, both PHP and MySQL are now running on the destination cluster.

$ kubectl get po -n lamp
NAME                            READY   STATUS    RESTARTS   AGE
mysql-66d895ff69-z49jl          1/1     Running   0          11m
php-dbconnect-f756c7854-2fc2l   1/1     Running   0          11m
php-dbconnect-f756c7854-c48x8   1/1     Running   0          11m
php-dbconnect-f756c7854-h8tgh   1/1     Running   0          11m

Notice that CLUSTER-IP and EXTERNAL-IP have changed: the service is now running on the new Kubernetes 1.12 cluster, which uses different subnets than before.

$ kubectl get svc web -n lamp -o wide
NAME   TYPE           CLUSTER-IP     EXTERNAL-IP            PORT(S)        AGE   SELECTOR
web    LoadBalancer   10.100.0.195   aacdee.amazonaws.com   80:31958/TCP   12m   app=php-dbconnect

If the website is reachable on the 1.12.0 cluster, works properly, and its data was migrated correctly, the requests below return the same output as before.

Web front end

MySQL connection

$ curl -s http://aacdee.amazonaws.com/mysql-connect.php | jq
{
  "outcome": true
}

Below is the output of kubectl get po -n lamp on the source (bottom) and target (top) clusters. Note the AGE of the Pods: the recently migrated LAMP stack is on the destination cluster (top).

Both clusters run the same programs and data after migration.

Review the whole process:

The first step is to pair the 1.10.3 EKS cluster with the 1.12.0 cluster.

The LAMP stack (Linux, Apache, MySQL, PHP) is deployed on the 1.10.3 cluster.

Use PX-Motion to migrate the Kubernetes deployments, secrets, replica sets, services, persistent volumes, persistent volume claims, and data of the LAMP stack to the 1.12.0 cluster.

The application is accessed on the 1.12.0 cluster and verified to be running correctly.

PX-Motion (https://docs.portworx.com/cloud-references/migration) migrates the persistent volumes and claims between clusters, while Portworx Stork starts the Kubernetes resources and replicas on the target cluster.

We now have two fully operational Kubernetes clusters: the blue environment and the green environment. In practice, you should run all of your tests against the green cluster to make sure the application has no unexpected problems there. Once testing is complete, switch the load balancer over from the blue cluster to the new green cluster, and the deployment is done!
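How the final traffic switch happens depends on how clients reach the application. As one hypothetical example (the record name and zone ID are placeholders), if traffic arrives through a DNS CNAME on AWS Route 53, repointing the record at the green service's EXTERNAL-IP completes the cutover:

# Repoint app.example.com (hypothetical) from the blue ELB hostname to the
# green one reported by "kubectl get svc web -n lamp" on the new cluster
$ cat <<'EOF' > switch.json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "CNAME",
      "TTL": 60,
      "ResourceRecords": [{ "Value": "aacdee.amazonaws.com" }]
    }
  }]
}
EOF
$ aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch file://switch.json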

Conclusion

PX-Motion can migrate Portworx volumes and Kubernetes resources between clusters. The example above used PX-Motion to carry out a blue-green deployment: testing a workload and its data on a new version of Kubernetes, and bringing the applications up on the new green cluster. Testing real-world workloads and data on different versions of Kubernetes gives operations teams confidence before rolling out a new Kubernetes version. Blue-green deployment is not the only thing PX-Motion can do; see our other PX-Motion blog posts to learn more.
