This article explains in detail how to run a PostgreSQL database on Kubernetes (K8S) through Rancher. It is shared as a reference, in the hope that you will have a solid understanding of the relevant topics after reading it.
Run highly available PostgreSQL through Rancher Kubernetes Engine
Rancher Kubernetes Engine (RKE) is a lightweight Kubernetes installer that supports installing Kubernetes on bare metal and virtual machines. RKE removes the complexity of Kubernetes installation: installing through RKE is relatively simple, regardless of the underlying operating system.
Portworx is a cloud native storage and data management platform that supports persistent workloads on Kubernetes. With Portworx, users can manage databases across different infrastructures and different container schedulers. It provides a single data management layer for all stateful services.
The steps below walk through deploying and managing a highly available PostgreSQL cluster on an AWS Kubernetes cluster built with Rancher Kubernetes Engine (RKE).
At a high level, running highly available PostgreSQL on AWS requires:
Install a Kubernetes cluster through Rancher Kubernetes Engine
Install Portworx, a cloud native storage solution, as a Kubernetes DaemonSet
Create a storage class to define your storage requirements, such as replication factors, snapshot policies, and performance
Deploy PostgreSQL using Kubernetes
Test failure recovery by killing or cordoning nodes in the cluster
Optionally, dynamically resize the Postgres volume, and snapshot and back up Postgres to S3
How to create a Kubernetes cluster through RKE
RKE is a tool for installing and configuring Kubernetes. It supports environments ranging from bare metal to virtual machines and IaaS. We will create a 3-node Kubernetes cluster on AWS EC2.
For more detailed steps, you can refer to this tutorial from The New Stack (https://thenewstack.io/run-stateful-containerized-workloads-with-rancher-kubernetes-engine-and-portworx/).
Once this is done, we will have a cluster with one master and three worker nodes.
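For reference, a minimal RKE cluster definition (cluster.yml) for that topology might look like the sketch below; the node addresses and SSH user are placeholders rather than values from the tutorial:

nodes:
  - address: 203.0.113.10      # control plane + etcd node (placeholder IP)
    user: ubuntu               # SSH user with access to the Docker daemon
    role: [controlplane, etcd]
  - address: 203.0.113.11      # worker node 1 (placeholder IP)
    user: ubuntu
    role: [worker]
  - address: 203.0.113.12      # worker node 2 (placeholder IP)
    user: ubuntu
    role: [worker]
  - address: 203.0.113.13      # worker node 3 (placeholder IP)
    user: ubuntu
    role: [worker]

Running rke up against this file provisions the cluster and writes a kubeconfig file (kube_config_cluster.yml) for kubectl access.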
Install Portworx on Kubernetes
Installing Portworx on Kubernetes built with RKE is no different from installing it on a Kubernetes cluster built with Kops. Portworx has detailed documentation listing each step (https://docs.portworx.com/portworx-install-with-kubernetes/cloud/aws/) for getting a Portworx cluster running on Kubernetes in an AWS environment.
The New Stack tutorial (https://thenewstack.io/run-stateful-containerized-workloads-with-rancher-kubernetes-engine-and-portworx/) also includes all the steps for deploying the Portworx DaemonSet on Kubernetes.
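As a rough sketch, the installation amounts to applying a DaemonSet spec produced by the Portworx spec generator; the query parameters below (Kubernetes version, storage device, cluster name) are placeholders and should come from the generator for your own cluster:

$ kubectl apply -f "https://install.portworx.com/2.0?kbver=1.13.2&b=true&s=%2Fdev%2Fxvdb&c=px-demo&stork=true"
$ kubectl get pods -n kube-system -l name=portworx -o wide   # wait until Ready on every worker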
Once the Kubernetes cluster is up and running and Portworx is installed and configured, we can begin deploying a highly available PostgreSQL database.
Create a Postgres storage class
Through StorageClass objects, an admin can define different classes of Portworx volumes offered in the cluster. These classes are used during dynamic volume provisioning. The storage class itself defines the replication factor, the IO profile (such as database or CMS), and the priority (such as SSD or HDD). These parameters affect the availability and throughput of the workload, so they can be set separately for each volume. This matters because a production database has completely different requirements than a development or test system.
In the following example, we deploy a storage class whose replication factor is set to 3, IO profile set to "db", and priority set to "high". This means that the storage is optimized for low-latency database workloads such as Postgres and automatically placed on the highest-performance storage available in the cluster.
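Given those settings, the manifest behind px-repl3-sc.yaml presumably looks like the following sketch (the parameter names follow the Portworx volume provisioner; treat the exact file contents as an assumption):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-repl3-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"             # keep three synchronous replicas of each volume
  io_profile: "db"      # tune the IO pattern for database workloads
  priority_io: "high"   # place the volume on the fastest available storage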
$ kubectl create -f https://raw.githubusercontent.com/fmrtl73/katacoda-scenarios-1/master/px-k8s-postgres-all-in-one/assets/px-repl3-sc.yaml
storageclass "px-repl3-sc" created
Create a Postgres PVC
We can now create a PersistentVolumeClaim (PVC) based on the storage class. The advantage of dynamic provisioning is that claims can be created without explicitly provisioning a PersistentVolume (PV).
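The claim behind px-postgres-pvc.yaml plausibly looks like the sketch below; it references the storage class created above, and the requested size is an assumption consistent with the 1GiB capacity mentioned later:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: px-postgres-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: px-repl3-sc   # bind to the Portworx class
spec:
  accessModes:
    - ReadWriteOnce     # a single Postgres pod mounts the volume at a time
  resources:
    requests:
      storage: 1Gi      # initial size; expanded later with pxctl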
$ kubectl create -f https://raw.githubusercontent.com/fmrtl73/katacoda-scenarios-1/master/px-k8s-postgres-all-in-one/assets/px-postgres-pvc.yaml
persistentvolumeclaim "px-postgres-pvc" created
The password for PostgreSQL will be stored as a Kubernetes Secret. Run the following commands to create the Secret in the correct format.
$ echo postgres123 > password.txt
$ tr -d '\n' < password.txt > .strippedpassword.txt && mv .strippedpassword.txt password.txt
$ kubectl create secret generic postgres-pass --from-file=password.txt
secret "postgres-pass" created
Deploy PostgreSQL on Kubernetes
Finally, let's create a PostgreSQL instance as a Kubernetes Deployment object. For simplicity, we deploy a single Postgres pod: because Portworx provides synchronous replication for high availability, a single Postgres instance is the simplest way to deploy the Postgres database. Portworx also supports multi-node Postgres deployments, depending on your needs.
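A sketch of what postgres-app.yaml plausibly contains follows; the image tag and the pgbench user are assumptions, but the PVC and Secret names match the objects created above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1                    # a single pod; Portworx replicates the data underneath
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate               # never run two pods against the same volume
  template:
    metadata:
      labels:
        app: postgres
    spec:
      schedulerName: stork       # let STORK place the pod next to its data
      containers:
      - name: postgres
        image: postgres:9.5      # assumed image tag
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_USER
          value: pgbench         # assumed; matches the pgbench prompts below
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-pass    # the Secret created above
              key: password.txt
        volumeMounts:
        - name: postgredb
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgredb
        persistentVolumeClaim:
          claimName: px-postgres-pvc   # the Portworx-backed claim created above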
$ kubectl create -f https://raw.githubusercontent.com/fmrtl73/katacoda-scenarios-1/master/px-k8s-postgres-all-in-one/assets/postgres-app.yaml
deployment "postgres" created
Make sure the Postgres pod is running.
$ kubectl get pods -l app=postgres -o wide --watch
Wait until the Postgres pod reaches the Running state.
We can inspect the Portworx volume backing the Postgres pod by using the pxctl tool that ships with Portworx.
$ VOL=$(kubectl get pvc | grep px-postgres-pvc | awk '{print $3}')
$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume inspect ${VOL}
The output of the command confirms that the volume supporting the PostgreSQL database instance has been created.
Failure recovery for PostgreSQL
Let's populate the database with 5 million rows of sample data.
First, find the pod running PostgreSQL so we can open a shell inside it.
$ POD=$(kubectl get pods -l app=postgres | grep Running | grep 1/1 | awk '{print $1}')
$ kubectl exec -it $POD bash
Now that we are inside the pod, we can connect to Postgres and create a database.
# psql
pgbench=# create database pxdemo;
pgbench=# \l
pgbench=# \q
By default, pgbench creates four tables (pgbench_branches, pgbench_tellers, pgbench_accounts, pgbench_history), with 100,000 rows in the main pgbench_accounts table, producing a simple database about 16MB in size.
The -s (scaling) option multiplies the number of rows in each table. Below we pass -s 50, so pgbench creates a database 50 times the default size.
pgbench_accounts will then hold 5 million rows, and the database grows to roughly 800MB (50 x 16MB).
# pgbench -i -s 50 pxdemo
Wait until pgbench finishes creating the tables, then verify that pgbench_accounts is now populated with 5 million rows.
# psql pxdemo
pxdemo=# \dt
pxdemo=# select count(*) from pgbench_accounts;
pxdemo=# \q
# exit
Now, let's simulate a failure of the node on which PostgreSQL is running.
$ NODE=$(kubectl get pods -l app=postgres -o wide | grep -v NAME | awk '{print $7}')
$ kubectl cordon ${NODE}
node "ip-172-20-57-55.ap-southeast-1.compute.internal" cordoned
Execute kubectl get nodes to confirm that one of the nodes is now marked as unschedulable.
$ kubectl get nodes
Next, let's go ahead and delete the PostgreSQL pod.
$ POD=$(kubectl get pods -l app=postgres -o wide | grep -v NAME | awk '{print $1}')
$ kubectl delete pod ${POD}
pod "postgres-556994cbd4-b6ghn" deleted
Once the deletion is complete, the pod is rescheduled onto a node that holds a replica of the data. Portworx STorage ORchestrator for Kubernetes (STORK) (https://portworx.com/stork-storage-orchestration-kubernetes/), Portworx's custom storage scheduler, co-locates pods with their data and ensures that the right node is selected when scheduling a pod.
Let's run the following command to verify it. We will find that a new pod has been created and scheduled on a different node.
$ kubectl get pods -l app=postgres
Let's make the cordoned node schedulable again.
$ kubectl uncordon ${NODE}
node "ip-172-20-57-55.ap-southeast-1.compute.internal" uncordoned
Finally, let's verify that the data is still available.
Let's get the pod name and exec into the container.
$ POD=$(kubectl get pods -l app=postgres | grep Running | grep 1/1 | awk '{print $1}')
$ kubectl exec -it $POD bash
Now use psql to make sure our data is still there.
# psql pxdemo
pxdemo=# \dt
pxdemo=# select count(*) from pgbench_accounts;
pxdemo=# \q
# exit
We see that the database tables are still there, and everything is correct.
Storage operations on Postgres
After testing end-to-end failure recovery for the database, let's run some storage operations (StorageOps) on the Kubernetes cluster.
Expand volumes with zero downtime
Let's now demonstrate how to simply and dynamically add space to a volume when the space is about to fill up.
Open a shell in the container
$ POD=$(kubectl get pods -l app=postgres | grep Running | awk '{print $1}')
$ kubectl exec -it $POD bash
Let's run a baseline transaction benchmark with pgbench, which will try to grow the database beyond the volume's 1GiB capacity and fail.
$ pgbench -c 10 -j 2 -t 10000 pxdemo
$ exit
The command may fail with a variety of errors; the first error indicates that the pod is out of space:
PANIC: could not write to file "pg_xlog/xlogtemp.73": No space left on device
Since Kubernetes does not support resizing a PVC after it is created, we use the pxctl CLI tool to expand the volume directly on Portworx.
Let's get the name of the volume and look at it with the pxctl tool.
SSH to the node and run the following command
$ VOL=$(/opt/pwx/bin/pxctl volume list --label pvc=px-postgres-pvc | grep -v ID | awk '{print $1}')
$ /opt/pwx/bin/pxctl volume inspect $VOL
Notice that the volume is almost full with 10% left. Let's expand it with the following command.
$ /opt/pwx/bin/pxctl volume update $VOL --size=2
Update Volume: Volume update successful for volume 834897770479704521
Take a snapshot of the volume and restore the database
Portworx supports the creation of snapshots of Kubernetes PVCs. Let's create a snapshot of the Postgres PVC we created earlier.
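The px-snap.yaml manifest presumably uses the external-storage snapshot CRD that STORK supported at the time; a sketch under that assumption:

apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: px-postgres-snapshot
  namespace: default
spec:
  persistentVolumeClaimName: px-postgres-pvc   # the PVC to snapshot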
$ kubectl create -f https://github.com/fmrtl73/katacoda-scenarios-1/raw/master/px-k8s-postgres-all-in-one/assets/px-snap.yaml
volumesnapshot "px-postgres-snapshot" created
You can view all snapshots with the following command.
$ kubectl get volumesnapshot,volumesnapshotdata
With the snapshot, let's delete the database.
$ POD=$(kubectl get pods -l app=postgres | grep Running | grep 1/1 | awk '{print $1}')
$ kubectl exec -it $POD bash
# psql
pgbench=# drop database pxdemo;
pgbench=# \l
pgbench=# \q
# exit
A snapshot behaves just like a volume, so we can use it to back a new PostgreSQL instance. Let's restore the snapshot data to create one.
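The px-snap-pvc.yaml claim presumably restores from the snapshot through STORK's snapshot annotation and restore storage class; a sketch under that assumption (the requested size mirrors the expanded volume):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-postgres-snap-clone
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: px-postgres-snapshot   # source snapshot
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: stork-snapshot-sc   # STORK's snapshot-restore class
  resources:
    requests:
      storage: 2Gi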
$ kubectl create -f https://raw.githubusercontent.com/fmrtl73/katacoda-scenarios-1/master/px-k8s-postgres-all-in-one/assets/px-snap-pvc.yaml
persistentvolumeclaim "px-postgres-snap-clone" created
From the new PVC, we create a PostgreSQL Pod
$ kubectl create -f https://raw.githubusercontent.com/fmrtl73/katacoda-scenarios-1/master/px-k8s-postgres-all-in-one/assets/postgres-app-restore.yaml
deployment "postgres-snap" created
Verify that the pod is running.
$ kubectl get pods -l app=postgres-snap
Finally, let's access the data created by the benchmark tool.
$ POD=$(kubectl get pods -l app=postgres-snap | grep Running | grep 1/1 | awk '{print $1}')
$ kubectl exec -it $POD bash
# psql pxdemo
pxdemo=# \dt
pxdemo=# select count(*) from pgbench_accounts;
pxdemo=# \q
# exit
We find that the tables and data are intact. If we want a disaster recovery backup in another Amazon region, we can push the snapshot to Amazon S3. Portworx snapshots work with any S3-compatible object store, so the backup can also target another cloud or an on-premises data center.
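As a rough sketch of how that could look with pxctl (assuming Portworx's cloudsnap feature; the credential values are placeholders, and the exact flag names should be checked against your Portworx version):

# Register S3 credentials with Portworx (placeholder values)
$ /opt/pwx/bin/pxctl credentials create --provider s3 \
    --s3-access-key <ACCESS_KEY> --s3-secret-key <SECRET_KEY> \
    --s3-region us-east-1 --s3-endpoint s3.amazonaws.com

# Back up the Postgres volume to S3
$ /opt/pwx/bin/pxctl cloudsnap backup ${VOL}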
Portworx is easily deployed through RKE to run stateful workloads in production on Kubernetes. Through its integration with STORK, DevOps and StorageOps teams can run database clusters seamlessly on Kubernetes, along with the traditional operations that cloud native applications require, such as expanding volumes, snapshots, backups, and disaster recovery. That concludes this walkthrough of running a PostgreSQL database on K8S through Rancher; I hope the content above is helpful.