How to deploy highly available PostgreSQL clusters in Kubernetes


This article explains in detail how to deploy a highly available PostgreSQL cluster in Kubernetes. It briefly surveys three candidate tools (Patroni, Crunchy, and Stolon), then walks through installing Stolon from a Helm chart and testing failover.

Patroni

Patroni is a template, written in Python, for building your own customized high-availability PostgreSQL solution. For maximum availability it stores configuration in a distributed store such as ZooKeeper, etcd, or Consul. DBAs, DevOps engineers, and SREs looking to quickly deploy highly available PostgreSQL in the data center, or anywhere else, should find it helpful.

Crunchy

The Crunchy container suite provides Docker containers for rapid deployment of PostgreSQL, along with tools for management and monitoring, and it supports several styles of PostgreSQL cluster deployment.

Stolon

Stolon is a cloud-native PostgreSQL high-availability manager. It is cloud native because it provides high availability for PostgreSQL running inside containers (with Kubernetes integration), and it also supports other kinds of infrastructure (cloud IaaS, old-style infrastructure, etc.).

Nice diagrams and some user stories on kubernetes.io persuaded me to try the Crunchy containers. But after a while, I changed my mind.

I don't want to dwell on flaws in its design or anything else; it simply felt as if I had installed PostgreSQL manually inside a container, rather than something cloud native.

So I tried Stolon. After installing and uninstalling it again and again, I ran its statefulset example and then created a Helm chart for it.

If you want to know more about Stolon, you can refer to the author's introduction.

Next I'll show the installation process and demonstrate failover in a clustered environment. We assume the installation uses the Helm chart.
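
Before starting, make sure the Kubernetes cluster and the Helm client are both reachable. A quick preflight check (nothing Stolon-specific is assumed here):

# confirm the cluster is reachable and Helm is installed
$ kubectl cluster-info
$ helm version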

Stolon architecture diagram (excerpted from the Stolon introduction)

Stolon is made up of three parts:

Keeper: manages a PostgreSQL instance, converging it to the clusterview computed by the sentinel(s).

Sentinel: discovers and monitors the keepers and computes the optimal clusterview.

Proxy: the client access point. It enforces connections to the correct PostgreSQL master and forcibly closes connections to unelected masters.

Stolon uses etcd or consul as its main cluster state store.
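
Once a cluster is up, stolonctl can read that state straight from the store. A minimal sketch, assuming the cluster is named kube-stolon and backed by etcd at the endpoint shown (both names are assumptions, not values fixed by the chart):

# query the clusterview computed by the sentinels;
# cluster name and etcd endpoint are assumed values
$ stolonctl status --cluster-name kube-stolon --store-backend etcd --store-endpoints http://stolon-etcd:2379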

Installation

$ git clone https://github.com/lwolf/stolon-chart
$ cd stolon-chart
$ helm install ./stolon

You can also install directly from my repository:

$ helm repo add lwolf-charts http://charts.lwolf.org
$ helm install lwolf-charts/stolon

The installation process will perform the following actions:

First, three etcd nodes are created via a statefulset. stolon-proxy and stolon-sentinel are also deployed. A single one-time job pauses the installation of the cluster until the etcd nodes become available.
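
To watch this sequence complete, follow the pods until everything is Running; a simple check with no chart-specific assumptions:

# watch the etcd, sentinel and proxy pods start up
$ kubectl get pods -w
# the one-off job that waits for etcd should end up Completed
$ kubectl get jobs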

The chart will also create two services; a quick way to inspect them follows the list.

stolon-proxy: this service is derived from the official examples. It always points to the current master, where writes should go.

stolon-keeper: Stolon itself does not provide load balancing for read operations, but a Kubernetes service can. So, from the user's point of view, read operations against stolon-keeper are load balanced at the pod level.
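
To confirm both services exist and see their ports, a quick check (the service names follow this chart's conventions, which is an assumption):

# show the write (proxy) and read (keeper) services
$ kubectl get svc stolon-proxy stolon-keeper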

When all the components are RUNNING, we can try connecting to them.

We can expose the services with NodePort as a simple way to connect. Use two terminals to connect to the master service and the slave service, respectively. In what follows, we assume the stolon-proxy service (RW) is exposed on port 30543 and the stolon-keeper service (RO) on port 30544.
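
If your chart's services are of type ClusterIP, one hedged way to expose them is to patch them to NodePort; the service names and the allocated ports are assumptions:

# switch both services to NodePort so they are reachable from outside the cluster
$ kubectl patch svc stolon-proxy -p '{"spec": {"type": "NodePort"}}'
$ kubectl patch svc stolon-keeper -p '{"spec": {"type": "NodePort"}}'
# look up the node ports that were allocated (30543/30544 in this walkthrough)
$ kubectl get svc stolon-proxy stolon-keeper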

Connect to the master and create the test table:

$ psql --host <host> --port 30543 postgres -U stolon -W
postgres=# create table test (id int primary key not null, value text not null);
CREATE TABLE
postgres=# insert into test values (1, 'value1');
INSERT 0 1
postgres=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)

Connect to the slave and check the data. You can also try writing something to confirm that the request was handled by the slave:

$ psql --host <host> --port 30544 postgres -U stolon -W
postgres=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)
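
To be certain the second session really landed on a standby rather than the master, you can ask PostgreSQL directly: pg_is_in_recovery() returns true on a streaming-replication standby.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)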

After the test passes, let's try the failover function.

Test failover

This case follows the statefulset example in the official repository.

Simply put, to simulate the master going down, we first delete the master's statefulset (without cascading, so its pods survive) and then delete the master's pod:

$ kubectl delete statefulset stolon-keeper --cascade=false
$ kubectl delete pod stolon-keeper-0

Then, in the sentinel's log, we can see that a new master has been elected:

no keeper info available db=cb96f42d keeper=keeper0
no keeper info available db=cb96f42d keeper=keeper0
master db is failed db=cb96f42d keeper=keeper0
trying to find a standby to replace failed master
electing db as the new master db=087ce88a keeper=keeper1

Now, if we repeat the previous command in those two terminals, we see the following output:

postgres=# select * from test;
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
postgres=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)

The Kubernetes service removes the unavailable pod from rotation and sends requests to the available pods, so new read connections are routed to healthy pods.
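
You can watch this happen through the service's endpoints, which shrink when a pod disappears and grow back when it returns; the service name is again an assumption about the chart:

# list the pod addresses currently backing the read service
$ kubectl get endpoints stolon-keeper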

Finally, we need to recreate the statefulset. The easiest way is to upgrade the deployed Helm release:

$ helm ls
NAME               REVISION  UPDATED                   STATUS    CHART         NAMESPACE
factual-crocodile  1         Sat Feb 18 15:42:50 2017  DEPLOYED  stolon-0.1.0  default
$ helm upgrade factual-crocodile .

2. Using chaoskube to simulate random pod failures

Another good way to test cluster resilience is chaoskube. Chaoskube is a small service that periodically kills random pods in the cluster. It can also be deployed with a Helm chart:

$ helm install --set labels="release=factual-crocodile,component!=factual-crocodile-etcd" --set interval=5m stable/chaoskube

This command runs chaoskube so that it deletes one pod every 5 minutes. It selects pods labeled release=factual-crocodile, but ignores the etcd pods.
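
To see which pods chaoskube actually terminates, follow its log; the deployment name below assumes the default naming from the stable chart:

# follow chaoskube's log to watch the pods it kills
$ kubectl logs -f deployment/chaoskube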

After several hours of this testing, my cluster environment remained consistent and kept working stably.

This concludes the walkthrough of deploying a highly available PostgreSQL cluster in Kubernetes. I hope the above content is helpful; if you found the article useful, feel free to share it.
