How to run highly available WordPress and MySQL on Kubernetes

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains how to run highly available WordPress and MySQL on Kubernetes. The method is simple, fast, and practical — let's walk through it step by step.

Architecture Overview

Now let me briefly introduce the technologies we are going to use and their functions:

Storage of WordPress application files: NFS storage backed by GCE persistent disks

Database cluster: MySQL, with xtrabackup for replication and cloning

Application layer: the WordPress DockerHub image, mounted on NFS storage

Load balancer and network: Kubernetes-based load balancer and service network

The architecture is as follows:

Create storage classes, services, and ConfigMaps in Kubernetes

In Kubernetes, StatefulSets provide a way to define the order of pod initialization. We will use a StatefulSet for MySQL because it ensures that our data nodes have enough time to replicate records from preceding pods at startup. We configure the StatefulSet so that the MySQL primary starts before the replicas, which means that when we scale up, a clone can be sent directly from the primary to the new replica.

First, we need to create a persistent volume storage class and a ConfigMap to apply the primary/replica configuration as needed.

We use persistent volumes so that the data in the database is not tied to any particular pod in the cluster. This prevents the database from losing data when the MySQL primary pod is lost: when that happens, a new primary can reconnect to the replicas with xtrabackup and copy the data from a replica back to the primary. MySQL replication handles primary-to-replica replication, while xtrabackup handles replica-to-primary cloning.

To allocate persistent volumes dynamically, we create a storage class backed by GCE persistent disks. (Kubernetes provides provisioners for many other persistent volume types as well.)
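A minimal sketch of such a storage class using the GCE persistent disk provisioner (the class name, disk type, and zone here are assumptions, not taken from the original article):

```yaml
# storage-class.yaml -- dynamic volume provisioning backed by GCE
# persistent disks; adjust type/zone to your GKE cluster
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard      # standard (non-SSD) persistent disk
  zone: us-central1-a    # placeholder: use your cluster's zone
```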

Create the class and deploy it with: $ kubectl create -f storage-class.yaml

Next, we create a ConfigMap that specifies some of the variables set in the MySQL configuration file. These settings are picked up by the pods themselves, and the ConfigMap also gives us a convenient place to manage potential configuration variables. Create a YAML file named mysql-configmap.yaml to hold the configuration, as follows:
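A sketch of what mysql-configmap.yaml might contain, following the stock Kubernetes replicated-MySQL example (the exact my.cnf settings are assumptions):

```yaml
# mysql-configmap.yaml -- per-role my.cnf fragments; the init container
# on each pod picks the fragment matching its role
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Applied only on the primary: enable the binary log for replication.
    [mysqld]
    log-bin
  slave.cnf: |
    # Applied only on replicas: reject writes from ordinary clients.
    [mysqld]
    super-read-only
```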

Create the ConfigMap and deploy it with: $ kubectl create -f mysql-configmap.yaml

Next we set up the services, using mysql-services.yaml, so that the MySQL pods can communicate with each other and our WordPress pods can talk to MySQL. This also starts the service load balancer for the MySQL service.

With this service declaration, we lay the groundwork for a cluster of MySQL instances with multiple writers and multiple readers. This configuration is necessary because every WordPress instance may write to the database, so each node must be ready to read and write.
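A sketch of what mysql-services.yaml could look like: a headless service that gives each StatefulSet pod a stable DNS name, plus a normal service that load-balances reads across all pods (the service names follow the stock Kubernetes MySQL example and are assumptions here):

```yaml
# mysql-services.yaml -- headless service for StatefulSet pod DNS,
# plus a cluster-IP service that balances connections across pods
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None        # headless: gives mysql-0, mysql-1, ... stable DNS names
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql           # load-balances across all MySQL pods
```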

Execute $ kubectl create -f mysql-services.yaml to create the above services.

So far, we have created a storage class for volume claims that hands persistent disks to any container that requests them, configured a ConfigMap that sets variables in the MySQL configuration file, and set up a network-layer service that load-balances requests to the MySQL servers. All of this is just scaffolding for the StatefulSet in which the MySQL servers actually run, which we explore next.

Configure MySQL with a StatefulSet

In this section, we will write the YAML configuration for the MySQL instances managed by the StatefulSet. Our StatefulSet will:

Create three pods and register them with the MySQL service.

Define each pod according to the following template:

Create an initialization container for the MySQL server, named init-mysql

Use the mysql:5.7 image for this container

Run a bash script to start xtrabackup

Mount two new volumes for the configuration file and configmap

Create a second initialization container, named clone-mysql

Use the xtrabackup:1.0 image from the Google Container Registry for this container

Run a bash script to clone the existing xtrabackup data from the previous sibling pod

Mount the data and configuration files on two new volumes

This container effectively hosts the cloned data so that new replica containers can obtain it

Create the main containers for the MySQL server:

Create a replica MySQL container and configure it to connect to the MySQL primary

Create a replica xtrabackup container and configure it to connect to the xtrabackup primary

Create a volume claim template describing each volume, each a 10GB persistent disk

The following configuration file defines the behavior of the primary and replica nodes of the MySQL cluster, provides the bash scripts that run the replica clients, and ensures the primary node is healthy before cloning. The replicas and the primary each get their own 10GB volume, requested from the persistent volume storage class we defined earlier.
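An abridged sketch of what mysql-statefulset.yaml looks like, structured along the steps listed above. The init and clone bash scripts are elided ("..."), and the image names follow the stock Kubernetes replicated-MySQL example, so treat the details as assumptions:

```yaml
# mysql-statefulset.yaml (abridged) -- three MySQL pods, ordered startup
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql           # the headless service defined earlier
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql       # generate server-id and pick primary/replica cnf
        image: mysql:5.7
        command: ["bash", "-c", "..."]
        volumeMounts:
        - {name: conf, mountPath: /mnt/conf.d}
        - {name: config-map, mountPath: /mnt/config-map}
      - name: clone-mysql      # clone data from the previous sibling pod
        image: gcr.io/google-samples/xtrabackup:1.0
        command: ["bash", "-c", "..."]
        volumeMounts:
        - {name: data, mountPath: /var/lib/mysql, subPath: mysql}
        - {name: conf, mountPath: /etc/mysql/conf.d}
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - {name: mysql, containerPort: 3306}
        volumeMounts:
        - {name: data, mountPath: /var/lib/mysql, subPath: mysql}
        - {name: conf, mountPath: /etc/mysql/conf.d}
      - name: xtrabackup       # serves clones to siblings, drives replication
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - {name: xtrabackup, containerPort: 3307}
        command: ["bash", "-c", "..."]
        volumeMounts:
        - {name: data, mountPath: /var/lib/mysql, subPath: mysql}
        - {name: conf, mountPath: /etc/mysql/conf.d}
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql          # the ConfigMap created earlier
  volumeClaimTemplates:        # one 10GB disk per pod, from our storage class
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```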

Save the file as mysql-statefulset.yaml, run $ kubectl create -f mysql-statefulset.yaml, and let Kubernetes deploy your database.

Now when you run $ kubectl get pods, you should see three pods starting or ready, each with two containers. The primary pod is mysql-0, and the replica pods are mysql-1 and mysql-2.

Give the pods a few minutes to make sure the xtrabackup service is properly synchronized between them, then deploy WordPress.

You can check the logs of individual containers to make sure no error messages are thrown. The command is $ kubectl logs -f <pod-name> -c <container-name>

The xtrabackup container on the primary should show two connections from the replicas, and no errors should appear in the logs.

Deploy highly available WordPress

The final step in the process is to deploy our WordPress pods to the cluster. To do this, we need to define a service and a deployment for WordPress.

To make WordPress highly available, we want each running container to be fully replaceable, meaning we can terminate one and start another without any change to the data or service availability. We also want to tolerate the failure of at least one container, with a redundant container ready to pick up the load.

WordPress stores important site data in the application directory /var/www/html. For two WordPress instances to serve the same site, this folder must contain identical data.

When running highly available WordPress, we need to share the /var/www/html folder between instances, so we define an NFS service as the mount point for these volumes.

The following is the configuration for setting up the NFS service:
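A sketch of what nfs.yaml might look like: a single NFS server pod backed by a GCE persistent disk, with a service in front of it (the container image and the disk name are assumptions based on the classic Kubernetes NFS example):

```yaml
# nfs.yaml -- single-pod NFS server exporting a GCE persistent disk
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - {name: nfs, containerPort: 2049}
        - {name: mountd, containerPort: 20048}
        - {name: rpcbind, containerPort: 111}
        securityContext:
          privileged: true   # required by the in-container NFS daemon
        volumeMounts:
        - name: nfs-export
          mountPath: /exports
      volumes:
      - name: nfs-export
        gcePersistentDisk:
          pdName: nfs-disk   # placeholder: a pre-created GCE disk
          fsType: ext4
---
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
  - {name: nfs, port: 2049}
  - {name: mountd, port: 20048}
  - {name: rpcbind, port: 111}
  selector:
    role: nfs-server
```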

Deploy the NFS service with $ kubectl create -f nfs.yaml. Then run $ kubectl describe services nfs-server to get its IP address, which we will need later. Note: in the future we could reference the service by name instead, but for now you need to hard-code the IP address.

We now create a persistent volume claim that maps to the NFS service we just created, and attach the volume to the WordPress pods at /var/www/html, where WordPress is installed. The installation and environment of every WordPress pod in the cluster is preserved here. With this configuration, we can start and tear down any WordPress node and the data remains. Because the NFS service uses the physical volume persistently, the volume is retained and will not be recycled or reallocated.
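A sketch of what wordpress.yaml could contain: an NFS-backed persistent volume and claim, the WordPress deployment mounting it at /var/www/html, and a LoadBalancer service. The hard-coded NFS server IP and the WORDPRESS_DB_HOST value are placeholders/assumptions:

```yaml
# wordpress.yaml -- NFS-backed shared volume plus the WordPress deployment
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteMany"]   # many pods mount the same volume
  nfs:
    server: 10.0.0.1               # placeholder: the nfs-server service IP
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""             # bind to the pre-created NFS volume above
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4.9-apache
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql-read        # assumption: the read service defined earlier
        ports:
        - containerPort: 80
        volumeMounts:
        - name: wordpress-home
          mountPath: /var/www/html # shared site directory on NFS
      volumes:
      - name: wordpress-home
        persistentVolumeClaim:
          claimName: nfs
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: LoadBalancer               # exposes an EXTERNAL-IP
  ports:
  - port: 80
  selector:
    app: wordpress
```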

Use $ kubectl create -f wordpress.yaml to deploy the WordPress instance.

The default deployment runs only one WordPress instance; you can scale out the number of WordPress instances with $ kubectl scale --replicas=<N> deployment/wordpress.

To get the address of the WordPress service's load balancer, run $ kubectl get services wordpress, take the EXTERNAL-IP field from the output, and navigate to it.

Elasticity test

OK, now that we have deployed the services, let's start tearing them down and see how our high-availability architecture handles the chaos. The only remaining single point of failure in this deployment is the NFS service. You can test any of the other services to see how the application responds. Right now I have three replicas of the WordPress service running, plus one primary and two replica nodes in the MySQL service.

First, let's kill all but one WordPress node and see how the application responds: $ kubectl scale --replicas=1 deployment/wordpress

We should now see the number of deployed WordPress pods drop: $ kubectl get pods

You should see the WordPress pod count change to 1/1.

Visit the WordPress service IP and you will see the same site and database as before.

To scale back up, use $ kubectl scale --replicas=3 deployment/wordpress.

Once again, we can see that the deployment returns to three instances.

Let's test the MySQL StatefulSet. We scale down the number of replicas with: $ kubectl scale statefulsets mysql --replicas=1

We will see both replicas terminate. If the primary node were lost at this point, its data would be preserved on the GCE persistent disk; however, the data would have to be recovered from disk manually.

If all three MySQL nodes go down, replication cannot resume when new nodes come up. However, if only the primary node fails, a new primary is started automatically, and the data is re-established from the replicas via xtrabackup. Therefore, when running a production database, I do not recommend a replication factor of less than 3.

At this point, I believe you have a better understanding of how to run highly available WordPress and MySQL on Kubernetes. You might as well try it out in practice, and follow us to keep learning!
