
How to run highly available WordPress and MySQL on Kubernetes


In this article, the editor shares how to run highly available WordPress and MySQL on Kubernetes. Most readers may not be familiar with the topic, so the article is shared here for reference; I hope you learn a lot from it. Let's get started!

Run highly available WordPress and MySQL on Kubernetes

WordPress is the mainstream platform for editing and publishing Web content. In this tutorial, I will step through how to use Kubernetes to build a high availability (HA) WordPress deployment.

WordPress consists of two main components: the WordPress PHP server and a database that stores user information, posts, and site data. We need to make both components fault tolerant and highly available.

Running highly available services is difficult when hardware and addresses are constantly changing, and such setups are hard to maintain. With Kubernetes and its powerful networking components, we can deploy a highly available WordPress site and MySQL database without (almost) typing a single IP address.

In this tutorial, I'll show you how to create storage classes, services, config maps, and StatefulSets in Kubernetes, how to run highly available MySQL, and how to mount a highly available WordPress cluster onto the database service. If you don't already have a Kubernetes cluster, you can easily spin one up on Amazon, Google, or Azure, or use Rancher Kubernetes Engine (RKE) on any servers.
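If you take the RKE route, a minimal cluster definition is enough to stand one up. Here is a sketch, assuming RKE is installed and you have SSH access to the node; the address and user below are placeholders you would replace with your own:

# cluster.yml (minimal sketch; address and user are placeholders)
nodes:
  - address: 203.0.113.10
    user: ubuntu
    role: [controlplane, worker, etcd]

Then run $ rke up --config cluster.yml to provision the cluster.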

Architecture Overview

Now let me briefly introduce the technologies we are going to use and their functions:

Storage for the WordPress application files: NFS storage backed by a GCE persistent disk

Database cluster: MySQL with xtrabackup for parity

Application level: the WordPress DockerHub image mounted to the NFS storage

Load balancing and networking: Kubernetes-based load balancers and service networking

The architecture is as follows:

[Architecture diagram: WordPress pods on NFS storage, a replicated MySQL StatefulSet, and Kubernetes service load balancers]

Create storage classes, services, and config maps in K8s

In Kubernetes, StatefulSets provide a way to define the order of pod initialization. We will use a StatefulSet for MySQL because it ensures our data nodes have enough time to replicate records from preceding pods at startup. We configure this StatefulSet so that the MySQL master starts before the slave machines; that way, when we scale up, clones can be sent directly from master to slave.

First, we need to create a persistent volume storage class and a config map to apply the master/slave configuration as needed. We use persistent volumes so that the data in the database is not tied to any particular pod in the cluster. This protects the database from losing data when the MySQL master pod is lost; when that happens, it can reconnect to the slaves with xtrabackup and copy the data from a slave back to the master. MySQL replication handles master-to-slave replication, while xtrabackup handles slave-to-master replication.

To dynamically allocate persistent volumes, we create a storage class using GCE persistent disks; Kubernetes, however, offers storage classes for many kinds of persistent volume providers:

# storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a

Create the class and deploy it with the command: $ kubectl create -f storage-class.yaml
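As a quick sanity check (optional, and assuming kubectl is pointed at your cluster), confirm the class was registered before moving on:

$ kubectl get storageclass slow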

Next, we will create a configmap that specifies a few variables to set in the MySQL configuration files. These configurations are picked up by the pods themselves, but they also give us a convenient place to manage potential configuration variables.

Create a YAML file named mysql-configmap.yaml to hold this configuration, as follows:

# mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
    skip-host-cache
    skip-name-resolve
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    skip-host-cache
    skip-name-resolve

Create the configmap and deploy it with the command: $ kubectl create -f mysql-configmap.yaml
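To verify that the master.cnf and slave.cnf entries landed as expected (again, an optional check), inspect the object:

$ kubectl describe configmap mysql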

Next, we will set up the service so that MySQL pods can communicate with each other, and our WordPress pods can reach MySQL, using mysql-services.yaml. This also starts a service load balancer for the MySQL service.

# mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql

With this service declaration, we lay the groundwork for a multi-write, multi-read cluster of MySQL instances. This configuration is necessary because every WordPress instance may write to the database, so each node must be ready to read and write.

Execute the command $ kubectl create -f mysql-services.yaml to create the above service.
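Note that the service above is headless (clusterIP: None): it gives each StatefulSet member a stable DNS name (mysql-0.mysql, mysql-1.mysql, and so on) but does not itself balance load. In the upstream Kubernetes replicated-MySQL example that this setup follows, reads are typically balanced through a second, normal Service; here is a minimal sketch (the mysql-read name comes from that example, not from this article):

# mysql-read-service.yaml (sketch)
# Client service for connecting to any MySQL instance for reads.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql

Writes would still go to the master directly via its stable DNS name, mysql-0.mysql.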

So far, we have created the volume storage class that hands persistent disks to every container that requests one, the configmap that sets a few variables in the MySQL configuration files, and the network-layer service that load-balances requests to the MySQL servers. All of this is just scaffolding for the StatefulSet, where the MySQL servers actually run, which we will explore next.

Configure MySQL with a StatefulSet

In this section, we will write the YAML configuration file for the MySQL instances managed by the StatefulSet.

Let's first define our StatefulSet, which will:

1. Create three pods and register them with the MySQL service.

2. Define each pod according to the following template:

♢ Create an initialization container for the master MySQL server, named init-mysql.

♢ Use the mysql:5.7 image for this container.

♢ Run a bash script to set up xtrabackup.

♢ Mount two new volumes for the configuration file and the configmap.

3. Create an initialization container for the master MySQL server, named clone-mysql.

♢ Use the xtrabackup:1.0 image from the Google Cloud Registry for this container.

♢ Run a bash script to clone the existing xtrabackup data from the previous peer.

♢ Mount the data and configuration files on two new volumes.

♢ This container effectively hosts the cloned data so that new slave containers can pick it up.

4. Create the primary containers for the slave MySQL servers.

♢ Create a MySQL slave container and configure it to connect to the MySQL master.

♢ Create an xtrabackup sidecar container and configure it to connect to the xtrabackup master.

5. Create a volume claim template to describe each volume, each of which is a 10GB persistent disk.

The following configuration file defines the behavior of the master and slave nodes of the MySQL cluster: it provides the bash scripts that configure the slave clients and ensures the master node is functioning properly before cloning. The slave nodes and the master node each get their own 10GB volume, requested from the persistent volume storage class we defined earlier.

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: gcr.io/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.orig),\
                        MASTER_HOST='mysql-0.mysql',\
                        MASTER_USER='root',\
                        MASTER_PASSWORD='',\
                        MASTER_CONNECT_RETRY=10;\
                      START SLAVE;" || exit 1
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
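Save the StatefulSet (for example as mysql-statefulset.yaml; the filename is just this tutorial's convention), deploy it, and watch the pods start in order:

$ kubectl create -f mysql-statefulset.yaml
$ kubectl get pods -l app=mysql --watch

mysql-0 must be Running and Ready before mysql-1 is created, and likewise mysql-1 before mysql-2, which is exactly the ordered startup behavior the StatefulSet guarantees.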
