

# How to Run MySQL with Docker




This article explains how to run MySQL with Docker. Many people have questions about running MySQL in containers, so we have put together a simple, practical walkthrough that we hope will answer them. Follow along with the steps below.

## Docker Engine Swarm Mode

Running MySQL containers across multiple hosts adds a degree of complexity that depends on the clustering technology you choose.

Before trying to run MySQL in containers over a multi-host network, we first need to understand how the image works, how resources are allocated (disk, memory and CPU), how networking works (the overlay network driver, whether the default one or an alternative such as Flannel or Weave), and how fault tolerance is handled (how containers are relocated, failed over and load balanced).

All of these affect the overall operation, uptime and performance of the database. We recommend using an orchestration tool to make the Docker Engine cluster more manageable and scalable. The latest Docker Engine (version 1.12, released on July 14, 2016) includes Swarm mode, which natively manages a cluster of Docker Engines called a Swarm.

It is important to note that Docker Engine Swarm mode and Docker Swarm are two different projects, and although they work similarly, they have different installation steps.
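Before going further, it is worth confirming that each host runs a Swarm-capable engine. The following is only an illustrative check, assuming Docker is already installed on every host:

```
# Confirm the engine version (Swarm mode needs Docker 1.12 or later)
$ docker version --format '{{.Server.Version}}'

# Swarm should show as "inactive" before we initialize it in the next section
$ docker info | grep -i swarm
```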

Let's look at the preparatory work that needs to be done before we proceed.

The following ports must be opened on every host first (a firewall sketch follows the list):

2377 (TCP) - cluster management

7946 (TCP and UDP) - node communication

4789 (UDP) - overlay (VXLAN) network traffic
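How these ports get opened depends on your distribution. The snippet below is only a sketch for hosts using firewalld; the tool choice is an assumption, not part of the original setup:

```
# Run on every Docker host; adapt for ufw/iptables if firewalld is not used
firewall-cmd --permanent --add-port=2377/tcp                      # cluster management
firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp  # node communication
firewall-cmd --permanent --add-port=4789/udp                      # overlay (VXLAN) traffic
firewall-cmd --reload
```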

There are 2 types of nodes:

Manager node - responsible for performing the orchestration and cluster-management functions required to maintain the desired state of the Swarm. Manager nodes elect a single leader to carry out the orchestration tasks.

Worker node - responsible for receiving and executing tasks from the manager nodes. By default, a manager node is also a worker node, but you can configure it to perform management tasks only (see the drain example below).
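For example, once the Swarm cluster is built in the next section, a manager can be restricted to management tasks only by draining it. This is just a hedged sketch, using the docker1.local node name that appears later in this article:

```
# Stop scheduling service tasks on docker1.local (management functions continue)
$ docker node update --availability drain docker1.local

# Allow it to accept service tasks again
$ docker node update --availability active docker1.local
```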

In this article, we will deploy application containers on top of a load-balanced Galera Cluster, connected through an overlay network spanning three Docker hosts (docker1, docker2 and docker3), using Docker Engine Swarm mode as the orchestration tool.

## Building the Cluster

First, let's add the Docker nodes to the Swarm cluster. Swarm mode requires an odd number of manager nodes (more than one, of course) to maintain fault tolerance, so we will make all three nodes managers. Note that by default a manager node also acts as a worker node.

First, initialize Swarm mode on docker1. Once this completes, the node becomes a manager and the current leader:

```
[root@docker1]$ docker swarm init --advertise-addr 192.168.55.111
Swarm initialized: current node (6r22rd71wi59ejaeh7gmq3rge) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-16kit6dksvrqilgptjg5pvu0tvo5qfs8uczjq458lf9mul41hc-dzvgu0h4qngfgihz4fv0855bo \
    192.168.55.111:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```

We also want the other two nodes to join as managers, so generate the manager join command:

```
[docker1]$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-16kit6dksvrqilgptjg5pvu0tvo5qfs8uczjq458lf9mul41hc-7fd1an5iucy4poa4g1bnav0pt \
    192.168.55.111:2377
```

On docker2 and docker3, run the following command to register the nodes:

```
$ docker swarm join \
    --token SWMTKN-1-16kit6dksvrqilgptjg5pvu0tvo5qfs8uczjq458lf9mul41hc-7fd1an5iucy4poa4g1bnav0pt \
    192.168.55.111:2377
```

Verify that all nodes have been added correctly:

```
[docker1]$ docker node ls
ID                           HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
5w9kycb046p9aj6yk8l365esh    docker3.local  Ready   Active        Reachable
6r22rd71wi59ejaeh7gmq3rge *  docker1.local  Ready   Active        Leader
awlh9cduvbdo58znra7uyuq1n    docker2.local  Ready   Active        Reachable
```

At this point, docker1.local acts as the cluster leader.

## Overlay Network

The only way to connect containers running on different hosts is through an overlay network. You can think of it as a container network built on top of another network (in this case, the physical host network). Docker Swarm mode provides a default overlay network that implements a VxLAN-based solution with the help of libnetwork and libkv. You can also choose another overlay network driver such as Flannel, Calico or Weave, but that requires additional installation steps.

In Docker Engine Swarm mode, you can create an overlay network from a manager node alone; it does not require an additional key-value store such as etcd, consul or Zookeeper.

The Swarm makes the overlay network available only to the nodes in the cluster that need it: when you create a service that uses the overlay network, the manager node automatically extends the network to the nodes that run the service's tasks.

Let's create an overlay network for our containers. We will deploy the Percona XtraDB Cluster and application containers on each Docker host to achieve fault tolerance, and these containers must run in the same overlay network so they can communicate with each other.

Here we name the network "mynet". You can only create it from a manager node:

```
[docker1]$ docker network create --driver overlay mynet
```

Let's take a look at our existing networks:

```
[docker1]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
213ec94de6c9        bridge              bridge              local
bac2a639e835        docker_gwbridge     bridge              local
5b3ba00f72c7        host                host                local
03wvlqw41e9g        ingress             overlay             swarm
9iy6k0gqs35b        mynet               overlay             swarm
12835e9e75b9        none                null                local
```

There are now two overlay networks in the Swarm. The "mynet" network is the one we just created for the container deployment. The ingress overlay network is provided by default; the Swarm manager nodes use ingress load balancing to expose services outside the cluster.

## Deployment Using Services and Tasks

Next we will deploy the Galera Cluster containers through services and tasks. When you create a service, you specify which container image to use and which command to run inside the container. There are two types of services:

Replicated service - distributes a specified number of replica tasks among the nodes according to the scale you set, for example "--replicas 3".

Global service - schedules one task on every available node in the cluster, for example "--mode global". If you have seven Docker nodes in the Swarm, there will be a container on each of them (a short sketch follows).
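To make the difference concrete, here is a minimal sketch using a throwaway nginx image that is not part of this setup: the first service always keeps exactly three tasks, while the second runs one task on every node.

```
# Replicated: the scheduler maintains exactly 3 tasks across the cluster
$ docker service create --name web-replicated --replicas 3 -p 8080:80 nginx

# Global: one task per available node; new nodes automatically get one
$ docker service create --name web-global --mode global nginx
```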

Docker Swarm mode has limited capabilities for managing persistent data storage. When a node fails, the manager discards the affected containers and creates new ones to maintain the desired state. Since a container is discarded when it goes down, its data volumes are lost as well. Fortunately, Galera Cluster allows each MySQL container to be automatically provisioned with state/data when it joins the cluster.

## Deploying Key-Value Storage

The Docker image we use here comes from Percona-Lab. It requires each MySQL container to have access to a key-value store (only etcd is supported) for IP address discovery during cluster initialization and bootstrap. Each container looks up the other IP addresses in etcd so it can start MySQL with the correct wsrep_cluster_address; otherwise, the first container uses gcomm:// as the bootstrap address.

First, deploy the etcd service. The etcd image used here (elcolio/etcd) requires a discovery URL that reflects the number of etcd nodes to be deployed. In this case, we set up a single etcd container, so we generate the discovery URL with:

```
[docker1]$ curl -w "\n" 'https://discovery.etcd.io/new?size=1'
https://discovery.etcd.io/a293d6cc552a66e68f4b5e52ef163d68
```

Then use the generated URL as the "-discovery" value when creating the etcd service:

```
[docker1]$ docker service create \
    --name etcd \
    --replicas 1 \
    --network mynet \
    -p 2379:2379 \
    -p 2380:2380 \
    -p 4001:4001 \
    -p 7001:7001 \
    elcolio/etcd:latest \
    -name etcd \
    -discovery=https://discovery.etcd.io/a293d6cc552a66e68f4b5e52ef163d68
```

Docker Swarm mode orchestrates the container deployment onto one of the Docker hosts.

Retrieve the etcd service's virtual IP address; we will need it when deploying the cluster in the next step:

```
[docker1]$ docker service inspect etcd -f "{{.Endpoint.VirtualIPs}}"
[{03wvlqw41e9go8li34z2u1t4p 10.255.0.5/16} {9iy6k0gqs35bn541pr31mly59 10.0.0.2/24}]
```

At this point, our architecture consists of a single-replica etcd service running on the mynet overlay network.

## Deploying the Database Cluster

Now deploy the Galera (Percona XtraDB Cluster) containers, passing the etcd virtual IP address retrieved above as the discovery service address for the image:

```
[docker1]$ docker service create \
    --name mysql-galera \
    --replicas 3 \
    -p 3306:3306 \
    --network mynet \
    --env MYSQL_ROOT_PASSWORD=mypassword \
    --env DISCOVERY_SERVICE=10.0.0.2:2379 \
    --env XTRABACKUP_PASSWORD=mypassword \
    --env CLUSTER_NAME=galera \
    perconalab/percona-xtradb-cluster:5.6
```

The deployment takes some time, since the image has to be downloaded to the assigned worker/manager nodes. You can verify the deployment status with:

```
[docker1]$ docker service ps mysql-galera
ID                         NAME            IMAGE                                  NODE           DESIRED STATE  CURRENT STATE          ERROR
8wbyzwr2x5buxrhslvrlp2uy7  mysql-galera.1  perconalab/percona-xtradb-cluster:5.6  docker1.local  Running        Running 3 minutes ago
0xhddwx5jzgw8fxrpj2lhcqeq  mysql-galera.2  perconalab/percona-xtradb-cluster:5.6  docker3.local  Running        Running 2 minutes ago
f2ma6enkb8xi26f9mo06oj2fh  mysql-galera.3  perconalab/percona-xtradb-cluster:5.6  docker2.local  Running        Running 2 minutes ago
```
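Once all three tasks report "Running", a quick way to confirm that the Galera members actually joined each other is to check the cluster size. This is only a hedged check, assuming the published port 3306, the root password set above and a MySQL client available on the host:

```
$ mysql -uroot -pmypassword -h127.0.0.1 -P3306 \
        -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"
# A healthy three-node cluster reports wsrep_cluster_size = 3
```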

As you can see, the mysql-galera service is now running. Listing all existing services:

```
[docker1]$ docker service ls
ID            NAME          REPLICAS  IMAGE                                  COMMAND
1m9ygovv9zui  mysql-galera  3         perconalab/percona-xtradb-cluster:5.6
au1w5qkez9d4  etcd          1         elcolio/etcd:latest                    -name etcd -discovery=https://discovery.etcd.io/a293d6cc552a66e68f4b5e52ef163d68
```

Swarm mode includes an internal DNS component that automatically assigns a DNS entry to each service in the Swarm, so a service name resolves to its virtual IP address:

```
[docker2]$ docker exec -it $(docker ps | grep etcd | awk {'print $1'}) ping mysql-galera
PING mysql-galera (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=0.078 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=0.179 ms
```

Or use the "docker service inspect" command directly to retrieve the virtual IP address:

```
[docker1]# docker service inspect mysql-galera -f "{{.Endpoint.VirtualIPs}}"
[{03wvlqw41e9go8li34z2u1t4p 10.255.0.7/16} {9iy6k0gqs35bn541pr31mly59 10.0.0.4/24}]
```

So far, our architecture consists of the etcd service and the three-node Galera cluster running on the overlay network.

## Deploying the Application

Finally, create the application service, passing the MySQL service name (mysql-galera) as the database host:

```
[docker1]$ docker service create \
    --name wordpress \
    --replicas 2 \
    -p 80:80 \
    --network mynet \
    --env WORDPRESS_DB_HOST=mysql-galera \
    --env WORDPRESS_DB_USER=root \
    --env WORDPRESS_DB_PASSWORD=mypassword \
    wordpress
```

After the deployment is complete, we can then retrieve the virtual IP address of the wordpress service through the "docker service inspect" command:

```
[docker1]# docker service inspect wordpress -f "{{.Endpoint.VirtualIPs}}"
[{p3wvtyw12e9ro8jz34t9u1t4w 10.255.0.11/16} {kpv8e0fqs95by541pr31jly48 10.0.0.8/24}]
```

At this point, our distributed application and database setup is fully deployed in Docker containers, with the WordPress service running alongside the Galera cluster and etcd on the same overlay network.

## Accessing Services and Load Balancing

At this point, the following ports are published on every Docker node in the cluster (because of the -p flag in each "docker service create" command), whether or not the node is currently running a task for that service:

etcd - 2379, 2380, 4001, 7001

MySQL - 3306

HTTP - 80

If we connect repeatedly to the published port in a simple loop, we can see that the MySQL service load-balances requests across the containers:

```
[docker1]$ while true; do mysql -uroot -pmypassword -h127.0.0.1 -P3306 -NBe 'select @@wsrep_node_address'; sleep 1; done
10.255.0.10
10.255.0.8
10.255.0.9
10.255.0.10
10.255.0.8
10.255.0.9
10.255.0.10
10.255.0.8
10.255.0.10
10.255.0.10
^C
```

Currently, the Swarm manager nodes handle load balancing internally, and the load-balancing algorithm cannot be configured. From there, an external load balancer can route outside traffic to the Docker nodes; if any Docker node fails, the service is rescheduled onto another available node.
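A minimal way to see this behavior, assuming the host IP and node names used earlier in this article: any node's published port answers thanks to the routing mesh, and draining a node makes Swarm reschedule its tasks elsewhere.

```
# The WordPress port answers on any node, even one not running a wordpress task
$ curl -sI http://192.168.55.111/ | head -n1

# Simulate losing a node and watch the tasks get rescheduled
$ docker node update --availability drain docker2.local
$ docker service ps wordpress
```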

This concludes our walkthrough of how to run MySQL with Docker. We hope it has answered your questions; theory works best alongside practice, so go ahead and try it yourself.



