This article introduces how to configure a MySQL cluster. Many people run into difficulties when doing this in practice, so the following walkthrough shows how to handle these situations step by step.
I. Introduction
==========
The purpose of this document is to describe how to install and configure a MySQL cluster based on two servers, so that MySQL can keep running even if either server fails or goes down.
Be careful!
Although this is a MySQL cluster based on two servers, an additional third server is required as the management node. This server can in principle be shut down once the cluster has started, but doing so is not recommended. While it is theoretically possible to build a MySQL cluster with only two servers, in such an architecture the cluster can no longer work properly once one server goes down, which defeats the purpose of having a cluster at all. For this reason, a third server is needed to run as the management node.
In addition, readers who do not have three physical servers available can experiment in VMware or other virtual machines.
Assume the three servers are as follows:
Server1: mysql1.vmtest.net 192.168.0.1
Server2: mysql2.vmtest.net 192.168.0.2
Server3: mysql3.vmtest.net 192.168.0.3
Server1 and Server2 are the servers that actually make up the MySQL cluster. Server3, as the management node, is far less demanding: it only needs minor adjustments to its system and does not need a full MySQL installation, so a lower-spec machine is sufficient, and it can run other services at the same time.
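If you want the hostnames above to resolve on every machine, one option is to add them to /etc/hosts on all three servers. The entries below are only a sketch based on the example names and addresses used in this document; adjust them to your own environment.
# vi /etc/hosts
192.168.0.1 mysql1.vmtest.net mysql1
192.168.0.2 mysql2.vmtest.net mysql2
192.168.0.3 mysql3.vmtest.net mysql3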
II. Install MySQL on Server1 and Server2
==========
Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz.
Note: it must be the Max version of MySQL; the Standard version does not support cluster deployment!
The following steps need to be done on Server1 and Server2 respectively
# mv mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz /usr/local/
# cd /usr/local/
# groupadd mysql
# useradd -g mysql mysql
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm -f mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# mv mysql-max-4.1.9-pc-linux-gnu-i686 mysql
# cd mysql
# scripts/mysql_install_db --user=mysql
# chown -R root .
# chown -R mysql data
# chgrp -R mysql .
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
# chmod +x /etc/rc.d/init.d/mysqld
# chkconfig --add mysqld
Do not start MySQL at this time!
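Before continuing, it may be worth confirming that the Max build really contains the cluster binaries and that the data directory was created. A minimal sanity check, assuming the paths used above, could look like this:
# ls /usr/local/mysql/bin/ndbd /usr/local/mysql/bin/ndb_mgmd
# ls -ld /usr/local/mysql/data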
III. Install and configure the management node server (Server3)
==========
As the management node server, Server3 needs only two files, ndb_mgm and ndb_mgmd. Extract them from mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz:
# mkdir /usr/src/mysql-mgm
# cd /usr/src/mysql-mgm
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# cd mysql-max-4.1.9-pc-linux-gnu-i686
# mv bin/ndb_mgm .
# mv bin/ndb_mgmd .
# chmod +x ndb_mg*
# mv ndb_mg* /usr/bin/
# cd
# rm -rf /usr/src/mysql-mgm
Now start to create a configuration file for this management node server:
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# vi config.ini
Add the following to config.ini:
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management Server
[NDB_MGMD]
HostName=192.168.0.3 # IP address of the management node server Server3
# Storage Engines
[NDBD]
HostName=192.168.0.1 # IP address of the MySQL cluster Server1
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2 # IP address of the MySQL cluster Server2
DataDir=/var/lib/mysql-cluster
# The two [MYSQLD] sections below could list the hostnames of Server1 and Server2.
# However, to make it quicker to replace a server in the cluster, it is recommended to leave them blank;
# otherwise the configuration has to be changed whenever a server is replaced.
[MYSQLD]
[MYSQLD]
After saving and exiting, start the management node server Server3:
# ndb_mgmd
Note that ndb_mgmd is only the management node service, not the management client, so you will not see any output after it starts.
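Since ndb_mgmd prints nothing, one way to confirm that it is actually running is to look for the process and for its default listening port 1186. This is just a sanity check, assuming the standard process and network tools are installed:
# ps aux | grep ndb_mgmd
# netstat -ntlp | grep 1186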
IV. Configure the cluster servers and start MySQL
==========
The following changes are required on both Server1 and Server2:
# vi /etc/my.cnf
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3   # IP address of Server3
[mysql_cluster]
ndb-connectstring=192.168.0.3   # IP address of Server3
After saving and exiting, establish a data directory and start MySQL:
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# /usr/local/mysql/bin/ndbd --initial
# /etc/rc.d/init.d/mysqld start
You can add /usr/local/mysql/bin/ndbd to /etc/rc.local so that it starts at boot, as in the sketch below.
Note: the --initial parameter is needed only the first time you start ndbd, or after you change Server3's config.ini!
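If you do want ndbd started from /etc/rc.local, one possible way to append it (a sketch only, assuming your distribution executes /etc/rc.local at boot; note that --initial is deliberately not used here) is:
# echo '/usr/local/mysql/bin/ndbd' >> /etc/rc.local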
V. Check the working status
==========
Go back to the management node server Server3 and start the management terminal:
# /usr/bin/ndb_mgm
Type the show command to view the current working status (the following is an example of the status output):
[root@mysql3 root]# /usr/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.1  (Version: 4.1.9, Nodegroup: 0, Master)
id=3    @192.168.0.2  (Version: 4.1.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.3  (Version: 4.1.9)

[mysqld(API)]   2 node(s)
id=4    (Version: 4.1.9)
id=5    (Version: 4.1.9)

ndb_mgm>
If there are no problems above, start testing MySQL now:
Note that this document does not set a root password for MySQL. It is recommended that you set your own MySQL root password for Server1 and Server2.
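If you prefer to set the root password first, mysqladmin from the same installation can be used on each of Server1 and Server2; new_password below is just a placeholder for a password of your own:
# /usr/local/mysql/bin/mysqladmin -u root password 'new_password'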
On Server1:
# /usr/local/mysql/bin/mysql -u root -p
> use test;
> CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
> INSERT INTO ctest () VALUES (1);
> SELECT * FROM ctest;
You should see a result of 1 row returned (the value 1).
If this works, switch to Server2 and repeat the same test to observe the effect. If that also succeeds, run an INSERT on Server2 and switch back to Server1 to check that the new row is visible there as well.
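A quick way to repeat the check on either server without opening the interactive client is to pipe the query into mysql (a sketch, assuming the same binary path and the test database used above):
# echo 'SELECT * FROM test.ctest;' | /usr/local/mysql/bin/mysql -u root -p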
If there is no problem, then congratulations on your success!
VI. Destructive testing
==========
Unplug the network cable from Server1 or Server2 and check whether the other cluster server still works properly (you can test it with a SELECT query). After the test, plug the network cable back in.
If you do not have access to the physical servers and cannot unplug a network cable, you can test it like this instead:
On Server1 or Server2:
# ps aux | grep ndbd
You will see all the ndbd process information:
root      5578  0.0  0.3   6220   1964 ?      S    03:14   0:00 ndbd
root      5579  0.0 20.4 492072 102828 ?      R    03:14   0:04 ndbd
root     23532  0.0  0.1   3680    684 pts/1  S    07:59   0:00 grep ndbd
Then kill the ndbd processes to break this MySQL cluster server:
# kill -9 5578 5579
Then run a SELECT query on the other cluster server to test it, and execute the show command in the management client on the management node server to see the status of the broken node.
After the test is complete, simply restart the ndbd process on the broken server:
# ndbd
Be careful! As mentioned earlier, there is no need to add the --initial parameter this time!
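To watch the node rejoin without opening the interactive management client each time, the management client can also run a single command and exit, for example (assuming your build of ndb_mgm accepts the -e option):
# /usr/bin/ndb_mgm -e show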
At this point, the MySQL cluster is configured!