
# How to configure MySQL Cluster


This article explains how to configure MySQL Cluster. The content is straightforward and easy to follow; work through the steps below and you will learn how to set up a MySQL Cluster of your own.

## MySQL Cluster configuration plan

Management node (1): 10.10.10.9
SQL nodes (2): 10.10.10.10, 10.10.10.11
Data nodes (2): 10.10.10.10, 10.10.10.11

## Configuring host IPs

[root@mysql9 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=mysql9    ## these are cloned virtual machines, so the hostname must be changed on each one

[root@mysql9 ~]# cat /etc/hosts
127.0.0.1 localhost mysql9    ## cloned virtual machines again: give each machine its own hostname on this line
10.10.10.9 mysql9
10.10.10.10 mysql10
10.10.10.11 mysql11

## Perform the steps above on all servers!

## Disabling SELinux

Edit the /etc/selinux/config file, change SELINUX=enforcing to SELINUX=disabled, and reboot the machine.

Check the SELinux status:

[root@CentOS ~]# getenforce
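If you would rather not reboot right away, SELinux can also be switched to permissive mode for the current session (a temporary change; the edit to /etc/selinux/config above still takes effect at the next boot):

[root@CentOS ~]# setenforce 0
[root@CentOS ~]# getenforce
Permissive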

## Shutting down iptables

## I did not realize at first that the firewall was the culprit, and it cost me a lot of time. If iptables is left running, starting a data node fails with:

-bash-4.1$ ndbd --initial
Unable to connect with connect string: nodeid=0,10.10.10.9:1186
Retrying every 5 seconds. Attempts left: 12 11 10 9 8 7 6 5 4 3 2 1, failed.
2015-12-18 23:11:55 [ndbd] ERROR -- Could not connect to management server, error: ''

## Disable it as follows:

[root@mysql9 ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@mysql9 ~]# chkconfig iptables off

## Perform the steps above on all servers!
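Disabling the firewall outright is the simplest fix. As a narrower alternative (a sketch, assuming the default management port 1186; data nodes use dynamically allocated ports unless pinned with ServerPort in config.ini), you could open only the cluster ports instead:

[root@mysql9 ~]# iptables -I INPUT -p tcp --dport 1186 -j ACCEPT    ## management node port
[root@mysql9 ~]# iptables -I INPUT -p tcp --dport 3306 -j ACCEPT    ## SQL node port
[root@mysql9 ~]# service iptables save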

## Adding the mysql user and group

[root@db ~]# groupadd mysql
[root@db ~]# useradd -r -g mysql mysql
[root@localhost data]# passwd mysql
Changing password for user mysql.
New password:
BAD PASSWORD: too short
BAD PASSWORD: too simple
Retype new password:
passwd: all authentication tokens updated successfully.

## Perform the steps above on all servers!

## Installing and configuring MySQL Cluster

[root@CentOS tmp]# cd /tmp/
[root@CentOS tmp]# ll mysql-cluster-gpl-7.4.8-linux-glibc2.5-i686.tar.gz
-rw-r--r--. 1 root root 472807612 Dec 18 01:11 mysql-cluster-gpl-7.4.8-linux-glibc2.5-i686.tar.gz
[root@CentOS tmp]# tar zxf mysql-cluster-gpl-7.4.8-linux-glibc2.5-i686.tar.gz -C /usr/local/
[root@CentOS tmp]# cd /usr/local/
[root@CentOS local]# mv mysql-cluster-gpl-7.4.8-linux-glibc2.5-i686/ mysql
[root@CentOS local]# chown -R mysql:mysql /usr/local/mysql
[root@CentOS local]# vi /etc/profile
export PATH=/usr/local/mysql/bin:$PATH
[root@localhost bin]# source /etc/profile

## Perform the steps above on all servers!

## Initializing the database

[root@CentOS local]# mkdir -p /data/3306
[root@CentOS local]# chown -R mysql:mysql /data
[root@CentOS local]# su - mysql
su: warning: cannot change to directory /home/mysql: No such file or directory
-bash-4.1$ mkdir -p /home/mysql
mkdir: cannot create directory "/home/mysql": Permission denied
-bash-4.1$ exit
logout
[root@CentOS local]# mkdir -p /home/mysql
[root@CentOS local]# su - mysql
-bash-4.1$ cd /usr/local/mysql
-bash-4.1$ ./scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/data/3306
-bash-4.1$ cp /usr/local/mysql/support-files/my-default.cnf /etc/my.cnf
-bash-4.1$ vi /etc/my.cnf

[client]
port = 3306
socket = /data/3306/mysql.sock
default-character-set = utf8

[mysqld]
port = 3306
user = mysql
socket = /data/3306/mysql.sock
pid-file = /data/3306/mysql.pid
basedir = /usr/local/mysql
datadir = /data/3306
log-bin = /data/3306/mysql-bin
lower_case_table_names = 1
max_connections = 1000
character_set_server = utf8
character_set_client = utf8

[mysql]
default-character-set = utf8

[root@localhost mysql]# cd /usr/local/mysql
[root@localhost mysql]# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
[root@localhost mysql]# chmod 777 /etc/rc.d/init.d/mysqld
[root@CentOS mysql]# service mysqld start
Starting MySQL. [ OK ]
[root@CentOS mysql]# netstat -tulnp | grep mysql
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 2057/mysqld
[root@CentOS mysql]# service mysqld status
MySQL running (2057) [ OK ]
[root@CentOS mysql]# service mysqld stop
Shutting down MySQL.. [ OK ]

## Perform the steps above on all servers! Since these are virtual machines, I simply cloned the finished one.

## The management node does not need the mysqld binary, only the MySQL Cluster server program (ndb_mgmd) and the management client (ndb_mgm).

## Because the VMs are clones, the mysqld service is installed on the management node as well; it is simply never configured in the steps that follow.
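On a dedicated management host you therefore would not need the full installation at all. A minimal sketch (paths assumed to match the installation above):

[root@mysql9 ~]# cp /usr/local/mysql/bin/ndb_mgmd /usr/local/bin/    ## cluster management server
[root@mysql9 ~]# cp /usr/local/mysql/bin/ndb_mgm /usr/local/bin/     ## management client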

## Configuring the management node

## Execute as the mysql user!

(1) On server 10.10.10.9, create the directory mysql-cluster under /data, and create the configuration file config.ini inside it:

-bash-4.1$ cd /data/
-bash-4.1$ mkdir mysql-cluster
-bash-4.1$ cd mysql-cluster/
-bash-4.1$ touch config.ini

(2) For the test cluster environment used here, config.ini is configured as follows:

-bash-4.1$ vi config.ini

## For the options in this file, it is safest to put a space on both sides of each equals sign. I paid dearly here and wasted a great deal of time and energy:
## at one point I thought it would never come together, but it finally turned out that a malformed configuration file was what kept the cluster from starting later!

[ndbd default]
NoOfReplicas = 1    # number of replicas kept of each data node's data
DataMemory = 50M    # memory allocated for data on each data node (the M must be uppercase)
IndexMemory = 20M   # memory allocated for indexes on each data node

[ndb_mgmd]
# management node configuration
HostName = 10.10.10.9    # management node IP
DataDir = /data/mysql-cluster    # management node data directory; plenty of log information lands here once the node starts successfully

[ndbd]
HostName = 10.10.10.10    ## data node options
DataDir = /data/mysql-cluster/data

[ndbd]
HostName = 10.10.10.11    ## data node options
DataDir = /data/mysql-cluster/data

[mysqld]
HostName = 10.10.10.10    ## SQL node options

[mysqld]
HostName = 10.10.10.11    ## SQL node options

In the configuration file above there are several groups whose names are enclosed in "[]". We care most about three kinds of node groups, defined as follows:

- [NDB_MGMD]: the management node configuration; there can be only one.
- [NDBD DEFAULT]: the default configuration for every data node; options placed here need not be repeated in each node's own [NDBD] group. There can be only one.
- [NDBD]: the configuration of an individual data node; there can be several.
- [MYSQLD]: the configuration of the SQL nodes. You can list the IP address of each SQL node, or leave a group empty with no IP, meaning any address may connect. The number of [MYSQLD] groups is the total number of SQL-node connections the data nodes will accept.

Each node must have its own unique id. It can be set manually (for example "NodeId=2") or omitted, in which case ids are assigned automatically in the order the groups appear in the configuration file.
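For example, the management group and the first data node group above could be pinned to fixed ids like this (a sketch; the id values are illustrative):

[ndb_mgmd]
NodeId = 1
HostName = 10.10.10.9
DataDir = /data/mysql-cluster

[ndbd]
NodeId = 2
HostName = 10.10.10.10
DataDir = /data/mysql-cluster/data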

## Configuring the SQL nodes and data nodes

## A SQL node can be thought of as a traditional MySQL Server. A data node is likewise backed by a MySQL Server installation; you only need to set a few parameters in my.cnf.

## Because of the constraints of this environment, each SQL node and data node share one virtual machine, with a single mysqld service installed. That shared my.cnf is what gets modified below.

## Many sources note that SQL nodes and data nodes are no different from an ordinary mysqld service: to turn a plain mysqld into a SQL node plus data node, you only add a few parameters to my.cnf.

## Still, starting the data node does not invoke anything mysqld-related at all, so my understanding is that a data node does not strictly need the mysqld service installed; it only needs the commands in bin.

## Execute as the mysql user, on both 10.10.10.10 and 10.10.10.11!

Configuring the SQL and data nodes is very simple: just add the following to MySQL's configuration file (my.cnf); see the inline comments for the meaning of each parameter:

[root@CentOS ~]# chown mysql:mysql /etc/my.cnf
[root@CentOS ~]# su - mysql
-bash-4.1$ vi /etc/my.cnf

[mysqld]
ndbcluster    # run the NDB storage engine
ndb-connectstring=10.10.10.9    # locate the management node

[mysql_cluster]
ndb-connectstring=10.10.10.9    # locate the management node

[mysqld_safe]
log-error = /data/mysql-cluster/data/mysqld.log
pid-file = /data/mysql-cluster/data/mysqld.pid

## Create the data directory:

-bash-4.1$ mkdir -p /data/mysql-cluster/data

## Starting the cluster

The cluster can only run once every node has been started, and the nodes must be started in order: management node -> data nodes -> SQL nodes.

(1) On the management node (10.10.10.9), start the management node process from the system shell:

-bash-4.1$ pwd
/data/mysql-cluster
-bash-4.1$ ndb_mgmd -f ./config.ini
MySQL Cluster Management Server mysql-5.6.27 ndb-7.4.8
-bash-4.1$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 10.10.10.10)    ## "not connected" because the node has not been started yet
id=3 (not connected, accepting connect from 10.10.10.11)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.10.10.9 (mysql-5.6.27 ndb-7.4.8)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 10.10.10.10)    ## "not connected" because the node has not been started yet
id=5 (not connected, accepting connect from 10.10.10.11)

ndb_mgm> exit
-bash-4.1$ ps -ef | grep ndb
mysql 2312 1785 0 18:58 pts/1 00:00:00 ndb_mgm
mysql 2457 1 0 19:24 ? 00:00:03 ndb_mgmd -f ./config.ini
mysql 2495 1595 0 19:31 pts/0 00:00:00 grep ndb

(2) On each data node server (10.10.10.10 and 10.10.10.11 in this example), run the following command to start the ndbd process.

ndbd is the process that handles the data in NDB-engine tables; through it the storage nodes carry out distributed transaction management, node recovery, online backup, and other related tasks.

Note: use the "--initial" parameter only when starting ndbd for the first time, or when restarting after a backup/restore or a configuration change. This parameter makes the node delete any files created by earlier ndbd instances for recovery, including the recovery log files.

-bash-4.1$ ndbd --initial    ## on 10.10.10.10
2015-12-19 01:29:05 [ndbd] INFO -- Angel connected to '10.10.10.9:1186'
2015-12-19 01:29:05 [ndbd] INFO -- Angel allocated nodeid: 2

-bash-4.1$ ndbd --initial    ## on 10.10.10.11
2015-12-19 01:37:36 [ndbd] INFO -- Angel connected to '10.10.10.9:1186'
2015-12-19 01:37:36 [ndbd] INFO -- Angel allocated nodeid: 3
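Per the note above, on any later restart (with no backup/restore or configuration change in between) the --initial parameter must be omitted so the node keeps its recovery files:

-bash-4.1$ ndbd    ## normal restart; reuses the existing recovery logs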

(3) Start the MySQL service on each SQL node (10.10.10.10 and 10.10.10.11 in this example):

-bash-4.1$ service mysqld start

You can also use: -bash-4.1$ mysqld_safe --defaults-file=/etc/my.cnf > /dev/null 2>&1 &
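To confirm that a freshly started SQL node has really joined the cluster, a quick sanity check (run on either SQL node) is to verify that the NDB engine is enabled and connected:

mysql> select engine, support from information_schema.engines where engine = 'ndbcluster';
mysql> show engine ndbcluster status\G    ## also shows the connection to the management server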

(4) After all nodes have started successfully, use the show command of the ndb_mgm tool to check the cluster status:

## ndb_mgm is the client management tool for ndb_mgmd (the MySQL Cluster management server). It makes it easy to check the cluster status, start backups, shut down the cluster, and so on.

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.10.10.10 (mysql-5.6.27 ndb-7.4.8, Nodegroup: 0, *)
id=3 @10.10.10.11 (mysql-5.6.27 ndb-7.4.8, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.10.10.9 (mysql-5.6.27 ndb-7.4.8)

[mysqld(API)] 2 node(s)
id=4 @10.10.10.10 (mysql-5.6.27 ndb-7.4.8)
id=5 @10.10.10.11 (mysql-5.6.27 ndb-7.4.8)

## Testing MySQL Cluster

## As mentioned above, to use the cluster a table's storage engine must be NDB; data in tables using any other storage engine is not stored on the data nodes.
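To avoid typing engine=ndb on every CREATE TABLE, the default engine can also be set per SQL node in my.cnf (an optional tweak, not part of the original setup):

[mysqld]
default-storage-engine=ndbcluster    # new tables use NDB unless another engine is named explicitly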

1. NDB storage engine test

(1) In the test database on either SQL node (10.10.10.10 here), create a test table t1 with the storage engine set to NDB and insert two rows of test data:

mysql> create table t1 (id int) engine=ndb;
mysql> insert into t1 values (1);
mysql> insert into t1 values (2);

(2) On the other SQL node (10.10.10.11), query table t1 in the test database:

mysql> select * from t1;    ## clearly, both SQL nodes see the same data

(3) On SQL node 10.10.10.10, change the storage engine of table t1 to MyISAM and insert another test row:

mysql> alter table t1 engine=myisam;
mysql> insert into t1 values (3);

(4) Query table t1 again on SQL node 10.10.10.11:

mysql> select * from t1;    ## the table can no longer be queried from this node
ERROR 1146 (42S02): Table 'test.t1' doesn't exist

(5) On SQL node 10.10.10.10, change the engine of t1 back to NDB:

mysql> alter table t1 engine=ndb;

(6) Query once more on SQL node 10.10.10.11:

mysql> select * from t1;
+------+
| id   |
+------+
|    2 |
|    3 |
|    1 |
+------+
3 rows in set (0.01 sec)

## Clearly, the data in table t1 is synchronized to the data nodes again, and every SQL node can query it normally.

## In fact, you will find that the table's data files still live under mysqld's datadir, while the "/data/mysql-cluster/data" directories specified above hold the cluster-related files.
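Since DataMemory and IndexMemory were set quite low in config.ini (50M and 20M), it is worth watching how much of them the data consumes. The management client can report this for every data node:

ndb_mgm> all report memoryusage    ## prints data and index memory usage for each ndbd node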

2. Single point of failure test: a SQL node fails.

(1) Stop the MySQL service on SQL node 10.10.10.10:

-bash-4.1$ mysqladmin -uroot -p shutdown

(2) Check the cluster status:

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.10.10.10 (mysql-5.6.27 ndb-7.4.8, Nodegroup: 0, *)
id=3 @10.10.10.11 (mysql-5.6.27 ndb-7.4.8, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.10.10.9 (mysql-5.6.27 ndb-7.4.8)

[mysqld(API)] 2 node(s)
id=4 (not connected, accepting connect from 10.10.10.10)
id=5 @10.10.10.11 (mysql-5.6.27 ndb-7.4.8)

(3) Query table t1 from node 10.10.10.11:

mysql> select * from t1;
+------+
| id   |
+------+
|    2 |
|    3 |
|    1 |
+------+
3 rows in set (0.01 sec)

## The data is still fully available: losing one SQL node does not affect queries through the other.

## Shutting down MySQL Cluster

Shutting the cluster down is simple. Just run the following from the shell: -bash-4.1$ ndb_mgm -e shutdown

You can also enter the ndb_mgm management client and issue the shutdown command there: ndb_mgm> shutdown

Then stop mysqld on each of the other nodes: -bash-4.1$ service mysqld stop
Or: mysqladmin -uroot -p shutdown
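Putting the whole lifecycle together, here is a sketch of a full stop/start cycle under this article's layout (paths and service names as configured above):

## shut down: SQL nodes first, then the rest of the cluster
-bash-4.1$ service mysqld stop    ## on each SQL node
-bash-4.1$ ndb_mgm -e shutdown    ## on the management node; stops ndb_mgmd and all ndbd processes

## start up again: management node -> data nodes -> SQL nodes
-bash-4.1$ ndb_mgmd -f /data/mysql-cluster/config.ini
-bash-4.1$ ndbd    ## on each data node, without --initial
-bash-4.1$ service mysqld start    ## on each SQL node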

MySQL Cluster provides two kinds of logs: the cluster log (cluster log) and node logs (node log). The former records all events generated by the cluster's nodes, while the latter records only the local events of a data node. In most cases the cluster log is recommended, because it gathers the data of all nodes in one place, which makes management easier.

The cluster log is normally written to the same directory as the configuration file (config.ini). The file name has the form ndb_<nodeid>_cluster.log, where <nodeid> is the management node's id.
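With the management node id of 1 used in this cluster, the log can be followed like any other file (path as configured above):

-bash-4.1$ tail -f /data/mysql-cluster/ndb_1_cluster.log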

You can turn the log on or off with the ndb_mgm client management tool, as follows.

(1) Run the ndb_mgm command from the shell:

-bash-4.1$ ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm>

(2) Run the clusterlog info command to view the current logging status:

ndb_mgm> clusterlog info
Connected to Management Server at: localhost:1186
Severities enabled: INFO WARNING ERROR CRITICAL ALERT

(3) The log is currently enabled; use the clusterlog off command to disable it:

ndb_mgm> clusterlog off
Cluster logging is disabled

(4) Check the log status again; it is now disabled:

ndb_mgm> clusterlog info
Cluster logging is disabled.

(5) Running the clusterlog on command enables the log again:

ndb_mgm> clusterlog on
Cluster logging is enabled.
ndb_mgm> clusterlog info
Severities enabled: INFO WARNING ERROR CRITICAL ALERT

Thank you for reading. That covers "how to configure MySQL Cluster". After working through this article you should have a deeper understanding of the process; the specifics still need to be verified in practice.
