2025-01-22 Update From: SLTechnology News&Howtos
This article explains the usage of MySQL Cluster. The method introduced here is simple, fast, and practical, so interested readers may wish to follow along.
MySQL Cluster is a technology for deploying a cluster of in-memory databases in a shared-nothing system. The shared-nothing architecture lets the system run on inexpensive hardware, with no special hardware or software requirements. Moreover, because each component has its own memory and disk, there is no single point of failure. MySQL Cluster consists of a group of computers, each running one or more processes: MySQL servers, NDB Cluster data nodes, management servers, and (possibly) specialized data access programs. The relationship between these components in the cluster is shown in the figure below:
Together, these nodes form a complete MySQL Cluster system. Data is stored in the NDB storage engine on the storage nodes, while table structures are kept on the MySQL servers. Applications access the tables through the MySQL servers, and the cluster management server manages the NDB storage nodes through the management tool (ndb_mgmd). By bringing MySQL Cluster into the open-source world, MySQL provides highly available, high-performance, and scalable cluster data management for everyone who needs it.
Basic concepts of MySQL Cluster: "NDB" is an in-memory storage engine offering high availability and good data consistency. MySQL Cluster can configure the NDB storage engine with a variety of failover and load-balancing options, but this is easiest to do at the cluster level. MySQL Cluster's NDB storage engine holds the complete dataset, which depends only on other data within the cluster itself. Currently, the cluster portion of MySQL Cluster can be configured independently of the MySQL servers. In MySQL Cluster, each part of the cluster is treated as a node.
MGM node: this type of node manages the other nodes in the MySQL Cluster: it provides configuration data, starts and stops nodes, runs backups, and so on. Because it manages the configuration of the other nodes, it should be started before any other node. The MGM node is started with the command "ndb_mgmd".
Data nodes: these nodes hold the cluster's data. The number of data nodes equals the number of replicas times the number of fragments (node groups). For example, with two replicas and two fragments per replica, four data nodes are needed. Running more than one replica is not required, however. A data node is started with the command "ndbd".
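The relationship between replicas and data nodes can be sketched with a quick arithmetic check. The values below are illustrative only (two replicas, two node groups); the test cluster configured later in this article uses a single replica:

```shell
# Required data nodes = NoOfReplicas x number of node groups (fragments).
# Hypothetical values, not taken from a real cluster:
replicas=2
node_groups=2
data_nodes=$((replicas * node_groups))
echo "data nodes required: $data_nodes"
```

With NoOfReplicas=1, one data node per node group suffices, which is why the initial setup below runs with a single data node.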
SQL node: the node through which cluster data is accessed. For MySQL Cluster, an SQL node is a traditional MySQL server that uses the NDB Cluster storage engine. Typically, an SQL node is started with the command "mysqld --ndbcluster", or with "mysqld" after adding "ndbcluster" to "my.cnf". Note: in many contexts, the term "node" means a computer, but in MySQL Cluster it refers to a process. There can be any number of nodes on a single computer; for a computer on which one or more cluster nodes run, we use the term "cluster host".
The management server (MGM node) is responsible for managing the cluster configuration file and the cluster log. Each node in the cluster retrieves its configuration data from the management server, and therefore needs a way to determine where the management server is located. When a new event occurs on a data node, the node transmits information about the event to the management server, which then writes it to the cluster log.
In addition, there can be any number of Cluster client processes or applications. They fall into two types:
Standard MySQL clients: for MySQL Cluster, these are no different from standard (non-cluster) MySQL clients. In other words, MySQL Cluster can be accessed from existing MySQL applications written in PHP, Perl, C, C++, Java, Python, Ruby, and so on.
Administrative client: this type of client connects to the management server and provides commands to start and stop nodes, start and stop message tracking (debug version only), display node version and status, start and stop backups, and so on.
Download the required software from www.mysql.com:
MySQL-Cluster-gpl-client-7.1.18-1.el6.x86_64.rpm
MySQL-Cluster-gpl-server-7.1.18-1.el6.x86_64.rpm
MySQL-Cluster-gpl-clusterj-7.1.18-1.el6.x86_64.rpm
MySQL-Cluster-gpl-storage-7.1.18-1.el6.x86_64.rpm
MySQL-Cluster-gpl-management-7.1.18-1.el6.x86_64.rpm
MySQL-Cluster-gpl-tools-7.1.18-1.el6.x86_64.rpm
System environment: rhel6.0 x86-64 iptables and selinux off
Management node: 192.168.0.1
Data node: 192.168.0.3
Sql node: 192.168.0.4
-> management node configuration
rpm -ivh MySQL-Cluster-gpl-management-7.1.18-1.el6.x86_64.rpm
rpm -ivh MySQL-Cluster-gpl-tools-7.1.18-1.el6.x86_64.rpm
[root@server1 ~]# mkdir /usr/mysql-cluster
[root@server1 ~]# cd /usr/mysql-cluster
[root@server1 mysql-cluster]# vim config.ini
[ndbd default]
NoOfReplicas=1 # global parameter, settable only in [ndbd default]; defines the number of replicas of each table stored in the cluster. It also determines the size of a node group: a node group is a set of nodes that store the same data.
DataMemory=64M # space the data node uses to store database records; it is all allocated in memory, so make sure the node has enough physical RAM to hold this value. It is set small here because this is a virtual-machine test; size it for your real environment.
IndexMemory=10M # storage used by hash indexes in the cluster
[ndb_mgmd]
NodeId=1 # each node in the cluster has a unique identity, an integer from 1 to 63
HostName=192.168.0.1 # the management node
DataDir=/usr/mysql-cluster # where the management server's output files are stored: the cluster log, process output files, and the daemon's PID file
[ndbd]
NodeId=2
HostName=192.168.0.3 # the data node
DataDir=/usr/mysql-cluster # where trace files, log files, pid files, and error logs are stored
[mysqld]
NodeId=3
HostName=192.168.0.4 # the sql node
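DataMemory sizing can be sanity-checked with a rough rule of thumb: since every fragment is stored NoOfReplicas times, usable capacity is roughly DataMemory times the number of data nodes divided by NoOfReplicas. This is a simplification that ignores per-row and index overhead, and the node counts below are assumed for illustration:

```shell
data_memory_mb=64   # DataMemory per data node, as in the config above
data_nodes=2        # hypothetical two-node layout
replicas=2          # hypothetical NoOfReplicas
capacity_mb=$((data_memory_mb * data_nodes / replicas))
echo "approximate data capacity: ${capacity_mb}M"
```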
-> data node configuration
rpm -ivh MySQL-Cluster-gpl-storage-7.1.18-1.el6.x86_64.rpm
[root@server3 ~]# mkdir /usr/mysql-cluster
[root@server3 ~]# vim /etc/my.cnf
[mysqld]
ndbcluster # run the NDB Cluster storage engine
ndb-connectstring=192.168.0.1 # the management node
[mysql_cluster]
ndb-connectstring=192.168.0.1 # the management node
-> sql node configuration
rpm -ivh MySQL-Cluster-gpl-client-7.1.18-1.el6.x86_64.rpm
rpm -ivh MySQL-Cluster-gpl-server-7.1.18-1.el6.x86_64.rpm
If memory is ≤ 64M, copy /usr/share/mysql/my-small.cnf to /etc/my.cnf
If memory is 128M, copy /usr/share/mysql/my-medium.cnf to /etc/my.cnf
If memory is 512M, copy /usr/share/mysql/my-large.cnf to /etc/my.cnf
If memory is 1-2G, copy /usr/share/mysql/my-huge.cnf to /etc/my.cnf
If memory is 4G, copy /usr/share/mysql/my-innodb-heavy-4G.cnf to /etc/my.cnf
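Those selection rules can be captured in a small helper. The function name pick_template is made up for illustration, but the template file names are the sample files shipped in /usr/share/mysql:

```shell
# Map approximate RAM in MB to the matching sample my.cnf template.
pick_template() {
    mem_mb=$1
    if   [ "$mem_mb" -le 64 ];   then echo my-small.cnf
    elif [ "$mem_mb" -le 128 ];  then echo my-medium.cnf
    elif [ "$mem_mb" -le 512 ];  then echo my-large.cnf
    elif [ "$mem_mb" -le 2048 ]; then echo my-huge.cnf
    else                              echo my-innodb-heavy-4G.cnf
    fi
}
pick_template 128   # the 128M case used on server4 below
```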
[root@server4 ~]# cd /usr/share/mysql
[root@server4 mysql]# cp my-medium.cnf /etc/my.cnf
[root@server4 mysql]# vim /etc/my.cnf
....
# innodb_lock_wait_timeout = 50 # add the following lines below this, in the [mysqld] section
ndbcluster
ndb-connectstring=192.168.0.1
[mysql-cluster]
ndb-connectstring=192.168.0.1
....
-> start the management node first, then the data node, and finally the sql node
# management node
[root@server1 mysql-cluster]# ndb_mgmd -f /usr/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.1.56 ndb-7.1.18
[root@server1 mysql-cluster]# ls
config.ini ndb_1_cluster.log ndb_1_config.bin.1 ndb_1_out.log ndb_1.pid # files generated after the service starts
[root@server1 mysql-cluster]# ps ax
....
1326 ?        S      0:00 [flush-253:0]
1331 ?        Ssl    0:00 ndb_mgmd -f /usr/mysql-cluster/config.ini
1347 pts/0    R+     0:00 ps ax
# data node
[root@server3 ~]# ndbd --initial # run on the data node
2012-04-09 07:43:47 [ndbd] INFO -- Angel connected to '192.168.0.1:1186'
2012-04-09 07:43:47 [ndbd] INFO -- Angel allocated nodeid: 2
[root@server3 ~]# ls /usr/mysql-cluster/ # this directory must be created manually; files and subdirectories are generated in it after the service starts
ndb_2_fs ndb_2_out.log ndb_2.pid # ndb_2_fs is a directory
Note: use the "--initial" parameter only when starting ndbd for the first time, or when restarting it after a backup/restore or a configuration change. This matters because the parameter causes the node to delete any files created by an earlier ndbd instance for recovery, including the recovery log files.
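Because "--initial" wipes a node's recovery files, a small wrapper that only emits the flag on explicit request can help prevent accidents. This is a dry-run sketch (it prints the command instead of executing it), and the ndbd_cmd and --first-start names are made up for illustration:

```shell
# Print the ndbd command that would be run; request --first-start only for
# a first start, or a restart after backup/restore or configuration changes.
ndbd_cmd() {
    if [ "$1" = "--first-start" ]; then
        echo "ndbd --initial"
    else
        echo "ndbd"
    fi
}
ndbd_cmd --first-start
ndbd_cmd
```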
# SQL node
[root@server4 ~]# /etc/init.d/mysql start # run on the sql node
Starting MySQL.. SUCCESS!
[root@server1 ~]# netstat -antlp
....
tcp        0      0 0.0.0.0:1186        0.0.0.0:*           LISTEN      1331/ndb_mgmd
tcp        0      0 127.0.0.1:47757     127.0.0.1:1186      ESTABLISHED 1331/ndb_mgmd
tcp        0      0 192.168.0.1:1186    192.168.0.3:60259   ESTABLISHED 1331/ndb_mgmd
tcp        0      0 192.168.0.1:1186    192.168.0.4:57693   ESTABLISHED 1331/ndb_mgmd
tcp        0      0 192.168.0.1:1186    192.168.0.3:60260   ESTABLISHED 1331/ndb_mgmd
tcp        0      0 127.0.0.1:1186      127.0.0.1:47757     ESTABLISHED 1331/ndb_mgmd
....
-> management node testing
[root@server1 mysql-cluster]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
-
[ndbd(NDB)] 1 node(s)
id=2 @192.168.0.3 (mysql-5.1.56 ndb-7.1.18, Nodegroup: 0, Master)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.1 (mysql-5.1.56 ndb-7.1.18)
[mysqld(API)] 1 node(s)
id=3 @192.168.0.4 (mysql-5.1.56 ndb-7.1.18)
-> safely shut down and restart mysql cluster
On the machine where the MGM node is located, simply enter the following command in Shell:
ndb_mgm -e shutdown
This command gracefully stops the ndb_mgm, ndb_mgmd, and any ndbd processes. SQL nodes can be stopped with "mysqladmin shutdown" or by other means.
When restarting, you can simply run the following command:
# on the management node:
ndb_mgmd -f /usr/mysql-cluster/config.ini
# on each data node:
ndbd
Note: do not use the "--initial" option when restarting an ndbd node normally.
# on the SQL node:
Service mysql restart
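The start order above matters: the management node must come up first, then the data nodes, and finally the SQL nodes. A wrapper script could iterate over this order; start_order is a hypothetical helper name used only for this sketch:

```shell
# Emit the required cluster start order, one process per line.
start_order() { printf '%s\n' ndb_mgmd ndbd mysqld; }
start_order
```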
-> add nodes online (192.168.0.5 is configured like data node 192.168.0.3, and 192.168.0.6 like sql node 192.168.0.4)
Management node:
[root@server1 ~]# ndb_mgm -e shutdown # stops ndb_mgm, ndb_mgmd, and any ndbd processes
[root@server1 ~]# vim /usr/mysql-cluster/config.ini # add the following
[ndbd default]
NoOfReplicas=2 # two nodes now form a node group; nodes in the same group store the same data and are placed on different hosts, which avoids a single point of failure
....
[ndbd] # new data node, configured like data node 192.168.0.3
NodeId=4
HostName=192.168.0.5 # the new data node
DataDir=/usr/mysql-cluster
[mysqld] # new sql node, configured like sql node 192.168.0.4
NodeId=5
HostName=192.168.0.6 # the new sql node
[mysqld] # a spare connection slot, reserved for later data recovery; it can also be added when needed
[root@server5 ~]# rpm -ivh MySQL-Cluster-gpl-storage-7.1.18-1.el6.x86_64.rpm
[root@server5 ~]# mkdir /usr/mysql-cluster
[root@server3 ~]# scp /etc/my.cnf 192.168.0.5:/etc/ # copy the data node configuration
[root@server5 ~]# vim /etc/my.cnf
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.1
[mysql_cluster]
ndb-connectstring=192.168.0.1
[root@server6 ~]# rpm -ivh MySQL-Cluster-gpl-client-7.1.18-1.el6.x86_64.rpm
[root@server6 ~]# rpm -ivh MySQL-Cluster-gpl-server-7.1.18-1.el6.x86_64.rpm
[root@server4 ~]# scp /etc/my.cnf 192.168.0.6:/etc/ # copy the sql node configuration
[root@server6 ~]# vim /etc/my.cnf
....
# innodb_flush_log_at_trx_commit = 1
# innodb_lock_wait_timeout = 50
ndbcluster
ndb-connectstring=192.168.0.1
[mysql-cluster]
ndb-connectstring=192.168.0.1
[mysqldump]
....
-> management node check
[root@server1 ~]# ndb_mgm
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
-
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.3 (mysql-5.1.56 ndb-7.1.18, Nodegroup: 0, Master)
id=4 @192.168.0.5 (mysql-5.1.56 ndb-7.1.18, Nodegroup: 1)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.1 (mysql-5.1.56 ndb-7.1.18)
[mysqld(API)] 2 node(s)
id=3 @192.168.0.4 (mysql-5.1.56 ndb-7.1.18)
id=5 @192.168.0.6 (mysql-5.1.56 ndb-7.1.18)
Run on the sql node:
[root@server6 ~]# mysqladmin create world
[root@server6 ~]# vi city.sql
CREATE TABLE City (
ID int(11) NOT NULL auto_increment,
Name char(35) NOT NULL default '',
CountryCode char(3) NOT NULL default '',
District char(20) NOT NULL default '',
Population int(11) NOT NULL default '0',
PRIMARY KEY (ID)
) ENGINE=NDBCLUSTER; # tells MySQL to use the NDB storage engine for this table, so the data is stored across the data nodes; otherwise MySQL uses the default MyISAM engine and all data stays local, as on a stand-alone server
INSERT INTO City VALUES (1,'Kabul','AFG','Kabol',1780000);
INSERT INTO City VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO City VALUES (3,'Herat','AFG','Herat',186800);
[root@server6 ~]# mysql world < city.sql # view the imported data on each sql node
mysql> use world
Database changed
mysql> show tables;
+------------------+
| Tables_in_world  |
+------------------+
| City             |
+------------------+
1 row in set (0.00 sec)
mysql> select * from City;
+----+----------+-------------+----------+------------+
| ID | Name     | CountryCode | District | Population |
+----+----------+-------------+----------+------------+
|  3 | Herat    | AFG         | Herat    |     186800 |
|  1 | Kabul    | AFG         | Kabol    |    1780000 |
|  2 | Qandahar | AFG         | Qandahar |     237500 |
+----+----------+-------------+----------+------------+
3 rows in set (0.06 sec)
-> mysql cluster backup and recovery
# run on the management node
ndb_mgm> show
Cluster Configuration
-
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.3 (mysql-5.1.56 ndb-7.1.18, Nodegroup: 0, Master)
id=4 @192.168.0.5 (mysql-5.1.56 ndb-7.1.18, Nodegroup: 1)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.1 (mysql-5.1.56 ndb-7.1.18)
[mysqld(API)] 2 node(s)
id=3 @192.168.0.4 (mysql-5.1.56 ndb-7.1.18)
id=5 @192.168.0.6 (mysql-5.1.56 ndb-7.1.18)
ndb_mgm> start backup # back up the mysql cluster data
Waiting for completed, this may take several minutes
Node 2: Backup 1 started from node 1
Node 2: Backup 1 started from node 1 completed
StartGCP: 20 StopGCP: 23
# Records: 2053 # LogRecords: 0
Data: 50312 bytes Log: 0 bytes
ndb_mgm> quit
Note: by default the backup is saved in the BACKUP subdirectory of each data node's DataDir (here /usr/mysql-cluster/BACKUP).
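Each backup gets its own numbered subdirectory under the data node's DataDir, as the listings below show. A tiny helper can compose that path from the DataDir and backup id; backup_path is a hypothetical name used only for this sketch:

```shell
# Compose the default backup location: <DataDir>/BACKUP/BACKUP-<backup id>.
backup_path() { echo "$1/BACKUP/BACKUP-$2"; }
backup_path /usr/mysql-cluster 1
```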
[root@server1 ~]# scp /usr/bin/ndb_restore root@server3:/usr/bin/
[root@server1 ~]# scp /usr/bin/ndb_restore root@server5:/usr/bin/
# run on the sql node
mysql> delete from City; # delete the rows of the City table in the world database, then perform the restore on the data nodes
Query OK, 3 rows affected (0.04 sec)
mysql> select * from City;
Empty set (0.00 sec)
# on the data nodes (the data is spread over two nodes, so the restore is performed on both)
[root@server3 ~]# cd /usr/mysql-cluster/
[root@server3 mysql-cluster]# ls
BACKUP ndb_2_error.log ndb_2_fs ndb_2_out.log ndb_2.pid ndb_2_trace.log.1 ndb_2_trace.log.next
[root@server3 mysql-cluster]# cd BACKUP/
[root@server3 BACKUP]# ls
BACKUP-1
[root@server3 BACKUP]# cd BACKUP-1/
[root@server3 BACKUP-1]# ls
BACKUP-1-0.2.Data BACKUP-1.2.ctl BACKUP-1.2.log
[root@server3 ~]# ndb_restore -n 2 -b 1 -r /usr/mysql-cluster/BACKUP/BACKUP-1/ # data node 2
Nodeid = 2
Backup Id = 1
Backup path = /usr/mysql-cluster/BACKUP/BACKUP-1/
Opening file '/usr/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1.2.ctl'
File size 10420 bytes
Backup version in files: ndb-6.3.11 ndb version: mysql-5.1.56 ndb-7.1.18
Stop GCP of Backup: 84
Connected to ndb!!
Opening file '/usr/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1-0.2.Data'
File size 26304 bytes
Processing data in table: sys/def/NDB$EVENTS_0 (3) fragment 0
Processing data in table: mysql/def/ndb_apply_status (6) fragment 0
Processing data in table: world/def/City (7) fragment 0
Processing data in table: mysql/def/NDB$BLOB_4_3 (5) fragment 0
Processing data in table: sys/def/SYSTAB_0 (2) fragment 0
Processing data in table: mysql/def/ndb_schema (4) fragment 0
Opening file '/usr/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1.2.log'
File size 52 bytes
Restored 1 tuples and 0 log entries
NDBT_ProgramExit: 0 - OK
[root@server5 ~]# cd /usr/mysql-cluster/
[root@server5 mysql-cluster]# ls
BACKUP ndb_4_error.log ndb_4_fs ndb_4_out.log ndb_4.pid ndb_4_trace.log.1 ndb_4_trace.log.2 ndb_4_trace.log.next
[root@server5 mysql-cluster]# cd BACKUP/
[root@server5 BACKUP]# ls
BACKUP-1
[root@server5 BACKUP]# cd BACKUP-1/
[root@server5 BACKUP-1]# ls
BACKUP-1-0.4.Data BACKUP-1.4.ctl BACKUP-1.4.log
[root@server5 ~]# ndb_restore -n 4 -b 1 -r /usr/mysql-cluster/BACKUP/BACKUP-1/ # data node 4
Nodeid = 4
Backup Id = 1
Backup path = /usr/mysql-cluster/BACKUP/BACKUP-1/
Opening file '/usr/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1.4.ctl'
File size 10420 bytes
Backup version in files: ndb-6.3.11 ndb version: mysql-5.1.56 ndb-7.1.18
Stop GCP of Backup: 84
Connected to ndb!!
Opening file '/usr/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1-0.4.Data'
File size 25392 bytes
Processing data in table: sys/def/NDB$EVENTS_0 (3) fragment 1
Processing data in table: mysql/def/ndb_apply_status (6) fragment 1
Processing data in table: world/def/City (7) fragment 1
Processing data in table: mysql/def/NDB$BLOB_4_3 (5) fragment 1
Processing data in table: sys/def/SYSTAB_0 (2) fragment 1
Processing data in table: mysql/def/ndb_schema (4) fragment 1
Opening file '/usr/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1.4.log'
File size 52 bytes
Restored 2 tuples and 0 log entries
NDBT_ProgramExit: 0 - OK
Note: -n is the node id, -b the backup id, and -r the backup directory.
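The two restore commands differ only in the node id, so a wrapper could build them from those flags. This is a dry-run sketch that prints the command rather than executing it, and restore_cmd is a hypothetical helper name:

```shell
# Build the ndb_restore invocation for a given node id, backup id, and directory.
restore_cmd() {
    node_id=$1; backup_id=$2; backup_dir=$3
    echo "ndb_restore -n $node_id -b $backup_id -r $backup_dir"
}
restore_cmd 2 1 /usr/mysql-cluster/BACKUP/BACKUP-1/   # for data node 2
restore_cmd 4 1 /usr/mysql-cluster/BACKUP/BACKUP-1/   # for data node 4
```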
GOOD LUCK!