
How to Build MySQL Cluster 7.4 on CentOS 6.5


This article walks through how to build MySQL Cluster 7.4 on CentOS 6.5, from compiling the source to verifying data synchronization across the nodes.

The information of each node is as follows:

Management node: 192.168.78.141

Data node 1: 192.168.78.137

Data node 2: 192.168.78.135

SQL node 1: 192.168.78.137

SQL node 2: 192.168.78.135

Compile and install MySQL Cluster on the management node, the data nodes, and the SQL nodes.

Create the software installation path and the data/log storage path:

[root@localhost /] # mkdir -p /cluster

[root@localhost /] # mkdir -p /cluster_data/

Go to the official website to download MySQL Cluster

http://dev.mysql.com/downloads/cluster/

[root@localhost install] # rpm -ivh MySQL-Cluster-gpl-7.4.11-1.el6.src.rpm

Warning: MySQL-Cluster-gpl-7.4.11-1.el6.src.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY

1:MySQL-Cluster-gpl ########################################### [100%]

[root@localhost log] # cd /root/rpmbuild/SOURCES/

[root@localhost SOURCES] # tar xvfz mysql-cluster-gpl-7.4.11.tar.gz

[root@localhost SOURCES] # ls

mysql-5.5.48 mysql-5.5.48.tar.gz mysql-cluster-gpl-7.4.11 mysql-cluster-gpl-7.4.11.tar.gz

[root@localhost SOURCES] # cd mysql-cluster-gpl-7.4.11

-- cmake parameter descriptions

-DWITH_NDB_JAVA={ON|OFF}

Enables Java support (including ClusterJ) when building MySQL Cluster. This option is ON by default; if Java support is not needed, disable it at compile time with -DWITH_NDB_JAVA=OFF.

-DWITH_NDBCLUSTER_STORAGE_ENGINE={ON|OFF}

Builds and links the NDB (NDBCLUSTER) storage engine into mysqld. This option is ON by default.

[root@localhost mysql-cluster-gpl-7.4.11] # cmake . -DCMAKE_INSTALL_PREFIX=/cluster \
-DDEFAULT_CHARSET=utf8 \
-DDEFAULT_COLLATION=utf8_general_ci \
-DWITH_NDB_JAVA=OFF \
-DWITH_FEDERATED_STORAGE_ENGINE=1 \
-DWITH_NDBCLUSTER_STORAGE_ENGINE=1 \
-DCOMPILATION_COMMENT='MySQL Cluster production environment' \
-DWITH_READLINE=ON \
-DSYSCONFDIR=/cluster_data \
-DMYSQL_UNIX_ADDR=/cluster_data/mysql.sock

[root@localhost mysql-cluster-gpl-7.4.11] # make

.

[100%] Building CXX object libmysqld/CMakeFiles/sql_embedded.dir/__/sql/uniques.cc.o

[100%] Building CXX object libmysqld/CMakeFiles/sql_embedded.dir/__/sql/unireg.cc.o

Linking CXX static library libsql_embedded.a

[100%] Built target sql_embedded

[100%] Generating mysqlserver_depends.c

Scanning dependencies of target mysqlserver

[100%] Building C object libmysqld/CMakeFiles/mysqlserver.dir/mysqlserver_depends.c.o

Linking C static library libmysqld.a

/usr/bin/ar: creating /root/rpmbuild/SOURCES/mysql-cluster-gpl-7.4.11/libmysqld/libmysqld.a

[100%] Built target mysqlserver

Scanning dependencies of target mysql_client_test_embedded

[100%] Building C object libmysqld/examples/CMakeFiles/mysql_client_test_embedded.dir/__/__/tests/mysql_client_test.c.o

Linking CXX executable mysql_client_test_embedded

[100%] Built target mysql_client_test_embedded

Scanning dependencies of target mysql_embedded

[100%] Building CXX object libmysqld/examples/CMakeFiles/mysql_embedded.dir/__/__/client/completion_hash.cc.o

[100%] Building CXX object libmysqld/examples/CMakeFiles/mysql_embedded.dir/__/__/client/mysql.cc.o

[100%] Building CXX object libmysqld/examples/CMakeFiles/mysql_embedded.dir/__/__/client/readline.cc.o

Linking CXX executable mysql_embedded

[100%] Built target mysql_embedded

Scanning dependencies of target mysqltest_embedded

[100%] Building CXX object libmysqld/examples/CMakeFiles/mysqltest_embedded.dir/__/__/client/mysqltest.cc.o

Linking CXX executable mysqltest_embedded

[100%] Built target mysqltest_embedded

Scanning dependencies of target my_safe_process

[100%] Building CXX object mysql-test/lib/My/SafeProcess/CMakeFiles/my_safe_process.dir/safe_process.cc.o

Linking CXX executable my_safe_process

[100%] Built target my_safe_process

[root@localhost mysql-cluster-gpl-7.4.11] # make install

.

-- Installing: /cluster/sql-bench/innotest2
-- Installing: /cluster/sql-bench/innotest2b
-- Installing: /cluster/sql-bench/innotest1b
-- Installing: /cluster/sql-bench/test-alter-table
-- Installing: /cluster/sql-bench/README
-- Installing: /cluster/sql-bench/innotest1
-- Installing: /cluster/sql-bench/bench-count-distinct
-- Installing: /cluster/sql-bench/innotest1a
-- Installing: /cluster/sql-bench/test-ATIS
-- Installing: /cluster/sql-bench/test-wisconsin
-- Installing: /cluster/sql-bench/run-all-tests
-- Installing: /cluster/sql-bench/test-create
-- Installing: /cluster/sql-bench/server-cfg
-- Installing: /cluster/sql-bench/test-connect
-- Installing: /cluster/sql-bench/test-big-tables
-- Installing: /cluster/sql-bench/test-transactions
-- Installing: /cluster/sql-bench/test-insert

-- Change the owner of the software installation directory to mysql

[root@localhost mysql-cluster-gpl-7.4.11] # chown -R mysql.mysql /cluster

-- Change the owner of the log and data storage directory to mysql

[root@localhost /] # chown -R mysql.mysql /cluster_data/
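These chown commands assume that a mysql user and group already exist on each host; creating them is not shown in the original steps. If they are missing, they can be created first, for example:

[root@localhost /] # groupadd mysql

[root@localhost /] # useradd -r -g mysql -s /sbin/nologin mysql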

Configure the management node

The management node needs a config.ini file that tells the MySQL cluster how many replicas (redundant copies of the data) to maintain, how much memory to allocate for data and indexes on each data node, and where the data nodes and SQL nodes are located.

Configure the config.ini file for the management node

[root@localhost mysql-cluster-gpl-7.4.11] # mkdir -p /cluster_data/config/

[root@localhost mysql-cluster-gpl-7.4.11] # vim /cluster_data/config/config.ini

[ndbd default]

# Options affecting ndbd processes on all data nodes:

NoOfReplicas=2    # number of replicas (redundancy); a value of at least 2 is recommended, otherwise the data is not protected by redundancy

DataMemory=80M    # memory allocated for data storage; set much higher in a real production environment

IndexMemory=18M   # memory allocated for index storage; set higher in a real production environment

[tcp default]

# TCP/IP options:

PortNumber=2202 # This is the default; however, you can use any

# port that is free for all the hosts in the cluster

# Note: It is recommended that you do not specify the port

# number at all and simply allow the default value to be used

# instead

[ndb_mgmd]

# Management node options:

HostName=192.168.78.141 # hostname or IP address of the management node

DataDir=/cluster_data/config # directory where the management node stores the cluster log files

[ndbd]

# Data node 1 options:

# (each data node needs to be configured with its own [ndbd] section)

HostName=192.168.78.137 # hostname or IP address

DataDir=/cluster_data # directory where this data node stores its data files

[ndbd]

# Data node 2 options:

# (each data node needs to be configured with its own [ndbd] section)

HostName=192.168.78.135 # hostname or IP address

DataDir=/cluster_data # directory where this data node stores its data files

[mysqld]

# SQL node 1 options:

HostName=192.168.78.137 # hostname or IP address

# (additional mysqld connections can be

# specified for this node for various

# purposes such as running ndb_restore)

[mysqld]

# SQL node 2 options:

HostName=192.168.78.135 # hostname or IP address

[root@localhost mysql-cluster-gpl-7.4.11] # chown -R mysql.mysql /cluster_data/

Configure the data nodes

Each data node needs a my.cnf configuration file that provides the connection string to the management node, i.e. the host where the management node resides.

[root@localhost /] # vim /etc/my.cnf

[mysqld]

# Options for the mysqld process:

ndbcluster # enable the NDB storage engine

[mysql_cluster]

# MySQL Cluster node options:

ndb-connectstring=192.168.78.141 # host of the management node

Initialize the MySQL data directory and create the system tables

[root@localhost cluster_data] # cd /cluster

[root@localhost cluster] # ls

bin COPYING data docs include lib man mysql-test README scripts share sql-bench support-files

[root@localhost cluster] # cd scripts/

[root@localhost scripts] # ls

mysql_install_db

[root@localhost scripts] # ./mysql_install_db --user=mysql --basedir=/cluster --datadir=/cluster_data/

FATAL ERROR: Could not find ./bin/my_print_defaults

If you compiled from source, you need to run 'make install' to
copy the software into the correct location ready for operation.

If you are using a binary release, you must either be at the top
level of the extracted archive, or pass the --basedir option
pointing to that location.

[root@localhost scripts] # ./mysql_install_db --user=mysql --basedir=/cluster --datadir=/cluster_data/
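The first attempt fails because the script cannot resolve the relative path ./bin/my_print_defaults. One common workaround, offered here only as a hedged suggestion and not shown in the original output, is to run the script from the installation root (/cluster) so that the relative paths resolve against the installed binaries:

[root@localhost scripts] # cd /cluster

[root@localhost cluster] # scripts/mysql_install_db --user=mysql --basedir=/cluster --datadir=/cluster_data/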

Configure the SQL nodes

Each SQL node needs a my.cnf configuration file that provides the connection string to the management node.

[root@localhost /] # vim /etc/my.cnf

[client]

socket=/cluster_data/mysql.sock

[mysqld]

ndbcluster # enable the NDB storage engine

basedir=/cluster

datadir=/cluster_data

socket=/cluster_data/mysql.sock

log_error=/cluster_data/err.log

[mysql_cluster]

# MySQL Cluster node options:

ndb-connectstring=192.168.78.141 # host of the management node

Initialize and start the MySQL cluster

Start the management node

On the host where the management node resides, start the management node process

[root@localhost mysql-cluster-gpl-7.4.11] # /cluster/bin/ndb_mgmd -f /cluster_data/config/config.ini

MySQL Cluster Management Server mysql-5.6.29 ndb-7.4.11

2016-05-15 01:26:16 [MgmtSrvr] INFO -- The default config directory '/cluster/mysql-cluster' does not exist. Trying to create it...

2016-05-15 01:26:16 [MgmtSrvr] INFO -- Sucessfully created config directory
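An operational note that is not part of the original walkthrough: ndb_mgmd caches the configuration in a binary file under its config directory, so after later edits to config.ini it is common to restart the management server with the --reload option so that the changes are actually picked up, for example:

[root@localhost mysql-cluster-gpl-7.4.11] # /cluster/bin/ndb_mgmd -f /cluster_data/config/config.ini --reload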

Use the ndb_mgm client tool to connect to the cluster and view the status of the cluster

[root@localhost config] # /cluster/bin/ndb_mgm

-- NDB Cluster -- Management Client --

ndb_mgm> help

NDB Cluster -- Management Client -- Help

HELP Print help text

HELP COMMAND Print detailed help for COMMAND (e.g. SHOW)

SHOW Print information about cluster

.

Check the cluster status. At this point only the management node has been started; the data nodes and SQL nodes are not running yet.

ndb_mgm> show

Connected to Management Server at: localhost:1186

Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.78.137)
id=3 (not connected, accepting connect from 192.168.78.135)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.78.141  (mysql-5.6.29 ndb-7.4.11)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.78.137)
id=5 (not connected, accepting connect from 192.168.78.135)

[root@localhost cluster] # cd /cluster_data/

[root@localhost cluster_data] # ls

config

[root@localhost cluster_data] # cd config/

[root@localhost config] # ls

config.ini ndb_1_cluster.log ndb_1_out.log ndb_1.pid

Start the data node

On the host where each data node resides, execute the following command to start the ndbd process

[root@localhost mysql-cluster-gpl-7.4.11] # /cluster/bin/ndbd

2016-05-15 01:34:45 [ndbd] INFO -- Angel connected to '192.168.78.141:1186'

2016-05-15 01:34:45 [ndbd] INFO -- Angel allocated nodeid: 2

[root@localhost /] # cd /cluster_data/

[root@localhost cluster_data] # ls

ndb_2_fs ndb_2_out.log ndb_2.pid
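A hedged note that is not part of the original steps: the very first time a data node is started, or after config.ini has been changed, it is common to start ndbd with the --initial option so that the node's local file system is (re)initialized; --initial wipes the node's existing NDB data files, so normal restarts afterwards use the plain ndbd command shown above. For example:

[root@localhost mysql-cluster-gpl-7.4.11] # /cluster/bin/ndbd --initial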

View cluster status on the management node

[root@localhost config] # /cluster/bin/ndb_mgm

-- NDB Cluster -- Management Client --

ndb_mgm> show

Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.78.137  (mysql-5.6.29 ndb-7.4.11, starting, Nodegroup: 0, *)
id=3    @192.168.78.135  (mysql-5.6.29 ndb-7.4.11, starting, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.78.141  (mysql-5.6.29 ndb-7.4.11)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.78.137)
id=5 (not connected, accepting connect from 192.168.78.135)

ndb_mgm> Node 2: Started (version 7.4.11)
Node 3: Started (version 7.4.11)

show

Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.78.137  (mysql-5.6.29 ndb-7.4.11, Nodegroup: 0, *)
id=3    @192.168.78.135  (mysql-5.6.29 ndb-7.4.11, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.78.141  (mysql-5.6.29 ndb-7.4.11)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.78.137)
id=5 (not connected, accepting connect from 192.168.78.135)

View memory usage

ndb_mgm> all report memory

Node 11: Data usage is 57% (3478260 32K pages of total 6062080)

Node 11: Index usage is 13% (795507 8K pages of total 5898272)

Node 12: Data usage is 57% (3461303 32K pages of total 6062080)

Node 12: Index usage is 13% (806025 8K pages of total 5898272)

Start the SQL node

[root@localhost mysql-cluster-gpl-7.4.11] # /cluster/bin/mysqld_safe --defaults-file=/etc/my.cnf &

[1] 42623

[root@localhost mysql-cluster-gpl-7.4.11] # 160515 02:45:14 mysqld_safe Logging to '/cluster_data/err.log'.

160515 02:45:14 mysqld_safe Starting mysqld daemon with databases from /cluster_data

Connect to the database, delete the redundant root and anonymous accounts, and keep only the single local root user.

[root@localhost mysqld] # /cluster/bin/mysql -uroot

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 2

Server version: 5.6.29-ndb-7.4.11 MySQL Cluster production environment

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select host,user,password from mysql.user;

+-----------------------+------+----------+
| host                  | user | password |
+-----------------------+------+----------+
| localhost             | root |          |
| localhost.localdomain | root |          |
| 127.0.0.1             | root |          |
| ::1                   | root |          |
| localhost             |      |          |
| localhost.localdomain |      |          |
+-----------------------+------+----------+
6 rows in set (0.18 sec)

mysql> delete from mysql.user where (user,host) not in (select 'root','localhost');

Query OK, 5 rows affected (0.15 sec)

mysql> select host,user,password from mysql.user;

+-----------+------+----------+
| host      | user | password |
+-----------+------+----------+
| localhost | root |          |
+-----------+------+----------+
1 row in set (0.00 sec)

mysql> update mysql.user set user='system', password=password('Mysql#2015') where user='root';

Query OK, 1 row affected (0.08 sec)

Rows matched: 1 Changed: 1 Warnings: 0

mysql> flush privileges;

Query OK, 0 rows affected (0.02 sec)

mysql> select version();

+-------------------+
| version()         |
+-------------------+
| 5.6.29-ndb-7.4.11 |
+-------------------+
1 row in set (0.08 sec)
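Note that once the root account has been renamed to system and FLUSH PRIVILEGES has been executed, new connections to this SQL node must use the new credentials. A hedged example that follows the account name and socket path configured above:

[root@localhost /] # /cluster/bin/mysql -usystem -p -S /cluster_data/mysql.sock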

-- View the cluster status on the management node

You can see that each node has been started normally.

[root@localhost config] # /cluster/bin/ndb_mgm

-- NDB Cluster -- Management Client --

ndb_mgm> show

Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.78.137  (mysql-5.6.29 ndb-7.4.11, Nodegroup: 0)
id=3    @192.168.78.135  (mysql-5.6.29 ndb-7.4.11, Nodegroup: 0, *)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.78.141  (mysql-5.6.29 ndb-7.4.11)

[mysqld(API)]   2 node(s)
id=4    @192.168.78.137  (mysql-5.6.29 ndb-7.4.11)
id=5    @192.168.78.135  (mysql-5.6.29 ndb-7.4.11)
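As an extra sanity check that is not part of the original walkthrough, each SQL node can also confirm locally that the NDBCLUSTER engine is available; it should appear in the engine list with Support set to YES (or DEFAULT):

mysql> show engines;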

-- Test cluster data synchronization

-- On SQL node 2, create a test table that uses the NDBCLUSTER storage engine

mysql> use test;

Database changed

mysql> create table emp (id int) engine=NDBCLUSTER;

Query OK, 0 rows affected (2.68 sec)

mysql> insert into emp values (10);

Query OK, 1 row affected (0.07 sec)

mysql> commit;

Query OK, 0 rows affected (0.07 sec)

-- On SQL node 1, view the table created on node 2

mysql> desc emp;

+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| id    | int(11) | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+
1 row in set (0.13 sec)

mysql> select * from emp;

+------+
| id   |
+------+
|   10 |
+------+
1 row in set (0.13 sec)
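Finally, when the cluster needs to be brought down, a typical shutdown order (a hedged aside, not covered by the original article) is to stop the SQL nodes first and then shut down the data and management nodes from the management client. Using the account, socket, and paths configured above:

[root@localhost /] # /cluster/bin/mysqladmin -usystem -p -S /cluster_data/mysql.sock shutdown

[root@localhost /] # /cluster/bin/ndb_mgm -e shutdown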

This completes the build and basic verification of MySQL Cluster 7.4 on CentOS 6.5: data written through one SQL node is immediately visible from the other, confirming that the NDB data nodes are synchronizing correctly.
