Building a MySQL Cluster for High Availability
MySQL Cluster Overview and Installation Environment
MySQL Cluster is a highly practical, highly redundant edition of MySQL suited to distributed computing environments. It uses the NDB Cluster storage engine and allows multiple MySQL servers to run together in one cluster.
MySQL Cluster lets you deploy an "in-memory" database cluster on a shared-nothing system. Thanks to the shared-nothing architecture, the system can run on inexpensive hardware with no special hardware or software requirements, and because each component has its own memory and disk, there is no single point of failure.
MySQL Cluster architecture
SQL node: provides SQL access to the upper application layer.
Storage/data node: stores the data in the cluster; data nodes can hold replicas of the data to provide redundancy.
Management node (MGM): manages the whole cluster, including starting and shutting it down; it is started with the ndb_mgmd command.
In our experiment, the roles of the machines in the MySQL Cluster are assigned as follows (the hostname-to-IP mapping can also be placed in /etc/hosts, as sketched after the list):
MySQL management node: xuegod63.cn IP: 192.168.1.63 (install server, client)
MySQL data node: xuegod64.cn IP: 192.168.1.64 (install server, client)
MySQL data node: xuegod65.cn IP: 192.168.1.65 (install server, client)
MySQL SQL node: xuegod66.cn IP: 192.168.1.66 (install server, client)
MySQL SQL node: xuegod67.cn IP: 192.168.1.67 (install server, client)
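Optionally, if DNS does not resolve these hostnames, a minimal /etc/hosts sketch can be added on every machine (the xuegod*.cn names are simply the lab hostnames listed above):
cat >> /etc/hosts << 'EOF'
192.168.1.63 xuegod63.cn xuegod63
192.168.1.64 xuegod64.cn xuegod64
192.168.1.65 xuegod65.cn xuegod65
192.168.1.66 xuegod66.cn xuegod66
192.168.1.67 xuegod67.cn xuegod67
EOF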
MySQL Cluster download address: https://dev.mysql.com/downloads/cluster/
Environment cleanup and installation
Although this part is only the initial setup, it is fairly involved, so please work through the configuration patiently.
Execute the following on all hosts.
First we clean up any old MySQL version, then install MySQL Cluster, and finally handle file permission management.
1. MySQL old version cleanup
First use the following commands to remove the MySQL/MariaDB installation that ships with the operating system:
yum -y remove mysql
yum -y remove mariadb*   # CentOS 7
Then check for leftover packages:
rpm -qa | grep mysql
rpm -qa | grep mariadb
For the remaining MySQL packages found (two in our case), delete them in the following format:
rpm -e --nodeps mysql-libs-5.1.71-1.el6.x86_64
Finally, delete the following files:
rm -rf /var/lib/mysql/*
rm -rf /etc/my.cnf
rm -rf /etc/init.d/mysqld
vim /etc/profile   # remove any MySQL-related entries
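If you prefer a single pass, here is a minimal sketch that removes whatever the queries above still report (check the rpm -qa output first; xargs -r simply skips the removal when nothing is found):
rpm -qa | grep -iE 'mysql|mariadb' | xargs -r rpm -e --nodeps   # remove all leftover MySQL/MariaDB packages in one go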
2. Installing MySQL Cluster 7.5
2.1 Upload and install the latest EPEL source and MySQL community source packages
[root@xuegod63 ~]# rpm -ivh epel-release-7-10.noarch.rpm
[root@xuegod63 ~]# rpm -ivh mysql57-community-release-el7-11.noarch.rpm
2.2 Enable the MySQL Cluster repository
[root@xuegod63 ~]# sed -i "55s/0/1/g" /etc/yum.repos.d/mysql-community.repo
2.3 Refresh the yum cache
[root@xuegod63 ~]# yum clean all && yum list
2.4 Install the dependency packages required by Cluster
[root@xuegod63 ~]# yum install -y perl perl-Class-MethodMaker perl-DBI libaio numactl
2.5 Install MySQL Cluster
[root@xuegod63 ~]# yum install -y mysql-cluster-community-server
2.6 View the installation paths
[root@xuegod63 ~]# rpm -ql mysql-cluster-community-server
Repeat steps 2.1-2.5 on the other 4 hosts.
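To avoid repeating these steps by hand, here is a minimal sketch, assuming password-less SSH as root to each host and that the two repo .rpm packages have already been copied into /root on every machine (hostnames as assigned above):
for h in xuegod64 xuegod65 xuegod66 xuegod67; do
  ssh root@$h "rpm -ivh /root/epel-release-7-10.noarch.rpm /root/mysql57-community-release-el7-11.noarch.rpm; \
    sed -i '55s/0/1/g' /etc/yum.repos.d/mysql-community.repo; \
    yum clean all && yum install -y perl perl-Class-MethodMaker perl-DBI libaio numactl mysql-cluster-community-server"
done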
3. Both data node hosts need to install the data node package (2 hosts):
[root@xuegod64 ~]# yum install -y mysql-cluster-community-data-node
[root@xuegod65 ~]# yum install -y mysql-cluster-community-data-node
4. The management node installs the management server package:
[root@xuegod63 ~]# yum -y install mysql-cluster-community-management-server
5. Create a folder
The management node creates a configuration directory:
[root@xuegod63 ~]# mkdir -p /usr/mysql-cluster/
Setting up MySQL Cluster
1.1 Configuration on each machine
First, let's put the required configuration on each machine.
1. xuegod63: create the configuration file on the management node
[root@xuegod63 ~]# vim /usr/mysql-cluster/config.ini   # write the following content
[ndbd default]
NoOfReplicas=2   # number of data replicas; 2 means two copies of the data
DataMemory=200M   # memory available for data storage
IndexMemory=100M   # memory available for indexes (100M)
[ndb_mgmd]
NodeId=1
DataDir=/var/lib/mysql   # directory for the management node's logs
HostName=192.168.1.63   # IP address of the management node (this host)
# Data node options:   # storage nodes
[ndbd]
HostName=192.168.1.64
DataDir=/var/lib/mysql   # MySQL data storage path
NodeId=2
[ndbd]
HostName=192.168.1.65
DataDir=/var/lib/mysql   # MySQL data storage path
NodeId=3
# SQL node options:   # SQL nodes
[mysqld]
HostName=192.168.1.66
NodeId=4
[mysqld]
HostName=192.168.1.67
NodeId=5
In this file, an ID is assigned to each of the five nodes, which makes it easier to manage and tell them apart. If IDs are not specified, MySQL will assign them dynamically.
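Before going further, the file can be sanity-checked. One hedged option, assuming the cluster packages put ndb_mgmd on the PATH, is to ask it to parse the file and print the resulting configuration without leaving a daemon running:
[root@xuegod63 ~]# ndb_mgmd --print-full-config -f /usr/mysql-cluster/config.ini   # parse config.ini, dump the full configuration, and exit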
2. xuegod64 and xuegod65 data nodes
[root@xuegod64 /]# vim /etc/my.cnf   # configure xuegod65 the same way
[root@xuegod65 /]# vim /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql   # MySQL data storage path
ndbcluster   # enable the NDB engine
ndb-connectstring=192.168.1.63   # management node IP address
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[mysql_cluster]
ndb-connectstring=192.168.1.63   # management node IP address
3. SQL node configuration file:
[root@xuegod66 ~]# vim /etc/my.cnf   # configure xuegod67 the same way
[root@xuegod67 ~]# vim /etc/my.cnf
[mysqld]
ndbcluster   # enable the NDB engine
ndb-connectstring=192.168.1.63   # management node IP address
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[mysql_cluster]
ndb-connectstring=192.168.1.63   # management node IP address
Note: the data node and SQL node configuration files differ by exactly one line. The data nodes have datadir=/var/lib/mysql (the MySQL data storage path); the SQL nodes do not.
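To double-check what each mysqld will actually read from /etc/my.cnf, the my_print_defaults utility that ships with MySQL can be used; a minimal sketch (group names as in the files above):
[root@xuegod64 ~]# my_print_defaults mysqld mysql_cluster   # print the options that the [mysqld] and [mysql_cluster] groups supply from the option files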
1.2 Starting MySQL Cluster
Initial startup and the root password change must follow a strict order.
Startup order: management node service -> data node services -> SQL node services.
Shutdown order: shut down the management node service (when the management node shuts down, the ndbd data node services shut down automatically) -> manually shut down the SQL node services.
Before the first startup, make sure the firewalls on all the machines are turned off (service iptables stop), or open the required ports: the communication port 1186 and the data port 3306 (a firewalld sketch follows below).
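If you would rather keep the firewall running, a hedged firewalld sketch for CentOS 7 (note that NDB data nodes may also use additional, dynamically assigned ports, so simply stopping the firewall is easier in a lab):
firewall-cmd --permanent --add-port=1186/tcp   # management node communication port
firewall-cmd --permanent --add-port=3306/tcp   # SQL node (MySQL) port
firewall-cmd --reload   # apply the new rules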
1. Start the management node service on xuegod63:
[root@xuegod63 ~]# ndb_mgmd --ndb_nodeid=1 --initial -f /usr/mysql-cluster/config.ini
# Try not to place the management node, data nodes, and SQL nodes on the same machine; otherwise the failure of one host can take down the whole cluster.
View the port number:
[root@xuegod63 ~]# netstat -antup | grep 1186
tcp   0   0 0.0.0.0:1186      0.0.0.0:*          LISTEN        7057/ndb_mgmd
tcp   0   0 127.0.0.1:1186    127.0.0.1:60324    ESTABLISHED   7057/ndb_mgmd
tcp   0   0 127.0.0.1:60324   127.0.0.1:1186     ESTABLISHED   7057/ndb_mgmd
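At this point only the management node should be registered; a quick non-interactive check with the -e option of ndb_mgm (data and SQL nodes will show as "not connected" until they are started):
[root@xuegod63 ~]# ndb_mgm -e show   # print the cluster configuration and exit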
2. xuegod64 and xuegod65 start the data node service:
[root@xuegod64 ~]# ndbd --initial
[ndbd] INFO -- Angel connected to '10.10.10.67:1186'
[ndbd] INFO -- Angel allocated nodeid: 2
[root@xuegod65 ~]# ndbd --initial
[ndbd] INFO -- Angel connected to '10.10.10.67:1186'
[ndbd] INFO -- Angel allocated nodeid: 3
3. xuegod66 and xuegod67 start the SQL node service:
[root@xuegod66 ~]# systemctl start mysqld
[root@xuegod67 ~]# systemctl start mysqld
4. Check the status of the MySQL Cluster:
[root@xuegod63 ~]# ndb_mgm
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.1.64  (mysql-5.7.22 ndb-7.5.10, Nodegroup: 0, *)
id=3    @192.168.1.65  (mysql-5.7.22 ndb-7.5.10, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.63  (mysql-5.7.22 ndb-7.5.10)
[mysqld(API)]   2 node(s)
id=4    @192.168.1.66  (mysql-5.7.22 ndb-7.5.10)
id=5    @192.168.1.67  (mysql-5.7.22 ndb-7.5.10)
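The same information can be gathered without the interactive client, again using -e:
[root@xuegod63 ~]# ndb_mgm -e "all status"   # report the state of every data node; each should read "started"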
5. Data synchronization test
Before testing, we need to change the MySQL root password on the two SQL nodes.
Check the temporary MySQL root password. Note: only the passwords on the SQL nodes need to be changed.
[root@xuegod66 ~]# grep password /var/log/mysqld.log
2018-07-02T16:01:46.008448Z 1 [Note] A temporary password is generated for root@localhost: lCUouyg(k2fuf
Add validate-password=off under the [mysqld] section of /etc/my.cnf to disable password complexity checks, then restart MySQL:
[root@xuegod66 ~]# systemctl restart mysqld
[root@xuegod66 ~]# mysql -u root -p'lCUouyg(k2roomF'   # log in with the temporary password from the log
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.7.22-ndb-7.5.10-cluster-gpl
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> set password for 'root'@'localhost'=password('123456');
mysql> grant all privileges on *.* to cluster@'%' identified by '123456';   # grant privileges to a remote user
mysql> flush privileges;
Repeat the same steps on the other SQL node.
6. Simulate a client on an external machine inserting data:
mysql -ucluster -p123456 -h 192.168.1.66
Note: use the NDB engine when creating tables.
mysql> create database db;
mysql> use db;
mysql> create table test (id int) engine=ndb;
mysql> insert into test values (1000);
mysql> select * from test;
Log in to the other SQL node to view the data:
mysql -ucluster -p123456 -h 192.168.1.67
mysql> use db;
mysql> select * from test;
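If the row does not show up on the second node, the usual cause is a table created with the default InnoDB engine instead of NDB; a hedged check from any client machine:
mysql -ucluster -p123456 -h 192.168.1.67 -e "select table_name, engine from information_schema.tables where table_schema='db';"   # the engine column should read ndbcluster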
Stop one SQL node as a test:
[root@xuegod66 ~]# mysqladmin -uroot -p123456 shutdown
ndb_mgm> show   # view the status
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.1.64  (mysql-5.7.22 ndb-7.5.10, Nodegroup: 0, *)
id=3    @192.168.1.65  (mysql-5.7.22 ndb-7.5.10, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.63  (mysql-5.7.22 ndb-7.5.10)
[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.1.66)
id=5    @192.168.1.67  (mysql-5.7.22 ndb-7.5.10)   # one SQL node is now missing
7. Shut down the services
MySQL Cluster shutdown order: shut down the management node service (the data node services shut down automatically when the management node shuts down) -> manually shut down the SQL node services.
Shut down the MySQL management node:
[root@xuegod63 /]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> shutdown
Node 2: Cluster shutdown initiated
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
Node 3: Node shutdown completed.
3 NDB Cluster node(s) have shutdown.
Disconnecting to allow management server to shutdown.
ndb_mgm> exit
ps -axu | grep ndbd   # no ndbd processes are found, indicating the data nodes have been shut down
Manually shut down the SQL node service on xuegod66 and xuegod67:
[root@xuegod66 ~]# mysqladmin -uroot -p123456 shutdown
[root@xuegod67 ~]# mysqladmin -uroot -p123456 shutdown
Or, as a second method, kill the processes:
[root@xuegod66 ~]# ps -axu | grep mysql
[root@xuegod66 ~]# kill -9 7617
[root@xuegod66 ~]# kill -9 7743
[root@xuegod66 ~]# ps -axu | grep mysql   # do the same on xuegod67
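Since these SQL nodes are managed by systemd, the cleaner alternative to killing the processes is simply to stop the service on both hosts:
[root@xuegod66 ~]# systemctl stop mysqld
[root@xuegod67 ~]# systemctl stop mysqld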
8. Summary
To bring everything back up, the MySQL Cluster is started again as follows:
[root@xuegod63 /]# ndb_mgmd --ndb_nodeid=1 --initial -f /usr/mysql-cluster/config.ini
[root@xuegod64 /]# ndbd
[root@xuegod65 /]# ndbd
[root@xuegod66 /]# systemctl restart mysqld
[root@xuegod67 /]# systemctl restart mysqld
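In scripts it helps to wait until the data nodes are fully started before restarting the SQL nodes; a hedged sketch using the ndb_waiter utility shipped with the cluster packages (it blocks until the data nodes reach the STARTED state or the timeout expires):
[root@xuegod63 ~]# ndb_waiter --ndb-connectstring=192.168.1.63 --timeout=120   # wait up to 120 seconds for all data nodes to report STARTED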
Check the status of the MySQL Cluster:
[root@xuegod63 ~]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.10.10.68  (mysql-5.7.20 ndb-7.5.8, Nodegroup: 0, *)
id=3    @10.10.10.69  (mysql-5.7.20 ndb-7.5.8, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.10.10.67  (mysql-5.7.20 ndb-7.5.8)
[mysqld(API)]   2 node(s)
id=4    @10.10.10.70  (mysql-5.7.20 ndb-7.5.8)
id=5    @10.10.10.71  (mysql-5.7.20 ndb-7.5.8)