MySQL Master-Slave Replication and Read-Write Separation
We have previously done load balancing and high-availability deployment for the Nginx tier of the LNMP platform. Today we will use a few techniques to improve data availability and database performance.
I. MySQL master-slave replication
First of all, let's take a look at what problems master-slave replication can solve.
1. Brief introduction to database
As the world develops, data has become indispensable, and data security has become a problem worth discussing. Is there a way to address it?
We can use MySQL's own functionality to back up data. We have made backups before, but those were not good solutions: neither import/export nor directly copying the directory where the database lives can guarantee real-time consistency. What we introduce today is a way to achieve a hot backup of the database and thereby improve its security: MySQL master-slave replication.
2. The principle of master-slave replication
1) Replication types supported by MySQL
(1) Statement-based replication: the SQL statements executed on the master server are executed again on the slave server. MySQL uses statement-based replication by default, and it is more efficient than the other methods in most cases.
(2) Row-based replication: the changed rows themselves are copied to the slave instead of the statements being executed again on the slave.
(3) Mixed replication: statement-based replication is used by default, and row-based replication is used as soon as MySQL finds that a statement cannot be replicated accurately.
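For reference, the replication format can also be set explicitly on the master. A minimal sketch of the relevant my.cnf lines (the option name is standard MySQL; the value shown here is only illustrative):
# excerpt from /etc/my.cnf on the master (illustrative)
[mysqld]
binlog_format = STATEMENT     # or ROW / MIXED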
2) The replication workflow
MySQL replication works as follows:
Figure 1: MySQL replication process
(1) Before each transaction completes, the master records the changes in its binary log. After the events are written to the binary log, the master tells the storage engine to commit the transaction.
(2) The slave copies the master's binary log into its own relay log. The slave first starts a worker thread, the I/O thread, which opens a connection to the master; the master then starts a Binlog dump thread, which reads events from the master's binary log. If it has caught up with the master, it sleeps and waits for the master to generate new events. The slave's I/O thread writes the received events to the relay log.
(3) The SQL slave thread handles the last step of the process: it reads events from the relay log and replays them, updating the slave's data so that it stays consistent with the master's. As long as this thread keeps up with the I/O thread, the relay log usually stays in the OS cache, so the overhead of the relay log is very small.
This is the principle of MySQL master-slave replication. There can be multiple slaves and multiple masters, and keepalived can be used for HA. It is recommended that MySQL data not be placed only on shared storage; each slave should have separate storage for its data.
There is one very important limitation in replication: it is serialized on the slave, meaning that update operations that run in parallel on the master cannot be applied in parallel on the slave.
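Once replication is running (configured later in this article), these threads can be observed directly. An illustrative check:
mysql> show processlist;     -- on the master: one "Binlog Dump" thread per connected slave
mysql> show processlist;     -- on a slave: the I/O thread and the SQL thread appear in the list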
3. The principle of MySQL read-write separation
Simply put, read-write separation (figure 2) means reading data from the slave servers and writing data to the master server. The basic principle is that the master server handles transactional queries (writes) while the slave servers handle SELECT queries, and database replication is used to synchronize the changes caused by transactional queries to the slave databases in the cluster.
Figure 2
At present there are two common ways to implement MySQL read-write separation.
1) Implementation inside application code
The application routes statements by type (SELECT vs. INSERT/UPDATE/DELETE) in its own code. This is the most widely used approach in production. Its advantage is performance: the routing is done in the program, with no extra hardware cost. Its disadvantage is that it requires developers to implement it, and operations staff cannot change it on their own.
2) Implementation based on an intermediate proxy layer
The proxy generally sits between the application servers and the database servers. After receiving a request from an application server, the proxy decides where it belongs and forwards it to a back-end database. The following are representative products.
(1) mysql_proxy. An open-source MySQL project that judges SQL with its own Lua scripts. Although it is an official MySQL product, MySQL does not recommend deploying it in a production environment.
(2) Atlas. A data middle-layer project based on the MySQL protocol, developed and maintained by the infrastructure team of the Web platform department at Qihoo 360. It adds optimizations and new features on top of mysql-proxy 0.8.2. The MySQL services run behind Atlas inside Qihoo 360 carry billions of read and write requests every day. It supports transactions and stored procedures.
(3) Amoeba. Developed in pure Java by Alibaba Group employee Chen Silu. Alibaba has used it in production, but it does not support transactions or stored procedures.
What we demonstrate today is Amoeba, version 2.2.0.
II. Case environment
This case uses 5 servers. The topology is shown in figure 3.
Figure 3
Host    | Operating system  | IP address  | Main software
Master  | CentOS 6.5 x86_64 | 192.168.1.2 | cmake-2.8.6.tar.gz, mysql-5.5.22.tar.gz
Slave1  | CentOS 6.5 x86_64 | 192.168.1.3 | cmake-2.8.6.tar.gz, mysql-5.5.22.tar.gz
Slave2  | CentOS 6.5 x86_64 | 192.168.1.4 | cmake-2.8.6.tar.gz, mysql-5.5.22.tar.gz
Amoeba  | CentOS 6.5 x86_64 | 192.168.1.1 | amoeba-mysql-binary-2.2.0.tar.gz, jdk-6u14-linux-x64.bin
Client  | CentOS 6.5 x86_64 | 192.168.1.5 | mysql (client, installed later with yum)
The above is today's topology. If you need to eliminate single points of failure, you can use the keepalived setup mentioned earlier, at the cost of a few more machines. The master can also use keepalived to avoid a single point of failure; this has been discussed before, so it is not explained again here.
1. Configure time synchronization
Since the clocks of the master and slaves must be consistent, we use the master as the time server to provide time synchronization to the slaves. The ntp package is installed with yum.
[root@centos2 ~]# yum -y install ntp
[root@centos2 ~]# vim /etc/ntp.conf
server 127.127.1.0
fudge 127.127.1.0 stratum 8     // these two lines can be added anywhere in the file
[root@centos2 ~]# iptables -I INPUT -p udp --dport 123 -j ACCEPT     // NTP uses UDP port 123 by default
[root@centos2 ~]# service ntpd start     // start the ntp service
Synchronize the time on the clients (the slaves).
If the ntpdate command is missing, install the ntpdate package with yum.
Slave1
[root@centos3 ~]# ntpdate 192.168.1.2     // the remaining servers are synchronized in the same way
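To keep the slaves' clocks from drifting afterwards, the ntpdate call can also be scheduled periodically. A minimal sketch using cron (the 30-minute interval and the /usr/sbin/ntpdate path are assumptions):
[root@centos3 ~]# crontab -e
*/30 * * * * /usr/sbin/ntpdate 192.168.1.2 > /dev/null 2>&1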
2. Install and deploy mysql server
Here I take the slave2 server as an example to demonstrate installing the MySQL server; master and slave1 are installed in the same way as slave2.
Slave2:
(1) install mysql and create a program user
[root@centos4 cmake-2.8.12]# ./configure && gmake && gmake install
[root@centos4 ~] # tar zxf mysql-5.5.38.tar.gz
[root@centos4 ~] # cd mysql-5.5.38
[root@centos4 mysql-5.5.38]# cmake \
-DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DSYSCONFDIR=/etc/ -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_EXTRA_CHARSETS=all && make && make install
[root@centos4 mysql-5.5.38]# useradd -M -s /sbin/nologin mysql
(2) optimize the program execution path
[root@centos4 mysql-5.5.38]# echo \
> PATH=$PATH:/usr/local/mysql/bin >> /etc/profile
(3) create a master configuration file
[root@centos4 mysql-5.5.38]# cp support-files/my-medium.cnf.sh /etc/my.cnf
cp: overwrite '/etc/my.cnf'? y
[root@centos4 mysql-5.5.38]#
(4) create a service script and add it as a system service
[root@centos4 mysql-5.5.38]# cp support-files/mysql.server /etc/init.d/mysqld
[root@centos4 mysql-5.5.38]#
[root@centos4 mysql-5.5.38]# chkconfig --add mysqld
[root@centos4 mysql-5.5.38]# chkconfig mysqld on
[root@centos4 mysql-5.5.38]# chmod +x /etc/init.d/mysqld
(5) initialize the database
[root@centos4 mysql-5.5.38]# /usr/local/mysql/scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql/ --datadir=/usr/local/mysql/
[root@centos4 mysql-5.5.38]#
(6) modify the permissions of the installation directory
[root@centos4 mysql-5.5.38]# chown -R mysql:mysql /usr/local/mysql/
(7) start the service
[root@centos4 mysql-5.5.38] # service mysqld start
(8) set the password for the user root
[root@centos4 mysql-5.5.38]# mysqladmin -u root -p password '123.abc'
There is no password by default; just press Enter at the prompt for the old password.
(9) Log in to mysql database
[root@centos4 mysql-5.5.38]# mysql -u root -p123.abc
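As a quick sanity check that the freshly built server is answering (illustrative):
[root@centos4 mysql-5.5.38]# mysql -u root -p123.abc -e 'select version();'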
3. Configure the master server
(1) Modify the /etc/my.cnf main configuration file
[root@centos2 ~]# vim /etc/my.cnf
server-id = 1     // unique ID of this MySQL instance (must not be repeated)
log-slave-updates=true     // allow chained replication (add this line)
log-bin=master-bin     // binary log file name (modify this line)
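Putting these together, the relevant part of the master's /etc/my.cnf looks roughly like this (the [mysqld] section header comes from the stock my-medium.cnf):
[mysqld]
server-id         = 1
log-bin           = master-bin
log-slave-updates = true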
(2) restart mysql service
[root@centos2 ~] # service mysqld restart
(3) Log in to mysql database
[root@centos2 ~]# mysql -u root -p123.abc
(4) establish authorized users
Grant the replication slave privilege on all databases and tables to the user myslave connecting from any IP in this network segment, with password 123.abc. This is the account the slaves use to copy the master's binary log.
mysql> grant replication slave on *.* to 'myslave'@'192.168.1.%' identified by '123.abc';
(5) Check the binary log file name and position offset of the database
Figure 4
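Figure 4 is the output of show master status; an illustrative session (the file name and position here match the values used later in change master to, but they will differ on your own server):
mysql> show master status;
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-bin.000001 |      261 |              |                  |
+-------------------+----------+--------------+------------------+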
(6) exit mysql database
mysql> exit
Bye
(7) establish firewall rules
[root@centos2 ~]# iptables -I INPUT -p tcp --dport 3306 -j ACCEPT     // allow inbound traffic to destination port 3306
4. Configure the slave server (slave1)
(1) Modify /etc/my.cnf
[root@centos3 ~]# vim /etc/my.cnf
server-id = 2     // must not be repeated on other instances
log-bin=mysql-bin     // binary log file name
relay-log=relay-log-bin     // relay log file name (the copy of the master's binary log)
relay-log-index=slave-relay-bin.index     // name of the relay log index file
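Put together, the slave's /etc/my.cnf additions look roughly like this (same [mysqld] section as on the master):
[mysqld]
server-id       = 2
log-bin         = mysql-bin
relay-log       = relay-log-bin
relay-log-index = slave-relay-bin.index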
(2) restart the mysqld service
[root@centos3 ~] # service mysqld restart
(3) Log in to mysql database
[root@centos3 ~]# mysql -u root -p123.abc
(4) configure mysql data synchronization
mysql> change master to master_host='192.168.1.2', master_user='myslave', master_password='123.abc', master_log_file='master-bin.000001', master_log_pos=261;
PS: the last two items must match the values shown in figure 4, otherwise synchronization will not work.
The IP address, user and password are all the master's information.
(5) start synchronization
mysql> start slave;
(6) View synchronization information
Figure 5
The \G option formats the output vertically.
Both values must be Yes for replication to be working. If Slave_IO_Running shows Connecting, it is usually caused by a firewall problem, or by the values in the change master statement not matching those shown in figure 4. In that case, check the values again after restarting the mysqld service on the master, then correct them on the slave: run stop slave; first, make the change, then start slave again and re-verify.
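Figure 5 is the output of show slave status\G; the two lines to check, in an illustrative excerpt:
mysql> show slave status\G
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...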
The configuration of slave2 is exactly the same as slave1; it is easiest to copy slave1's configuration and adjust it, being careful not to make mistakes when operating in the database.
5. Install amoeba
(1) deploy jdk environment
[root@centos1 ~]# chmod +x jdk-6u14-linux-x64.bin
[root@centos1 ~]# ./jdk-6u14-linux-x64.bin
When the license agreement is displayed, press q to skip to the end.
At the end, the following prompt appears:
Do you agree to the above license terms? [yes or no]     // enter yes to accept the license and continue; entering no aborts the installation
The installation then proceeds.
Press Enter to continue.     // press Enter to complete the installation
After the installation is complete, the following directories will be generated under the current directory
[root@centos1 ~]# ls -ld jdk1.6.0_14/
drwxr-xr-x. 10 root root 4096 Nov 11 00:54 jdk1.6.0_14/
Since amoeba is based on JDK 1.5, JDK environments newer than 1.6 are not recommended.
(2) move the installation directory
[root@centos1 ~]# mv jdk1.6.0_14/ /usr/local/jdk1.6
(3) Modify the /etc/profile file
export JAVA_HOME=/usr/local/jdk1.6     // root directory of the JDK
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib     // add the JDK class libraries to CLASSPATH
export PATH=$JAVA_HOME/lib:$JAVA_HOME/jre/bin:$PATH:$HOME/bin     // add the JDK program files to PATH
export AMOEBA_HOME=/usr/local/amoeba     // root directory of amoeba
export PATH=$PATH:$AMOEBA_HOME/bin     // add the amoeba program files to PATH
[root@centos1 ~]# . /etc/profile     // reload the profile file
(4) create an unzipped directory for amoeba
[root@centos1 ~]# mkdir -p /usr/local/amoeba
[root@centos1 ~]# chmod -R 755 /usr/local/amoeba/
(5) decompress amoeba software package
[root@centos1 ~]# tar zxf amoeba-mysql-binary-2.2.0.tar.gz -C /usr/local/amoeba/
(6) verify whether the deployment of environment variables is correct
[root@centos1 ~]# /usr/local/amoeba/bin/amoeba
amoeba start|stop
The prompt above indicates that the environment variables are set correctly and the JDK is already working. We can also check the JDK version:
[root@centos1 ~]# javac -version
javac 1.6.0_28
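The environment variables themselves can also be spot-checked with echo (illustrative; the paths are the ones set in /etc/profile above):
[root@centos1 ~]# echo $JAVA_HOME
/usr/local/jdk1.6
[root@centos1 ~]# echo $AMOEBA_HOME
/usr/local/amoeba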
6. Configure amoeba to achieve read-write separation.
Note: the configuration items shown below already exist in the stock files but are wrapped in comments. Remove the comment markers before using them; it is usually cleanest to delete the whole comment lines, but sometimes only the opening and closing markers need to be removed, after which the configuration items can be used directly.
(1) Modify the amoeba.xml main configuration file in the conf directory under the amoeba installation directory.
Two things need to be defined in this file: first, the user account that applications use to connect to amoeba when accessing the back-end MySQL databases; second, the default pool and the write and read pools.
Because the file is long, only the modified items are listed.
[root@centos1 ~]# vim /usr/local/amoeba/conf/amoeba.xml
amoeba     // the user that applications use to connect to amoeba
123.abc     // that user's password
${amoeba.home}/conf/access_list.conf
The items above are the ones that need to be modified in this part of the file.
${amoeba.home}/conf/rule.xml
${amoeba.home}/conf/ruleFunctionMap.xml
${amoeba.home}/conf/functionMap.xml
1500
master     // defines the default pool
master     // defines the write pool
slaves     // defines the read pool; the pool names can be chosen freely, as long as they are used consistently
The items above are the contents that need to be modified in this configuration file.
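For orientation only, the corresponding fragments of amoeba.xml look roughly like the sketch below. This is reconstructed from memory of a stock Amoeba 2.2.0 file, so treat the element and property names as assumptions and verify them against your own copy:
<!-- authentication section: the account applications use to connect to amoeba -->
<property name="user">amoeba</property>
<property name="password">123.abc</property>
...
<!-- queryRouter section: uncomment and set the pools -->
<property name="LRUMapSize">1500</property>
<property name="defaultPool">master</property>
<property name="writePool">master</property>
<property name="readPool">slaves</property>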
(2) Modify the dbServers.xml configuration file: define the write and read servers and specify the load-balancing algorithm.
[root@centos1 ~]# vim /usr/local/amoeba/conf/dbServers.xml
test     // line 26
123.abc     // line 27
These two lines define the account amoeba uses to access the back-end databases. This user must be created on every back-end database, must have privileges on all tables in all databases, and must be allowed to connect from the amoeba host's IP. The username is not fixed; create it according to your actual situation.
master     // the name of the write server definition; it must match the write pool defined earlier
192.168.1.2     // IP of the write server
slave1     // this name can be anything
192.168.1.3     // IP of a read server
slave2     // this name can be anything
192.168.1.4     // IP of a read server
The parts above need to be modified; the slave2 entry is simply a copy of the slave1 configuration.
slaves     // the read pool referenced by the main configuration file
1     // the load-balancing algorithm of the read pool: 1 means round robin, 2 means weighted round robin
slave1,slave2     // the servers that make up the read pool
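Likewise, a rough sketch of the corresponding dbServers.xml fragments (again reconstructed from memory of the stock 2.2.0 file; the element names, the abstractServer parent and the poolConfig class are assumptions to be checked against your own copy):
<!-- account used on the back-end MySQL servers (inherited by all dbServer entries) -->
<property name="user">test</property>
<property name="password">123.abc</property>

<!-- the write server; the name must match the write pool in amoeba.xml -->
<dbServer name="master" parent="abstractServer">
  <factoryConfig>
    <property name="ipAddress">192.168.1.2</property>
  </factoryConfig>
</dbServer>

<!-- the read servers; the names only need to match poolNames below -->
<dbServer name="slave1" parent="abstractServer">
  <factoryConfig>
    <property name="ipAddress">192.168.1.3</property>
  </factoryConfig>
</dbServer>
<dbServer name="slave2" parent="abstractServer">
  <factoryConfig>
    <property name="ipAddress">192.168.1.4</property>
  </factoryConfig>
</dbServer>

<!-- the virtual read pool referenced by readPool in amoeba.xml -->
<dbServer name="slaves" virtual="true">
  <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">
    <property name="loadbalance">1</property>          <!-- 1 = round robin, 2 = weighted round robin -->
    <property name="poolNames">slave1,slave2</property>
  </poolConfig>
</dbServer>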
The above is the modification and explanation of all configuration files.
(3) Create the test user on master, slave1 and slave2, matching the account defined in the configuration file.
mysql> grant all on *.* to 'test'@'192.168.1.%' identified by '123.abc';
Now we can start the amoeba service and verify it.
[root@centos1 ~]# amoeba start&     // amoeba occupies the terminal after starting, so we run it in the background
If a long block of messages ending with Java errors appears during startup, the service did not start successfully. Amoeba listens on port 8066 by default, which we can verify:
[root@centos1 ~]# netstat -anpt | grep 8066
tcp        0      0 :::8066        :::*        LISTEN        3405/java
7. Establish firewall rules
[root@centos1 ~]# iptables -I INPUT -p tcp --dport 8066 -j ACCEPT
8. Master, slave1 and slave2 all need to open port 3306 for inbound.
[root@centos2 ~]# iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
[root@centos2 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
[root@centos2 ~]#
The other two database servers are configured in the same way, or you can simply stop iptables altogether.
9. Testing
(1) on the application server
[root@web ~]# yum -y install mysql
Access MySQL through the proxy:
[root@web ~]# mysql -u amoeba -p123.abc -h 192.168.1.1 -P8066
-P specifies the port: amoeba listens on 8066 rather than MySQL's default port, so it has to be given explicitly. To avoid this, we can change the 8066 on line 11 of amoeba's amoeba.xml to 3306, add a firewall rule allowing port 3306 inbound, and restart the amoeba service:
[root@centos1 ~]# amoeba stop
[root@centos1 ~]# amoeba start&
[root@centos1 ~]# iptables -I INPUT -p tcp --dport 3306 -j ACCEPT
Then log in to amoeba again:
[root@web ~]# mysql -u amoeba -p123.abc -h 192.168.1.1
(2) Create a banji database on the master and a class table inside it, and check whether they are synchronized to the other servers. Then stop the slave threads on each server and insert a different row on each one.
Master operation
mysql> create database banji;
mysql> create table banji.class (id int);
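A quick way to confirm that the new database and table reached both slaves before stopping replication (illustrative):
mysql> show databases like 'banji';     -- run on slave1 and slave2
mysql> show tables from banji;          -- should list the class table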
Slave1 operation
Figure 6
mysql> stop slave;
Slave2 operation
Figure 7
mysql> stop slave;
Master operation
mysql> insert into banji.class values (1);
Slave1 operation
mysql> insert into banji.class values (2);
Slave2 operation
mysql> insert into banji.class values (3);
Client operation
Figure 8
As can be seen from the figure above, the two queries were served by slave1 and slave2 respectively, each returning its own value. Because replication is stopped, each slave holds only the row it inserted itself.
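The query behind figure 8 is simply the following, run twice in the client session connected through amoeba; with round-robin load balancing each run is served by a different slave:
mysql> select * from banji.class;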
mysql> insert into banji.class values (4);
Figure 9
If the 4 just inserted is not visible on the slaves, that proves the write operation went to the master. We can check on the master.
Master operation
Figure 10
The value just inserted can be seen on the master. Now re-enable the slave threads on each slave.
Slave1,slave2 operation
mysql> start slave;
Client operation
After the slave threads are re-enabled (figure 11), the data we see has been synchronized. The 3 inserted on slave2 is missing because the data had already diverged; if the data were consistent this would not happen. MySQL master-slave replication and read-write separation are now working.
To summarize:
① When configuring master-slave replication, pay attention to user privileges, the firewall, and server-id (it must not be repeated).
② When configuring master-slave replication, be sure to fill in the values after change master to according to the master's status information.
③ When configuring amoeba, make sure the environment variables contain no mistakes; after modifying them, reload the profile file so the changes take effect, and verify with echo, by launching amoeba directly, or by checking the JDK version.
④ When modifying the amoeba configuration files, the default, write and read pool names and the server names in the dbServers file must match the main configuration file.
⑤ Remember to open the corresponding firewall ports. If specifying a port at login is inconvenient, you can change the amoeba port to 3306 so that no port needs to be specified.
⑥ When verifying, follow the steps above exactly and be careful not to get things mixed up in the test environment.
⑦ The mysql service must be set to start at boot. To eliminate the single points of failure of amoeba and the master, use the keepalived tool.