Shulou (Shulou.com), SLTechnology News & Howtos > Database, updated 2025-04-03
Environment:
Virtual machine: VMware
OS: CentOS Linux release 7.3.1611 (Core)
After installing stand-alone MySQL 8.0 as described below, clone the machine twice, change the clones' IPs to 192.168.1.79 and 192.168.1.80, change server_id in each clone's /etc/my.cnf, and delete auto.cnf under the data path, otherwise the configuration will not work.
vim /etc/hosts
192.168.1.78 mgr-mul1
192.168.1.79 mgr-mul2
192.168.1.80 mgr-mul3
Use hostname to set each machine's hostname and make sure they are all different; otherwise the configuration will fail.
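The hostname step above can be sketched as follows (a hedged example using the host names from the table; the `hostnamectl` line must run as root on each real node):

```shell
# On each real node, as root (one distinct name per node):
#   hostnamectl set-hostname mgr-mul1    # mgr-mul2 / mgr-mul3 on the others
# A quick sanity check that the three planned names are all distinct:
HOSTS='192.168.1.78 mgr-mul1
192.168.1.79 mgr-mul2
192.168.1.80 mgr-mul3'
DISTINCT=$(echo "$HOSTS" | awk '{print $2}' | sort -u | wc -l)
echo "distinct hostnames: $DISTINCT"
```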
I. Stand-alone installation (192.168.1.78)
Download the 8.0.11 tar.gz package from the official website, unpack it, and move/rename it to /usr/local/mysql.
Create the data path and log path and change their ownership to the mysql user:
/data/mysql3306/mysql3306
/data/mysql3306/logs
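The directory creation can be sketched like this (a minimal example; `BASE` is a scratch prefix for illustration, and on the real server you would run as root against the paths above directly):

```shell
# Create the data and log paths and hand them to the mysql user.
# BASE is a demo prefix; on the real server use the paths as-is (root required).
BASE="${BASE:-/tmp/mgr-demo}"
mkdir -p "$BASE/data/mysql3306/mysql3306" "$BASE/data/mysql3306/logs"
# Requires root and an existing mysql user on the real host:
# chown -R mysql:mysql "$BASE/data/mysql3306"
ls -ld "$BASE/data/mysql3306/logs"
```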
The configuration file is as follows; be sure to adjust server_id and the group-related IPs on each node:
vim /etc/my.cnf
[client]
port = 3306
socket = /tmp/mysql3306.sock
default_character_set = utf8mb4
[mysql]
default_character_set = utf8mb4
[mysqld]
lower_case_table_names = 1
port = 3306
basedir = /usr/local/mysql
datadir = /data/mysql3306/mysql3306
socket = /tmp/mysql3306.sock
pid_file = /data/mysql3306/mysql3306.pid
tmpdir = /tmp/
default_authentication_plugin = mysql_native_password
skip_name_resolve = 1
character_set_server = utf8mb4
collation_server = utf8mb4_unicode_ci
max_connections = 2000
max_connect_errors = 10000
interactive_timeout = 600
wait_timeout = 600
table_open_cache = 2048
thread_cache_size = 51
max_allowed_packet = 16M
tmp_table_size = 256M
max_heap_table_size = 96M
sort_buffer_size = 4M
read_buffer_size = 4M
join_buffer_size = 4M
read_rnd_buffer_size = 64M
bulk_insert_buffer_size = 64M
log_error = /data/mysql3306/logs/mysql-error.log
log_timestamps = system
slow_query_log = 1
slow_query_log_file = /data/mysql3306/logs/mysql-slow.log
long_query_time = 1
log_queries_not_using_indexes = 1
log_throttle_queries_not_using_indexes = 10
log_slow_admin_statements = 1
log_slow_slave_statements = 1
min_examined_row_limit = 100
log_output = FILE
relay_log = /data/mysql3306/mysql3306/mysql-relay
# binlog
server_id = 1783306
log_bin = /data/mysql3306/mysql3306/mysql-bin
binlog_expire_logs_seconds = 604800
binlog_format = row
max_binlog_size = 1024M
max_binlog_cache_size = 8G
binlog_cache_size = 4M
sync_binlog = 1
master_info_repository = TABLE
relay_log_info_repository = TABLE
relay_log_recovery = ON
log_slave_updates = 1
# innodb
default_storage_engine = InnoDB
innodb_page_size = 16384
innodb_data_home_dir = /data/mysql3306/mysql3306/
innodb_data_file_path = ibdata1:512M:autoextend
innodb_log_group_home_dir = /data/mysql3306/mysql3306
innodb_buffer_pool_instances = 1
innodb_buffer_pool_size = 1024M
innodb_buffer_pool_dump_pct = 40
innodb_page_cleaners = 16
innodb_log_file_size = 256M
innodb_log_files_in_group = 2
innodb_log_buffer_size = 32M  # default 16M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 30
innodb_strict_mode = 1
innodb_print_all_deadlocks = 1
innodb_buffer_pool_load_at_startup = 1
innodb_buffer_pool_dump_at_shutdown = 1
innodb_open_files = 65536
innodb_file_per_table = 1
innodb_read_io_threads = 8
innodb_write_io_threads = 8
innodb_io_capacity = 200
innodb_flush_method = O_DIRECT
innodb_purge_threads = 4
innodb_max_dirty_pages_pct = 75
transaction_isolation = READ-COMMITTED
explicit_defaults_for_timestamp = 1
# undo
innodb_undo_directory = /data/mysql3306/mysql3306/
innodb_undo_tablespaces = 3
innodb_undo_log_truncate = 1
innodb_max_undo_log_size = 1000M
innodb_purge_rseg_truncate_frequency = 128
# rep
slave_parallel_type = LOGICAL_CLOCK
slave_parallel_workers = 16
binlog_checksum = NONE
slave_preserve_commit_order = 1
# group_replication_flow_control_mode = DISABLED
# GR
enforce_gtid_consistency = 1
gtid_mode = ON
transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name = "a876d35e-9110-11e6-a365-882b2b5s09d6"  # generate with SELECT UUID(); must be identical on every member of the group
loose-group_replication_local_address = "192.168.1.78:24900"  # this machine's IP
loose-group_replication_group_seeds = "192.168.1.78:24900,192.168.1.79:24900,192.168.1.80:24900"
loose-group_replication_bootstrap_group = off
loose-group_replication_start_on_boot = off  # important
[mysqldump]
max_allowed_packet = 16M
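The group_replication_group_name must be a valid UUID and identical on all three nodes; the value in the file above is this article's placeholder. SELECT UUID() in any MySQL session generates one, and on the shell the kernel (or `uuidgen`) can supply an equivalent, as this sketch shows:

```shell
# Generate a UUID for the group name (Linux kernel source, falling back to uuidgen):
GROUP_UUID=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)
echo "loose-group_replication_group_name = \"$GROUP_UUID\""
```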
Initialization:
/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.cnf --initialize-insecure --basedir=/usr/local/mysql --datadir=/data/mysql3306/mysql3306/ --explicit_defaults_for_timestamp --user=mysql
This creates 22 files under the data directory.
Start:
/usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf --ledir=/usr/local/mysql/bin --datadir=/data/mysql3306/mysql3306 --user=mysql &
echo "/usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf --ledir=/usr/local/mysql/bin --datadir=/data/mysql3306/mysql3306 --user=mysql &" >> /etc/rc.local
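One caveat worth checking on CentOS 7: rc.local only runs at boot when the file is executable. The sketch below demonstrates on a scratch copy; on the real host the file is /etc/rc.d/rc.local (which /etc/rc.local symlinks to):

```shell
# Demo on a scratch copy; on the real host: chmod +x /etc/rc.d/rc.local
RC=/tmp/rc.local.demo
printf '%s\n' '/usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/my.cnf --user=mysql &' > "$RC"
chmod +x "$RC"
ls -l "$RC"
```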
Login:
mysql
ALTER USER root@localhost IDENTIFIED BY '123456';
CREATE USER root@'192.168.1.%' IDENTIFIED BY '123456';
GRANT ALL ON *.* TO root@'192.168.1.%';
CREATE USER rep@'%' IDENTIFIED BY 'rep';
GRANT REPLICATION SLAVE ON *.* TO rep@'%';
FLUSH PRIVILEGES;
Save a snapshot (before-MGR-multi), then clone the machine twice and make the following changes:
(1) Change the clones' IPs to 192.168.1.79 and 192.168.1.80 respectively, and set each hostname to match the hosts file.
(2) Change server_id and loose-group_replication_local_address in each clone's my.cnf to that node's own values.
(3) In /data/mysql3306/mysql3306, delete the auto.cnf file.
Then save a snapshot of each clone (before-MGR-multi).
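The per-clone edits can be scripted; below is a hypothetical helper exercised on a scratch copy of my.cnf (the server_id scheme 1 + last-octet + 3306 is inferred from the article's 1783306 for node .78). On a real clone you would edit /etc/my.cnf and remove auto.cnf under the datadir:

```shell
# Scratch copy standing in for /etc/my.cnf on a clone:
CNF=/tmp/my.cnf.demo
printf '%s\n' 'server_id = 1783306' \
  'loose-group_replication_local_address = "192.168.1.78:24900"' > "$CNF"
NEW_IP=192.168.1.79
# Rewrite the two per-node values (assumed id scheme: 1<last-octet>3306):
sed -i -e "s/^server_id.*/server_id = 1${NEW_IP##*.}3306/" \
       -e "s/192\.168\.1\.78:24900/$NEW_IP:24900/" "$CNF"
grep -E 'server_id|local_address' "$CNF"
# rm -f /data/mysql3306/mysql3306/auto.cnf   # forces a fresh server UUID on first start
```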
II. Multi-primary setup
1. Install the plugin on all nodes:
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SHOW PLUGINS;
2. On 192.168.1.78 only (the same as bootstrapping single-primary mode):
CHANGE MASTER TO MASTER_USER='rep', MASTER_PASSWORD='rep' FOR CHANNEL 'group_replication_recovery';
SET PERSIST group_replication_single_primary_mode=OFF;  -- for single-primary mode this stays ON
Enable bootstrap temporarily:
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;
SELECT * FROM performance_schema.replication_group_members;
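The bootstrap sequence above can be collected into a script to pipe into the mysql client on 192.168.1.78 (a sketch; the file path is illustrative):

```shell
# Write the bootstrap statements to a script file:
cat > /tmp/bootstrap_mgr.sql <<'SQL'
CHANGE MASTER TO MASTER_USER='rep', MASTER_PASSWORD='rep' FOR CHANNEL 'group_replication_recovery';
SET PERSIST group_replication_single_primary_mode=OFF;
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;
SQL
# On the first node:  mysql -uroot -p < /tmp/bootstrap_mgr.sql
wc -l /tmp/bootstrap_mgr.sql
```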
3. After the above completes, run on 192.168.1.79 and then 192.168.1.80, in that order:
CHANGE MASTER TO MASTER_USER='rep', MASTER_PASSWORD='rep' FOR CHANNEL 'group_replication_recovery';
SET PERSIST group_replication_single_primary_mode=OFF;
START GROUP REPLICATION;
4. Verify:
Run on any node: SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 5651f374-97c8-11e8-9966-000c295ce53e | mgr-mul1    |        3306 | ONLINE       | PRIMARY     | 8.0.11         |
| group_replication_applier | 749eb449-97e1-11e8-ab87-000c29ee3a4b | mgr-mul3    |        3306 | ONLINE       | PRIMARY     | 8.0.11         |
| group_replication_applier | 7753edfa-97e1-11e8-a10c-000c296952a6 | mgr-mul2    |        3306 | ONLINE       | PRIMARY     | 8.0.11         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
All three members are ONLINE with the correct hostnames, and all are PRIMARY, which means the MGR multi-primary configuration succeeded.
Check for errors with: SELECT * FROM performance_schema.replication_connection_status\G
For example:
*************************** 1. row ***************************
                                      CHANNEL_NAME: group_replication_recovery
                                        GROUP_NAME:
                                       SOURCE_UUID:
                                         THREAD_ID: NULL
                                     SERVICE_STATE: OFF
                         COUNT_RECEIVED_HEARTBEATS: 0
                          LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
                          RECEIVED_TRANSACTION_SET:
                                 LAST_ERROR_NUMBER: 0
                                LAST_ERROR_MESSAGE:
                              LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
                           LAST_QUEUED_TRANSACTION:
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
       LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
                              QUEUEING_TRANSACTION:
    QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
   QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
        QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
*************************** 2. row ***************************
                                      CHANNEL_NAME: group_replication_applier
                                        GROUP_NAME: 8ef01324-97ce-11e8-b4cc-000c295ce53e
                                       SOURCE_UUID: 8ef01324-97ce-11e8-b4cc-000c295ce53e
                                         THREAD_ID: NULL
                                     SERVICE_STATE: ON
                         COUNT_RECEIVED_HEARTBEATS: 0
                          LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00.000000
                          RECEIVED_TRANSACTION_SET: 8ef01324-97ce-11e8-b4cc-000c295ce53e:1-7:1000006:2000006
                                 LAST_ERROR_NUMBER: 0
                                LAST_ERROR_MESSAGE:
                              LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00.000000
                           LAST_QUEUED_TRANSACTION: 8ef01324-97ce-11e8-b4cc-000c295ce53e:2000006
 LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 2018-08-08 09:30:19.410114
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
     LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2018-08-08 09:30:19.412069
       LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2018-08-08 09:30:19.412081
                              QUEUEING_TRANSACTION:
    QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
   QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00.000000
        QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00.000000
On all three machines, the group_replication_applier channel is ON and the group_replication_recovery channel is OFF.
5. Note: if all three nodes are shut down, you must bootstrap the group again (SET GLOBAL group_replication_bootstrap_group=ON on the first node) when restarting the multi-primary MGR.
III. Testing
Create a database and a table on any node; the changes should replicate to the other nodes.
IV. Problems
MGR multi-primary seems unstable in this setup: a node would often go offline and not rejoin automatically, and load-testing with a looping script under high concurrency dropped nodes whether or not a proxy sat in front. It does not seem very practical yet; this may also be a mistake in my parameter tuning, and guidance from experts is welcome.
To bring an offline node back, simply run STOP GROUP_REPLICATION and then START GROUP_REPLICATION on it.
Transferred from: https://blog.csdn.net/zhangwenjiang001/article/details/81503576