Requirement:
At the request of the big data department, a create_time field needs to be added to specified tables, with the current time as the default value.
Environment:
DB: MySQL 5.6 on Aliyun RDS
OS: CentOS 7
Several of the tables are large, over 100 million rows, the largest around 700 million. The plan is to use the pt-online-schema-change tool, run it at night, and split the work into several batches according to the workload.
When choosing a plan, the boss wanted several options listed so that their advantages and disadvantages could be compared:
1. Online DDL
2. The pt-online-schema-change tool
3. For tables over 100 million rows, rename the table to a history table and create a new table that already contains the create_time field; the front-end program has to be modified accordingly.
For tables under 100 million rows, modify them with the pt-online-schema-change tool.
4. Shard the large tables by database and table through mycat, solving the problem once and for all.
Since RDS does not allow direct ssh access, an ECS server in the same network segment was requested to run pt-online-schema-change.
Command execution machine: CentOS 7.x, MySQL 5.6 client, Percona Toolkit.
I. Tool installation
yum -y install perl-TermReadKey.x86_64
yum -y install perl-IO-Socket-SSL
yum -y install perl-DBI.x86_64
yum -y install perl-DBD-MySQL.x86_64
yum -y install perl-Digest-MD5
wget -c https://www.percona.com/downloads/percona-toolkit/3.0.12/binary/redhat/7/x86_64/percona-toolkit-3.0.12-1.el7.x86_64.rpm
rpm -ivh percona-toolkit-3.0.12-1.el7.x86_64.rpm
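A quick way to confirm the toolkit is installed and on the PATH (a minimal check, not part of the original steps):
pt-online-schema-change --version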
II. Monitoring tool
innotop -h xxx.xxx.xxx.xxx
-u user name
-p password
Press Q in innotop to see the current queries.
III. Restrictions
1) Server space inspection and evaluation
For example, if the largest table is 50 GB with 10 GB of indexes, you need at least 60 GB of temporary space plus about 50 GB of binlog space, i.e. more than 110 GB in total.
Using OSC doubles the space usage, indexes included.
Under row-based replication the copy is also written to the binlog. Do not casually use --set-vars to set sql_log_bin=0, because the statements run in this session also need to be applied on the slave, unless you have other plans for the slave.
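The required space can be estimated from information_schema before starting; a minimal sketch (the schema name 'xx' is a placeholder):
SELECT table_name,
       ROUND(data_length / 1024 / 1024 / 1024, 2)  AS data_gb,
       ROUND(index_length / 1024 / 1024 / 1024, 2) AS index_gb
FROM information_schema.tables
WHERE table_schema = 'xx'
ORDER BY data_length DESC;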
2) The original table must not have triggers.
3) Foreign key check; it is best if the table has no foreign keys.
4) Primary key or unique index check
In most cases a primary key or unique index is required on the table, because to keep the new table up to date the tool has to create DELETE and UPDATE triggers on the old table, and synchronizing to the new table is faster when there is a primary key. In the rare case where the alter operation itself creates the primary key on column c1, the DELETE trigger will be based on column c1.
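Whether a table already has a primary key or unique index can be checked in information_schema; a minimal sketch (schema and table names are placeholders):
SELECT index_name,
       GROUP_CONCAT(column_name ORDER BY seq_in_index) AS cols
FROM information_schema.statistics
WHERE table_schema = 'xx'
  AND table_name = 'card'
  AND non_unique = 0
GROUP BY index_name;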
5) MySQL server parameter check
(1) Connection timeout parameters:
wait_timeout, value 24 hours
innodb_lock_wait_timeout, 50 seconds
(2) During online DDL, the insert, update and delete changes made during that period have to be kept in a log of limited size. If the limit is too small an error is reported; if it is set too large, applying the log at the end can lock the table for a long time.
innodb_online_alter_log_max_size 134217728 (this is the RDS default, which I did not modify)
This parameter was introduced in MySQL 5.6.6. During online DDL the concurrent delete, update and insert changes need to be kept, so a log is used to hold them; this parameter limits the maximum size of that log. When the log needed during the DDL grows beyond this limit, an error is reported.
The parameter is dynamic and global. To set it: set global innodb_online_alter_log_max_size=402653184
Official website documentation:
http://dev.mysql.com/doc/refman/5.6/en/innodb-create-index-overview.html
http://dev.mysql.com/doc/refman/5.7/en/innodb-create-index-overview.html
(3) Master-slave replication parameters
max_binlog_cache_size
max_allowed_packet
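The parameters above can be checked in advance with a query like the following (a minimal sketch):
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('wait_timeout',
                        'innodb_lock_wait_timeout',
                        'innodb_online_alter_log_max_size',
                        'max_binlog_cache_size',
                        'max_allowed_packet');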
IV. Generate the scripts
# for i in `seq 0 9`
> do
> echo "pt-online-schema-change --no-version-check --charset=utf8 --user=mysqldba --ask-pass --host=127.0.0.1 --alter=\"add COLUMN update_time datetime\" P=3306,D=dbname,t=tablename_$i --execute --nocheck-replication-filters" >> batch.sh
> done
# for i in {a..f}
> do
> echo "pt-online-schema-change --no-version-check --charset=utf8 --user=mysqldba --password=xx --host=127.0.0.1 --alter=\"add COLUMN update_time datetime\" P=3306,D=dbname,t=tablename_$i --execute --nocheck-replication-filters" >> batch.sh
> done
select concat('pt-online-schema-change --no-version-check --charset=utf8 --user=xx ',
              '--ask-pass --alter="add COLUMN update_time datetime" ',
              'h=xxx,P=3306,D=', table_schema,
              ',t=', table_name,
              ' --execute')
from information_schema.tables
where table_schema='xx';
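One way to turn the query's output into a runnable script (this workflow is an assumption, not from the original; gen_ptosc.sql is a hypothetical file holding the SELECT above):
mysql -h xxx -u xx -p -N -B -e "source gen_ptosc.sql" > batch.sh
sh batch.sh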
Notes:
1) Password
Using the --password parameter leaves the password in the shell history.
Using --ask-pass prompts for the password interactively, but then you cannot run in batches.
Alternatively, configure it under [client] in /etc/my.cnf:
user=xxx
password=xxxx
chmod 600 /etc/my.cnf so that non-root users cannot read it.
Then you do not have to enter the password at all:
pt-online-schema-change --no-version-check --alter "add column create_time timestamp default current_timestamp comment 'storage time'" --host=192.168.178.131 P=3310,D=testdb,t=t1 --execute
V. Rolling back when an error occurs during execution
If an error is reported during execution, for example when the table name is card:
select * from information_schema.triggers where table_name='card';
drop trigger <trigger_name>;
Delete the newly created table (the new tables all begin with an underscore _):
drop table _card_new;
Adjust the parameters and re-execute.
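The DROP statements for the leftover triggers can also be generated directly from information_schema; a minimal sketch (the table name card is the example above):
SELECT CONCAT('DROP TRIGGER IF EXISTS `', trigger_schema, '`.`', trigger_name, '`;') AS drop_stmt
FROM information_schema.triggers
WHERE event_object_table = 'card';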
VI. Adjustable parameters
Table: bw_fund_record, primary key id
pt-online-schema-change --no-version-check --check-replication-filters --lock-wait-timeout=3 --alter "add (create_time timestamp not null)" -h 192.168.178.131 P=3306
pt-online-schema-change --user=root --password=123456 -h localhost --alter "ADD COLUMN content text" D=dongdb_1,t=xxx
pt-online-schema-change --user=root --password="xxxxx" --host=192.168.xx.xx D=xxx,t=xxx --alter "ADD COLUMN Fxxxxx" --charset=utf8 --no-check-replication-filters --alter-foreign-keys-method=auto --recursion-method=none --print --execute
--critical-load was not added (load related)
--max-load was not added (load related)
--max-lag
In a master-slave environment, if you do not care about slave delay, you need to add the --recursion-method=none parameter; if you want to affect the service as little as possible, you need to add the --max-load parameter.
1) The tests above drop the original table. If you do not want the original table dropped, use the --no-drop-old-table option, which keeps it (_test_binlog_old).
2) If you add fields in a live environment and do not want to affect the service, run the tool with the --max-load option. The default is Threads_running=25: while running, data copying pauses when more than that many threads are running, and copying into the new table resumes when the value drops below it.
pt-online-schema-change --host=xxxxx -P 3306 --charset=utf8 -u root -p 'xxxxxx' --alter='add column door_no varchar(200) comment "residential gatehouse"' --print --execute D=xxx,t=xxx --critical-load="Threads_running=200"
VII. What the final scripts looked like
pt-online-schema-change --no-version-check --execute --alter "add column c1 int" h=xxxx.mysql.rds.aliyuncs.com,P=3306,u=xxx,D=xxx,t=xxx --ask-pass (Aliyun RDS)
pt-online-schema-change --no-version-check --alter "add column create_time timestamp default current_timestamp comment 'creation time'" h=xxx,P=3306,u=xxx,D=beadwallet,t=xxx --ask-pass --execute
pt-online-schema-change --no-version-check --alter "add column create_time timestamp default current_timestamp comment 'creation time'" h=xxx,P=3306,u=xxx,D=beadwallet,t=borrowerbirthauth --ask-pass --execute
pt-online-schema-change --no-version-check --lock-wait-timeout=3 --alter "add column create_time timestamp default current_timestamp comment 'creation time'"
time ./pt-online-schema-change --no-version-check --alter "modify column create_time timestamp default current_timestamp comment 'time of storage'" h=hostname1,P=3306,u=user1,D=db1,t=xxx --ask-pass --execute
time ./pt-online-schema-change --no-version-check --alter "add column create_time timestamp default current_timestamp comment 'time of storage'" h=hostname1,P=3306,u=user1,D=db1,t=xxx --ask-pass --execute
VIII. Problems encountered
1) Character set
After a large number of changes, it was found the next day that the comments of all the modified fields had become garbled; only the comments were garbled, the data itself was fine.
Looking back, I was scared of having made such a low-level mistake: if the character-set problem had affected the data copied by pt-online-schema-change rather than just the comments, the whole database could have been garbled.
mysql> show create table bw_city;
| Table   | Create Table |
| bw_city | CREATE TABLE `bw_city` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `parentCode` varchar(50) DEFAULT NULL COMMENT '?code:0????',
  `citycode` varchar(50) NOT NULL COMMENT '?code',
  `adcode` varchar(50) CHARACTER SET utf8mb4 NOT NULL COMMENT 'adcode',
  `name` varchar(100) NOT NULL COMMENT '?',
  `level` varchar(50) NOT NULL COMMENT '?',
  `levelNum` tinyint(3) NOT NULL COMMENT '?',
  `LAT` varchar(200) NOT NULL COMMENT '?',
  `LNG` varchar(200) NOT NULL COMMENT '?',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'å…¥åº“æ—¶é—´',
  PRIMARY KEY (`id`),
  KEY `caccouniparentCode` (`parentCode`),
  KEY `caccouni_cityCode` (`citycode`),
  KEY `caccouniadcode` (`adcode`)
) ENGINE=InnoDB AUTO_INCREMENT=3608 DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC |
Because it is RDS, we cannot ssh to it; the command was executed remotely from another machine (-h xxx) without adding the --charset parameter.
With the session character set at latin1, the newly added field's comment displays normally, but the original fields' comments are still garbled.
mysql> set names latin1;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT column_comment FROM `information_schema`.`COLUMNS` where table_name='bw_city';
+-----------------+
| column_comment  |
+-----------------+
|                 |
| ?code:0????     |
| ?code           |
| adcode          |
| ?               |
| ?               |
| 1/2/2/3?        |
| ?               |
| ?               |
| time of storage |
+-----------------+
10 rows in set (0.00 sec)
Solutions:
(1) --charset=xxx
The default character type is latin1. If the value is utf8, for example, it sets the output character set to utf8, passes mysql_enable_utf8 to DBD::mysql, and runs SET NAMES 'utf8' after connecting to MySQL.
When executing remotely, it needs to be specified so that the local character set matches the character set on the server.
time ./pt-online-schema-change --charset=utf8 --no-version-check --alter "modify column create_time timestamp default current_timestamp comment 'time of storage'" -h hostname1 P=3306
(2) Specify the character set in the DSN
[DSN]
When writing a DSN, note that it is case sensitive, there can be no spaces around "=", and multiple parts are separated by commas:
1. A  charset
2. D  database
3. F  mysql_read_default_file
4. h  host
5. p  password
6. P  port
7. S  mysql_socket
8. t  table
9. u  user
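For example, a DSN that sets the character set along with host, port, schema and table could look like this (the host, schema and table names are placeholders):
pt-online-schema-change --no-version-check --ask-pass --alter "add column create_time timestamp default current_timestamp" A=utf8,h=xxx.mysql.rds.aliyuncs.com,P=3306,u=mysqldba,D=xx,t=xx --execute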
(3) Specify the character set in /etc/my.cnf
[client]
port = 3310
default-character-set = utf8
socket = "/u01/my3310/run/my3310.sock"
user = root
password = xxx
2) A trigger already exists
pt-online-schema-change -u user -p password -h 10.0.200.195 \
--alter="MODIFY COLUMN f_receiver varchar(128) NOT NULL DEFAULT '' AFTER f_user_id" --dry-run D=database_name,t=table_name
The table `db_name`.`table_name` has triggers. This tool needs to create its own triggers, so the table cannot already have triggers.
3) no-version-check
pt-online-schema-change -uuser -ppassword --alter "add key id_provice (f_provice)" \
D=database_name,t=table_name -h rdsxxxxxx.mysql.rds.aliyuncs.com
Can't use an undefined value as an ARRAY reference at /usr/bin/pt-online-schema-change line 7335.
This error occurs when running against Aliyun RDS; a local 5.6 instance has no problem.
It is probably because, when pt checks the MySQL server version, the information returned by RDS is not in the expected format, causing the error; adding --no-version-check skips the check.
IX. Parameters on the server
mysql> show grants for 'usera'@'%';
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Grants for usera@%                                                                                                                                                                                  |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| GRANT PROCESS, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'usera'@'%' IDENTIFIED BY PASSWORD '*xxx'                                                                                            |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `db1ra`.* TO 'usera'@'%'  |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `db2seal`.* TO 'usera'@'%' |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `db3loan`.* TO 'usera'@'%' |
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `db0`.* TO 'usera'@'%'    |
| GRANT SELECT ON `performance_schema`.* TO 'usera'@'%'                                                                                                                                               |
| GRANT SELECT ON `mysql`.`time_zone_transition` TO 'usera'@'%'                                                                                                                                       |
| GRANT SELECT ON `mysql`.`proc` TO 'usera'@'%'                                                                                                                                                       |
| GRANT SELECT ON `mysql`.`time_zone_name` TO 'usera'@'%'                                                                                                                                             |
| GRANT SELECT ON `mysql`.`time_zone_leap_second` TO 'usera'@'%'                                                                                                                                      |
| GRANT SELECT ON `mysql`.`event` TO 'usera'@'%'                                                                                                                                                      |
| GRANT SELECT ON `mysql`.`time_zone_transition_type` TO 'usera'@'%'                                                                                                                                  |
| GRANT SELECT ON `mysql`.`general_log` TO 'usera'@'%'                                                                                                                                                |
| GRANT SELECT ON `mysql`.`help_topic` TO 'usera'@'%'                                                                                                                                                 |
| GRANT SELECT ON `mysql`.`func` TO 'usera'@'%'                                                                                                                                                       |
| GRANT SELECT ON `mysql`.`help_relation` TO 'usera'@'%'                                                                                                                                              |
| GRANT SELECT ON `mysql`.`help_category` TO 'usera'@'%'                                                                                                                                              |
| GRANT SELECT ON `mysql`.`help_keyword` TO 'usera'@'%'                                                                                                                                               |
| GRANT SELECT ON `mysql`.`time_zone` TO 'usera'@'%'                                                                                                                                                  |
| GRANT SELECT ON `mysql`.`slow_log` TO 'usera'@'%'                                                                                                                                                   |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
20 rows in set (0.01 sec)
Mysql > show global variables like 'log%'
+-+ +
| | Variable_name | Value |
+-+ +
| | log_bin | ON |
| | log_bin_basename |
| | log_bin_index |
| | log_bin_trust_function_creators | ON |
| | log_bin_use_v1_row_events | ON |
| | log_error |
| | log_output | TABLE |
| | log_queries_not_using_indexes | OFF |
| | log_slave_updates | ON |
| | log_slow_admin_statements | ON |
| | log_slow_slave_statements | OFF |
| | log_throttle_queries_not_using_indexes | 0 | |
| | log_warnings | 2 | |
+-+ +
13 rows in set (0.00 sec)
Mysql > show variables like 'long_query_time'
+-+ +
| | Variable_name | Value |
+-+ +
| | long_query_time | 1.000000 | |
+-+ +
1 row in set (0.00 sec)
Mysql > show global variables like 'slave%'
+-+
| | Variable_name | Value |
+-+
| | slave_allow_batching | OFF |
| | slave_checkpoint_group | 512 | |
| | slave_checkpoint_period | 300 | |
| | slave_compressed_protocol | OFF |
| | slave_exec_mode | STRICT |
| | slave_load_tmpdir |
| | slave_max_allowed_packet | 1073741824 | |
| | slave_net_timeout | 60 | |
| | slave_parallel_workers | 8 |
| | slave_pending_jobs_size_max | 167772160 | |
| | slave_pr_mode | TABLE |
| | slave_rows_search_algorithms | TABLE_SCAN,INDEX_SCAN |
| | slave_skip_errors | OFF |
| | slave_sql_verify_checksum | ON |
| | slave_transaction_retries | 10 | |
| | slave_type_conversions |
+-+
16 rows in set (0.00 sec)
Mysql > show global variables like 'sql%'
+-+-
| | Variable_name | Value |
+-+-
| | sql_auto_is_null | OFF |
| | sql_big_selects | ON |
| | sql_buffer_result | OFF |
| | sql_log_bin | ON |
| | sql_log_off | OFF |
| | sql_mode |
| | sql_notes | ON |
| | sql_quote_show_create | ON |
| | sql_safe_updates | OFF |
| | sql_select_limit | 18446744073709551615 | |
| | sql_slave_skip_counter | 0 | |
| | sql_warnings | OFF |
+-+-
Mysql > show global variables like 'innodb%'
+-+ +
| | Variable_name | Value |
+-+ +
| | innodb_adaptive_flushing | ON |
| | innodb_adaptive_flushing_lwm | 10 | |
| | innodb_adaptive_hash_index | ON |
| | innodb_adaptive_hash_index_parts | 8 |
| | innodb_adaptive_max_sleep_delay | 150000 | |
| | innodb_additional_mem_pool_size | 2097152 | |
| | innodb_api_bk_commit_interval | 5 | |
| | innodb_api_disable_rowlock | OFF |
| | innodb_api_enable_binlog | OFF |
| | innodb_api_enable_mdl | OFF |
| | innodb_api_trx_level | 0 | |
| | innodb_autoextend_increment | 64 | |
| | innodb_autoinc_lock_mode | 1 | |
| | innodb_buffer_pool_dump_at_shutdown | OFF |
| | innodb_buffer_pool_dump_now | OFF |
| | innodb_buffer_pool_filename | ib_buffer_pool |
| | innodb_buffer_pool_instances | 8 |
| | innodb_buffer_pool_load_abort | OFF |
| | innodb_buffer_pool_load_at_startup | OFF |
| | innodb_buffer_pool_load_now | OFF |
| | innodb_buffer_pool_size | 12884901888 | |
| | innodb_change_buffer_max_size | 25 | |
| | innodb_change_buffering | all |
| | innodb_checksum_algorithm | innodb |
| | innodb_checksums | ON |
| | innodb_cmp_per_index_enabled | OFF |
| | innodb_commit_concurrency | 0 | |
| | innodb_compression_failure_threshold_pct | 5 | |
| | innodb_compression_level | 6 | |
| | innodb_compression_pad_pct_max | 50 | |
| | innodb_concurrency_tickets | 5000 | |
| | innodb_data_file_path | ibdata1:200M:autoextend |
| | innodb_data_home_dir |
| | innodb_defragment | OFF |
| | innodb_defragment_fill_factor | 0.900000 | |
| | innodb_defragment_fill_factor_n_recs | 20 |
| | innodb_defragment_frequency | 40 | |
| | innodb_defragment_n_pages | 7 | |
| | innodb_defragment_stats_accuracy | 0 | |
| | innodb_disable_sort_file_cache | ON |
| | innodb_doublewrite | ON |
| | innodb_encrypt_algorithm | aes_128_ecb |
| | innodb_fast_shutdown | 1 | |
| | innodb_file_format | Barracuda |
| | innodb_file_format_check | ON |
| | innodb_file_format_max | Barracuda |
| | innodb_file_per_table | ON |
| | innodb_flush_log_at_timeout | 1 | |
| | innodb_flush_log_at_trx_commit | 1 | |
| | innodb_flush_method | O_DIRECT |
| | innodb_flush_neighbors | 1 | |
| | innodb_flushing_avg_loops | 30 | |
| | innodb_force_load_corrupted | OFF |
| | innodb_force_recovery | 0 | |
| | innodb_ft_aux_table |
| | innodb_ft_cache_size | 8000000 | |
| | innodb_ft_enable_diag_print | OFF |
| | innodb_ft_enable_stopword | ON |
| | innodb_ft_max_token_size | 84 | |
| | innodb_ft_min_token_size | 3 | |
| | innodb_ft_num_word_optimize | 2000 | |
| | innodb_ft_result_cache_limit | 2000000000 | |
| | innodb_ft_server_stopword_table |
| | innodb_ft_sort_pll_degree | 2 | |
| | innodb_ft_total_cache_size | 640000000 | |
| | innodb_ft_user_stopword_table |
| | innodb_io_capacity | 2000 | |
| | innodb_io_capacity_max | 4000 | |
| | innodb_large_prefix | OFF |
| | innodb_lock_wait_timeout | 50 | |
| | innodb_locks_unsafe_for_binlog | OFF |
| | innodb_log_buffer_size | 8388608 | |
| | innodb_log_compressed_pages | OFF |
| | innodb_log_file_size | 1572864000 | |
| | innodb_log_files_in_group | 2 | |
| | innodb_log_group_home_dir |
| | innodb_lru_scan_depth | 1024 | |
| | innodb_max_dirty_pages_pct | 75 | |
| | innodb_max_dirty_pages_pct_lwm | 0 | |
| | innodb_max_purge_lag | 0 | |
| | innodb_max_purge_lag_delay | 0 | |
| | innodb_mirrored_log_groups | 1 | |
| | innodb_monitor_disable |
| | innodb_monitor_enable |
| | innodb_monitor_reset |
| | innodb_monitor_reset_all |
| | innodb_old_blocks_pct | 37 | |
| | innodb_old_blocks_time | 1000 | |
| | innodb_online_alter_log_max_size | 134217728 | |
| | innodb_open_files | 3000 | |
| | innodb_optimize_fulltext_only | OFF |
| | innodb_page_size | 16384 | |
| | innodb_print_all_deadlocks | OFF |
| | innodb_purge_batch_size | 300 | |
| | innodb_purge_threads | 1 | |
| | innodb_random_read_ahead | OFF |
| | innodb_rds_quick_lru_limit_per_instance | 4096 | |
| | innodb_rds_trx_own_block_max | 128 | |
| | innodb_read_ahead_threshold | 56 | |
| | innodb_read_io_threads | 4 |
| | innodb_read_only | OFF |
| | innodb_replication_delay | 0 | |
| | innodb_rollback_on_timeout | OFF |
| | innodb_rollback_segments | 128 | |
| | innodb_sort_buffer_size | 1048576 | |
| | innodb_spin_wait_delay | 30 | |
| | innodb_stats_auto_recalc | ON |
| | innodb_stats_method | nulls_equal |
| | innodb_stats_on_metadata | OFF |
| | innodb_stats_persistent | ON |
| | innodb_stats_persistent_sample_pages | 20 |
| | innodb_stats_sample_pages | 8 |
| | innodb_stats_transient_sample_pages | 8 |
| | innodb_status_output | OFF |
| | innodb_status_output_locks | OFF |
| | innodb_strict_mode | OFF |
| | innodb_support_xa | ON |
| | innodb_sync_array_size | 1 | |
| | innodb_sync_spin_loops | 100 | |
| | innodb_table_locks | ON |
| | innodb_thread_concurrency | 0 | |
| | innodb_thread_sleep_delay | 10000 | |
| innodb_undo_directory |. | |
| | innodb_undo_logs | 128 | |
| | innodb_undo_tablespaces | 0 | |
| | innodb_use_native_aio | OFF |
| | innodb_use_sys_malloc | ON |
| | innodb_version | 5.6.16 |
| | innodb_write_io_threads | 4 |
+-+ +
129 rows in set (0.01 sec)
X. Execution time
1. Tables with tens of millions of rows: about 40 minutes
2. Several tables with over 100 million rows: more than 1 hour each
That is quite fast.
XI. References
1) pt-online-schema-change explained
https://www.cnblogs.com/xiaoyanger/p/6043986.html
2) pt-online-schema-change usage, restrictions and comparison
https://blog.csdn.net/lijingkuan/article/details/68951089
Introduction to pt-osc
1. How pt-osc works
(1) Create a new empty table with the same structure as the table to be altered (the structure before the alter).
(2) Execute the alter table statement on the new table (this should be fast).
(3) Create triggers on the original table; the three triggers correspond to insert, update and delete operations.
(4) Copy data from the original table to the new table in chunks of a certain size; during the copy, every write to the original table is also applied to the new table through the triggers.
(5) Rename the original table to the old table, and rename the new table to the original table's name.
(6) If foreign keys reference the table, handle the related tables according to the value of the alter-foreign-keys-method parameter.
(7) By default the old original table is dropped.
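Roughly, the steps above correspond to SQL along these lines. This is a simplified sketch for a hypothetical table t1(id, name); the real tool uses its own generated trigger and table names and chunks the copy by primary-key ranges:
-- (1)(2) create the shadow table and apply the alter to it
CREATE TABLE `_t1_new` LIKE `t1`;
ALTER TABLE `_t1_new` ADD COLUMN create_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;
-- (3) triggers keep the shadow table in sync with concurrent writes
CREATE TRIGGER `t1_ins` AFTER INSERT ON `t1` FOR EACH ROW
  REPLACE INTO `_t1_new` (id, name) VALUES (NEW.id, NEW.name);
CREATE TRIGGER `t1_upd` AFTER UPDATE ON `t1` FOR EACH ROW
  REPLACE INTO `_t1_new` (id, name) VALUES (NEW.id, NEW.name);
CREATE TRIGGER `t1_del` AFTER DELETE ON `t1` FOR EACH ROW
  DELETE IGNORE FROM `_t1_new` WHERE id = OLD.id;
-- (4) copy the existing rows in primary-key chunks
INSERT LOW_PRIORITY IGNORE INTO `_t1_new` (id, name)
  SELECT id, name FROM `t1` WHERE id BETWEEN 1 AND 1000 LOCK IN SHARE MODE;
-- (5)(7) swap the tables atomically, then drop the old one
RENAME TABLE `t1` TO `_t1_old`, `_t1_new` TO `t1`;
DROP TABLE IF EXISTS `_t1_old`;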
2. Description of common options
Only some of the common options are introduced here.
--host=xxx --user=xxx --password=xxx
Connection information for the instance, abbreviated as -h xxx -u xxx -p xxx. The password can be entered interactively with the --ask-pass parameter.
--alter
The structure change statement; it does not need the ALTER TABLE keyword. As with ordinary DDL, multiple changes can be specified, separated by commas.
In most cases a primary key or unique index is required on the table, because to keep the new table up to date the tool has to create DELETE and UPDATE triggers on the old table, and synchronizing to the new table is faster when there is a primary key. In the rare case where the alter operation itself creates the primary key on column c1, the DELETE trigger will be based on column c1.
The clause does not support using rename to rename the table.
The alter command does not support renaming an index in the original table; it has to be dropped and then added, and it is the same in pt-osc. (MySQL 5.7 supports RENAME INDEX old_index_name TO new_index_name.)
To rename a field, however, do not drop and re-add it, or the data in the whole column will be lost. Use change col1 col1_new type constraint, keeping the type and constraints the same, otherwise it is equivalent to changing the column type and is not online (see the example after these notes).
If the clause is add column and not null is defined, a default value must be specified, otherwise it will fail.
If you want to drop a foreign key (named fk_foo), the tool needs it specified with a leading underscore, e.g. --alter "DROP FOREIGN KEY _fk_foo".
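As an example of the rename case above, something like the following keeps the type and constraints unchanged (the column, table and schema names are hypothetical):
pt-online-schema-change --ask-pass --alter "CHANGE col1 col1_new varchar(50) NOT NULL" D=xx,t=t1 --execute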
D=db_name,t=table_name
Specify the database name and table name to run the DDL against.
--max-load
The default is Threads_running=25. After each chunk is copied, SHOW GLOBAL STATUS is checked to see whether the metric exceeds the specified threshold; if so, copying pauses first. Multiple conditions can be specified, separated by commas, each in the form status_variable=MAX_VALUE or status_variable:MAX_VALUE. If MAX_VALUE is not specified, the tool uses 120% of the current value.
Because copying rows may lock some rows, Threads_running is a good indicator of the current database load.
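For instance (a sketch; the thresholds are illustrative and the schema/table names are placeholders):
pt-online-schema-change --ask-pass --alter "add column create_time timestamp default current_timestamp" --max-load "Threads_running=50,Threads_connected=500" --critical-load "Threads_running=200" D=xx,t=xx --execute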
--max-lag
The default is 1s. After each chunk is copied, the replication lag (Seconds_Behind_Master) of all slaves is checked; if the lag is greater than this value, data copying pauses until the lag of all slaves is below it. It is used together with --check-interval, which specifies how long the tool sleeps before checking again when the slave lag exceeds max-lag; the default is 1s. For example --max-lag=5 --check-interval=2.
Anyone familiar with percona-toolkit knows that --recursion-method can be used to specify the slaves via dsn records. Also note that if a slave is stopped, the tool will wait forever, until replication is started and the lag drops below this value.
--chunk-time
The default is 0.5s: when copying data rows, the chunk size is adjusted dynamically so that each chunk copies in roughly 0.5s, adapting to changes in server performance.
You can also use the --chunk-size option to disable the dynamic adjustment and copy a fixed number of rows each time; if specified, its default is 1000 rows, and it takes precedence over --chunk-time.
--set-vars
pt-osc opens its own session to run the DDL, and --set-vars sets these variables before executing the alter; by default it sets --set-vars "wait_timeout=10000,innodb_lock_wait_timeout=1,lock_wait_timeout=60".
Because DDL through pt-osc is slower, if a change is estimated to take, say, 2.5 hours, remember to increase wait_timeout.
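For example (a sketch; the values and the schema/table names are illustrative):
pt-online-schema-change --ask-pass --alter "add column create_time timestamp default current_timestamp" --set-vars "wait_timeout=28800,innodb_lock_wait_timeout=1,lock_wait_timeout=60" D=xx,t=xx --execute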
--dry-run
Creates and alters the new table, but does not create triggers, copy data or replace the original table. Nothing is really executed; you can see the generated statements and understand the execution steps and details. It works best together with --print.
--execute
Specify this parameter when you are sure you want to modify the table; the alter is then actually executed. One of --dry-run and --execute must be specified, and they are mutually exclusive.
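A typical workflow is to dry-run first, review the printed statements, and then execute (a sketch; schema and table names are placeholders):
pt-online-schema-change --ask-pass --print --dry-run --alter "add column create_time timestamp default current_timestamp" D=xx,t=xx
pt-online-schema-change --ask-pass --print --execute --alter "add column create_time timestamp default current_timestamp" D=xx,t=xx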