Lao Zhang here. I am not only a fan of Jin Yong, but also a fan of the Three Kingdoms. I especially enjoy the late-period battle of wits between Zhuge Liang of Shu and Sima Yi of Wei; the stratagems on both sides are classics. For managing MySQL databases, Lao Zhang also has plenty of clever moves, which I will introduce to you one by one in the future. Speaking of the Three Kingdoms, I personally would have preferred Shu to unify the land, but things went against that wish; in the end it was Sima Yan, who rose from Wei, who unified the realm. Some people blame the failure of Shu on the hopeless Liu Shan, others blame fate, and some even recall the saying that whoever obtained the "Crouching Dragon" (Zhuge Liang) or the "Fledgling Phoenix" (Pang Tong) could win the world, while Liu Bei had both. It may sound absurd now, but in truth anyone's fate rests in his own hands.
We should learn to do our best, strive to do everything well, and not let a single small detail slip, especially in the database field, where we must be careful. One of my teachers once told me: learn to let your work enter your blood; only when you truly fall in love with it will you study it wholeheartedly.
Before every blog post, Lao Zhang likes to serve a little chicken soup for the soul. Brothers who don't care for it, please bear with me! I simply hope everyone does everything with heart; whatever industry you are in, you will succeed sooner or later.
Lao Zhang's MySQL online course has also officially launched at 51CTO College; students who want to study can find it there, and if you have any questions you can reach the teacher directly.
Today I'd like to share an article about an essential toolset for MySQL DBAs. It helps us manage our databases and makes our work more efficient.
The toolset comes from Percona, the company behind an important branch of MySQL, and is called percona-toolkit (a veritable Swiss Army knife). It is a collection of commands; today I'll introduce the ones we use most in production.
Download address of the toolkit: https://www.percona.com/downloads/percona-toolkit/LATEST/
The installation process is simple; unpack it first:
tar -zxvf percona-toolkit-3.0.3_x86_64.tar.gz
Since it is a binary package, you can use it directly in the percona-toolkit-3.0.3/bin directory after decompression.
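If the tools complain about missing Perl modules at this point, the usual culprits are the DBI/DBD drivers. A minimal sanity check, assuming a CentOS-style system (the yum package names are an assumption; adjust for your distribution):
[root@node3 ~]# yum install -y perl-DBI perl-DBD-MySQL perl-Time-HiRes perl-IO-Socket-SSL   # common Perl dependencies (assumed package names)
[root@node3 ~]# cd percona-toolkit-3.0.3/bin
[root@node3 bin]# ./pt-online-schema-change --version   # should print the tool name and version 3.0.3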
Trick 1 up your sleeve: pt-online-schema-change
Function: alter table structures online, reclaim fragmentation, and add columns and indexes to large tables, avoiding the blocking of reads and writes caused by table locks. On MySQL 5.7 you often don't need this tool, since native online DDL is available.
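For comparison, on MySQL 5.7 the same kind of change can often be done with native online DDL. A minimal sketch, assuming the test.su table used in the demo below (this statement is not part of the original demo):
mysql> ALTER TABLE test.su ADD COLUMN city_id INT, ALGORITHM=INPLACE, LOCK=NONE;
If the server cannot perform the change in place, this statement fails immediately instead of silently locking the table, which makes it a safe way to test whether the native path is available.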
The demonstration goes as follows:
Because this is a test environment, we don't create a table with a huge amount of data; the point is to understand the process.
Here are the data in the table and the table structure:
mysql> select count(*) from su;
+----------+
| count(*) |
+----------+
|   100000 |
+----------+
1 row in set (0.03 sec)

mysql> desc su;
+-------+------------------+------+-----+-------------------+-----------------------------+
| Field | Type             | Null | Key | Default           | Extra                       |
+-------+------------------+------+-----+-------------------+-----------------------------+
| id    | int(10) unsigned | NO   | PRI | NULL              | auto_increment              |
| c1    | int(11)          | NO   |     | 0                 |                             |
| c2    | int(11)          | NO   |     | 0                 |                             |
| c3    | int(11)          | NO   |     | 0                 |                             |
| c4    | int(11)          | NO   |     | 0                 |                             |
| c5    | timestamp        | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| c6    | varchar(11)      | NO   |     |                   |                             |
+-------+------------------+------+-----+-------------------+-----------------------------+
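As an aside, the slow-query report later in this article shows a CALL insert_su(50000) statement, so the test rows were presumably loaded with a stored procedure. A hypothetical sketch of such a procedure (the name insert_su comes from that report; the body is an assumption):
mysql> DELIMITER //
mysql> CREATE PROCEDURE insert_su(IN n INT)
    -> BEGIN
    ->   DECLARE i INT DEFAULT 0;
    ->   WHILE i < n DO
    ->     -- c5 takes its CURRENT_TIMESTAMP default; id auto-increments
    ->     INSERT INTO su (c1, c2, c3, c4, c6) VALUES (i, i, i, i, 'x');
    ->     SET i = i + 1;
    ->   END WHILE;
    -> END //
mysql> DELIMITER ;
mysql> CALL insert_su(50000);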
The process of adding a column online:
[root@node3 bin]# ./pt-online-schema-change --user=root --password=root123 --host=localhost --alter="ADD COLUMN city_id INT" D=test,t=su --execute
No slaves found.  See --recursion-method if host node3 has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Altering `test`.`su`...
Creating new table...
Created new table test._su_new OK.
Altering new table...
Altered `test`.`_su_new` OK.
2017-08-10T14:53:59 Creating triggers...
2017-08-10T14:53:59 Created triggers OK.
2017-08-10T14:53:59 Copying approximately 100163 rows...
2017-08-10T14:54:00 Copied rows OK.
2017-08-10T14:54:00 Analyzing new table...
2017-08-10T14:54:00 Swapping tables...
2017-08-10T14:54:00 Swapped original and new tables OK.
2017-08-10T14:54:00 Dropping old table...
2017-08-10T14:54:00 Dropped old table `test`.`_su_old` OK.
2017-08-10T14:54:00 Dropping triggers...
2017-08-10T14:54:00 Dropped triggers OK.
Successfully altered `test`.`su`.
The new city_id column has been added; view the result:
mysql> desc su;
+---------+------------------+------+-----+-------------------+-----------------------------+
| Field   | Type             | Null | Key | Default           | Extra                       |
+---------+------------------+------+-----+-------------------+-----------------------------+
| id      | int(10) unsigned | NO   | PRI | NULL              | auto_increment              |
| c1      | int(11)          | NO   |     | 0                 |                             |
| c2      | int(11)          | NO   |     | 0                 |                             |
| c3      | int(11)          | NO   |     | 0                 |                             |
| c4      | int(11)          | NO   |     | 0                 |                             |
| c5      | timestamp        | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| c6      | varchar(11)      | NO   |     |                   |                             |
| city_id | int(11)          | YES  |     | NULL              |                             |
+---------+------------------+------+-----+-------------------+-----------------------------+
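A word of caution before using this on a real table: pt-online-schema-change also accepts --dry-run, which creates and alters the new table but copies no data. A minimal sketch with the same options as the run above (--execute is what actually performs the change):
[root@node3 bin]# ./pt-online-schema-change --user=root --password=root123 --host=localhost --alter="ADD COLUMN city_id INT" D=test,t=su --dry-run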
Trick 2 up your sleeve: pt-query-digest
Function: capture the TOP 10 slow SQL statements on a live server.
We all know that most database performance problems are caused by SQL statements, so we have to catch these culprits and optimize them promptly.
The demonstration goes as follows:
Slow SQL statements can be sampled over a time interval; the --since value is adjustable.
[root@node3 bin]# ./pt-query-digest --since=24h /data/mysql/slow.log > 1.log
Take a look at the SQL report: it summarizes the slow statements and their time consumption. Only part of the report is shown below.
[root@node3 bin]# cat 1.log
# Profile
# Rank Query ID           Response time Calls R/Call  V/M   Item
# ==== ================== ============= ===== ======= ===== ==============
#    1 0x040ADBE3A1EED0A2 16.8901 87.2%     1 16.8901  0.00 CALL insert_su
#    2 0x8E44F4ED46297D4C  1.3013  6.7%     3  0.4338  0.18 INSERT SELECT test._su_new test.su
#    3 0x12E7CAFEA3145EEF  0.7431  3.8%     1  0.7431  0.00 DELETE su
# MISC 0xMISC              0.4434  2.3%     3  0.1478   0.0 <3 ITEMS>

# Query 1: 0 QPS, 0x concurrency, ID 0x040ADBE3A1EED0A2 at byte 19060
# Scores: V/M = 0.01
# Time range: all events occurred at 2017-08-02 12:12:05
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count          2       1
# Exec time     47     18s     18s     18s     18s     18s       0     18s
# Lock time      0   103us   103us   103us   103us   103us       0   103us
# Rows sent      0       0       0       0       0       0       0       0
# Rows examine   0       0       0       0       0       0       0       0
# Query size     0      21      21      21      21      21       0      21
# String:
# Databases    test
# Hosts        localhost
# Users        root
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#    1s
#  10s+  ################################################################
call insert_su(50000)\G
As you can see, the report lists each SQL statement's share of total response time and its execution time, making it easy to see at a glance which statements are problematic. (Only one SQL statement is shown here.)
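Besides --since=24h, the sampling window can be bounded on both ends; --since and --until are both documented pt-query-digest options, and the timestamps below are only illustrative:
[root@node3 bin]# ./pt-query-digest --since='2017-08-02 00:00:00' --until='2017-08-03 00:00:00' /data/mysql/slow.log > 2.log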
Trick 3 up your sleeve: pt-heartbeat
Function: monitor master-slave delay, that is, how far the slave database lags behind the master.
Environment: 192.168.56.132 is the master, 192.168.56.133 is the slave.
Do the following. First, execute on the master:
[root@node3 bin]# ./pt-heartbeat --database test --update --create-table --daemonize -uroot -proot123
This tells pt-heartbeat to use the replicated test database, create a monitoring table named heartbeat in it, and run as a background daemon that updates the table at regular intervals.
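The heartbeat table is an ordinary table you can inspect yourself. A quick look on the master (ts and server_id are columns the tool maintains; this query is shown only as a sanity check):
mysql> SELECT ts, server_id FROM test.heartbeat;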
Then, on the slave, execute the statement that monitors the replication delay. --master-server-id is the server-id of the master, and -h is the master's IP:
[root@node4 bin]# ./pt-heartbeat --master-server-id=1323306 --monitor --database test -uzs -p123456 -h 192.168.56.132
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
0.00s [  0.00s,  0.00s,  0.00s ]
The delay is 0s; there is currently no lag.
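If you just want a one-off reading for a script or cron job instead of a continuously updating display, pt-heartbeat's --check option prints a single delay value and exits; a sketch with the same connection options as above:
[root@node4 bin]# ./pt-heartbeat --master-server-id=1323306 --check --database test -uzs -p123456 -h 192.168.56.132
0.00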
Trick 4 up your sleeve: pt-table-checksum
Function: check the consistency of master-slave replication.
Principle: it executes checksum queries on the master; these are written as REPLACE statements into a checksums table and flow to the slave through replication, where they are re-executed against the slave's own data. An UPDATE then records the master's value as master_crc. Finally, comparing this_crc with master_crc on the slave determines whether the replicated data is consistent.
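Because the results are stored in the test.checksums table on every slave, you can also query for differences directly on a slave. The query below is adapted from the one Percona documents for this purpose:
mysql> SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
    ->   FROM test.checksums
    ->  WHERE master_cnt <> this_cnt
    ->     OR master_crc <> this_crc
    ->     OR ISNULL(master_crc) <> ISNULL(this_crc)
    ->  GROUP BY db, tbl;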
To check the test library for differences, execute on the master:
[root@node3 bin]# ./pt-table-checksum --no-check-binlog-format --no-check-replication-filters --databases=test --replicate=test.checksums --host=192.168.56.132 -uzs -p123456
            TS ERRORS  DIFFS    ROWS  CHUNKS SKIPPED    TIME TABLE
08-10T16:01:02      0      0       1       1       0   0.013 test.heartbeat
08-10T16:01:02      0      0  100000       1       0   0.015 test.su
08-10T16:01:02      0      0       1       1       0   0.011 test.t
You can see that DIFFS is 0, which shows there is no difference between the master and slave copies of the test library.
To see which tables in the test library differ (add --replicate-check-only), execute on the master:
[root@node3 bin]# ./pt-table-checksum --no-check-binlog-format --no-check-replication-filters --databases=test --replicate=test.checksums --replicate-check-only --host=192.168.56.132 -uzs -p123456
Differences on node4
TABLE  CHUNK CNT_DIFF CRC_DIFF CHUNK_INDEX LOWER_BOUNDARY UPPER_BOUNDARY
test.t     1        1        1
You can see that the master and slave copies of this table in the test library are inconsistent.
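Once pt-table-checksum has found a difference, the companion tool pt-table-sync (from the same toolkit) can repair it. A minimal sketch: run it with --print first so it only shows the SQL it would execute, and only consider --execute after reviewing that output:
[root@node3 bin]# ./pt-table-sync --replicate=test.checksums h=192.168.56.132,u=zs,p=123456 --print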
Trick 5 up your sleeve: pt-slave-restart
Function: monitor replication errors and try to restart MySQL replication by skipping them.
Note: skipping errors with this command suits the case where the slave has extra rows (error code 1062, duplicate key). If the slave is missing rows (error code 1032), skipping errors does not fundamentally solve the synchronization problem; you first need to find out what data is missing. If a lot of data is missing, it is recommended to rebuild the master-slave setup instead.
A 1062 error occurred on the slave (excerpt from SHOW SLAVE STATUS\G):
Slave_IO_Running: Yes
Slave_SQL_Running: No
Last_Errno: 1062
Last_Error: Could not execute Write_rows event on table test.t; Duplicate entry '1' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000006, end_log_pos 757482
Execute on the slave:
[root@node4 bin]# ./pt-slave-restart -uroot -proot123 --error-numbers=1062
2017-08-10T16:28:12 ...
After skipping the error, check the replication status again:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Both threads are running, and synchronization is consistent again.
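pt-slave-restart can also keep watching in the background instead of running in the foreground; --daemonize and --log are documented options of the tool (the log path below is illustrative):
[root@node4 bin]# ./pt-slave-restart -uroot -proot123 --error-numbers=1062 --daemonize --log=/tmp/pt-slave-restart.log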
Trick 6 up your sleeve: pt-ioprofile
Function: conveniently locate I/O problems by profiling a process's I/O activity per file.
[root@node3 bin]# ./pt-ioprofile
Thu Aug 10 16:33:47 CST 2017
Tracing process ID 3907
     total      read    pwrite     write      fsync filename
 13.949355  0.839006  0.000000  0.286556  12.823793 /data/mysql/mysql-bin.000006
  7.454844  0.000000  2.913702  0.000000   4.541142 /data/mysql/ib_logfile0
  0.000193  0.000000  0.000000  0.000193   0.000000 /data/mysql/slow.log
Column meanings:
read: reads data from a file (identified by a file descriptor) into a predefined buffer.
write: writes data from the buffer to a file.
pread: equivalent to calling lseek and read as one bundled atomic operation; between separate lseek and read calls the kernel could suspend the process and cause synchronization problems.
pwrite: equivalent to calling lseek and write as one bundled atomic operation, for the same reason.
fsync: ensures all modified contents of the file have been correctly synchronized to disk; the call blocks until the device reports that the I/O is complete.
filename: the name of the file interacting with the disk.
From this report we can see which file spends the most time on I/O and interacts with the disk most heavily, which makes it easier to pin down I/O problems.
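By default pt-ioprofile traces the running mysqld for about 30 seconds. The target process, the duration, and what the cells show are all adjustable through documented options; the PID below is just the one from the run above:
[root@node3 bin]# ./pt-ioprofile --profile-pid=3907 --run-time=60 --cell=sizes   # report bytes instead of times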
There are many more commands in this toolkit; today I've introduced only the most commonly used ones. If you're interested, you can study the rest on your own.
Official address: https://www.percona.com/doc/percona-toolkit/LATEST/index.html
Finally, Lao Zhang hopes everyone can become a "Crouching Dragon" or "Fledgling Phoenix"; with any one of you around, a company need not worry about its database!