2025-03-26 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
After studying the previous six chapters, you should have formed an overall picture of what performance_schema is, though many readers may still feel a bit lost after reading them. Today we bring you the last chapter of the performance_schema series (seven chapters in total). In this installment we present dozens of performance_schema application examples. Let's begin this final leg of the performance_schema journey.
1. Troubleshooting MySQL performance problems using wait events
Usually, before a production server goes online, we run IO benchmarks against the database server's hardware and insert/delete/update/select workloads against the database itself, establishing baseline reference data to support future capacity expansion or architecture upgrades. When planning a benchmark we need to pick benchmark software (for IO, usually fio or iozone; for MySQL, usually sysbench, tpcc-mysql, workbench, and the like). When such software pushes the server to its limit, we tend to treat the numbers obtained as the server's peak performance. But that is not the whole story: the reason performance stops improving may also be that the server is not tuned in terms of BIOS settings, hardware configuration, operating system parameters, file system policies, database configuration parameters, and so on. So we also need performance troubleshooting tools to find out where the bottleneck actually lies, so that we already know the likely bottleneck by the time the database server goes into production. Let's take stress-testing MySQL with the sysbench benchmark tool as an example, and show how to use performance_schema wait events to track down database performance bottlenecks.
First, use the performance_schema configuration tables to enable the collection and recording of wait events.
# Enable the instruments for all wait events:
admin@localhost: performance_schema 11:47:46> use performance_schema
Database changed
# Set the enabled and timed fields of setup_instruments to yes to enable the corresponding instruments:
admin@localhost: performance_schema 11:48:05> update setup_instruments set enabled='yes', timed='yes' where name like 'wait/%';
Query OK, 269 rows affected (0.00 sec)
Rows matched: 323  Changed: 269  Warnings: 0
# Check the result. If ENABLED and TIMED are YES, the instrument is enabled (but the collector does not
# start recording immediately: the tables that store these events -- the consumers -- must also be
# enabled before collection starts):
admin@localhost: performance_schema 11:49:40> select * from setup_instruments where name like 'wait/%';
+------------------------------------------------+---------+-------+
| NAME                                           | ENABLED | TIMED |
+------------------------------------------------+---------+-------+
| wait/synch/mutex/sql/TC_LOG_MMAP::LOCK_tc      | YES     | YES   |
| wait/synch/mutex/sql/LOCK_des_key_file         | YES     | YES   |
......
| wait/io/socket/sql/server_tcpip_socket         | YES     | YES   |
| wait/io/socket/sql/server_unix_socket          | YES     | YES   |
| wait/io/socket/sql/client_connection           | YES     | YES   |
| wait/lock/metadata/sql/mdl                     | YES     | YES   |
+------------------------------------------------+---------+-------+
323 rows in set (0.01 sec)
# Enable the consumers for wait events:
admin@localhost: performance_schema 11:48:21> update setup_consumers set enabled='yes' where name like '%wait%';
Query OK, 3 rows affected (0.00 sec)
Rows matched: 3  Changed: 3  Warnings: 0
admin@localhost: performance_schema 11:49:20> select * from setup_consumers where name like '%wait%';
+---------------------------+---------+
| NAME                      | ENABLED |
+---------------------------+---------+
| events_waits_current      | YES     |
| events_waits_history      | YES     |
| events_waits_history_long | YES     |
+---------------------------+---------+
3 rows in set (0.00 sec)
Then, use sysbench to put load on the database, gradually increasing the number of concurrent threads until tps and qps no longer rise with the thread count.
sysbench --test=oltp --db-driver=mysql --mysql-table-engine=innodb --mysql-host=10.10.10.10 --mysql-port=3306 \
  --mysql-user='qbench' --mysql-password='qbench' --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
  --oltp-table-size=5000000 --oltp-tables-count=8 --num-threads=16 --max-time=1800 --max-requests=0 \
  --report-interval=1 run
......
[ 111s] threads: 16, tps: 52.99, reads/s: 668.93, writes/s: 171.98, response time: 629.52ms (95%)
[ 112s] threads: 16, tps: 42.00, reads/s: 650.93, writes/s: 202.98, response time: 688.46ms (95%)
......
From the sysbench output we can see that under 16 concurrent OLTP threads, tps stays below 100 and latency exceeds 600 ms, indicating a serious performance bottleneck (either severe mutex waits inside MySQL, or severely insufficient hardware performance). Now let's check the hardware load with operating system commands.
# The top command shows that most CPU time is spent in %wa, indicating severely insufficient IO device performance:
[root@localhost ~]# top
top - 18:59:03 up 7:02, 3 users, load average: 4.28, 5.82, 4.22
Tasks: 186 total, 1 running, 185 sleeping, 0 stopped, 0 zombie
Cpu0: 4.1%us, 8.5%sy, 0.0%ni, 11.9%id, 75.4%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1: 4.0%us, 13.1%sy, 0.0%ni, 17.5%id, 65.0%wa, 0.0%hi, 0.3%si, 0.0%st
Cpu2: 9.4%us, 32.1%sy, 0.0%ni, 2.3%id, 55.5%wa, 0.0%hi, 0.7%si, 0.0%st
Cpu3: 3.0%us, 5.3%sy, 0.0%ni, 31.0%id, 60.0%wa, 0.0%hi, 0.7%si, 0.0%st
Mem: 8053664k total, 1684236k used, 6369428k free, 87868k buffers
Swap: 2031612k total, 0k used, 2031612k free, 150680k cached
# The iostat command shows the disk load; the %util column shows the disk is saturated:
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           1.77   0.00     2.28    95.70    0.00   0.25
Device:  rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm   %util
           0.00  277.00  160.00  ......  8864.00  2774.00     26.63     47.84  112.98   2.29  100.10
avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           5.05   0.00    11.62    64.14    0.00  19.19
Device:  rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm   %util
           0.00  267.00  244.00  ......  8544.00  4643.00     25.81     28.20   40.29   1.96  100.00
Looking at the system load, it is obvious at a glance that the problem is severely insufficient disk performance. But how is this reflected in the event data inside the database? (Note: if you do not yet have much hands-on experience with performance_schema, this is an excellent opportunity to learn and accumulate it. Don't miss it. One day, when the OS load offers no hint, this event data may be of great help.)
# To make querying wait-event statistics easier, first create a view that aggregates the current wait
# events in real time (not historical data):
admin@localhost: performance_schema 12:14:14> create view sys.test_waits as select sum(TIMER_WAIT) as TIMER_WAIT, sum(NUMBER_OF_BYTES) as NUMBER_OF_BYTES, EVENT_NAME, OPERATION from events_waits_current where EVENT_NAME != 'idle' group by EVENT_NAME, OPERATION;
Query OK, 0 rows affected (0.04 sec)
# Query the view just created, sorting the results in descending order. From the results below we can
# see that four of the top five time costs are IO-related waits, and the remaining one is a
# binlog-related condition wait:
admin@localhost: performance_schema 12:30:38> select sys.format_time(TIMER_WAIT), sys.format_bytes(NUMBER_OF_BYTES), EVENT_NAME, OPERATION from sys.test_waits where sys.format_time(TIMER_WAIT) not regexp 'ns|us' order by TIMER_WAIT desc;
+-----------------------------+-----------------------------------+------------------------------------------------+------------+
| sys.format_time(TIMER_WAIT) | sys.format_bytes(NUMBER_OF_BYTES) | EVENT_NAME                                     | OPERATION  |
+-----------------------------+-----------------------------------+------------------------------------------------+------------+
| 16.60 s                     | 224.00 KiB                        | wait/io/file/innodb/innodb_data_file           | read       |
| 16.05 s                     | 553 bytes                         | wait/io/table/sql/handler                      | fetch      |
| 1.96 s                      | NULL                              | wait/io/file/sql/binlog                        | sync       |
| 1.96 s                      | NULL                              | wait/synch/cond/sql/MYSQL_BIN_LOG::update_cond | timed_wait |
| 1.85 s                      | 1.34 KiB                          | wait/io/file/sql/binlog                        | write      |
| 56.66 ms                    | NULL                              | wait/io/file/innodb/innodb_log_file            | sync       |
+-----------------------------+-----------------------------------+------------------------------------------------+------------+
6 rows in set (0.01 sec)
# Of course, you can also query the events_waits_current table directly (this may return a large
# number of rows; the results are not grouped, event data is recorded row by row):
admin@localhost: performance_schema 11:59:25> select THREAD_ID, EVENT_NAME, sys.format_time(TIMER_WAIT), INDEX_NAME, NESTING_EVENT_TYPE, OPERATION, NUMBER_OF_BYTES from events_waits_current where EVENT_NAME != 'idle' order by TIMER_WAIT desc;
+-----------+------------------------------------------------+------------------------------+------------+--------------------+------------+-----------------+
| THREAD_ID | EVENT_NAME                                     | sys.format_time(TIMER_WAIT)  | INDEX_NAME | NESTING_EVENT_TYPE | OPERATION  | NUMBER_OF_BYTES |
+-----------+------------------------------------------------+------------------------------+------------+--------------------+------------+-----------------+
|       ... | wait/io/table/sql/handler                      | 169.48 ms                    | PRIMARY    | STATEMENT          | fetch      |              39 |
|       115 | wait/io/file/innodb/innodb_data_file           | 169.48 ms                    | NULL       | WAIT               | read       |           16384 |
|       ... | wait/io/table/sql/handler                      | 93.76 ms                     | PRIMARY    | STATEMENT          | fetch      |              39 |
|       ... | wait/io/file/innodb/innodb_data_file           | 93.76 ms                     | NULL       | WAIT               | read       |           16384 |
|       ... | wait/io/file/innodb/innodb_data_file           | 73.08 ms                     | NULL       | STATEMENT          | read       |           16384 |
|       ... | wait/io/file/innodb/innodb_data_file           | 63.13 ms                     | NULL       | STATEMENT          | read       |           16384 |
|       106 | wait/io/file/innodb/innodb_data_file           | 53.24 ms                     | NULL       | STATEMENT          | read       |           16384 |
|       ... | wait/io/table/sql/handler                      | 51.90 ms                     | PRIMARY    | STATEMENT          | fetch      |              39 |
|       113 | wait/io/file/innodb/innodb_data_file           | 51.90 ms                     | NULL       | WAIT               | read       |           16384 |
|        49 | wait/synch/cond/sql/MYSQL_BIN_LOG::update_cond | 27.48 ms                     | NULL       | STATEMENT          | timed_wait |            NULL |
......
57 rows in set (0.00 sec)
From the wait-event query results above we can see very clearly that most of the transaction latency is spent waiting on IO (mainly reads of the undo log, redo log, and independent tablespace files, handler fetches, and binlog syncs/writes). This indicates a likely serious bottleneck on the IO devices, which matches the severe disk shortfall seen with the operating system commands.
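As a cross-check, the sys schema bundled with MySQL 5.7 exposes the same wait-event data pre-aggregated. A minimal sketch using two standard sys views (view and column names from the stock sys schema; the LIMIT values are arbitrary):

```sql
-- Top wait classes by total latency, built on performance_schema IO events:
SELECT event_name, total, total_latency, avg_latency
FROM sys.io_global_by_wait_by_latency
LIMIT 5;

-- Per-file IO latency, useful to see whether data files, the redo log,
-- or the binlog dominate the IO time:
SELECT file, total_latency, count_read, count_write
FROM sys.io_global_by_file_by_latency
LIMIT 5;
```

If the same innodb_data_file and binlog entries top both views, that corroborates the conclusion drawn from events_waits_current above.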
Conclusion: the test data above show that MySQL performs so poorly because disk performance is severely insufficient and has become the bottleneck (as a rough rule of thumb, a 4-core server with no memory or disk bottleneck can reach 800+ tps).
Given the insufficient IO performance, the recommended optimization strategies are:
Replace the device with one that has better IO performance.
Add two additional, identical, independent devices, and place the redo log, the binlog, and the remaining MySQL data files on three separate IO devices, so that the database's sequential IO and random IO do not make each other wait by competing for the same device.
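As a sketch of the second recommendation, the my.cnf options that relocate the redo log and binlog onto separate devices might look like the following (the mount-point paths are hypothetical, one per IO device):

```ini
# my.cnf -- hypothetical mount points, one per independent IO device
[mysqld]
datadir                   = /data/mysql               # data files (mostly random IO)
innodb_log_group_home_dir = /redolog/mysql            # redo log (sequential IO)
log_bin                   = /binlog/mysql/mysql-bin   # binlog (sequential IO)
```

Changing these paths requires a clean shutdown and moving the existing files before restart.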
PS:
Of course, you might say that we deliberately used a poorly configured server in this case. Yes, that's true. But consider one question: how much can performance_schema help us with MySQL? At present there is not much reliable field experience with performance_schema to be found online, so we have to keep digging ourselves. We suggest that readers who can do so prepare two test servers (one low-spec, one high-spec); by comparing the test data you will quickly build up your own experience with performance_schema.
2. Troubleshooting lock problems
2.1 find out who holds the global read lock
A global read lock is usually taken by a statement such as flush table with read lock. Such statements are typically used by backup tools to obtain a consistent backup, and are also common during master/standby switchover in replication architectures. Beyond those two cases there is a third, hardest-to-troubleshoot situation: when permission management on a live system is lax and the database accounts used by various people all have the RELOAD privilege, any of them can place a global read lock on the database.
In versions prior to MySQL 5.7 it was usually difficult to find out who holds the global read lock by querying at the database level (the innodb_locks table records only InnoDB-layer lock information, while the global read lock is a Server-layer lock, so it cannot be seen there). Starting with MySQL 5.7, the performance_schema.metadata_locks table records Server-layer lock information (including the global read lock, MDL locks, and so on). Let's demonstrate with an example, then use performance_schema to find out who holds the global read lock.
First, open the first session and take the global read lock.
# Execute the locking statement:
root@localhost: sbtest 12:28:30> flush table with read lock;
Query OK, 0 rows affected (0.00 sec)
# Query the process id of the locking thread, so it can be matched up during later troubleshooting:
root@localhost: sbtest 12:31:48> select connection_id();
+-----------------+
| connection_id() |
+-----------------+
|               4 |
+-----------------+
1 row in set (0.00 sec)
Now start a second session and execute any statement that may change data; here we take an update as the example.
root@localhost: sbtest 12:42:36> use sbtest
Database changed
root@localhost: sbtest 12:42:38> select * from sbtest1 limit 1\G
*************************** 1. row ***************************
 id: 21
  k: 2483476
  c: 09279210219-37745839908-56185699327-79477158641-86711242956-61449540392-42622804506-61031512845-36718422840-11028803849
pad: 9813293060-05308009118-09223341195-1924109585-4559816481
1 row in set (0.00 sec)
root@localhost: sbtest 12:42:39> select connection_id();
+-----------------+
| connection_id() |
+-----------------+
|               5 |
+-----------------+
1 row in set (0.00 sec)
root@localhost: sbtest 12:42:44> update sbtest1 set pad='xxx' where id=21;  # the operation is blocked
Now, let's open a third session and start troubleshooting.
root@localhost: (none) 12:42:25> select connection_id();
+-----------------+
| connection_id() |
+-----------------+
|              16 |
+-----------------+
1 row in set (0.00 sec)
# Query the processlist. All we can see is that the thread with process id 5 has State
# "Waiting for global read lock", i.e. it is waiting for the global read lock:
root@localhost: (none) 12:43:11> show processlist;
+----+-------+---------------------+--------+-------------+-------+---------------------------------------------------------------+------------------------------------------+
| Id | User  | Host                | db     | Command     | Time  | State                                                         | Info                                     |
+----+-------+---------------------+--------+-------------+-------+---------------------------------------------------------------+------------------------------------------+
|  3 | qfsys | 192.168.2.168:41042 | NULL   | Binlog Dump | 11457 | Master has sent all binlog to slave; waiting for more updates | NULL                                     |
|  4 | root  | localhost           | sbtest | Sleep       |   234 |                                                               | NULL                                     |
|  5 | root  | localhost           | sbtest | Query       |    26 | Waiting for global read lock                                  | update sbtest1 set pad='xxx' where id=21 |
| 16 | root  | localhost           | NULL   | Query       |     0 | starting                                                      | show processlist                         |
+----+-------+---------------------+--------+-------------+-------+---------------------------------------------------------------+------------------------------------------+
4 rows in set (0.00 sec)
# Next query the information_schema.innodb_locks, innodb_lock_waits and innodb_trx tables;
# all three are empty:
root@localhost: (none) 12:59:30> select * from information_schema.innodb_locks;
Empty set, 1 warning (0.00 sec)
root@localhost: (none) 12:59:40> select * from information_schema.innodb_lock_waits;
Empty set, 1 warning (0.00 sec)
root@localhost: (none) 12:59:43> select * from information_schema.innodb_trx\G
Empty set (0.00 sec)
# Then check show engine innodb status (only the TRANSACTIONS section matters here);
# still no useful lock information:
root@localhost: (none) 12:59:48> show engine innodb status;
......
2018-06-25 13:01:43 0x7fe55ded8700 INNODB MONITOR OUTPUT
......
------------
TRANSACTIONS
------------
Trx id counter 2527502
Purge done for trx's n:o < 2527500 undo n:o < 0 state: running but idle
History list length 3
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 422099353083504, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 422099353082592, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 422099353081680, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
--------
FILE I/O
......
The conventional checks above turned up nothing useful. At this point an old hand with gdb experience would probably reach for gdb, strace or pstack to inspect MySQL's call stacks and thread state, but for anyone without a C background that is like reading hieroglyphs. Fortunately, starting with MySQL 5.7, the performance_schema.metadata_locks table records various Server-layer lock information (including the global read lock and MDL locks). Let's query that table from the third session.
# We can use performance_schema.metadata_locks to find out who holds the global read lock. A global
# read lock usually shows up in this table as two explicit locks belonging to the same session, with
# OBJECT_TYPE GLOBAL and COMMIT and LOCK_TYPE SHARED for both, as follows:
root@localhost: (none) 01:01:43> select * from performance_schema.metadata_locks where owner_thread_id != sys.ps_thread_id(connection_id())\G
*************************** 1. row ***************************
          OBJECT_TYPE: GLOBAL
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: 140621322913984
            LOCK_TYPE: SHARED      # shared lock
        LOCK_DURATION: EXPLICIT    # explicit
          LOCK_STATUS: GRANTED     # has been granted
               SOURCE: lock.cc:1110
      OWNER_THREAD_ID: 94          # the internal thread holding the lock has ID 94
       OWNER_EVENT_ID: 16
*************************** 2. row ***************************
          OBJECT_TYPE: COMMIT
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: 140621322926064
            LOCK_TYPE: SHARED      # shared lock
        LOCK_DURATION: EXPLICIT    # explicit
          LOCK_STATUS: GRANTED     # has been granted
               SOURCE: lock.cc:1194
      OWNER_THREAD_ID: 94          # the internal thread holding the lock has ID 94
       OWNER_EVENT_ID: 16
*************************** 3. row ***************************
          OBJECT_TYPE: GLOBAL
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: 140621391527216
            LOCK_TYPE: INTENTION_EXCLUSIVE  # intention exclusive lock
        LOCK_DURATION: STATEMENT            # statement duration
          LOCK_STATUS: PENDING              # pending: waiting to be granted
               SOURCE: sql_base.cc:3190
      OWNER_THREAD_ID: 95                   # the blocked internal thread has ID 95
       OWNER_EVENT_ID: 38
3 rows in set (0.00 sec)
# Now check which internal thread IDs correspond to process ids 4 and 5:
root@localhost: (none) 01:33:36> select sys.ps_thread_id(4);
+---------------------+
| sys.ps_thread_id(4) |
+---------------------+
|                  94 |
+---------------------+
1 row in set (0.00 sec)
# The internal thread ID for process id 4 is exactly 94, so the thread with process id 4 holds
# the global read lock.
root@localhost: (none) 01:34:10> select sys.ps_thread_id(5);
+---------------------+
| sys.ps_thread_id(5) |
+---------------------+
|                  95 |
+---------------------+
1 row in set (0.00 sec)
# The internal thread ID for process id 5 is exactly 95, so the thread waiting for the global
# read lock is the one with process id 5.
In a production environment, you would combine the information above: find the user, host and db columns of the row for process id 4 in the show processlist output, make a rough judgment about which business it belongs to, check with the relevant people, kill the connection if necessary, and then discuss how to avoid the problem in the future.
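The manual steps above can be collapsed into one query. A sketch, assuming MySQL 5.7's metadata_locks and threads tables and that the instrument has been enabled as shown earlier:

```sql
-- Map holders of explicit GLOBAL shared locks (i.e. FLUSH TABLES WITH READ LOCK)
-- straight back to their processlist id, user, host and db:
SELECT ml.OWNER_THREAD_ID, t.PROCESSLIST_ID, t.PROCESSLIST_USER,
       t.PROCESSLIST_HOST, t.PROCESSLIST_DB
FROM performance_schema.metadata_locks ml
JOIN performance_schema.threads t
  ON t.THREAD_ID = ml.OWNER_THREAD_ID
WHERE ml.OBJECT_TYPE   = 'GLOBAL'
  AND ml.LOCK_TYPE     = 'SHARED'
  AND ml.LOCK_DURATION = 'EXPLICIT'
  AND ml.LOCK_STATUS   = 'GRANTED';
```

In the scenario above this should return the session with process id 4 directly, with no manual sys.ps_thread_id() lookups needed.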
2.2 find out who holds the MDL lock
We often find statement execution blocked waiting for an MDL lock; for example, show processlist may show a thread whose State column reads "Waiting for table metadata lock". In that case, how do we find out who is holding the unreleased MDL lock? Let's simulate an MDL lock wait scenario. (The instrument for MDL lock records is wait/lock/metadata/sql/mdl, which is disabled by default; the corresponding consumer is the performance_schema.metadata_locks table, which in setup_consumers is controlled only by the global configuration item global_instrumentation, enabled by default.)
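Before reproducing the scenario, the mdl instrument named above can be switched on at runtime; a minimal sketch:

```sql
-- Enable the MDL lock instrument (disabled by default in MySQL 5.7):
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME = 'wait/lock/metadata/sql/mdl';

-- Verify:
SELECT NAME, ENABLED, TIMED
FROM performance_schema.setup_instruments
WHERE NAME = 'wait/lock/metadata/sql/mdl';
```

This takes effect immediately for new lock requests; it can also be made permanent with performance-schema-instrument settings in my.cnf.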
First, open two sessions and execute the following statements.
# Session 1: explicitly open a transaction and run an update against the sbtest1 table:
root@localhost: sbtest 12:26:25> use sbtest
Database changed
root@localhost: sbtest 12:26:30> begin;
Query OK, 0 rows affected (0.00 sec)
root@localhost: sbtest 12:26:32> update sbtest1 set pad='yyy' where id=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
# Session 2: run a DDL statement against sbtest1 to add an ordinary index:
root@localhost: sbtest 12:42:50> use sbtest
Database changed
root@localhost: sbtest 12:42:56> alter table sbtest1 add index i_c(c);  # blocked
At this point we open another session 3 and query thread information with show processlist; we can see that the DDL statement is waiting for the MDL lock (Waiting for table metadata lock).
root@localhost: (none) 12:37:49> show processlist;
+----+------+-----------+--------+---------+------+---------------------------------+--------------------------------------+
| Id | User | Host      | db     | Command | Time | State                           | Info                                 |
+----+------+-----------+--------+---------+------+---------------------------------+--------------------------------------+
| 92 | root | localhost | sbtest | Query   |  121 | Waiting for table metadata lock | alter table sbtest1 add index i_c(c) |
| 93 | root | localhost | NULL   | Query   |    0 | starting                        | show processlist                     |
| 94 | root | localhost | sbtest | Sleep   | 1078 |                                 | NULL                                 |
+----+------+-----------+--------+---------+------+---------------------------------+--------------------------------------+
3 rows in set (0.00 sec)
Before MySQL 5.7 we could not directly query, at the database level, who holds an MDL lock (you could of course dig it out with tools such as gdb, but those require some C programming background). Now we can get the MDL lock information by querying performance_schema.metadata_locks, where we find five rows of MDL lock records. The first row is a SHARED_WRITE lock on the sbtest.sbtest1 table in GRANTED state, held by internal thread 136 (process id 94). The other four rows include the SHARED_UPGRADABLE and EXCLUSIVE locks on sbtest.sbtest1, where SHARED_UPGRADABLE is GRANTED and EXCLUSIVE is PENDING, both owned by internal thread 134 (process id 92), which means that thread is waiting for an MDL lock.
root@localhost: (none) 01:23:05> select * from performance_schema.metadata_locks where owner_thread_id != sys.ps_thread_id(connection_id())\G
*************************** 1. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: sbtest
          OBJECT_NAME: sbtest1
OBJECT_INSTANCE_BEGIN: 139886013386816
            LOCK_TYPE: SHARED_WRITE
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: GRANTED
               SOURCE: sql_parse.cc:5996
      OWNER_THREAD_ID: 136
       OWNER_EVENT_ID: 721
*************************** 2. row ***************************
          OBJECT_TYPE: GLOBAL
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: 139886348911600
            LOCK_TYPE: INTENTION_EXCLUSIVE
        LOCK_DURATION: STATEMENT
          LOCK_STATUS: GRANTED
               SOURCE: sql_base.cc:5497
      OWNER_THREAD_ID: 134
       OWNER_EVENT_ID: 4667
*************************** 3. row ***************************
          OBJECT_TYPE: SCHEMA
        OBJECT_SCHEMA: sbtest
          OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: 139886346748096
            LOCK_TYPE: INTENTION_EXCLUSIVE
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: GRANTED
               SOURCE: sql_base.cc:5482
      OWNER_THREAD_ID: 134
       OWNER_EVENT_ID: 4667
*************************** 4. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: sbtest
          OBJECT_NAME: sbtest1
OBJECT_INSTANCE_BEGIN: 139886346749984
            LOCK_TYPE: SHARED_UPGRADABLE
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: GRANTED
               SOURCE: sql_parse.cc:5996
      OWNER_THREAD_ID: 134
       OWNER_EVENT_ID: 4669
*************************** 5. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: sbtest
          OBJECT_NAME: sbtest1
OBJECT_INSTANCE_BEGIN: 139886348913168
            LOCK_TYPE: EXCLUSIVE
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: PENDING
               SOURCE: mdl.cc:3891
      OWNER_THREAD_ID: 134
       OWNER_EVENT_ID: 4748
5 rows in set (0.00 sec)
From the data above we know which thread holds the MDL lock. From the show processlist output we can see that the thread with process id 94 has been Sleeping for a long time, but we cannot see there what statement it executed. We may need to query information_schema.innodb_trx to confirm whether the thread has an uncommitted transaction. As shown below, the query confirms that the thread with process id 94 (trx_mysql_thread_id=94) does indeed have an uncommitted transaction, but offers little else beyond the transaction start time and the process id (trx_started: 2018-01-14 01:19:25, trx_mysql_thread_id: 94).
root@localhost: (none) 01:32:17> select * from information_schema.innodb_trx\G
*************************** 1. row ***************************
                   trx_id: 2452892
                trx_state: RUNNING
              trx_started: 2018-01-14 01:19:25
    trx_requested_lock_id: NULL
         trx_wait_started: NULL
               trx_weight: 3
      trx_mysql_thread_id: 94
......
1 row in set (0.00 sec)
At this point, from everything we have gathered, we know that the uncommitted transaction of internal thread 136 is making internal thread 134 wait for an MDL lock, but we still do not know what thread 136 has been doing. Of course, we could kill thread 136 to let thread 134 proceed, but without knowing which statements thread 136 was executing we cannot go to the relevant developers to get them fixed, and we may well hit the same problem again. So we can also query performance_schema.events_statements_current for the statement event a thread is currently executing or last finished executing (this information is not necessarily reliable, because the table keeps only the current and most recently completed statement event per thread; as soon as the thread runs a new statement, the information is overwritten), as follows:
root@localhost: (none) 01:47:53> select * from performance_schema.events_statements_current where thread_id=136\G
*************************** 1. row ***************************
       THREAD_ID: 136
        EVENT_ID: 715
    END_EVENT_ID: 887
      EVENT_NAME: statement/sql/update
          SOURCE: socket_connection.cc:101
......
        SQL_TEXT: update sbtest1 set pad='yyy' where id=1
          DIGEST: 69f516aa8eaa67fd6e7bfd3352de5d58
     DIGEST_TEXT: UPDATE `sbtest1` SET `pad` = ? WHERE `id` = ?
  CURRENT_SCHEMA: sbtest
......
    MESSAGE_TEXT: Rows matched: 1  Changed: 1  Warnings: 0
......
1 row in set (0.00 sec)
From the performance_schema.events_statements_current query we can see clearly, via the SQL_TEXT field, what SQL statement the thread was executing. In a production environment you could now take this to the relevant developers and agree that such statements must be committed promptly, to avoid a repeat of the problem.
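MySQL 5.7's sys schema also ships a ready-made view for exactly this MDL-wait diagnosis (it requires the wait/lock/metadata/sql/mdl instrument to be enabled, as above). A sketch, with view and column names from the stock sys schema:

```sql
-- One row per blocked MDL request, showing the blocked statement, the blocking
-- connection, and a ready-made KILL statement for it:
SELECT object_schema, object_name, waiting_query,
       blocking_pid, sql_kill_blocking_query
FROM sys.schema_table_lock_waits;
```

In the scenario above, blocking_pid should come back as 94 and sql_kill_blocking_query as the corresponding KILL statement, saving the manual join against the threads table.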
2.3 find out who holds the table lock
The instrument for table-level locks (wait/lock/table/sql/handler) is enabled by default, and the corresponding consumer table is performance_schema.table_handles, which in setup_consumers is controlled only by the global configuration item global_instrumentation, enabled by default. So by default you only need the system parameter performance_schema=ON. Let's use an example to demonstrate how to find out who holds a table-level lock.
First, open two sessions: the first takes an explicit table-level lock on a table (InnoDB engine), and the second runs a DML statement against the same table.
# Session 1: take a table-level lock:
root@localhost: sbtest 02:15:17> use sbtest
Database changed
root@localhost: sbtest 02:40:27> select connection_id();
+-----------------+
| connection_id() |
+-----------------+
|              18 |
+-----------------+
1 row in set (0.00 sec)
root@localhost: sbtest 02:40:29> lock table sbtest1 read;
Query OK, 0 rows affected (0.00 sec)
# Session 2: run an update against the table:
root@localhost: sbtest 10:26:37> use sbtest
Database changed
root@localhost: sbtest 02:15:33> select connection_id();
+-----------------+
| connection_id() |
+-----------------+
|              19 |
+-----------------+
1 row in set (0.00 sec)
root@localhost: sbtest 02:40:34> update sbtest1 set pad='xxx' where id=1;  # blocked
Then open a third session and query thread information with show processlist; you can see the update statement waiting for an MDL lock (Waiting for table metadata lock).
root@localhost: (none) 02:40:14> show processlist;
+----+-------+---------------------+--------+-------------+-------+---------------------------------------------------------------+------------------------------------------+
| Id | User  | Host                | db     | Command     | Time  | State                                                         | Info                                     |
+----+-------+---------------------+--------+-------------+-------+---------------------------------------------------------------+------------------------------------------+
|  3 | qfsys | 192.168.2.168:41042 | NULL   | Binlog Dump | 18565 | Master has sent all binlog to slave; waiting for more updates | NULL                                     |
| 18 | root  | localhost           | sbtest | Sleep       |    67 |                                                               | NULL                                     |
| 19 | root  | localhost           | sbtest | Query       |    51 | Waiting for table metadata lock                               | update sbtest1 set pad='xxx' where id=1  |
| 20 | root  | localhost           | NULL   | Query       |     0 | starting                                                      | show processlist                         |
+----+-------+---------------------+--------+-------------+-------+---------------------------------------------------------------+------------------------------------------+
4 rows in set (0.00 sec)
As shown above, since the statement is waiting for an MDL lock, we query the performance_schema.metadata_locks table from session 3; the order of the records reflects the order in which the locks were acquired:
root@localhost: (none) 02:41:41> select * from performance_schema.metadata_locks where owner_thread_id != sys.ps_thread_id(connection_id())\G
*************************** 1. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: sbtest
          OBJECT_NAME: sbtest1
OBJECT_INSTANCE_BEGIN: 140622530920576
            LOCK_TYPE: SHARED_READ_ONLY
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: GRANTED
               SOURCE: sql_parse.cc:5996
      OWNER_THREAD_ID: 113   # internal thread 113 was granted SHARED_READ_ONLY; while it holds
                             # this lock, no other thread may modify the data of sbtest1
       OWNER_EVENT_ID: 11
*************************** 2. row ***************************
          OBJECT_TYPE: GLOBAL
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: 140620517607728
            LOCK_TYPE: INTENTION_EXCLUSIVE
        LOCK_DURATION: STATEMENT
          LOCK_STATUS: GRANTED
               SOURCE: sql_base.cc:3190
      OWNER_THREAD_ID: 114   # internal thread 114 was granted INTENTION_EXCLUSIVE,
                             # but this is only an intention lock
       OWNER_EVENT_ID: 12
*************************** 3. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: sbtest
          OBJECT_NAME: sbtest1
OBJECT_INSTANCE_BEGIN: 140620517607824
            LOCK_TYPE: SHARED_WRITE
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: PENDING
               SOURCE: sql_parse.cc:5996
      OWNER_THREAD_ID: 114   # internal thread 114 is waiting for SHARED_WRITE to be granted
       OWNER_EVENT_ID: 12
3 rows in set (0.00 sec)
From troubleshooting deadlocks we know that MDL locks are extremely common; most operations on a table first take an MDL lock on it, so the records in performance_schema.metadata_locks alone are not decisive here. Seeing this, we might immediately think of querying the three InnoDB lock and transaction tables under information_schema (INNODB_LOCK_WAITS, INNODB_LOCKS, INNODB_TRX). We tried those three tables (from session 3) and found all of them empty.
root@localhost: (none) 02:41:53> select * from information_schema.INNODB_TRX;
Empty set (0.00 sec)
root@localhost: (none) 02:42:58> select * from information_schema.INNODB_LOCKS;
Empty set, 1 warning (0.00 sec)
root@localhost: (none) 02:43:02> select * from information_schema.INNODB_LOCK_WAITS;
Empty set, 1 warning (0.00 sec)
Of course, someone might say: there are only four threads, session 2's Command is Sleep, it must be that one, just try killing it. Yes, in this case you could. But what if the production environment has dozens or hundreds of perfectly normal persistent connections in the Sleep state? You cannot try them one by one. Instead, we can look at table-level lock information by querying performance_schema.table_handles (from session 3), as follows:
root@localhost: (none) 02:43:06> select * from performance_schema.table_handles where OWNER_THREAD_ID != 0\G
*************************** 1. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: sbtest
          OBJECT_NAME: sbtest1
OBJECT_INSTANCE_BEGIN: 140622530923216
      OWNER_THREAD_ID: 113
       OWNER_EVENT_ID: 11
        INTERNAL_LOCK: NULL
        EXTERNAL_LOCK: READ EXTERNAL   # internal thread 113 holds a READ EXTERNAL table-level lock
                                       # on sbtest1; this is why internal thread 114 cannot acquire
                                       # its MDL write lock
1 row in set (0.00 sec)
From the data above we learn that thread 113 explicitly placed a table-level read lock on sbtest1 and has been Sleeping for a long time, but we do not know what SQL statement it executed. We can look that up in performance_schema.events_statements_current as follows:
root@localhost: (none) 02:43:22> select * from performance_schema.events_statements_current where thread_id=113\G
*************************** 1. row ***************************
     THREAD_ID: 113
      EVENT_ID: 10
  END_EVENT_ID: 10
    EVENT_NAME: statement/sql/lock_tables
        SOURCE: socket_connection.cc:101
   TIMER_START: 18503556405463000
     TIMER_END: 18503556716572000
    TIMER_WAIT: 311109000
     LOCK_TIME: 293000000
      SQL_TEXT: lock table sbtest1 read   # here we can see that internal thread 113 executed a
                                          # read-lock statement against table sbtest1
        DIGEST: 9f987e807ca36e706e33275283b5572b
   DIGEST_TEXT: LOCK TABLE `sbtest1` READ
CURRENT_SCHEMA: sbtest
......
1 row in set (0.00 sec)
From the performance_schema.events_statements_current table we can clearly see, through the SQL_TEXT field, which SQL statement the thread executed. In a production environment you can now go to the relevant developer to confirm; if there is no special reason for the lock, you can try to kill the thread (executed in session 3; the processlist_id is 18) and follow up on the root cause to avoid a recurrence.
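The three lookups above (table_handles, events_statements_current, threads) can also be combined; a minimal sketch, assuming default column names in MySQL 5.7/8.0, that lists each held table lock together with the holder's last statement and the processlist ID usable with KILL:

```sql
SELECT th.OBJECT_SCHEMA,
       th.OBJECT_NAME,
       th.EXTERNAL_LOCK,
       t.PROCESSLIST_ID,            -- the id you would pass to KILL
       esc.SQL_TEXT                 -- last statement seen for the lock holder
FROM performance_schema.table_handles th
JOIN performance_schema.threads t
  ON t.THREAD_ID = th.OWNER_THREAD_ID
LEFT JOIN performance_schema.events_statements_current esc
  ON esc.THREAD_ID = th.OWNER_THREAD_ID
WHERE th.OWNER_THREAD_ID != 0;
```

This is a convenience query, not part of the original walkthrough; on a busy server it may return many rows, so filter further by OBJECT_SCHEMA/OBJECT_NAME as needed.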
# How do we know which processlist ID corresponds to internal thread ID 113? Query the performance_schema.threads table:
root@localhost: (none) 02:48:19 > select processlist_id from performance_schema.threads where thread_id=113;
+----------------+
| processlist_id |
+----------------+
|             18 |
+----------------+
1 row in set (0.00 sec)

# Execute kill
root@localhost: (none) 02:48:24 > kill 18;
Query OK, 0 rows affected (0.00 sec)

root@localhost: (none) 02:48:40 > show processlist;
+----+-------+---------------------+--------+-------------+-------+---------------------------------------------------------------+------------------+
| Id | User  | Host                | db     | Command     | Time  | State                                                         | Info             |
+----+-------+---------------------+--------+-------------+-------+---------------------------------------------------------------+------------------+
|  3 | qfsys | 192.168.2.168:41042 | NULL   | Binlog Dump | 18994 | Master has sent all binlog to slave; waiting for more updates | NULL             |
| 19 | root  | localhost           | sbtest | Sleep       |   480 |                                                               | NULL             |
| 20 | root  | localhost           | NULL   | Query       |     0 | starting                                                      | show processlist |
+----+-------+---------------------+--------+-------------+-------+---------------------------------------------------------------+------------------+
3 rows in set (0.00 sec)

# Going back to session 2, where the update statement was executed, the statement has now completed
root@localhost: sbtest 02:40:34 > update sbtest1 set pad='xxx' where id=1;
Query OK, 0 rows affected (7 min 50.23 sec)
Rows matched: 0  Changed: 0  Warnings: 0

2.4 Find out who holds the row-level lock
The performance_schema.data_locks table involved in this case is new in MySQL 8.0 and is not available in versions prior to 8.0; it is covered here only as an extension beyond the MySQL 5.7 material.
If a transaction is not committed for a long time, we can query the corresponding transaction information from tables such as information_schema.innodb_trx and performance_schema.events_transactions_current, but we have no way to know which locks that transaction holds. The information_schema.innodb_locks table does record transaction lock information, but only once two different transactions are actually waiting on each other. Since 8.0, performance_schema provides a data_locks table that records the lock information of any transaction, with no lock-wait relationship required (the information_schema.innodb_locks table has been deprecated). Note that this table only records InnoDB storage engine layer locks. An example follows.
First, we open a session (session 1) in 8.0 and explicitly open a transaction.
root@localhost: xiaoboluo 01:26:09 > use xiaoboluo
Database changed
root@localhost: xiaoboluo 01:26:19 > select * from t_luoxiaobo limit 1;
+----+------+---------------------+
| id | test | datet_time          |
+----+------+---------------------+
|  2 | 1    | 2017-09-06 01:11:59 |
+----+------+---------------------+
1 row in set (0.00 sec)

root@localhost: xiaoboluo 01:26:21 > begin;
Query OK, 0 rows affected (0.00 sec)

root@localhost: xiaoboluo 01:26:23 > update t_luoxiaobo set datet_time=now() where id=2;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
Open another session (session 2) to query the data_locks table
root@localhost: performance_schema 01:27:21 > select * from data_locks\G
*************************** 1. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 55562:62
ENGINE_TRANSACTION_ID: 55562
            THREAD_ID: 54             # internal ID of the thread holding the lock
             EVENT_ID: 85
        OBJECT_SCHEMA: xiaoboluo      # schema name
          OBJECT_NAME: t_luoxiaobo    # table name
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: NULL           # index name
OBJECT_INSTANCE_BEGIN: 140439793477144
            LOCK_TYPE: TABLE          # table-level lock
            LOCK_MODE: IX             # intention-exclusive lock
          LOCK_STATUS: GRANTED        # granted
            LOCK_DATA: NULL
*************************** 2. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 55562:2:4:2
ENGINE_TRANSACTION_ID: 55562
            THREAD_ID: 54             # internal ID of the thread holding the lock
             EVENT_ID: 85
        OBJECT_SCHEMA: xiaoboluo      # schema name
          OBJECT_NAME: t_luoxiaobo    # table name
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: PRIMARY        # the index is the primary key
OBJECT_INSTANCE_BEGIN: 140439793474104
            LOCK_TYPE: RECORD         # record lock
            LOCK_MODE: X              # exclusive lock
          LOCK_STATUS: GRANTED        # granted
            LOCK_DATA: 2              # the locked record; here it is the value 2 of the PRIMARY index
2 rows in set (0.00 sec)
We can see from the query results that there are two lock records: the first row is an IX lock on table t_luoxiaobo with status GRANTED, and the second row is an X record lock on the primary key index, also with status GRANTED.
Now let's simulate a scenario in which a DML statement waits for a lock. With the transaction in session 1 still uncommitted, we open a new session (session 3) and execute the same update against table t_luoxiaobo.
root@localhost: (none) 09:34:49 > use xiaoboluo
Database changed
root@localhost: xiaoboluo 09:34:54 > begin;
Query OK, 0 rows affected (0.00 sec)

root@localhost: xiaoboluo 09:34:55 > update t_luoxiaobo set datet_time=now() where id=2;  # blocked
When you go back to session 2 and query the data_locks table, you can find that there are four rows of locks recorded.
root@localhost: (none) 09:35:18 > select * from performance_schema.data_locks\G
*************************** 1. row ***************************
            ......
            THREAD_ID: 55
            ......
            LOCK_TYPE: TABLE
            LOCK_MODE: IX
          LOCK_STATUS: GRANTED
            LOCK_DATA: NULL
*************************** 2. row ***************************
               ENGINE: INNODB
       ENGINE_LOCK_ID: 55563:2:4:2
ENGINE_TRANSACTION_ID: 55563
            THREAD_ID: 55             # internal thread ID
             EVENT_ID: 8
        OBJECT_SCHEMA: xiaoboluo
          OBJECT_NAME: t_luoxiaobo
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: PRIMARY        # the index on which the lock occurred
OBJECT_INSTANCE_BEGIN: 140439793480168
            LOCK_TYPE: RECORD         # record lock
            LOCK_MODE: X              # exclusive lock
          LOCK_STATUS: WAITING        # waiting for the lock to be granted
            LOCK_DATA: 2              # the locked index value; exactly the same as the X lock on
                                      # primary key value 2 held by thread 54, so this request is
                                      # blocked by the thread with internal ID 54
*************************** 3. row ***************************
            ......
            THREAD_ID: 54
            ......
            LOCK_TYPE: TABLE
            LOCK_MODE: IX
          LOCK_STATUS: GRANTED
            LOCK_DATA: NULL
*************************** 4. row ***************************
            ......
            THREAD_ID: 54
             EVENT_ID: 85
        OBJECT_SCHEMA: xiaoboluo
          OBJECT_NAME: t_luoxiaobo
       PARTITION_NAME: NULL
    SUBPARTITION_NAME: NULL
           INDEX_NAME: PRIMARY
OBJECT_INSTANCE_BEGIN: 140439793474104
            LOCK_TYPE: RECORD
            LOCK_MODE: X
          LOCK_STATUS: GRANTED
            LOCK_DATA: 2
4 rows in set (0.00 sec)
From the query data above, two new lock records for thread ID 55 have appeared in the performance_schema.data_locks table: its IX lock has status GRANTED, while its X record lock has status WAITING, meaning it is waiting to be granted. The lock-wait relationship is still not very intuitive here; we can view it through the sys.innodb_lock_waits view instead.
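For reference, in 8.0 the raw wait pair that the sys view is built on can also be read directly from the performance_schema.data_lock_waits table; a minimal sketch (column names as documented for 8.0, joining back to data_locks for the blocked row's details):

```sql
SELECT w.REQUESTING_ENGINE_TRANSACTION_ID AS waiting_trx,
       w.REQUESTING_THREAD_ID            AS waiting_thread,
       w.BLOCKING_ENGINE_TRANSACTION_ID  AS blocking_trx,
       w.BLOCKING_THREAD_ID              AS blocking_thread,
       dl.OBJECT_SCHEMA,
       dl.OBJECT_NAME,
       dl.LOCK_DATA                      -- the contended index value
FROM performance_schema.data_lock_waits w
JOIN performance_schema.data_locks dl
  ON dl.ENGINE_LOCK_ID = w.BLOCKING_ENGINE_LOCK_ID;
```

In the scenario above this would show thread 55's transaction waiting on thread 54's transaction for the primary key value 2.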
root@localhost: (none) 09:44:52 > select * from sys.innodb_lock_waits\G
*************************** 1. row ***************************
                wait_started: 2018-01-14 21:51:59
                    wait_age: 00:00:11
               wait_age_secs: 11
                locked_table: `xiaoboluo`.`t_luoxiaobo`
         locked_table_schema: xiaoboluo
           locked_table_name: t_luoxiaobo
      locked_table_partition: NULL
   locked_table_subpartition: NULL
                locked_index: PRIMARY
                 locked_type: RECORD
              waiting_trx_id: 55566
         waiting_trx_started: 2018-01-14 21:51:59
             waiting_trx_age: 00:00:11
     waiting_trx_rows_locked: 1
   waiting_trx_rows_modified: 0
                 waiting_pid: 8
               waiting_query: update t_luoxiaobo set datet_time=now() where id=2
             waiting_lock_id: 55566
           waiting_lock_mode: X
             blocking_trx_id: 55562
                blocking_pid: 7
              blocking_query: NULL
            blocking_lock_id: 55562
          blocking_lock_mode: X
        blocking_trx_started: 2018-01-14 21:34:44
            blocking_trx_age: 00:17:26
    blocking_trx_rows_locked: 1
  blocking_trx_rows_modified: 1
     sql_kill_blocking_query: KILL QUERY 7
sql_kill_blocking_connection: KILL 7
1 row in set (0.02 sec)
PS: in MySQL 5.7 the sys.innodb_lock_waits view can also be used, but in 8.0 the view joins different tables for the query (the performance_schema.data_locks and performance_schema.data_lock_waits tables replace the information_schema.innodb_locks and information_schema.innodb_lock_waits tables used in previous versions). In addition, MySQL 5.6 and earlier have no sys schema by default; there we can use the following statement instead:
SELECT r.trx_wait_started AS wait_started,
       TIMEDIFF(NOW(), r.trx_wait_started) AS wait_age,
       TIMESTAMPDIFF(SECOND, r.trx_wait_started, NOW()) AS wait_age_secs,
       rl.lock_table AS locked_table,
       rl.lock_index AS locked_index,
       rl.lock_type AS locked_type,
       r.trx_id AS waiting_trx_id,
       r.trx_started AS waiting_trx_started,
       TIMEDIFF(NOW(), r.trx_started) AS waiting_trx_age,
       r.trx_rows_locked AS waiting_trx_rows_locked,
       r.trx_rows_modified AS waiting_trx_rows_modified,
       r.trx_mysql_thread_id AS waiting_pid,
       sys.format_statement(r.trx_query) AS waiting_query,
       rl.lock_id AS waiting_lock_id,
       rl.lock_mode AS waiting_lock_mode,
       b.trx_id AS blocking_trx_id,
       b.trx_mysql_thread_id AS blocking_pid,
       sys.format_statement(b.trx_query) AS blocking_query,
       bl.lock_id AS blocking_lock_id,
       bl.lock_mode AS blocking_lock_mode,
       b.trx_started AS blocking_trx_started,
       TIMEDIFF(NOW(), b.trx_started) AS blocking_trx_age,
       b.trx_rows_locked AS blocking_trx_rows_locked,
       b.trx_rows_modified AS blocking_trx_rows_modified,
       CONCAT('KILL QUERY ', b.trx_mysql_thread_id) AS sql_kill_blocking_query,
       CONCAT('KILL ', b.trx_mysql_thread_id) AS sql_kill_blocking_connection
FROM information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
INNER JOIN information_schema.innodb_locks bl ON bl.lock_id = w.blocking_lock_id
INNER JOIN information_schema.innodb_locks rl ON rl.lock_id = w.requested_lock_id
ORDER BY r.trx_wait_started;

3. View recent SQL execution information
3.1 View the most recent top sql
The statement current-event table and statement history tables in performance_schema can be used to find recently executed SQL statements in the database, along with statement-level information. Here we take the events_statements_history table as an example; the query results are sorted in descending order of statement execution time (TIMER_WAIT), as follows:
root@localhost: performance_schema 04:33:33 > select THREAD_ID,EVENT_NAME,SOURCE,sys.format_time(TIMER_WAIT),sys.format_time(LOCK_TIME),SQL_TEXT,CURRENT_SCHEMA,MESSAGE_TEXT,ROWS_AFFECTED,ROWS_SENT,ROWS_EXAMINED from events_statements_history where CURRENT_SCHEMA='sbtest' order by TIMER_WAIT desc limit 10\G
*************************** 1. row ***************************
                  THREAD_ID: 114
                 EVENT_NAME: statement/sql/update
                     SOURCE: socket_connection.cc:101
sys.format_time(TIMER_WAIT): 24.93 m
 sys.format_time(LOCK_TIME): 24.93 m
                   SQL_TEXT: update sbtest1 set pad='xxx' where id=1
             CURRENT_SCHEMA: sbtest
               MESSAGE_TEXT: Rows matched: 0  Changed: 0  Warnings: 0
              ROWS_AFFECTED: 0
                  ROWS_SENT: 0
              ROWS_EXAMINED: 0
*************************** 2. row ***************************
                  THREAD_ID: 114
                 EVENT_NAME: statement/sql/update
                     SOURCE: socket_connection.cc:101
sys.format_time(TIMER_WAIT): 7.84 m
 sys.format_time(LOCK_TIME): 7.84 m
                   SQL_TEXT: update sbtest1 set pad='xxx' where id=1
             CURRENT_SCHEMA: sbtest
               MESSAGE_TEXT: Rows matched: 0  Changed: 0  Warnings: 0
              ROWS_AFFECTED: 0
                  ROWS_SENT: 0
              ROWS_EXAMINED: 0
......
10 rows in set (0.00 sec)
According to the usual principle of optimizing slow SQL, priority goes to the statements executed most often, and then to those with the longest execution time. The query results above are therefore not what we usually call top SQL; for that, we can query the aggregated statistics in the events_statements_summary_by_digest table.
root@localhost: performance_schema 05:04:41 > select SCHEMA_NAME,DIGEST_TEXT,COUNT_STAR,sys.format_time(SUM_TIMER_WAIT) as sum_time,sys.format_time(MIN_TIMER_WAIT) as min_time,sys.format_time(AVG_TIMER_WAIT) as avg_time,sys.format_time(MAX_TIMER_WAIT) as max_time,sys.format_time(SUM_LOCK_TIME) as sum_lock_time,SUM_ROWS_AFFECTED,SUM_ROWS_SENT,SUM_ROWS_EXAMINED from events_statements_summary_by_digest where SCHEMA_NAME is not null order by COUNT_STAR desc limit 10\G
*************************** 1. row ***************************
      SCHEMA_NAME: sbtest
      DIGEST_TEXT: UPDATE `sbtest1` SET `pad` = ? WHERE `id` = ?
       COUNT_STAR: 10
         sum_time: 2.19 h
         min_time: 216.90 us
         avg_time: 13.15 m
         max_time: 1.50 h
    sum_lock_time: 2.04 h
SUM_ROWS_AFFECTED: 3
    SUM_ROWS_SENT: 0
SUM_ROWS_EXAMINED: 4
*************************** 2. row ***************************
      SCHEMA_NAME: sbtest
      DIGEST_TEXT: SHOW WARNINGS
       COUNT_STAR: 9
         sum_time: 397.62 us
         min_time: 16.50 us
         avg_time: 44.18 us
         max_time: 122.58 us
    sum_lock_time: 0 ps
SUM_ROWS_AFFECTED: 0
    SUM_ROWS_SENT: 0
SUM_ROWS_EXAMINED: 0
......
*************************** 5. row ***************************
      SCHEMA_NAME: sbtest
      DIGEST_TEXT: SELECT * FROM `sbtest1` LIMIT ?
       COUNT_STAR: 5
         sum_time: 138.93 ms
         min_time: 145.77 us
         avg_time: 27.79 ms
         max_time: 112.29 ms
    sum_lock_time: 95.53 ms
SUM_ROWS_AFFECTED: 0
    SUM_ROWS_SENT: 104
SUM_ROWS_EXAMINED: 104
......
10 rows in set (0.00 sec)
PS: the SQL statement text recorded in the events_statements_summary_by_digest table is not complete. By default only the first 1024 bytes are kept, and those same 1024 bytes are what the digest hash is computed from, so statements that differ only beyond that length are aggregated into the same row. The data provided by performance_schema can therefore only be regarded as a supplement to slow-log analysis; if you need the complete SQL statement text, you still have to rely on slow query log analysis.
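If the 1024-byte truncation is too aggressive for your workload, the length can be raised via the max_digest_length and performance_schema_max_digest_length system variables. Both are read-only at runtime, so they must go into the configuration file; a minimal my.cnf sketch (the values shown are illustrative, not recommendations):

```ini
[mysqld]
# bytes of normalized statement text used when computing statement digests
max_digest_length = 4096
# bytes of digest text stored/displayed in performance_schema tables
performance_schema_max_digest_length = 4096
```

Note that larger values increase per-session memory usage, since every connection buffers digest text up to this size.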
3.2 View recently failed SQL statements
A colleague once asked about a case where code operating the database (for example through Python's ORM modules) reported syntax errors on some operations, but the code did not log the SQL statement text; could the specific SQL text be seen at the MySQL layer to check what was wrong? The first thing that comes to mind is the error log. Unfortunately, the error log does not record SQL syntax errors. Without a full understanding of performance_schema, you would likely reply that syntax errors are not recorded at the MySQL level.
In fact, the statement event tables of performance_schema record fairly detailed information about the execution status of every statement, for example the events_statements_history(_long) tables and the events_statements_summary_by_digest table. The events_statements_history tables record the full error information of failed statements, whereas the events_statements_summary_by_digest table only records error statistics for statements and not the specific error type (syntax errors, for example, are not broken out). Let's demonstrate how to use these two kinds of tables to find the statements where errors occurred.
First, we simulate a SQL statement with a syntax error in one session (session 1); we will then use the events_statements_history or events_statements_history_long table to find it.
root@localhost: performance_schema 05:18:09 > select * from;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1
Then, in another session (session 2), we query the statement history for records with error number 1064:
root@localhost: sbtest 05:32:55 > use performance_schema
Database changed
root@localhost: performance_schema 05:33:03 > select THREAD_ID,EVENT_NAME,SOURCE,sys.format_time(TIMER_WAIT) as exec_time,sys.format_time(LOCK_TIME) as lock_time,SQL_TEXT,CURRENT_SCHEMA,MESSAGE_TEXT,ROWS_AFFECTED,ROWS_SENT,ROWS_EXAMINED,MYSQL_ERRNO from events_statements_history where MYSQL_ERRNO=1064\G
*************************** 1. row ***************************
     THREAD_ID: 119
    EVENT_NAME: statement/sql/error
        SOURCE: socket_connection.cc:101
     exec_time: 71.72 us
     lock_time: 0 ps
      SQL_TEXT: select * from
CURRENT_SCHEMA: sbtest
  MESSAGE_TEXT: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use
 ROWS_AFFECTED: 0
     ROWS_SENT: 0
 ROWS_EXAMINED: 0
   MYSQL_ERRNO: 1064
1 row in set (0.01 sec)
Maybe you do not know the error number in advance. In that case, query for statement records whose error count is not zero, and find the one whose MESSAGE_TEXT field indicates a syntax error:
root@localhost: performance_schema 05:34:00 > select THREAD_ID,EVENT_NAME,SOURCE,sys.format_time(TIMER_WAIT) as exec_time,sys.format_time(LOCK_TIME) as lock_time,SQL_TEXT,CURRENT_SCHEMA,MESSAGE_TEXT,ROWS_AFFECTED,ROWS_SENT,ROWS_EXAMINED,MYSQL_ERRNO,ERRORS from events_statements_history where ERRORS > 0\G
*************************** 1. row ***************************
     THREAD_ID: 119
    EVENT_NAME: statement/sql/error
        SOURCE: socket_connection.cc:101
     exec_time: 71.72 us
     lock_time: 0 ps
      SQL_TEXT: select * from
CURRENT_SCHEMA: sbtest
  MESSAGE_TEXT: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use
 ROWS_AFFECTED: 0
     ROWS_SENT: 0
 ROWS_EXAMINED: 0
   MYSQL_ERRNO: 1064
        ERRORS: 1
1 row in set (0.00 sec)
The events_statements_summary_by_digest table can also be used to find records of SQL statements in which errors occurred. First, in session 1, we execute a couple of statements that are bound to fail.
root@localhost: sbtest 05:32:34 > select *;
ERROR 1096 (HY000): No tables used
root@localhost: sbtest 05:40:57 > select * from sbtest4 where id between 100 and 2000 and xx=1;
ERROR 1054 (42S22): Unknown column 'xx' in 'where clause'
Then, in session 2, we query the events_statements_summary_by_digest table for records whose error count is greater than 0.
root@localhost: performance_schema 05:34:03 > select SCHEMA_NAME,DIGEST_TEXT,COUNT_STAR,sys.format_time(AVG_TIMER_WAIT) as avg_time,sys.format_time(MAX_TIMER_WAIT) as max_time,sys.format_time(SUM_LOCK_TIME) as sum_lock_time,SUM_ERRORS,FIRST_SEEN,LAST_SEEN from events_statements_summary_by_digest where SUM_ERRORS > 0\G
*************************** 1. row ***************************
......
*************************** 10. row ***************************
  SCHEMA_NAME: sbtest
  DIGEST_TEXT: SELECT *   # here is the first failing statement
   COUNT_STAR: 1
     avg_time: 55.14 us
     max_time: 55.14 us
sum_lock_time: 0 ps
   SUM_ERRORS: 1
   FIRST_SEEN: 2018-06-25 17:40:57
    LAST_SEEN: 2018-06-25 17:40:57
*************************** 11. row ***************************
  SCHEMA_NAME: sbtest
  DIGEST_TEXT: SELECT * FROM `sbtest4` WHERE `id` BETWEEN ? AND ? AND `xx` = ?   # here is the second failing statement
   COUNT_STAR: 1
     avg_time: 101.68 us
     max_time: 101.68 us
sum_lock_time: 0 ps
   SUM_ERRORS: 1
   FIRST_SEEN: 2018-06-25 17:41:03
    LAST_SEEN: 2018-06-25 17:41:03
11 rows in set (0.00 sec)
PS: as mentioned earlier, the events_statements_summary_by_digest table does not record specific error messages, only error statistics. If you need the specific error information (the error code, the error message, the exact SQL text, and so on), you still need to query the events_statements_history or events_statements_history_long table.
root@localhost: performance_schema 05:45:03 > select THREAD_ID,EVENT_NAME,SOURCE,sys.format_time(TIMER_WAIT) as exec_time,sys.format_time(LOCK_TIME) as lock_time,SQL_TEXT,CURRENT_SCHEMA,MESSAGE_TEXT,ROWS_AFFECTED,ROWS_SENT,ROWS_EXAMINED,MYSQL_ERRNO from events_statements_history where MYSQL_ERRNO > 0\G
*************************** 1. row ***************************
......
*************************** 2. row ***************************
     THREAD_ID: 119
    EVENT_NAME: statement/sql/select
        SOURCE: socket_connection.cc:101
     exec_time: 55.14 us
     lock_time: 0 ps
      SQL_TEXT: select *
CURRENT_SCHEMA: sbtest
  MESSAGE_TEXT: No tables used
 ROWS_AFFECTED: 0
     ROWS_SENT: 0
 ROWS_EXAMINED: 0
   MYSQL_ERRNO: 1096
*************************** 3. row ***************************
     THREAD_ID: 119
    EVENT_NAME: statement/sql/select
        SOURCE: socket_connection.cc:101
     exec_time: 101.68 us
     lock_time: 0 ps
      SQL_TEXT: select * from sbtest4 where id between 100 and 2000 and xx=1
CURRENT_SCHEMA: sbtest
  MESSAGE_TEXT: Unknown column 'xx' in 'where clause'
 ROWS_AFFECTED: 0
     ROWS_SENT: 0
 ROWS_EXAMINED: 0
   MYSQL_ERRNO: 1054
3 rows in set (0.00 sec)

4. View SQL execution progress information
The MariaDB branch supports a progress display feature that does not depend on performance_schema: the last column of the show processlist output is the progress information.
root@localhost Sun Jan 14 14:08:29 2018 [(none)] > show processlist;
+----+------+-----------+-----------+---------+------+----------------+--------------------------------------------------+----------+
| Id | User | Host      | db        | Command | Time | State          | Info                                             | Progress |
+----+------+-----------+-----------+---------+------+----------------+--------------------------------------------------+----------+
|  4 | root | localhost | employees | Query   |    6 | altering table | alter table salaries add index i_salary (salary) |   93.939 |
|  5 | root | localhost | NULL      | Query   |    0 | init           | show processlist                                 |    0.000 |
+----+------+-----------+-----------+---------+------+----------------+--------------------------------------------------+----------+
2 rows in set (0.00 sec)
A similar capability exists in MySQL: by recording and measuring the phase events whose workload is predictable, you can obtain the execution phases and progress information of a statement. The following examples show how.
4.1 View the SQL execution phases
First of all, phase events must be enabled in the configuration; they are not enabled by default. Start a session (session 1):
root@localhost: performance_schema 05:59:26 > use performance_schema
Database changed
root@localhost: performance_schema 05:59:45 > update setup_instruments set enabled='yes',timed='yes' where name like 'stage/%';
Query OK, 120 rows affected (0.00 sec)
Rows matched: 129  Changed: 120  Warnings: 0

root@localhost: performance_schema 05:59:47 > update setup_consumers set enabled='yes' where name like '%stage%';
Query OK, 3 rows affected (0.00 sec)
Rows matched: 3  Changed: 3  Warnings: 0
Open a second session (session 2) and query thread_id
root@localhost: sbtest 06:02:22 > select sys.ps_thread_id(connection_id());
+-----------------------------------+
| sys.ps_thread_id(connection_id()) |
+-----------------------------------+
|                               119 |
+-----------------------------------+
1 row in set (0.00 sec)
First clean up the old information to avoid interference (session 1)
# First turn off event recording for all other threads, using the thread_id queried in the previous step
root@localhost: performance_schema 06:05:38 > update performance_schema.threads set INSTRUMENTED='NO' where thread_id!=119;
Query OK, 101 rows affected (0.00 sec)
Rows matched: 101  Changed: 101  Warnings: 0

# Then empty the three phase event tables
root@localhost: performance_schema 05:59:52 > truncate events_stages_current;truncate events_stages_history;truncate events_stages_history_long;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)
Now, go back to session 2 and execute the DML statement
root@localhost: sbtest 06:06:37 > select count(*) from sbtest.sbtest4 where id between 100 and 200;
+----------+
| count(*) |
+----------+
|       50 |
+----------+
1 row in set (0.00 sec)
Query the events_stages_history_long table in session 1:
root@localhost: performance_schema 06:10:37 > select THREAD_ID,EVENT_NAME,SOURCE,sys.format_time(TIMER_WAIT) as exec_time,WORK_COMPLETED,WORK_ESTIMATED from events_stages_history_long;
+-----------+--------------------------------+--------------------------+-----------+----------------+----------------+
| THREAD_ID | EVENT_NAME                     | SOURCE                   | exec_time | WORK_COMPLETED | WORK_ESTIMATED |
+-----------+--------------------------------+--------------------------+-----------+----------------+----------------+
|       119 | stage/sql/starting             | socket_connection.cc:107 | 54.19 us  |           NULL |           NULL |
|       119 | stage/sql/checking permissions | sql_authorization.cc:810 | 3.62 us   |           NULL |           NULL |
|       119 | stage/sql/Opening tables       | sql_base.cc:5650         | 10.54 us  |           NULL |           NULL |
|       119 | stage/sql/init                 | sql_select.cc:121        | 16.73 us  |           NULL |           NULL |
|       119 | stage/sql/System lock          | lock.cc:323              | 4.77 us   |           NULL |           NULL |
|       119 | stage/sql/optimizing           | sql_optimizer.cc:151     | 4.78 us   |           NULL |           NULL |
|       119 | stage/sql/statistics           | sql_optimizer.cc:367     | 50.54 us  |           NULL |           NULL |
|       119 | stage/sql/preparing            | sql_optimizer.cc:475     | 7.79 us   |           NULL |           NULL |
|       119 | stage/sql/executing            | sql_executor.cc:119      | 381.00 ns |           NULL |           NULL |
|       119 | stage/sql/Sending data         | sql_executor.cc:195      | 36.75 us  |           NULL |           NULL |
|       119 | stage/sql/end                  | sql_select.cc:199        | 931.00 ns |           NULL |           NULL |
|       119 | stage/sql/query end            | sql_parse.cc:4968        | 5.31 us   |           NULL |           NULL |
|       119 | stage/sql/closing tables       | sql_parse.cc:5020        | 2.26 us   |           NULL |           NULL |
|       119 | stage/sql/freeing items        | sql_parse.cc:5596        | 8.71 us   |           NULL |           NULL |
|       119 | stage/sql/cleaning up          | sql_parse.cc:1902        | 449.00 ns |           NULL |           NULL |
+-----------+--------------------------------+--------------------------+-----------+----------------+----------------+
15 rows in set (0.01 sec)
From the data above we can clearly see the whole process a select statement goes through, along with the time spent in each phase. What about the execution phases of a DDL statement?
First clean up the old information to avoid interference (session 1)
root@localhost: performance_schema 06:10:48 > truncate events_stages_current;truncate events_stages_history;truncate events_stages_history_long;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.02 sec)
Then, execute the DDL statement (session 2)
root@localhost: sbtest 03:37:32 > alter table sbtest1 add index i_c (c);
At this point, query the phase event information in session 1 (the DDL statement has not completed yet; as you can see from the last row, the WORK_COMPLETED and WORK_ESTIMATED column values are not NULL, indicating that this phase event is measurable):
root@localhost: performance_schema 06:30:04 > select THREAD_ID,EVENT_NAME,SOURCE,sys.format_time(TIMER_WAIT) as exec_time,WORK_COMPLETED,WORK_ESTIMATED from events_stages_history_long;
+-----------+------------------------------------------------------+--------------------------+-----------+----------------+----------------+
| THREAD_ID | EVENT_NAME                                           | SOURCE                   | exec_time | WORK_COMPLETED | WORK_ESTIMATED |
+-----------+------------------------------------------------------+--------------------------+-----------+----------------+----------------+
|       119 | stage/sql/starting                                   | socket_connection.cc:107 | 44.17 us  |           NULL |           NULL |
|       119 | stage/sql/checking permissions                       | sql_authorization.cc:810 | 1.46 us   |           NULL |           NULL |
|       119 | stage/sql/checking permissions                       | sql_authorization.cc:810 | 2.29 us   |           NULL |           NULL |
|       119 | stage/sql/init                                       | sql_table.cc:9031        | 2.16 us   |           NULL |           NULL |
|       119 | stage/sql/Opening tables                             | sql_base.cc:5650         | 107.57 us |           NULL |           NULL |
|       119 | stage/sql/setup                                      | sql_table.cc:9271        | 19.19 us  |           NULL |           NULL |
|       119 | stage/sql/creating table                             | sql_table.cc:5222        | 1.06 ms   |           NULL |           NULL |
|       119 | stage/sql/After create                               | sql_table.cc:5355        | 76.22 us  |           NULL |           NULL |
|       119 | stage/sql/System lock                                | lock.cc:323              | 4.38 us   |           NULL |           NULL |
|       119 | stage/sql/preparing for alter table                  | sql_table.cc:7454        | 28.63 ms  |           NULL |           NULL |
|       119 | stage/sql/altering table                             | sql_table.cc:7508        | 3.91 us   |           NULL |           NULL |
|       119 | stage/innodb/alter table (read PK and internal sort) | ut0stage.h:241           | 27.09 s   |         230040 |         470155 |
+-----------+------------------------------------------------------+--------------------------+-----------+----------------+----------------+
12 rows in set (0.01 sec)
After the execution of the DDL statement is complete, we view the phase event information again (session 1)
root@localhost: performance_schema 06:31:07 > select THREAD_ID,EVENT_NAME,SOURCE,sys.format_time(TIMER_WAIT) as exec_time,WORK_COMPLETED,WORK_ESTIMATED from events_stages_history_long;
+-----------+------------------------------------------------------+-------------------+-----------+----------------+----------------+
| THREAD_ID | EVENT_NAME                                           | SOURCE            | exec_time | WORK_COMPLETED | WORK_ESTIMATED |
+-----------+------------------------------------------------------+-------------------+-----------+----------------+----------------+
| ......                                                                                                                               |
|       119 | stage/innodb/alter table (read PK and internal sort) | ut0stage.h:241    | 27.09 s   |         230040 |         470155 |
|       119 | stage/innodb/alter table (merge sort)                | ut0stage.h:501    | 1.15 m    |         345060 |         512319 |
|       119 | stage/innodb/alter table (insert)                    | ut0stage.h:501    | 11.83 s   |         460146 |         523733 |
|       119 | stage/innodb/alter table (flush)                     | ut0stage.h:501    | 18.35 s   |         523658 |         523733 |
|       119 | stage/innodb/alter table (log apply index)           | ut0stage.h:501    | 54.63 ms  |         524042 |         524042 |
|       119 | stage/innodb/alter table (flush)                     | ut0stage.h:501    | 21.18 us  |         524042 |         524042 |
|       119 | stage/sql/committing alter table to storage engine   | sql_table.cc:7535 | 5.12 us   |           NULL |           NULL |
|       119 | stage/innodb/alter table (end)                       | ut0stage.h:501    | 233.52 ms |         524042 |         524042 |
| ......                                                                                                                               |
+-----------+------------------------------------------------------+-------------------+-----------+----------------+----------------+
24 rows in set (0.01 sec)
From the data above we can clearly see the whole process of adding an index with an alter statement, along with the time spent in each phase. The longest phase is stage/innodb/alter table (merge sort), followed by stage/innodb/alter table (read PK and internal sort), which shows that the main cost of creating the index in this example lies in internally sorting the data and in the sort-merge operation.
PS: data in the phase event long-history table accumulates quickly, and the default quota of 10000 rows may fill up fast. You can raise the quota in the configuration file so that the execution phases of a DDL statement can be viewed completely (for example, performance_schema_events_stages_history_long_size=1000000), and remember to shut down other unrelated activity while measuring.
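Since that variable is read-only at runtime, it must be set in the configuration file before the server starts; a minimal my.cnf sketch (the value matches the example above and is illustrative, not a recommendation):

```ini
[mysqld]
# enlarge the phase event long-history quota so a full DDL run fits
performance_schema_events_stages_history_long_size = 1000000
```

A larger quota increases the memory permanently allocated for this table, so size it to the longest DDL you expect to trace.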
4.2 View SQL execution progress
In the official MySQL version, performance_schema offers no single intuitive way to see the execution progress of a whole statement, but you can check it with the sys.session view described in the following chapters.
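The progress percentage reported by sys.session is derived from the WORK_COMPLETED and WORK_ESTIMATED columns of the phase event tables; a minimal sketch of computing it by hand from events_stages_current (assuming a measurable phase event, such as an online ALTER, is currently running):

```sql
SELECT THREAD_ID,
       EVENT_NAME,
       WORK_COMPLETED,
       WORK_ESTIMATED,
       ROUND(100 * WORK_COMPLETED / WORK_ESTIMATED, 2) AS progress_pct
FROM performance_schema.events_stages_current
WHERE WORK_ESTIMATED IS NOT NULL
  AND WORK_ESTIMATED > 0;
```

Keep in mind that WORK_ESTIMATED is itself an estimate and may grow as the phase proceeds, so the percentage is approximate.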
root@localhost: performance_schema 04:16:38 > select * from sys.session where conn_id!=connection_id()\G
*************************** 1. row ***************************
            thd_id: 45
           conn_id: 4
            ......
             state: alter table (merge sort)
              time: 30
 current_statement: alter table sbtest1 add index i_c (c)
 statement_latency: 29.42 s
          progress: 46.40   # the progress percentage
      lock_latency: 2.19 ms
     rows_examined: 0
         rows_sent: 0
     rows_affected: 0
        tmp_tables: 0
   tmp_disk_tables: 0
         full_scan: NO
            ......
      program_name: mysql
1 row in set (0.33 sec)

5. View recent transaction execution information
Although the slow query log can tell us the total execution time of a statement, it cannot help when a large transaction is rolled back or aborted partway through execution. In that case we can use the events_transactions_* tables of performance_schema to view transaction-related records: they record in detail whether a transaction was rolled back, is still active (a long-running uncommitted transaction stays in the active state), or has been committed, and so on. Let's simulate a few transaction situations and look at the transaction event tables.
First, transaction events must be enabled in the configuration; they are not enabled by default (session 1).
root@localhost: performance_schema 04:16:59 > update setup_instruments set enabled='yes',timed='yes' where name like 'transaction';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

root@localhost: performance_schema 04:23:12 > update setup_consumers set enabled='yes' where name like '%transaction%';
Query OK, 3 rows affected (0.00 sec)
Rows matched: 3  Changed: 3  Warnings: 0
Perform cleanup to avoid interference with other transactions (session 1)
root@localhost: performance_schema 04:30:25 > truncate events_transactions_current;truncate events_transactions_history;truncate events_transactions_history_long;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Then, open a new session (session 2) to execute the transaction and simulate the transaction rollback
root@localhost: sbtest 04:18:34 > use sbtest
Database changed
root@localhost: sbtest 04:24:27 > begin;
Query OK, 0 rows affected (0.00 sec)

root@localhost: sbtest 04:25:02 > update sbtest1 set pad='yyy' where id=1;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0
In session 1, query the active transaction. An active transaction event represents a transaction that is currently executing and must be queried from the events_transactions_current table:
root@localhost: performance_schema 04:33:44 > select THREAD_ID,EVENT_NAME,STATE,TRX_ID,GTID,SOURCE,TIMER_WAIT,ACCESS_MODE,ISOLATION_LEVEL,AUTOCOMMIT,NESTING_EVENT_ID,NESTING_EVENT_TYPE from events_transactions_current\G
*************************** 1. row ***************************
         THREAD_ID: 47
        EVENT_NAME: transaction
             STATE: ACTIVE
            TRX_ID: NULL
              GTID: AUTOMATIC
            SOURCE: transaction.cc:209
        TIMER_WAIT: 21582764879000
       ACCESS_MODE: READ WRITE
   ISOLATION_LEVEL: READ COMMITTED
        AUTOCOMMIT: NO
  NESTING_EVENT_ID: 30
NESTING_EVENT_TYPE: STATEMENT
1 row in set (0.00 sec)
In session 2, roll back the transaction; a transaction completed by rollback is no longer active.
root@localhost: sbtest 04:25:08 > rollback;
Query OK, 0 rows affected (0.01 sec)
In session 1, query the transaction event history table events_transactions_history_long:
root@localhost: performance_schema 04:27:34 > select THREAD_ID,EVENT_NAME,STATE,TRX_ID,GTID,SOURCE,TIMER_WAIT,ACCESS_MODE,ISOLATION_LEVEL,AUTOCOMMIT,NESTING_EVENT_ID,NESTING_EVENT_TYPE from events_transactions_history_long\G
*************************** 1. row ***************************
         THREAD_ID: 45
        EVENT_NAME: transaction
             STATE: ROLLED BACK
            TRX_ID: NULL
              GTID: AUTOMATIC
            SOURCE: transaction.cc:209
        TIMER_WAIT: 39922043951000
       ACCESS_MODE: READ WRITE
   ISOLATION_LEVEL: READ COMMITTED
        AUTOCOMMIT: NO
  NESTING_EVENT_ID: 194
NESTING_EVENT_TYPE: STATEMENT
1 row in set (0.00 sec)
You can see that one row of transaction event information was recorded in the transaction event table: the thread with THREAD_ID 45 executed a transaction whose state is ROLLED BACK. Now let's simulate a transaction that commits normally.
# session 2
root@localhost: sbtest 04:40:27 > begin;
Query OK, 0 rows affected (0.00 sec)

root@localhost: sbtest 04:40:29 > update sbtest1 set pad='yyy' where id=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

root@localhost: sbtest 04:40:31 > commit;
Query OK, 0 rows affected (0.01 sec)

# session 1
root@localhost: performance_schema 04:38:32 > select THREAD_ID,EVENT_NAME,STATE,TRX_ID,GTID,SOURCE,TIMER_WAIT,ACCESS_MODE,ISOLATION_LEVEL,AUTOCOMMIT,NESTING_EVENT_ID,NESTING_EVENT_TYPE from events_transactions_current\G
*************************** 1. row ***************************
         THREAD_ID: 44
        EVENT_NAME: transaction
             STATE: COMMITTED
            TRX_ID: 421759004106352
              GTID: AUTOMATIC
            SOURCE: handler.cc:1421
        TIMER_WAIT: 87595486000
       ACCESS_MODE: READ WRITE
   ISOLATION_LEVEL: READ COMMITTED
        AUTOCOMMIT: YES
  NESTING_EVENT_ID: 24003703
NESTING_EVENT_TYPE: STATEMENT
*************************** 2. row ***************************
         THREAD_ID: 47
        EVENT_NAME: transaction
             STATE: COMMITTED
            TRX_ID: NULL
              GTID: ec123678-5e26-11e7-9d38-000c295e08a0:181879
            SOURCE: transaction.cc:209
        TIMER_WAIT: 7247256746000
       ACCESS_MODE: READ WRITE
   ISOLATION_LEVEL: READ COMMITTED
        AUTOCOMMIT: NO
  NESTING_EVENT_ID: 55
NESTING_EVENT_TYPE: STATEMENT
2 rows in set (0.00 sec)
From the query results above, you can see that the transaction event in the second row is in the COMMITTED state, indicating that the transaction was committed successfully.
PS: if a transaction remains uncommitted for a long time (the event stays in the ACTIVE state), we can query the uncommitted transaction event from the events_transactions_current table, but we cannot see directly when the transaction started. The information_schema.innodb_trx table can help us determine that.
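Beyond querying innodb_trx on its own (as in the example that follows), the trx_mysql_thread_id column, which holds the SHOW PROCESSLIST ID, can be joined through performance_schema.threads to map it to the THREAD_ID used by the transaction event tables. Below is a hedged sketch assuming MySQL 5.7 column names; the 60-second threshold is an illustrative choice, not a recommendation:

```sql
-- Map long-running InnoDB transactions to their performance_schema thread
-- and current transaction event. trx_mysql_thread_id is the ID shown in
-- SHOW PROCESSLIST; threads.PROCESSLIST_ID links it to THREAD_ID.
SELECT
    trx.trx_mysql_thread_id AS processlist_id,
    t.THREAD_ID,
    trx.trx_started,
    trx.trx_state,
    e.STATE      AS event_state,
    e.TIMER_WAIT
FROM information_schema.innodb_trx AS trx
JOIN performance_schema.threads AS t
  ON t.PROCESSLIST_ID = trx.trx_mysql_thread_id
LEFT JOIN performance_schema.events_transactions_current AS e
  ON e.THREAD_ID = t.THREAD_ID
WHERE trx.trx_started < NOW() - INTERVAL 60 SECOND;
```

This gives both the transaction start time (from innodb_trx) and the event-level timing (from performance_schema) in one result set.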
root@localhost: performance_schema 04:57:50 > select * from information_schema.innodb_trx\G
*************************** 1. row ***************************
                    trx_id: 2454336
                 trx_state: RUNNING
               trx_started: 2018-01-14 16:43:29
     trx_requested_lock_id: NULL
          trx_wait_started: NULL
                trx_weight: 3
       trx_mysql_thread_id: 6
......
1 row in set (0.00 sec)

6. Viewing error details of multi-threaded replication
Official MySQL has supported schema-level (per-database) parallel replication since version 5.6 and transaction-level parallel replication since version 5.7. After parallel replication is enabled, if an error occurs, the specific error details cannot be seen through the show slave status statement: it only shows the error message of the SQL (coordinator) thread, and under multi-threaded replication that message is just a summary of the worker threads' errors, similar to the following:
admin@localhost: (none) 12:45:19 > show slave status\G
......
               Last_Errno: 1062
               Last_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction '23fb5832-e4bc-11e7-8ea4-525400a4b2e1:2553990' at master log mysql-bin.000034, end_log_pos 98797. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
......
           Last_SQL_Errno: 1062
           Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction '23fb5832-e4bc-11e7-8ea4-525400a4b2e1:2553990' at master log mysql-bin.000034, end_log_pos 98797. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
1 row in set (0.00 sec)
Following the error prompt, check the performance_schema.replication_applier_status_by_worker table, which records the details of each worker thread; from it we can find the specific error reported by the failing worker:
admin@localhost: (none) 12:51:53 > select * from performance_schema.replication_applier_status_by_worker\G
*************************** 1. row ***************************
         CHANNEL_NAME:
            WORKER_ID: 2
            THREAD_ID: NULL
        SERVICE_STATE: OFF
LAST_SEEN_TRANSACTION: 23fb5832-e4bc-11e7-8ea4-525400a4b2e1:2553991
    LAST_ERROR_NUMBER: 1062
   LAST_ERROR_MESSAGE: Worker 2 failed executing transaction '23fb5832-e4bc-11e7-8ea4-525400a4b2e1:2553991' at master log mysql-bin.000034, end_log_pos 99514; Could not execute Write_rows event on table sbtest.sbtest4; Duplicate entry '833353' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log FIRST, end_log_pos 99514
 LAST_ERROR_TIMESTAMP: 2018-01-02 14:...
1 row in set (0.00 sec)
Querying the performance_schema.replication_applier_status_by_worker table shows that the replication error was caused by a primary key conflict (a duplicate entry).
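Once the root cause is known, one common recovery approach on a GTID-based replica is to skip the offending transaction by injecting an empty transaction with the same GTID. This is a hedged sketch, not part of the original example: it assumes the conflicting row already present on the replica is the correct data, and it uses the GTID reported in LAST_SEEN_TRANSACTION above. Verify the data before doing this, otherwise the replica silently diverges from the master:

```sql
-- Skip the failed transaction on a GTID-based replica by committing an
-- empty transaction under the offending GTID, then resume replication.
-- Only do this after confirming the replica's conflicting row is correct.
STOP SLAVE;
SET GTID_NEXT = '23fb5832-e4bc-11e7-8ea4-525400a4b2e1:2553991';
BEGIN;
COMMIT;
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;
```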
PS: for historical reasons, the replication information tables in performance_schema record only GTID-related information, while the slave_master_info, slave_relay_log_info, and slave_worker_info tables in the mysql system schema record the binlog-position-related information. In addition, if you choose to record the replication information in files instead, then files such as master.info and relay_log.info on disk record the binlog-position-related information.
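When the info repositories are set to TABLE (master_info_repository=TABLE and relay_log_info_repository=TABLE), the position-based metadata mentioned above can be read with ordinary queries. A minimal sketch, assuming MySQL 5.7 table definitions:

```sql
-- Binlog-position replication metadata in the mysql system schema
-- (only populated when the info repositories are set to TABLE).
SELECT Master_log_name, Master_log_pos, Host, User_name
FROM mysql.slave_master_info;

SELECT Relay_log_name, Relay_log_pos, Master_log_name, Master_log_pos
FROM mysql.slave_relay_log_info;
```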
Author profile
Luo Xiaobo, senior database technology expert.
He has worked in the IT industry for many years, serving as an operations engineer, senior operations engineer, operations manager, and database engineer. He has participated in the design and development of a release system, a lightweight monitoring system, an operations management platform, and a database management platform. He is familiar with MySQL architecture and the InnoDB storage engine, enjoys digging into open source technology, and strives for excellence.