

Example Analysis of Deadlocks and Logs in MySQL


This article explains deadlocks and logs in MySQL in detail through example analysis. The editor finds it very practical and shares it here for reference; I hope you gain something from reading it.

Recently, several data anomalies occurred one after another in our online MySQL instances, all breaking out in the early hours of the morning. Because the business scenario is a typical data warehouse application, the problems cannot be reproduced under the lighter load during the day, and some of the anomalies were quite strange, so the root cause analysis turned out to be rather painstaking. So in real business, how can we quickly locate online MySQL problems and fix the anomalies? Below I share the relevant experience and methods based on two actual cases.

Case 1: Part of the data update failed

One day, a colleague on the channel team reported that in a certain report the data for a very few channels was 0, while most channels looked normal. This data is routinely updated by a statistics program in the early hours of every morning. In theory, either all channels should be normal or all should fail, so why were just a few of them abnormal?

The first thing that comes to mind is the statistics task log, but the log printed by the statistics program contained no error such as an SQL update failure. So what happened on the database side at that time? Before digging into the MySQL server logs, I habitually took a look at the database status first:
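The status check itself is just the standard InnoDB status command, run from the mysql client; a minimal sketch:

-- The LATEST DETECTED DEADLOCK section of this output records the most
-- recent deadlock: the transactions involved, the locks they held and
-- waited for, and which transaction was rolled back.
SHOW ENGINE INNODB STATUS\G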

I happened to see a deadlock on this update in the early morning:

Because of limited space, I have omitted a lot of the context here. From this log we can see that TRANSACTION 1 and TRANSACTION 2 each hold a number of row locks and are waiting for each other's locks; MySQL then detects the deadlock and chooses to roll back TRANSACTION 1. InnoDB's current way of handling a deadlock is to roll back the transaction that holds the fewest row-level exclusive locks.

Then there are three questions:

1. Doesn't an InnoDB row lock lock only a single row?

Because this table uses the InnoDB engine, which supports both row locks and table locks. InnoDB implements row locking by locking index entries, unlike Oracle, which locks the corresponding data rows in the data blocks. A consequence of this design is that InnoDB uses row-level locks only when the data is retrieved through an index condition; otherwise it locks every row it scans, effectively behaving like a table lock. In practice this deserves special attention, or it can produce a large number of lock conflicts and hurt concurrency. Since MySQL row locks are locks on index entries rather than on records, accesses to different rows will still conflict if they use the same index key. When data is retrieved with a range condition instead of an equality condition and a shared or exclusive lock is requested, InnoDB locks the index entries of the existing records that match the condition, and gap locks additionally cover multiple rows in the ranges between them. InnoDB also uses gap locks when an equality condition is used to lock a record that does not exist!
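As a rough illustration (the table and column names below are made up for the example, not the actual business table), an UPDATE whose WHERE clause can use an index locks only the matching entries, while one that cannot use an index locks far more:

-- Hypothetical table, for illustration only.
CREATE TABLE stats_demo (
  id INT PRIMARY KEY AUTO_INCREMENT,
  channel_id INT,
  stat_date DATE,
  pv BIGINT,
  KEY idx_channel (channel_id)
) ENGINE=InnoDB;

-- Uses idx_channel: only the matching index entries (and the
-- corresponding primary-key records) are locked.
UPDATE stats_demo SET pv = 100 WHERE channel_id = 5;

-- No usable index on stat_date: InnoDB scans and locks every row it
-- examines, which is effectively a table-wide lock.
UPDATE stats_demo SET pv = 100 WHERE stat_date = '2020-01-01';

-- Range condition on an index: matching entries plus the gaps between
-- them are locked (gap / next-key locks), so concurrent inserts into
-- those ranges can also be blocked.
UPDATE stats_demo SET pv = 100 WHERE channel_id BETWEEN 5 AND 10;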

Having said that, let's take a look at the index of our business table:
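The original screenshot is not reproduced here; the indexes were inspected with the standard statements below (the table and column names are placeholders):

-- List every index on the table, its columns and their order.
SHOW INDEX FROM your_business_table;

-- To see whether a given WHERE clause can actually use an index,
-- run the equivalent SELECT through EXPLAIN and check the key column.
EXPLAIN SELECT * FROM your_business_table WHERE channel_id = 5;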

You can see that this table's indexing is quite unreasonable: there are three indexes, yet the UPDATE cannot use any of them precisely, so it has to lock a multi-row range of data, which is what made the deadlock possible.

Knowing the principle, we carefully built a four-column composite index so that the UPDATE hits the InnoDB index exactly. And indeed, after the index was updated the deadlock problem went away.
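The fix is just an ordinary composite index; with placeholder names (the real table and columns differ), it would look roughly like this:

-- The point is that the composite index covers exactly the columns in
-- the UPDATE's WHERE clause, so the statement locks only the rows it
-- actually changes.
ALTER TABLE your_business_table
  ADD INDEX idx_update_path (col_a, col_b, col_c, col_d);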

Note: InnoDB prints not only the locks held and waited for by each transaction but also the records themselves. Unfortunately the output may exceed the length InnoDB reserves for it (only about 1 MB is printed, and only the most recent deadlock is kept). If you cannot see the complete output, you can create an innodb_monitor or innodb_lock_monitor table in any database; InnoDB will then write its complete status information to the error log every 15 seconds. For example: CREATE TABLE innodb_monitor (a INT) ENGINE=InnoDB; and you can drop the table when you no longer need the information recorded in the error log.
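As a minimal sketch (this table-based switch applies to the older MySQL 5.x releases discussed here; newer versions expose the same switch as a system variable instead):

-- Turn periodic InnoDB status dumps to the error log ON:
CREATE TABLE innodb_monitor (a INT) ENGINE=InnoDB;
-- ...and OFF again when the extra logging is no longer needed:
DROP TABLE innodb_monitor;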

2. Why did only some UPDATE statements fail after the rollback?

If the transaction was rolled back, why did only some of the UPDATE statements fail rather than every UPDATE in the whole transaction?

This is because our InnoDB runs with autocommit enabled by default:

With multiple UPDATE or INSERT statements, InnoDB commits immediately after each SQL statement is executed, persisting the change and releasing its locks. That is why, in this example, only a few statements failed when the deadlock caused the transaction to be rolled back.
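A minimal way to confirm this behaviour, using standard MySQL statements (stats_demo is the made-up table from the earlier sketch):

-- ON means each statement is its own transaction and releases its
-- locks as soon as it finishes.
SHOW VARIABLES LIKE 'autocommit';

-- To make a batch of updates succeed or fail as a unit instead,
-- wrap them in an explicit transaction:
START TRANSACTION;
UPDATE stats_demo SET pv = pv + 1 WHERE channel_id = 5;
UPDATE stats_demo SET pv = pv + 1 WHERE channel_id = 6;
COMMIT;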

It is important to note that there is another situation that can cause only some statements to be rolled back, and it deserves special attention: the InnoDB parameter innodb_rollback_on_timeout.

This is described in the official manual:

In MySQL 5.1, InnoDB rolls back only the last statement on a transaction timeout by default. If --innodb_rollback_on_timeout is specified, a transaction timeout causes InnoDB to abort and roll back the entire transaction (the same behavior as in MySQL 4.1). This variable was added in MySQL 5.1.15.

Explanation: if this parameter is OFF (or does not exist), only the last query of the transaction is rolled back when a lock wait timeout occurs; if it is ON, the entire transaction is rolled back on timeout.
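To check the setting (the variable is read-only at runtime and is normally set in my.cnf before the server starts):

-- OFF: only the timed-out statement is rolled back;
-- ON : the whole transaction is rolled back on a lock wait timeout.
SHOW VARIABLES LIKE 'innodb_rollback_on_timeout';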

3. How can we reduce the probability of InnoDB deadlocks?

Deadlocks are hard to eliminate completely where row locks and transactions are involved, but lock conflicts and deadlocks can be reduced through table design and SQL adjustments, including:

Use a lower isolation level where possible. For example, gap locks can be avoided by changing the session or transaction isolation level to RC (READ COMMITTED), but then binlog_format needs to be set to row or mixed format (see the sketch after this list).

Design indexes carefully and access data through indexes as far as possible, so that locking is more precise and the chance of lock conflicts is reduced.

Choose a reasonable transaction size; small transactions are less prone to lock conflicts.

When explicitly locking a record set, request a lock of a sufficient level up front. For example, if you intend to modify the data, request an exclusive lock directly rather than taking a shared lock first and upgrading to an exclusive lock at modification time, which easily leads to deadlock.

When different programs access the same set of tables, they should agree to access the tables in the same order as far as possible, and, within a table, access rows in a fixed order. This greatly reduces the chance of deadlocks.

Access data with equality conditions as far as possible, to avoid gap locks blocking concurrent inserts.

Do not request a higher lock level than you actually need, and do not lock explicitly in queries unless you have to.

For certain specific transactions, table locks can be used to speed up processing or reduce the possibility of deadlocks.
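A minimal sketch of the first and fourth points above, using ordinary MySQL syntax (the SELECT again uses the made-up stats_demo table from earlier):

-- Lower the isolation level for this session to avoid gap locks
-- (requires binlog_format = ROW or MIXED on the server).
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Request the exclusive lock up front instead of upgrading later:
START TRANSACTION;
SELECT * FROM stats_demo WHERE channel_id = 5 FOR UPDATE;
UPDATE stats_demo SET pv = 0 WHERE channel_id = 5;
COMMIT;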

Case 2: A weird Lock wait timeout

For several days in a row, a task failed at 6 a.m. and 8 a.m. respectively. During LOAD DATA LOCAL INFILE, the Java program threw the SQL exception Lock wait timeout exceeded; try restarting transaction from InnoDB. We talked to the platform team, who suggested it was because the lock wait timeout on our own business database was too short or there were lock conflicts on our side. But on reflection that did not add up: it had always worked fine before, and these were basically single-table tasks with no concurrent writers to conflict with.

No matter whose problem it was, let's first check whether there was anything wrong with our own database:
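The check itself is a standard variable lookup:

-- Lock wait timeout in seconds (default 50); a statement that waits
-- longer than this for a row lock fails with
-- "Lock wait timeout exceeded".
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';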

The default lock wait timeout is 50 s, which is really not short; raising it seemed unlikely to help, and indeed trying it as a last resort made no difference.

Moreover, this time SHOW ENGINE INNODB STATUS\G showed no deadlock information, so I turned to the MySQL server logs, hoping to see what the database was doing around that moment. Here is a brief introduction to the MySQL log files (a quick way to check each log's status is sketched after the list):

(a) error log: records problems that occur when starting, running, or stopping mysqld. It is enabled by default.

(b) general log: the general query log, recording all statements and commands. Enabling it costs roughly 5% of database performance.

(c) binlog: binary format, recording every statement that changes data; used mainly for replication to slaves and for data recovery.

(d) slow log: records all queries that take more than long_query_time seconds to execute, or that do not use indexes; it is turned off by default.

(e) InnoDB logs: the InnoDB redo and undo logs, used to recover data and undo operations.
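For reference, a minimal sketch of checking where these logs live and whether they are enabled, using standard MySQL variables:

SHOW VARIABLES LIKE 'log_error';        -- (a) error log path
SHOW VARIABLES LIKE 'general_log%';     -- (b) general log on/off and file
SHOW VARIABLES LIKE 'log_bin%';         -- (c) binlog on/off
SHOW VARIABLES LIKE 'slow_query_log%';  -- (d) slow log on/off and file
SHOW VARIABLES LIKE 'long_query_time';  -- (d) slow query threshold, seconds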

From the introduction above, the evidence for this problem is most likely in (d) or (b). Check (d) first; if nothing is there, the only option left is to enable (b), the general log. But the general log has a real cost to database performance, because it records everything and the volume is huge, so it must be turned on with care:
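Since MySQL 5.1 the general log can be toggled at runtime without a restart; a minimal sketch of turning it on only around the problem window (the file path is hypothetical):

-- Point the general log at a file of our choosing, switch it on for
-- the window we care about, and switch it off again afterwards.
SET GLOBAL general_log_file = '/tmp/mysql_general_0530.log';
SET GLOBAL general_log = 'ON';
-- ... wait for the problem window, then:
SET GLOBAL general_log = 'OFF';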

I enabled the full log only for the half hour before and after the problem window each day, and found no MySQL client requests reaching our business database at all! The log records all connections and commands, in a format like the following:
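The original sample is not reproduced here; roughly speaking, each general-log entry carries a timestamp, a connection id, a command type and its argument, along these lines (all names and times below are made up for illustration):

Time                Id  Command   Argument
240530  6:00:01     42  Connect   app_user@10.0.0.8 on report_db
240530  6:00:01     42  Query     SELECT 1
240530  6:05:12     42  Quit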

That basically settled the problem: the exception was thrown before the client request ever reached us. After repeated communication, the platform confirmed that before running our INSERT they first fetch the SQL from a task table and update the task status; because of the large number of concurrent INSERTs and UPDATEs on the hour, some of those statements timed out waiting for locks.

MySQL log analysis script

Since the early morning is the peak of the data warehouse workload, many problems break out then, and some strange ones are one-off events that cannot be reproduced during the day. Capturing the logs we care about so that we can locate the problem quickly is therefore the most important thing. For this I wrote a small script, deployed via crontab, that can be enabled for a chosen time window and samples the log every minute. Note that the general log in particular should not be enabled lightly, or database performance will suffer.
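The script itself is not reproduced here; as a rough sketch under my own assumptions, the MySQL statements such a per-minute sampling job might issue are simply:

-- Snapshot of what every connection is doing right now:
SHOW FULL PROCESSLIST;
-- Current InnoDB state: active transactions, lock waits,
-- and the latest detected deadlock, if any:
SHOW ENGINE INNODB STATUS\G
-- Plus toggling the general log on and off around the chosen window,
-- as shown earlier.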

That is the end of this article on the example analysis of deadlocks and logs in MySQL. I hope the content above is of some help to you and that you learn something from it. If you think the article is good, please share it so that more people can see it.
