Example Analysis of MySQL Transactions and Data Consistency Handling

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces, through worked examples, how MySQL transactions are used to keep data consistent. The walkthrough is quite detailed and should be a useful reference; interested readers are encouraged to read it through!

In MySQL's InnoDB, the default transaction isolation level is REPEATABLE READ.

If you want to UPDATE a row in the same table after a SELECT, it is best to use SELECT ... FOR UPDATE.

For example:

Suppose the products table has a quantity column storing the stock on hand. Before an order is created, you must check that the stock is sufficient (quantity > 0) and only then update quantity to 1. The naive code looks like this:

SELECT quantity FROM products WHERE id=3;
UPDATE products SET quantity = 1 WHERE id=3;

Why isn't it safe?

With little traffic this may never bite, but under heavy concurrent access it certainly will. Say we deduct stock only when quantity > 0, and the SELECT on the first line reads quantity as 2. That looks fine, but by the time MySQL is about to run the UPDATE, someone else may already have driven the stock down to 0; the program has no idea and happily performs a wrong UPDATE. A transaction mechanism is therefore needed to guarantee that what is read and what is committed are consistent.

So we can test it like this in MySQL, with the following code:

SET AUTOCOMMIT=0;
BEGIN WORK;
SELECT quantity FROM products WHERE id=3 FOR UPDATE;

At this point the row with id=3 in products is locked (see Note 3); other transactions that want this row must wait for this transaction to commit before they can proceed with:

SELECT * FROM products WHERE id=3 FOR UPDATE;

This ensures that the quantity read by other transactions is correct.

UPDATE products SET quantity = 1 WHERE id=3;
COMMIT WORK;

On COMMIT, the changes are written to the database and the lock on products is released.

Note 1: BEGIN/COMMIT mark the start and end of the transaction. You can open two or more MySQL command windows and observe the locking behavior interactively.

Note 2: while a transaction is open, only a SELECT ... FOR UPDATE or LOCK IN SHARE MODE on the same rows must wait for the other transaction to finish; an ordinary SELECT is not affected.

Note 3: since InnoDB defaults to row-level locking, only the selected rows are locked; see the article referenced here for details on row locking.

Note 4: avoid the LOCK TABLES statement on InnoDB tables. If you must use it, read the official documentation on using LOCK TABLES with InnoDB first, or the system may deadlock frequently.

More advanced usage

If we need to check a value and then update it, it is better to use a single statement like this:

UPDATE products SET quantity = 1 WHERE id=3 AND quantity > 0;

This way, the check and the update happen in one atomic statement, and no explicit transaction is needed.
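To make the single-statement approach concrete, here is a minimal sketch in Python, using SQLite in place of MySQL and a deduct-by-n variant of the UPDATE (the `deduct` helper and the deduct-rather-than-set behavior are illustrative assumptions, not code from the article). The key point is checking the affected-row count to learn whether the guard passed:

```python
import sqlite3

# Illustrative stand-in for the article's products table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, quantity INTEGER)")
conn.execute("INSERT INTO products (id, quantity) VALUES (3, 2)")

def deduct(conn, product_id, n):
    """Atomically deduct n units; return True only if stock sufficed."""
    cur = conn.execute(
        "UPDATE products SET quantity = quantity - ? "
        "WHERE id = ? AND quantity >= ?",
        (n, product_id, n),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows changed => not enough stock

print(deduct(conn, 3, 1))  # True  (stock 2 -> 1)
print(deduct(conn, 3, 2))  # False (only 1 left, the guard rejects it)
```

Because the stock check lives in the UPDATE's WHERE clause, the database applies it atomically under the row lock, so the SELECT-then-UPDATE race window never exists.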

How MySQL handles high concurrency and prevents inventory oversell

This section is adapted from a very good article I read on the subject.

Boss Wang taught us another lesson today on how MySQL handles high concurrency and prevents inventory oversell. He covered this last year, but even those who understood it then rarely kept it in mind during real development. Based on my own understanding, I want to write the problem up here, and I hope there will be more lessons like this.

First, let's describe the oversell problem. E-commerce sites commonly run group buys, flash sales, and special offers, and such campaigns share one trait: a surge in traffic, with thousands or even tens of thousands of people snapping up one product. Yet the promoted item's stock is very limited, so controlling inventory to prevent overselling, and the losses that come with it, is a headache for many e-commerce programmers, and also the most basic problem to solve.

From a technical point of view, many people immediately think of transactions. A transaction is necessary to control oversell, but it is not sufficient on its own.

For example:

Total inventory: 4 items

Requests: a wants 1 item, b wants 2 items, c wants 3 items

The procedure is as follows:

beginTranse (open transaction)
try {
    $result = $dbca->query('SELECT amount FROM s_store WHERE postID = 12345');
    if ($result->amount > 0) {
        // quantity is the stock amount the request wants to deduct
        $dbca->query('UPDATE s_store SET amount = amount - quantity WHERE postID = 12345');
    }
} catch (Exception $e) {
    rollBack (roll back)
}
commit (commit transaction)

The code above is what most of us write to control inventory. It looks fine at first glance, but it has a huge hole. Accessing the database ultimately means accessing files on disk: tables are files saved on disk, and a single file may even hold several tables. Under high concurrency, suppose users a, b, and c all enter the transaction; a shared lock is produced at this point, so all three SELECTs see a stock of 4. Note also that InnoDB query results are version-controlled: until another user's update commits (that is, until a new version is produced), the current user keeps reading the same version.

Next comes the UPDATE. If all three users reach the UPDATE at about the same time, the update statements are serialized: the three users queue up and execute one at a time, each taking an exclusive lock. Until the current UPDATE commits, the others wait; after it commits, a new version is produced. Once all three have executed, the stock is bound to go negative. Following the analysis above, however, we can modify the code so that overselling cannot happen:

beginTranse (open transaction)
try {
    // quantity is the stock amount the request wants to deduct
    $dbca->query('UPDATE s_store SET amount = amount - quantity WHERE postID = 12345');
    $result = $dbca->query('SELECT amount FROM s_store WHERE postID = 12345');
    if ($result->amount < 0) {
        throw new Exception('insufficient stock');
    }
} catch (Exception $e) {
    rollBack (roll back)
}
commit (commit transaction)

An even more concise approach:

beginTranse (open transaction)
try {
    // quantity is the stock amount the request wants to deduct
    $dbca->query('UPDATE s_store SET amount = amount - quantity WHERE amount >= quantity AND postID = 12345');
} catch (Exception $e) {
    rollBack (roll back)
}
commit (commit transaction)


1. In a flash sale, reading and writing the database this frequently is out of the question; it causes serious performance problems.

You must use a cache: put the items on sale into the cache and use a lock to handle concurrent access. When a user's order arrives, first decrement the item count (lock/unlock), then do the rest of the processing; if processing fails, add 1 back (lock/unlock), otherwise the transaction succeeds.

When the item count drops to 0, the flash sale for that item is over and further requests are rejected.

2. You certainly cannot operate on the database directly, or it will fall over. Direct reads and writes put too much pressure on the database, so use a cache.

Put the items to be sold, say 10 of them, into the cache, and keep a counter in memcache to record the number of requests. The cap can be based on the number of items: to sell 10 items, allow only 100 requests, for example. Once the counter reaches 100, later arrivals see that the flash sale is over, which reduces the pressure on your server. Then, among those 100 requests, fulfill orders on a first-paid, first-served basis and tell the rest that the items have sold out.
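The counter idea above can be sketched as a small in-memory stand-in. A real deployment would use memcache's atomic increment across servers; the `SeckillCounter` class and its numbers here are invented for illustration:

```python
import threading

# Hypothetical in-process stand-in for the memcache request counter:
# admit at most `limit` requests, reject everything after the cap.
class SeckillCounter:
    def __init__(self, limit):
        self.limit = limit
        self.count = 0
        self.lock = threading.Lock()

    def try_enter(self):
        """Atomically admit one request; return False once the cap is hit."""
        with self.lock:
            if self.count >= self.limit:
                return False
            self.count += 1
            return True

counter = SeckillCounter(limit=100)  # e.g. 100 requests allowed for 10 items
admitted = sum(counter.try_enter() for _ in range(5000))
print(admitted)  # 100 — everyone past the cap is turned away before touching the DB
```

The database then only ever sees the admitted requests, which is exactly the pressure-reduction effect the article describes.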

3. When multiple users modify the same record concurrently, the later submitter will certainly overwrite the earlier submitter's result.

This can be solved directly with a locking mechanism, either optimistic or pessimistic.

Optimistic lock: add a version-number column to the table and increment it by 1 on every modification. By comparing the version number read before submission with the current one, you know whether a concurrent submission happened. The drawback is that this is enforced only inside the application: if several applications modify the same data, the optimistic lock cannot coordinate across them, and you should consider a pessimistic lock instead.
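Here is a minimal sketch of the version-number scheme, using SQLite for illustration. The `s_store` table mirrors the article's earlier example, but the `version` column and the `optimistic_deduct` helper are assumptions added for this sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE s_store (postID INTEGER PRIMARY KEY, amount INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO s_store VALUES (12345, 4, 0)")

def optimistic_deduct(conn, post_id, n):
    """Read amount+version, then update only if the version is unchanged."""
    amount, version = conn.execute(
        "SELECT amount, version FROM s_store WHERE postID = ?", (post_id,)
    ).fetchone()
    if amount < n:
        return False  # not enough stock
    cur = conn.execute(
        "UPDATE s_store SET amount = amount - ?, version = version + 1 "
        "WHERE postID = ? AND version = ?",
        (n, post_id, version),
    )
    conn.commit()
    # 0 rows changed => someone else committed first; the caller should retry
    return cur.rowcount == 1

print(optimistic_deduct(conn, 12345, 3))  # True
print(conn.execute("SELECT amount, version FROM s_store").fetchone())  # (1, 1)
```

A losing writer sees zero affected rows instead of silently overwriting the winner, which is exactly the lost-update protection described above.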

Pessimistic lock: lock the data directly at the database level, for example with SELECT ... FROM xxxx WHERE xx=xx FOR UPDATE in Oracle (MySQL supports the same syntax), so that other threads cannot commit changes to that data.

Besides locking, a status flag can be used. The idea is to design a status column in the database: before modifying a record, the user marks it as "being edited". When other users try to edit the same record, the system sees it is already being edited and rejects their request, much as the operating system refuses to let you modify or delete a file that is currently in use.

4. Locking at the database level is not recommended; use a server-side memory lock on the primary key instead. When a user wants to modify the row with a given id, store that id in memcache; if another user tries to modify the same id and finds it already in memcache, block that user's modification.
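A rough sketch of this "memory lock on the id" idea, using an in-process set guarded by a mutex in place of memcache (the `IdLock` class is invented for illustration; memcache's atomic `add` would play this role across servers):

```python
import threading

class IdLock:
    """Track which record ids are currently being edited."""

    def __init__(self):
        self._editing = set()
        self._mutex = threading.Lock()

    def acquire(self, record_id):
        """Claim record_id; return False if another user already holds it."""
        with self._mutex:
            if record_id in self._editing:
                return False
            self._editing.add(record_id)
            return True

    def release(self, record_id):
        with self._mutex:
            self._editing.discard(record_id)

locks = IdLock()
print(locks.acquire(124))  # True  — the first editor claims the id
print(locks.acquire(124))  # False — a concurrent edit is rejected
locks.release(124)
print(locks.acquire(124))  # True  — available again after release
```

The check-and-claim happens under one mutex, so two users can never both believe they own the same id.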

5. In practice, MySQL is not asked to handle the concurrent reads and writes directly; "external forces" are brought in, such as caching, master-slave replication for read/write splitting, table sharding, and queued writes, to reduce the concurrent load.

Pessimistic lock and optimistic lock

When multiple users modify the same record concurrently, the later submitter will certainly overwrite the earlier submitter's result. This can be solved directly with a locking mechanism, either optimistic or pessimistic.

A pessimistic lock (Pessimistic Lock), as the name implies, assumes the worst: every time you read data, you assume someone else will modify it, so you lock it on every read, and anyone else who wants the data blocks until they can obtain the lock. Traditional relational databases use many such mechanisms: row locks, table locks, read locks, write locks, and so on.

An optimistic lock (Optimistic Lock), by contrast, assumes the best: every time you read data, you assume nobody else will modify it, so you do not lock. Only at update time do you check whether anyone modified the data in the meantime, typically via a version number. Optimistic locks suit read-heavy applications and can improve throughput; some databases provide them through mechanisms like write_condition.

Each kind of lock has its strengths and weaknesses; neither is simply better. Optimistic locks fit scenarios with few writes and frequent reads: even with occasional conflicts they save lock overhead and raise system throughput. When conflicts are frequent (write-heavy workloads), the upper layer keeps retrying, which degrades performance, and a pessimistic lock is then more appropriate.

Hands-on demonstration

To test modifying this table's amount column, open two command-line windows.

First window, A:

SET AUTOCOMMIT=0;
BEGIN WORK;
SELECT * FROM order_tbl WHERE order_id='124' FOR UPDATE;

Second window B:

# Update the stock amount for order ID 124
UPDATE `order_tbl` SET amount = 1 WHERE order_id = 124;

We can see that window A has opened a transaction and locked this row; when window B runs its UPDATE, it blocks waiting for the lock (and will hit a lock-wait timeout if A never commits).

Now the first window runs the transaction through to a full commit:

SET AUTOCOMMIT=0;
BEGIN WORK;
SELECT * FROM order_tbl WHERE order_id='124' FOR UPDATE;
UPDATE `order_tbl` SET amount = 10 WHERE order_id = 124;
COMMIT WORK;

The above is the whole of "Example Analysis of MySQL Transactions and Data Consistency Handling". Thank you for reading! I hope the content helps; for more related knowledge, welcome to follow the industry information channel!
