
How to handle concurrency on the MySQL database side


This article introduces how to handle concurrency on the database side. Many people have questions about this topic, so the editor has consulted a variety of materials and put together a simple, practical walkthrough. I hope it helps resolve your doubts; please follow along and study it.

Since the birth of ASP.NET, Microsoft has provided many ways to control concurrency. Before looking at those methods, let's briefly introduce concurrency itself.

Concurrency: concurrency occurs when multiple visitors perform an update operation on the same data at the same time, or at almost the same time.

Concurrency handling is divided into pessimistic concurrency handling and optimistic concurrency handling.

The so-called pessimistic / optimistic concurrency handling can be understood as follows:

Pessimists believe that concurrency conflicts happen easily while a program runs, so the pessimistic model is: while I am executing a method, no other visitor is allowed to enter it. (Pessimists always assume that something bad will happen to them.)

Optimists believe that concurrency conflicts rarely happen while a program runs, so the optimistic model is: other visitors are allowed in while I am executing a method. (Optimists always assume that nothing bad will happen to them.)

So what does the pessimistic approach look like in C#?

In C#, locking mechanisms such as lock, Monitor, and Interlocked belong to pessimistic concurrency handling: once the data is locked, no other visitor can access it. If you are interested, see the author's post on lock and Monitor in C# and the differences between them.
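For illustration only, here is a minimal sketch (the counter class and field names are made up for this example) showing what each of those three approaches looks like:

using System.Threading;

public class PessimisticCounter
{
    private readonly object _gate = new object();
    private int _count;

    // lock: only one thread can be inside this block at a time.
    public void AddWithLock()
    {
        lock (_gate)
        {
            _count++;
        }
    }

    // Monitor: what the lock keyword expands to, with explicit Enter/Exit.
    public void AddWithMonitor()
    {
        Monitor.Enter(_gate);
        try
        {
            _count++;
        }
        finally
        {
            Monitor.Exit(_gate);
        }
    }

    // Interlocked: a single atomic increment, no explicit lock object needed.
    public void AddWithInterlocked()
    {
        Interlocked.Increment(ref _count);
    }
}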

However, a common problem with the pessimistic approach to concurrency is that it can make execution very inefficient.

Here is a simple example:

In a ticketing system, Xiao Ming wants to buy a ticket for train D110 from Beijing to Shanghai. Under the pessimistic model, the conductor locks the tickets for train D110 before performing the ticketing operation. If, while the D110 tickets are locked, that conductor steps away to the toilet or goes for a cup of coffee, conductors at the other windows cannot sell any D110 tickets. If this approach were adopted, China's 1.4 billion people could forget about travelling, because nobody would be able to buy a ticket.

Therefore, use pessimistic locks with caution when handling database concurrency. It depends on how heavy the concurrency on the database is: if it is relatively heavy, the optimistic model is recommended; if it is relatively light, the pessimistic model may be appropriate.

OK, all of the above is groundwork. The topic of this section is solutions to database concurrency, so let's get back to the point and start with the database-side solutions.

So here comes the question:

What are the ways to deal with database concurrency?

In fact, concurrency handling on the database side is also divided into optimistic locks and pessimistic locks, just implemented at the database level. For more on database-level concurrency handling, see my blog post on applying optimistic and pessimistic locks.

Pessimistic locks: assume that concurrency conflicts will occur, and block every operation that might violate data integrity. [1]

Optimistic locks: assume that no concurrency conflicts will occur, and only check for violations of data integrity when the operation is committed. [1] Optimistic locks cannot solve the dirty-read problem.

The most common way to handle concurrent access by multiple users is locking. When one user locks an object in the database, other users can no longer access that object. The impact of locking on concurrent access is determined by the lock granularity: a lock on a table restricts concurrent access to the whole table; a lock on a data page restricts access to that whole page; a lock on a row restricts access only to that row. Row locks therefore have the smallest granularity and allow the most concurrency, while page and table locks have larger granularity and allow less.

A pessimistic lock assumes there is a high probability that other users will try to access or change the object you are working on, so it locks the object before you start changing it and does not release the lock until you commit the change. The downside is that, whether it is a page lock or a row lock, the lock may be held for a long time, blocking other users for the duration; in other words, pessimistic locks give poor concurrent access.

An optimistic lock assumes there is little chance that other users will try to change the object you are changing, so it does not lock the object while you read and modify it, and only checks at commit time. The lock is held much more briefly than with a pessimistic lock, so optimistic locking achieves better concurrency even with a coarser lock granularity. But if a second user reads the object just before the first user commits a change, then when the second user finishes and commits, the database will detect that the object has changed, and the second user has to re-read the object and redo the change. In an optimistic locking environment, the number of times concurrent users have to re-read objects therefore increases.
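To make the optimistic flow concrete, here is a minimal ADO.NET sketch; the table and column names (Inventory, ProductCount, VersionNum) match the sample database created later in this article, while the class name, method name, and connection string are assumptions for this illustration. It reads the row together with its version, then updates only if the version is unchanged; zero affected rows means another user committed first and the caller should re-read and retry.

using System.Data.SqlClient;

public static class OptimisticInventory
{
    // Hedged sketch: decrement stock only if the row version we read is still current.
    public static bool TryDecreaseStock(string connectionString, int productId)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            int count;
            byte[] version;

            // 1. Read the current quantity together with its version number.
            using (var read = new SqlCommand(
                "select ProductCount, VersionNum from Inventory where ProductId = @p", conn))
            {
                read.Parameters.AddWithValue("@p", productId);
                using (var r = read.ExecuteReader())
                {
                    if (!r.Read()) return false;
                    count = r.GetInt32(0);
                    version = (byte[])r[1];
                }
            }

            // 2. Write back only if nobody has changed the row in the meantime.
            using (var update = new SqlCommand(
                "update Inventory set ProductCount = @c where ProductId = @p and VersionNum = @v", conn))
            {
                update.Parameters.AddWithValue("@c", count - 1);
                update.Parameters.AddWithValue("@p", productId);
                update.Parameters.AddWithValue("@v", version);

                // 0 rows affected means another user committed first: re-read and retry.
                return update.ExecuteNonQuery() == 1;
            }
        }
    }
}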

The purpose of this article is to explain database concurrency solutions in C# (a general-purpose version and an EF version), so we should start from C#, preferably with a small project.

A small project has been prepared, as follows:

First we need to create a small database:

create database BingFaTest
go
use BingFaTest
go

create table Product -- product table
(
    ProductId int identity(1,1) primary key,      -- product ID, primary key
    ProductName nvarchar(50),                     -- product name
    ProductPrice money,                           -- unit price
    ProductUnit nvarchar(10) default('yuan/jin'), -- unit
    AddTime datetime default(getdate())           -- time added
)

create table Inventory -- inventory table
(
    InventoryId int identity(1,1) primary key,
    ProductId int foreign key references Product(ProductId), -- foreign key
    ProductCount int,                             -- stock quantity
    VersionNum timestamp not null,                -- version number (rowversion)
    InventoryTime datetime default(getdate())     -- time
)

create table InventoryLog -- log table
(
    Id int identity(1,1) primary key,
    Title nvarchar(50)
)

-- Test data (the stock quantity below is assumed; the original listing was truncated):
insert into Product values ('Apple', 1, 'yuan/jin', GETDATE())
insert into Inventory (ProductId, ProductCount, InventoryTime) values (1, 100, GETDATE())

The database is very simple, with three tables: a product table, an inventory table, and a log table.

With the database in place, we create a C# project using the EF Database First mode, structured as follows:

The project is simple and easy to build with the EF Database First pattern.

Now that the project has been built, let's simulate concurrency.

The main code is as follows (reduce the inventory, insert a log row):

#region No concurrency control
/// <summary>
/// Simulates an inventory-reduction operation without any concurrency control.
/// </summary>
public void SubMitOrder_3()
{
    int productId = 1;
    using (BingFaTestEntities context = new BingFaTestEntities())
    {
        var InventoryLogDbSet = context.InventoryLog;
        var InventoryDbSet = context.Inventory; // inventory table

        using (var Transaction = context.Database.BeginTransaction())
        {
            // reduce the inventory
            var Inventory_Mol = InventoryDbSet.Where(A => A.ProductId == productId).FirstOrDefault(); // inventory row
            Inventory_Mol.ProductCount = Inventory_Mol.ProductCount - 1;
            int A4 = context.SaveChanges();

            // insert a log row
            InventoryLog LogModel = new InventoryLog()
            {
                Title = "insert a piece of data to calculate whether concurrency occurs",
            };
            InventoryLogDbSet.Add(LogModel);
            context.SaveChanges();

            // simulate time-consuming work: half a second
            Thread.Sleep(500);

            Transaction.Commit();
        }
    }
}
#endregion

Now add a breakpoint at int productId = 1 and run the program (open four browsers and execute at the same time), as follows:

The result shows that the log table gains four rows, while the inventory is reduced by only one. This is obviously wrong and is caused by the concurrency; in essence it is a lost update produced by dirty and non-repeatable reads.

Now that the problem has shown up, we will find a way to solve it. There are two approaches: the pessimistic-lock approach and the optimistic-lock approach.

The pessimistic approach:

The pessimistic approach locks the update operation with an UPDLOCK hint; once the row is locked, other visitors cannot touch it. It can be implemented through a stored procedure in much the same way as the optimistic version below and is not covered in detail here.
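For reference only, a minimal sketch of that pessimistic variant might look like the following; UPDLOCK is a SQL Server table hint, and the class name, connection string, and variable names are assumptions for this illustration, not code from the article's project.

using System.Data.SqlClient;

public static class PessimisticInventory
{
    // Hedged sketch: reduce stock while holding an update lock on the row.
    public static void DecreaseStock(string connectionString, int productId)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tran = conn.BeginTransaction())
            {
                // The UPDLOCK hint keeps an update lock on the row until the transaction ends,
                // so other sessions running the same SELECT wait here instead of reading a stale count.
                var read = new SqlCommand(
                    "select ProductCount from Inventory with (updlock) where ProductId = @p", conn, tran);
                read.Parameters.AddWithValue("@p", productId);
                int count = (int)read.ExecuteScalar();

                var write = new SqlCommand(
                    "update Inventory set ProductCount = @c where ProductId = @p", conn, tran);
                write.Parameters.AddWithValue("@c", count - 1);
                write.Parameters.AddWithValue("@p", productId);
                write.ExecuteNonQuery();

                tran.Commit();
            }
        }
    }
}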

OK. Knowing how the TimeStamp (rowversion) column type works (it is updated automatically every time the row changes, so it can serve as a version number), let's create a stored procedure that handles concurrency against the small database above, as follows:

create proc LockProc -- optimistic lock to control concurrency
(
    @ProductId int,
    @IsSuccess bit = 0 output
)
as
declare @count as int
declare @flag as timestamp
declare @rowcount as int
begin tran
    select @count = ProductCount, @flag = VersionNum from Inventory where ProductId = @ProductId

    update Inventory set ProductCount = @count - 1
    where VersionNum = @flag and ProductId = @ProductId
    -- capture the rows affected by the UPDATE: 0 means another session changed
    -- the row (and its VersionNum) after we read it
    set @rowcount = @@ROWCOUNT

    insert into InventoryLog values ('insert a piece of data to calculate whether concurrency occurs')

    if @rowcount > 0
        set @IsSuccess = 1
    else
        set @IsSuccess = 0
commit tran

This stored procedure is simple: it performs two operations, reducing the inventory and inserting a log row. There is one input parameter, ProductId, and one output parameter, IsSuccess. If a conflict occurs, IsSuccess is False; if the execution succeeds, IsSuccess is True.

One point worth explaining here: with a pessimistic lock the program runs serially, and with an optimistic lock it runs in parallel.

That is, with a pessimistic lock only one visitor's request executes at a time; when the previous visitor finishes and the lock is released, the next visitor enters the locked section and executes, until all visitors have finished. Because pessimistic locks execute in strict order, every visitor is guaranteed to succeed.

With an optimistic lock, visitors execute in parallel: everyone enters the same method at the same time, but only one visitor succeeds and the others fail. So what do you do with the visitors whose execution failed? Returning a failure message directly is unreasonable and gives a poor user experience, so you need to define a rule that lets the failed visitor re-execute the previous request.

Time is limited, so I won't write more here. Because the concurrency control lives in the stored procedure on the database side, the C# code is also very simple, as follows:

#region General-purpose concurrency handling: stored procedure implementation
/// <summary>
/// Stored-procedure implementation.
/// </summary>
public void SubMitOrder_2()
{
    int productId = 1;
    bool bol = LockForPorcduce(productId);

    // simulate time-consuming work: half a second
    Thread.Sleep(500);

    int retry = 10;
    while (!bol && retry > 0)
    {
        retry--;
        bol = LockForPorcduce(productId); // retry until it succeeds or the retries run out
    }
}

private bool LockForPorcduce(int ProductId)
{
    using (BingFaTestEntities context = new BingFaTestEntities())
    {
        SqlParameter[] parameters =
        {
            new SqlParameter("@ProductId", SqlDbType.Int),
            new SqlParameter("@IsSuccess", SqlDbType.Bit)
        };
        parameters[0].Value = ProductId;
        parameters[1].Direction = ParameterDirection.Output;

        var data = context.Database.ExecuteSqlCommand("exec LockProc @ProductId, @IsSuccess output", parameters);

        string N2 = parameters[1].Value.ToString();
        if (N2 == "True")
        {
            return true;
        }
        else
        {
            return false;
        }
    }
}
#endregion

One thing needs to be explained here:

When IsSuccess is False, the method should be executed again. My rule is to retry the request up to ten times, which nicely avoids feeding the failure straight back to the user and improves the user experience.

The following section focuses on how the EF framework avoids database concurrency conflicts. Before explaining, allow me to quote a few paragraphs from other people's blogs:

In software development, concurrency control is the mechanism that ensures errors caused by concurrent operations are corrected in time. From ADO.NET to LINQ to SQL and on to today's ADO.NET Entity Framework, .NET has always provided good support for concurrency control.

Compared with concurrency handling in the database, concurrency handling in Entity Framework is greatly simplified.

In the System.Data.Metadata.Edm namespace there is a ConcurrencyMode enumeration that specifies the concurrency option for a property in the conceptual model.

ConcurrencyMode has two members:

None - the property is never validated when writing; this is the default concurrency mode.
Fixed - the property is always validated when writing.

When a property uses the default value None, the property is not checked; if the property is modified concurrently, the incoming value is simply written (handled as a data merge).

When a property is set to Fixed, the system checks the property; if the property is modified concurrently, the system throws an OptimisticConcurrencyException.

Developers can set a different ConcurrencyMode for each property of an entity; the setting lives in the *.edmx file:

In fact, in EF Database First we only need to set ConcurrencyMode to Fixed on the TimeStamp version-number property, as follows:
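As a side note (this is not part of the Database First walkthrough; the class below is a hypothetical Code First equivalent of the Inventory table), the same concurrency token can be declared in EF Code First with the [Timestamp] attribute, which maps the property to a rowversion column and makes EF treat it like ConcurrencyMode = Fixed:

using System.ComponentModel.DataAnnotations;

// Hypothetical Code First version of the Inventory entity, for comparison only.
public class InventoryEntity
{
    [Key]
    public int InventoryId { get; set; }

    public int ProductId { get; set; }

    public int ProductCount { get; set; }

    // Marks the property as the concurrency token: EF adds it to the WHERE clause
    // of every UPDATE/DELETE and throws DbUpdateConcurrencyException when the
    // stored row version no longer matches.
    [Timestamp]
    public byte[] VersionNum { get; set; }
}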

Once the version-number property is configured, we can run the concurrency test. When concurrency occurs, the program throws an exception; all we have to do is catch it and then, following our own rule, re-execute the request until it succeeds.

So how do you catch the concurrency exception?

In the C# code you catch it with the exception class DbUpdateConcurrencyException. The specific usage in EF is as follows:

public class SaveChangesForBF : BingFaTestEntities
{
    public override int SaveChanges()
    {
        try
        {
            return base.SaveChanges();
        }
        catch (DbUpdateConcurrencyException ex) // (OptimisticConcurrencyException)
        {
            // a concurrency conflict was detected while saving
            return -1;
        }
    }
}

With the property configured, EF automatically detects the conflict and throws the exception. After catching it with the method above, we apply the same retry rule as before. The code is as follows:

#region EF-specific concurrency handling
/// <summary>
/// EF implementation.
/// </summary>
public void SubMitOrder()
{
    int C = LockForEF();

    // simulate time-consuming work: half a second
    Thread.Sleep(500);

    int retry = 10;
    while (C == -1 && retry > 0) // -1 means SaveChangesForBF caught a concurrency exception
    {
        retry--;
        C = LockForEF();
    }
}

/// <summary>
/// Mimics an inventory-reduction operation, EF-specific concurrency handling.
/// </summary>
public int LockForEF()
{
    int productId = 1;
    int C = 0;
    using (SaveChangesForBF context = new SaveChangesForBF())
    {
        var InventoryLogDbSet = context.InventoryLog;
        var InventoryDbSet = context.Inventory; // inventory table

        using (var Transaction = context.Database.BeginTransaction())
        {
            // reduce the inventory
            var Inventory_Mol = InventoryDbSet.Where(A => A.ProductId == productId).FirstOrDefault(); // inventory row
            Inventory_Mol.ProductCount = Inventory_Mol.ProductCount - 1;
            C = context.SaveChanges();

            // insert a log row
            InventoryLog LogModel = new InventoryLog()
            {
                Title = "insert a piece of data to calculate whether concurrency occurs",
            };
            InventoryLogDbSet.Add(LogModel);
            context.SaveChanges();

            // simulate time-consuming work: half a second
            Thread.Sleep(500);

            Transaction.Commit();
        }
    }
    return C;
}
#endregion

=== Handling concurrency in the program itself ===

Using async/await, you can write an asynchronous method that starts the work on a background thread and awaits each operation in turn, so the operations execute one after another.
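A minimal sketch of that idea, assuming SubMitOrder_3 is the inventory-reduction method shown earlier and that this lives in the same class (the method name and parameter are illustrative): each call runs on a thread-pool thread, and awaiting it before starting the next one forces the operations to execute strictly in sequence within this code path.

using System.Threading.Tasks;

// Hedged sketch: serialize the inventory operations inside the application itself.
public async Task ProcessOrdersSequentiallyAsync(int orderCount)
{
    for (int i = 0; i < orderCount; i++)
    {
        // Each call runs on a thread-pool thread, but awaiting it means the next
        // iteration does not start until the previous operation has completed.
        await Task.Run(() => SubMitOrder_3());
    }
}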

At this point, the study of "how to handle concurrency on the database side" is over. I hope it resolves your doubts. Combining theory with practice is the best way to learn, so go and try it! If you want to keep learning more, please keep following this site; the editor will keep working to bring you more practical articles!
