2025-04-06 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article explains how distributed databases optimize the two-phase commit (2PC) protocol. The method introduced here is simple and practical, so let's walk through it step by step.
Two-phase commit (2PC)
Broadly, there are two kinds of two-phase commit protocols. One is TCC at the application layer; for example, Alibaba's Seata implements the TCC pattern. Its characteristic is that each service must provide three implementations, try/confirm/cancel, which have to be written into the business code, so it is highly intrusive to the business logic.
What I share today is the resource-oriented 2PC protocol, first proposed by Jim Gray. The whole transaction is divided into two phases, a prepare phase and a commit phase, completed jointly by the coordinator node and the database resource managers.
Here we take a classic e-commerce system as an example. The system is divided into three services: order, account and inventory. After receiving a purchase request from a customer, the coordinator node must coordinate the order service to generate the order, the account service to deduct the payment, and the inventory service to deduct the stock. If the databases of these three services live on different shards, the coordination process is as follows:
1. Prepare phase
The coordinator node sends a prepare request to every service. When a service receives the prepare request, it attempts to execute the local transaction but does not actually commit it. This attempt checks whether the conditions for executing the transaction hold, for example whether the required resources can be locked. When a service's attempt succeeds, it returns yes to the coordinator node.
2. Commit/rollback phase
If all services return yes in the prepare phase, the coordinator node notifies each service to perform the commit operation, and each service actually commits its local transaction.
If any service returns no in the prepare phase, the coordinator node notifies all services to roll back their local transactions.
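The two phases above can be sketched in a few lines of code. This is only an illustrative model, not a real framework API: the `Participant` class, its `vote` flag, and the method names are assumptions made for the example.

```python
class Participant:
    def __init__(self, name, vote=True):
        self.name = name
        self.vote = vote        # whether prepare will succeed (assumption for the demo)
        self.state = "idle"

    def prepare(self):
        # Try the local transaction without committing: acquire locks,
        # write undo/redo logs, check constraints. Return the yes/no vote.
        self.state = "prepared" if self.vote else "aborted"
        return self.vote

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"


def two_phase_commit(participants):
    # Phase 1: collect prepare votes from every participant
    # (list comprehension so every prepare runs, without short-circuiting).
    if all([p.prepare() for p in participants]):
        # Phase 2: everyone voted yes, so commit everywhere.
        for p in participants:
            p.commit()
        return "committed"
    # Phase 2: at least one "no", so roll back everywhere.
    for p in participants:
        p.rollback()
    return "rolled_back"


services = [Participant("order"), Participant("account"), Participant("inventory")]
print(two_phase_commit(services))  # committed
```

A single `vote=False` participant makes the same call return "rolled_back" and puts every participant into the rolled-back state.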
Problems with 2PC
Having briefly walked through the 2PC protocol, what problems does it have?
1. Performance problem
Local transactions hold locks on resources throughout the prepare phase. For example, if the account service deducts 100 yuan from xiaoming's account, xiaoming's account must be locked first. Any other transaction that wants to modify xiaoming's account must then wait for the previous transaction to complete, which adds latency and degrades performance.
2. Coordinator single point of failure
The coordinator node is a single node; if it fails, the entire transaction stays blocked. For example, if the first (prepare) phase succeeds but the coordinator goes down before issuing the commit instruction in the second phase, the data resources of all services remain locked and subsequent transactions can only wait.
3. Data inconsistency
If the prepare phase succeeds but, during the commit phase, the coordinator's notification to the inventory service fails, the result is that an order was generated and the account was debited, yet the inventory was never deducted. This leads to data inconsistency.
Percolator model
Mainstream NewSQL databases such as TiDB solve these problems with the Percolator model. The official article is linked here:
https://pingcap.com/blog-cn/percolator-and-txn/
The Percolator model comes from the Google paper:
"Large-scale Incremental Processing Using Distributed Transactions and Notifications"
The original paper is available at the following link, and many translated versions exist online:
https://www.cs.princeton.edu/courses/archive/fall10/cos597B/papers/percolator-osdi10.pdf
The premise of Percolator is that the underlying database supports a multi-version concurrency control protocol, that is, MVCC. Mainstream databases such as MySQL and Oracle support it.
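The MVCC idea the rest of the article relies on fits in a few lines: each key keeps multiple timestamped versions, and a read at timestamp ts sees the newest version whose timestamp is at most ts. The key name and version numbers below are illustrative, chosen to match the account example in this article.

```python
# key -> {timestamp: value}; version 7 is a newer write of the same key
versions = {"account": {5: 1000, 7: 900}}

def mvcc_read(key, ts):
    # Return the newest version visible at timestamp ts, or None.
    visible = [t for t in versions[key] if t <= ts]
    return versions[key][max(visible)] if visible else None

print(mvcc_read("account", 6))  # 1000 (version 7 is not visible yet)
print(mvcc_read("account", 8))  # 900
```

Because each reader picks its version by timestamp, readers never block writers: a transaction reading at timestamp 6 keeps seeing 1000 even while version 7 is being written.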
A) initial phase
Consider again the e-commerce case above. In the initial state, assume the order count is 0, the account balance is 1000, and the inventory is 100. After the customer places an order, the order service adds one order, the account service deducts 100, and the inventory service deducts 1 from the stock. The initial data on each shard is as follows:
In each cell, the value before the ":" is a timestamp (data version) and the value after it is the data. In each of the three tables, the newest record does not store the real data but a pointer to it; for example, in the order table, the version-6 record points to the version-5 data, where the order count is 0. Roughly:

order: data 5: 0, write 6: data@5
account: data 5: 1000, write 6: data@5
inventory: data 5: 100, write 6: data@5
B) prepare phase
In the prepare phase, the coordinator node sends a prepare command to each service, and all three tables enter the prepare phase. Here Percolator defines the concept of a primary lock: in each distributed transaction, only one service holds the primary lock (the order service in this case), and the locks of the other services point to it. The data now looks roughly like this:

order: data 7: 1, lock 7: primary
account: data 7: 900, lock 7: primary@order
inventory: data 7: 99, lock 7: primary@order
Also in the prepare phase, each service writes a log and records the transaction's private version based on the timestamp, so other transactions cannot touch these three rows of data.
C) commit phase
In the commit phase, the coordinator node only needs to communicate with the order service, because the order service holds the primary lock; in other words, the coordinator only talks to the shard holding the primary lock.
At this point, notice that the order service's lock has been removed and a version-8 record pointing to the version-7 data has been added, meaning the order service no longer has a private version; the private versions of the account and inventory services are still there. What makes Percolator special is that it then uses asynchronous threads to commit the account and inventory services, after which all three shards have a version-8 record pointing to the version-7 data and all locks are cleared.
Because the coordinator node only needs to communicate with the shard holding the primary lock, the transaction either succeeds or fails as a whole, which avoids the data inconsistency caused by some nodes failing during commit.
Logs are recorded in the prepare phase, so if a shard fails to commit, the commit can be retried according to the log, which guarantees that the data is eventually consistent.
If the coordinator node goes down, the asynchronous threads can still do the resource-release work, so a single point of failure cannot leave resources locked forever.
Here we should pay attention to two points:
The choice of the primary lock is random; in this case it would not necessarily fall on the order service.
After the coordinator node sends the commit, the order service is committed successfully. If another transaction then needs to read the account and inventory data, it will find locks on both rows; but by looking up primary@order it discovers that the primary has already committed, so the data can be read.
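The whole Percolator flow described above can be sketched over an in-memory store. This is a rough model under stated assumptions: the `Shard` class, the `prewrite`/`commit`/`read` names, and the timestamps 5-8 are illustrative and mirror this article's example, not TiDB's actual implementation, which adds conflict checks, lock TTLs and real RPCs.

```python
class Shard:
    def __init__(self, name, initial):
        self.name = name
        self.data = dict(initial)   # start_ts -> value
        self.lock = {}              # start_ts -> "primary" or "primary@<shard>"
        self.write = {}             # commit_ts -> start_ts (pointer to data)

    def prewrite(self, start_ts, value, primary):
        # Prepare phase: write the private version plus a lock record.
        self.data[start_ts] = value
        self.lock[start_ts] = "primary" if primary is self else "primary@" + primary.name

    def commit(self, start_ts, commit_ts):
        # Commit: replace the lock with a write record pointing at start_ts.
        del self.lock[start_ts]
        self.write[commit_ts] = start_ts

    def read(self, ts, shards):
        # Resolve any lock first: if the primary already committed, roll
        # this shard forward (a real reader would otherwise wait or clean up).
        for start_ts, owner in list(self.lock.items()):
            primary = self if owner == "primary" else shards[owner.split("@")[1]]
            done = [c for c, s in primary.write.items() if s == start_ts]
            if done:
                self.commit(start_ts, done[0])
        # Snapshot read: newest write record visible at ts.
        visible = [c for c in self.write if c <= ts]
        return self.data[self.write[max(visible)]] if visible else None


order = Shard("order", {5: 0})
account = Shard("account", {5: 1000})
inventory = Shard("inventory", {5: 100})
for s in (order, account, inventory):
    s.write[6] = 5  # version 6 points at the version-5 data
shards = {s.name: s for s in (order, account, inventory)}

# Prepare phase at start_ts=7; the order shard holds the primary lock.
order.prewrite(7, 1, order)
account.prewrite(7, 900, order)
inventory.prewrite(7, 99, order)

# Commit phase: the coordinator talks only to the primary, at commit_ts=8.
order.commit(7, 8)

# A later reader hitting the account shard sees a lock, checks the primary,
# finds it already committed, rolls the shard forward, and reads the value.
print(account.read(8, shards))  # 900
```

Note that after `order.commit(7, 8)` the secondaries are still locked, yet any reader can make progress by consulting the primary's write column, which is exactly the point made above.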
Summary
There are three problems with 2PC protocol, namely, performance problems, single point of failure and data inconsistency.
The Percolator model simplifies the communication between the coordinator node and the shards: the coordinator communicates only with the shard holding the primary lock. This reduces communication overhead and, at the same time, avoids the data inconsistency that arises when some nodes fail during the commit phase.
Percolator writes logs during the prepare phase, so even if the coordinator node fails, the transaction can be recovered from the log afterwards.
Percolator uses asynchronous threads to release resources, so even if the coordinator node fails, resources will not stay locked.
The well-known NewSQL database TiDB optimizes the 2PC protocol based on the Percolator model.
But note that the performance cost of 2PC still exists; fortunately, mainstream distributed databases keep optimizing it, and the overhead keeps shrinking.