Reprinted from the WeChat official account EAWorld; reprints must credit the source.
Introduction:
Microservices advocate splitting a complex monolithic application into several loosely coupled services with simple responsibilities, which lowers development difficulty, improves scalability, and suits agile development, so the approach is embraced by more and more developers and companies. However, once a system has been split into microservices, a seemingly simple feature may need to invoke multiple services and operate on multiple databases, and the distributed transaction problems raised by service invocation become very prominent and almost unavoidable. Distributed transactions have become the biggest obstacle to putting microservices into production, and one of the most challenging technical problems. So how do we deal with them in real development? This article walks through distributed transactions in practical microservice development.
Table of contents:
1. Distributed transactions explained
2. A distributed transaction solution: servicecomb-pack
3. Distributed transactions in practice
1. Distributed transactions explained
1.1 transaction principles
Before we talk about distributed transactions, let's review transactions. Simply put, a transaction is a logical unit of work in a database management system that guarantees a group of database operations either all succeed or all fail. The properties that make this possible are the four ACID characteristics of transactions.
A. Atomicity: the transaction executes as a whole and either fully succeeds or fully fails.
C. Consistency: the transaction must move the data from one consistent state to another.
I. Isolation: when multiple transactions execute concurrently, the execution of one does not affect the others.
D. Durability: once a transaction is committed, its modifications to the data are persisted.
1.2 traditional stand-alone database transactions
In a traditional monolithic architecture, business data is usually stored in a single database, and each module in the application operates on that database directly. In this scenario, transactions are guaranteed by the ACID properties the database provides.
For example, placing an order involves a series of coordinated operations across the user, order, payment, and inventory modules. If any one of them fails, we can rely on the database's transaction support to ensure the whole order operation either succeeds or fails as a unit. Because these modules share the same database and the same transaction manager, no extra work is needed to guarantee transactional behavior.
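To make the all-or-nothing idea concrete, here is a toy, self-contained Java sketch (all names are invented for illustration; a real application would rely on the database's own transaction support rather than a hand-rolled snapshot). A failure at any step of the order flow rolls every change back:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a local database transaction: an in-memory map stands in for
// the real database, and a snapshot stands in for begin/commit/rollback.
public class LocalTransactionDemo {

    static Map<String, Integer> db = new HashMap<>();

    // Runs all steps as one unit: either every update commits or none does.
    static boolean placeOrder(int quantity, boolean paymentFails) {
        Map<String, Integer> snapshot = new HashMap<>(db); // "begin transaction"
        try {
            db.put("stock", db.get("stock") - quantity);   // update inventory
            db.put("orders", db.get("orders") + 1);        // create the order
            if (paymentFails) {
                throw new IllegalStateException("payment declined");
            }
            return true;                                   // "commit"
        } catch (RuntimeException e) {
            db = snapshot;                                 // "rollback"
            return false;
        }
    }

    public static void main(String[] args) {
        db.put("stock", 10);
        db.put("orders", 0);

        placeOrder(2, false);  // succeeds: both updates are kept
        placeOrder(3, true);   // payment fails: neither update survives

        System.out.println(db.get("stock"));   // 8
        System.out.println(db.get("orders"));  // 1
    }
}
```

This works only because one component owns all the state; the rest of the article is about what happens when the state is spread across services.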
1.3 distributed transactions for microservices
Broadly speaking, distributed transactions are still transactions, but they differ from single-database transactions. Driven by business boundaries and microservice architecture design, large business processes are split into standalone basic services, and to keep each microservice independently developed, deployed, and run, each service typically owns its own database and exposes its internal functionality as REST APIs. Operations that used to run against a single database thus become cooperative operations across multiple microservice systems. The single-database transaction model no longer applies, because multiple services mean multiple transaction managers and multiple resources: a single microservice's local transaction manager can only guarantee ACID for its own local transaction. To preserve transactional behavior across services, the microservices participating in a distributed transaction usually rely on a coordinator to perform the necessary consistency coordination.
So in actual microservice development, how do we implement such a coordinator to handle distributed transactions? The solution described here uses the servicecomb-pack framework, contributed by Huawei, to solve this problem.
2. A distributed transaction solution: servicecomb-pack
2.1 compensation mode
Before talking about servicecomb-pack, two concepts need to be understood: imperfect compensation (Saga) and perfect compensation (TCC).
Saga: imperfect compensation. For each piece of business logic we write a compensation logic; if the business logic fails, the compensation logic is executed. This compensation is a reverse operation, and it leaves its own trace. For example, in a banking system, when a customer withdraws money at an ATM, the bank first debits the user's account; if the withdrawal then fails, the banking system issues a corrective operation that credits the previously debited amount back to the account, and both operations remain visible in the transaction record.
TCC: perfect compensation. The cancel phase completely erases the effects of the preceding business operations, and the user never perceives them. For example, when initiating a trade on a trading platform, the try phase does not debit the account balance directly; it checks the user's quota and reserves the amount, and the confirm phase then actually operates on the account. If an exception occurs, the cancel phase undoes the consequences of the try phase and releases the quota it reserved. The transaction counts as complete only once confirm has finished.
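The difference between the two modes can be sketched in a few lines of self-contained Java (all names invented; a ledger list records every operation so the difference in left-over traces is visible):

```java
import java.util.ArrayList;
import java.util.List;

// Contrasts Saga's "do, then undo on failure" with TCC's
// "reserve, then confirm or cancel".
public class CompensationModesDemo {

    // Saga: the debit happens immediately; failure triggers a visible
    // corrective (reverse) operation, so both entries stay in the record.
    static List<String> sagaWithdraw(boolean dispenseFails) {
        List<String> ledger = new ArrayList<>();
        ledger.add("debit 100");             // forward operation runs for real
        if (dispenseFails) {
            ledger.add("credit 100");        // compensation: a new reverse entry
        }
        return ledger;
    }

    // TCC: try only reserves; cancel releases the reservation, leaving the
    // account as if nothing happened. Confirm makes the change real.
    static List<String> tccTransfer(boolean confirmFails) {
        List<String> ledger = new ArrayList<>();
        int reserved = 100;                  // try: check quota, hold the amount
        if (confirmFails) {
            reserved = 0;                    // cancel: release the hold, no trace
        } else {
            ledger.add("debit " + reserved); // confirm: the real account operation
        }
        return ledger;
    }

    public static void main(String[] args) {
        System.out.println(sagaWithdraw(true)); // [debit 100, credit 100]
        System.out.println(tccTransfer(true));  // []
    }
}
```

On failure, Saga leaves two visible entries while TCC leaves none, which is exactly the "imperfect" vs. "perfect" distinction above.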
2.2 servicecomb-pack
servicecomb-pack grew out of Huawei's microservice framework ServiceComb and is an open-source eventual-consistency solution for distributed transactions. The project was incubated by the Apache Software Foundation and has since graduated. Before version 0.3.0 it was called servicecomb-saga; it has now been renamed servicecomb-pack.
The servicecomb-pack architecture consists mainly of two components: Alpha and Omega.
Alpha: Alpha is a server that users compile and run. It plays the role of the distributed transaction coordinator described above. Its main job is to communicate with the Omega clients, receive the transaction events Omega sends, persist them, and coordinate sub-transaction state changes, so that the states of all sub-transactions in a global transaction stay consistent, i.e. they all complete or all fail.
Omega: the Omega side can be regarded as an agent embedded in a microservice. Its main job is to monitor the execution of the local sub-transaction and send sub-transaction execution events, together with the global transaction ID, to the alpha-server. When something goes wrong, it performs the corresponding compensation operation according to the instructions Alpha sends.
From the figure above, we can get a general idea of how servicecomb-pack works, but one question remains: how does alpha-server know that multiple sub-transactions sent by Omega belong to the same global transaction? In fact, a global transaction ID is generated when the distributed transaction starts; when a sub-transaction's service is invoked, the global transaction ID is passed along to it, and Alpha binds the sub-transaction events Omega reports to that global transaction ID and persists them to the database, forming a complete transaction call chain. Through this global transaction ID, the execution of the entire distributed transaction can be fully traced.
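The ID-propagation mechanism just described can be sketched in self-contained Java (this is a hypothetical model, not ServiceComb Pack code; every event carries the global transaction ID it belongs to, so the coordinator can group events from different services into one chain):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Toy model of Alpha's event table and the global transaction ID.
public class GlobalTxIdDemo {

    static class Event {
        final String globalTxId; // propagated from the transaction's start point
        final String localTxId;  // generated independently by each participant
        final String service;
        Event(String globalTxId, String service) {
            this.globalTxId = globalTxId;
            this.localTxId = UUID.randomUUID().toString();
            this.service = service;
        }
    }

    // stands in for the coordinator's persisted event store
    static final List<Event> eventStore = new ArrayList<>();

    static void report(String globalTxId, String service) {
        eventStore.add(new Event(globalTxId, service));
    }

    // The coordinator reconstructs a call chain by filtering on the global ID.
    static List<String> callChain(String globalTxId) {
        List<String> chain = new ArrayList<>();
        for (Event e : eventStore) {
            if (e.globalTxId.equals(globalTxId)) {
                chain.add(e.service);
            }
        }
        return chain;
    }

    public static void main(String[] args) {
        String tx1 = "gtx-1";        // generated when the distributed tx starts
        report(tx1, "order");        // each participating service passes it on
        report(tx1, "payment");
        report("gtx-2", "product");  // an unrelated transaction
        report(tx1, "stock");
        System.out.println(callChain(tx1)); // [order, payment, stock]
    }
}
```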
Omega injects the relevant processing modules into the application through aspect-oriented programming, helping us build the distributed transaction call context. It prepares the transaction at the start, for example creating the distributed transaction start event and related sub-events, and performs cleanup after execution, producing the transaction-ended or transaction-failed events according to whether execution succeeded. The benefit is that user code only needs a few annotations describing the scope of the distributed transaction and the methods used to recover local transactions; the injected code then lets Omega track local transaction execution and report it to Alpha as events. Since a single Omega cannot know how the other services participating in a distributed transaction are doing, Alpha plays the crucial coordinator role: it collates and summarizes the event information it collects, and by analyzing the relationships between these events it knows how the distributed transaction is progressing. Alpha then sends execution instructions to the Omegas, which perform the corresponding commit or recovery operations, achieving eventual consistency for the distributed transaction.
After understanding some of the details of the Pack implementation, we can further understand the relationship between the modules of Alpha and Omega under the ServiceComb Pack architecture from the following figure [1].
The architecture is divided into three parts: the Alpha coordinator, the Omega agent injected into each microservice instance, and the interaction protocol between Alpha and Omega. ServiceComb Pack currently supports both the Saga and TCC distributed transaction coordination protocols.
Omega includes the transaction annotation module (Transaction Annotation) and transaction interceptor (Transaction Interceptor), which analyze the user's distributed transaction logic; the transaction context (Transaction Context), transaction callback (Transaction Callback), and transaction executor (Transaction Executor), which handle distributed transaction execution; and the transaction transport (Transaction Transport) module, which is responsible for communicating with Alpha.
The transaction annotation module is the user-facing interface of distributed transactions. Users add these annotations to their business code to describe the information related to distributed transactions, so that Omega can process it according to the coordination requirements. If you extend distributed transactions yourself, you can also do so by defining your own transaction annotations.
In the transaction interceptor module, with the help of AOP, interception code is woven around the user-annotated methods to gather information about the distributed transaction and local transaction execution, and to communicate with Alpha through the transaction transport module.
The transaction context provides Omega with a means of transmitting transaction invocation information. With the correspondence between the global transaction ID and local transaction IDs described earlier, Alpha can easily retrieve all local transaction events belonging to a distributed transaction.
The transaction executor is mainly designed to handle transaction call timeouts. Because the connection between Alpha and Omega may be unreliable, it is hard for Alpha to tell whether an Omega local transaction timed out because of the network between them or because of Omega's own call, so a transaction executor monitors Omega's local execution and simplifies Omega's timeout handling. Currently Omega's default implementation calls the transaction method directly, and Alpha's background service determines whether a transaction has timed out by scanning the event table.
The transaction callback is registered with Alpha when Omega establishes its connection; when Alpha needs to perform a coordination operation, it directly invokes the callback Omega registered. Since microservice instances start and stop frequently in cloud environments, we cannot assume Alpha can always reach the originally registered callback. Microservice instances are therefore recommended to be stateless, so that Alpha only needs to locate a corresponding Omega by service name to communicate.
The transaction transport module is responsible for communication between Omega and Alpha. In the implementation, Pack defines the TCC and Saga transaction interaction methods, and the events involved in the interaction, through gRPC interface description files [2].
3. Distributed transactions in practice
Using servicecomb-pack in a project requires the following steps:
3.1 alpha-server configuration
3.1.1 compiling alpha-server
1. Prepare the environment
JDK 1.8
Maven 3.x
2. Get the source code
GitHub address: https://github.com/apache/servicecomb-pack
$ git clone https://github.com/apache/servicecomb-pack.git
$ git checkout 0.4.0
3. Modify the configuration file
Find alpha-server/src/main/resources/application.yaml and change the datasource settings to your local values.
4. Build alpha-server locally
$ cd servicecomb-pack
$ mvn clean install -DskipTests -Pspring-boot-2
After the command finishes, the executable jar of alpha-server can be found at alpha/alpha-server/target/saga/alpha-server-${version}-exec.jar
5. Initialize the database
Two SQL files, schema-mysql.sql and schema-postgresql.sql, can be found in the alpha/alpha-server/src/main/resources directory; run the one matching your chosen database to initialize it.
6. Start alpha-server
java -Dspring.profiles.active=prd -D"spring.datasource.url=jdbc:postgresql://${host_address}:5432/saga?useSSL=false" -jar alpha-server-${saga_version}-exec.jar
* Note: replace ${saga_version} and ${host_address} with actual values before executing the command
At this point, the alpha-server global transaction manager has started successfully.
3.1.2 Replacing PostgreSQL with MySQL
Currently alpha-server supports both the PostgreSQL and MySQL databases; the default is PostgreSQL. To switch to MySQL, do the following:
1. Install and run MySQL
2. Add the dependency to the pom file
In alpha-server/pom.xml, add the MySQL dependency:
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>8.0.15</version>
    <scope>runtime</scope>
</dependency>
3. Modify the configuration file
Find alpha-server/src/main/resources/application.yaml and change the datasource settings to your local values:
spring:
  profiles: mysql
  datasource:
    username: ${username}
    password: ${password}
    url: jdbc:mysql://${host_address}:${port}/${database_name}?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8&useSSL=false
    platform: mysql
    continue-on-error: false
    driver-class-name: com.mysql.cj.jdbc.Driver
* Note: ${username}, ${password}, ${host_address}, ${port}, and ${database_name} must be replaced with actual values
4. Build alpha-server locally (same as above)
5. Start alpha-server
java -Dspring.profiles.active=mysql -Dloader.path=./plugins -D"spring.datasource.url=jdbc:mysql://${host_address}:3306/${database_name}?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=UTF-8&useSSL=false" -jar alpha-server-${saga_version}-exec.jar
* Note: replace ${saga_version}, ${host_address}, and ${database_name} with actual values before executing the command
At this point, the alpha-server side is configured and built.
3.2 Omega configuration
After configuring alpha-server, the distributed transaction coordinator is in place; what remains is the Omega configuration, i.e. how to use servicecomb-pack to handle distributed transactions in actual development. This section works through a real case, the ordering process and the product-deletion process of a shopping system, to show how the Saga mode and the TCC mode are used respectively.
3.2.1 Environmental preparation
In this case, the shopping system uses a distributed microservice architecture split into three applications: orderManage (order management), productManage (product management), and stockManage (inventory management).
1. Add dependencies
Add the dependencies Omega needs to the pom files of all three applications:
<dependency>
    <groupId>org.apache.servicecomb.pack</groupId>
    <artifactId>omega-spring-starter</artifactId>
    <version>${servicecomb-pack.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.servicecomb.pack</groupId>
    <artifactId>omega-transport-resttemplate</artifactId>
    <version>${servicecomb-pack.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.servicecomb.pack</groupId>
    <artifactId>omega-spring-cloud-consul-starter</artifactId>
    <version>${servicecomb-pack.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.servicecomb.pack</groupId>
    <artifactId>omega-spring-cloud-eureka-starter</artifactId>
    <version>${servicecomb-pack.version}</version>
</dependency>
* Note: replace ${servicecomb-pack.version} with the actual version number (0.4.0 is recommended)
* Note: for a cluster setup, choose either omega-spring-cloud-consul-starter or omega-spring-cloud-eureka-starter, depending on the project's registry.
2. Modify the configuration file
Add the alpha-server settings to the application.yml configuration files of the three applications as follows:
# configure the alpha-server address
alpha:
  cluster:
    address: 10.15.15.172:8080
omega:
  enabled: true
Note: application.name must not be too long: the instanceId format is application.name + IP and its length is capped at 36 characters, otherwise alpha-server will report an error when persisting the transaction.
The two attributes above are required, because alpha-server locates the corresponding Omega by application name. Other application settings can be added as needed; the address should match the actual alpha-server configuration.
At this point, the environment is ready, and let's start writing the application code.
3.2.2 Coding the Saga mode
We use the ordering process to illustrate how code is written in Saga mode. The process consists of: placing the order, checking inventory, paying, and updating inventory. The order application acts as the initiating service and calls the inventory application and the product application, whose services act as participants (sub-transactions). When an order is placed, the order application uses RestTemplate to call the product application to verify product inventory, then initiates a payment request to the inventory application (sub-transaction 1); after payment succeeds, the order application sends a request to the inventory application to update the inventory (sub-transaction 2).
1. @SagaStart
First, we need to mark the boundary of the saga transaction in the application code as the starting point of the distributed transaction, so we add the @SagaStart annotation to the createOrder() method in the order application:
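The code screenshot from the original post is not available here, so below is a minimal sketch of what the annotated method might look like. The class, method, and parameter names are invented for illustration; only the @SagaStart annotation is servicecomb-pack's (verify the import path against your version of the framework):

```java
import org.apache.servicecomb.pack.omega.context.annotations.SagaStart;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    // @SagaStart marks this method as the boundary (starting point) of the
    // global saga transaction; every @Compensable sub-transaction reached
    // from here is tracked under the same global transaction ID.
    @SagaStart
    public void createOrder(String productId, int quantity) {
        // 1. call productManage via RestTemplate to verify the stock
        // 2. call the payment service (sub-transaction 1)
        // 3. call stockManage to update the inventory (sub-transaction 2)
    }
}
```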
2. @Compensable
@Compensable marks a local sub-transaction, so this annotation is added to the create-payment and update-inventory methods to mark their logic as sub-transactions, with the compensation method named in the compensationMethod attribute. Note that the compensation method must take exactly the same parameters as the local transaction method; otherwise Omega will report a "compensation method not found" error when it checks parameters at startup. Payment:
The corresponding compensation method for payment:
Update inventory:
Update the inventory compensation method:
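The original code screenshots for the payment and inventory methods did not survive; the sketch below illustrates the pattern they describe. All class, method, and parameter names are invented; only the @Compensable annotation and its compensationMethod attribute are servicecomb-pack's (check the import path against the 0.4.0 source):

```java
import org.apache.servicecomb.pack.omega.transaction.annotations.Compensable;
import org.springframework.stereotype.Service;

@Service
public class PaymentService {

    // Sub-transaction 1: the forward operation names its compensation method.
    @Compensable(compensationMethod = "cancelPayment")
    public void createPayment(String orderId, int amount) {
        // deduct the amount and record the payment
    }

    // Compensation: must take exactly the same parameters as createPayment,
    // and must be idempotent, since it may be retried.
    public void cancelPayment(String orderId, int amount) {
        // refund the amount recorded for this order
    }

    // Sub-transaction 2: updating inventory follows the same pattern.
    @Compensable(compensationMethod = "restoreStock")
    public void reduceStock(String productId, int quantity) {
        // subtract quantity from the product's stock
    }

    public void restoreStock(String productId, int quantity) {
        // add quantity back if the global transaction aborts
    }
}
```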
* Note: the annotated services and their compensation methods must be idempotent
* Note: there is no timeout by default; a timeout must be declared explicitly
* Note: if the starting point of a global transaction coincides with a sub-transaction, declare both the @SagaStart and @Compensable annotations
* Note: the compensation method's input parameters must be the same as the try method's, otherwise an error will be reported at startup (alpha-server cannot find the compensation method)
3.2.3 Coding the TCC mode
Next we use the delete-inventory process to explain how the TCC mode is coded. The process is initiated by the product application (the distributed transaction initiator), which calls the inventory application to delete the inventory information of the corresponding product (the TCC sub-transaction).
This call uses Feign, so the corresponding dependency needs to be added to the pom file of the product application:
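The dependency snippet itself is missing from the scraped text; a typical declaration is shown below. The artifact name and whether an explicit version is needed depend on your Spring Cloud release train, so treat this as an assumption to verify:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>
```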
1. @TccStart
We use the delete method in the product application as the starting point of the distributed transaction, so we add the @TccStart annotation to that method:
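The original screenshot is unavailable; here is a minimal sketch of the annotated initiator (all names invented; only @TccStart is servicecomb-pack's, and its import path should be verified against your framework version):

```java
import org.apache.servicecomb.pack.omega.context.annotations.TccStart;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    // @TccStart marks the starting point of the TCC global transaction;
    // the inventory call below participates via @Participate.
    @TccStart
    public void deleteProduct(String productId) {
        // call stockManage through the Feign client to delete the
        // product's inventory record (the TCC sub-transaction)
    }
}
```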
2. @Participate
Add this annotation to the sub-transaction method, and name the confirm and cancel methods through the confirmMethod and cancelMethod attributes. Note that the confirm and cancel methods must take the same parameters as the try method.
Confirm logic:
Cancel logic:
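The confirm and cancel screenshots are likewise missing; the sketch below shows the shape of a TCC participant. All names are invented; only the @Participate annotation with its confirmMethod and cancelMethod attributes is servicecomb-pack's (verify the import path against the 0.4.0 source):

```java
import org.apache.servicecomb.pack.omega.transaction.annotations.Participate;
import org.springframework.stereotype.Service;

@Service
public class StockService {

    // Try: reserve the deletion rather than performing it outright.
    @Participate(confirmMethod = "confirmDelete", cancelMethod = "cancelDelete")
    public void tryDelete(String productId) {
        // mark the product's inventory record as pending deletion
    }

    // Confirm: make the deletion real. Same parameters as the try method.
    public void confirmDelete(String productId) {
        // physically remove the record marked above
    }

    // Cancel: erase all effects of the try phase, as if nothing happened.
    public void cancelDelete(String productId) {
        // clear the pending-deletion mark
    }
}
```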
* Note: the input parameters of the confirm and cancel methods must be the same as the try method's
* Note: TCC mode does not currently support timeouts
3.2.4 Querying event information
By default, alpha-server uses port 8080 to handle the gRPC requests Omega initiates (transaction contexts and the like), while port 8090 serves event-information queries against Alpha.
1. Saga event query APIs
Count all events by status:
http://${alpha-server.address:port}/saga/stats
Count recent events by status:
http://${alpha-server.address:port}/saga/recent
Query the event list by event status:
http://${alpha-server.address:port}/saga/transactions
Query the distributed event list by service name:
http://${alpha-server.address:port}/saga/findTransactions
2. TCC event query APIs
TCC does not currently provide a formal query interface, but there is a test interface in AlphaTccEventController; you can modify the source code based on it and recompile.
At present alpha-server provides few event query APIs; for other needs, users can write their own APIs to query the database.
The views in this article are personal, and omissions and mistakes are inevitable. Corrections are welcome, and I hope to exchange ideas and make progress together with you.
[1] Quoted from: http://servicecomb.apache.org/cn/docs/distributed-transaction-of-services-1/
[2] Quoted from: http://servicecomb.apache.org/cn/docs/distributed-transaction-of-services-1/
Selected Q&A:
Q1: Does TCC implement strongly consistent transactions?
A: TCC can be understood as three steps: try, confirm, and cancel, and every sub-transaction must implement all three. Try prepares the business operation, confirm commits it, and cancel rolls it back. The whole transaction counts as complete only when the confirm phase has finished.
Q2: How should we understand the starting point of a global transaction coinciding with a sub-transaction?
A: For example, a business method is both the starting point of the transaction and a sub-transaction within the distributed transaction. In this case, declare both the @SagaStart and @Compensable annotations, or both @TccStart and @Participate.
Q3: How is the cancel operation done? Through undo logs or through compensation statements?
A: For both the Saga compensation logic and TCC's cancel, the alpha-server coordinator sends instructions to the Omega side; Omega finds the compensation method declared in the sub-transaction annotation via its cancel/compensation attribute, then executes the logic in that method. That logic must be implemented by you according to the business at hand, e.g. for deducted inventory, typically adding the deducted stock back. The underlying implementation can be found in the GrpcCompensateStreamObserver class in the source code.