

How to understand and master MySQL



This article explains how to understand and master MySQL. The approach introduced here is simple, fast, and practical, so interested readers may wish to follow along as the editor walks through how to understand and master MySQL.

MySQL is divided into a server layer and a storage engine layer.

Server layer

Connector: manages connections and performs permission verification

Query cache: on a cache hit, returns the cached query result directly

Parser: parses SQL syntax

Optimizer: generates the execution plan and selects indexes

Executor: operates on the index through the storage engine and returns the result

Storage engine

The storage engine is responsible for storing and retrieving data; its architecture is pluggable. InnoDB became MySQL's default storage engine in version 5.5.5.

Comparison of each storage engine:

InnoDB: supports transactions and foreign keys. InnoDB uses a clustered index, so the data file is organized around the index and a primary key is required; lookups through the primary key index are very efficient. A secondary index, however, needs two lookups: first find the primary key, then fetch the row through the primary key. Full-text indexes are not supported.

MyISAM: does not support transactions or foreign keys. MyISAM uses a non-clustered index, so the data file is separate from the index and the index holds pointers into the data file. The primary key index and secondary indexes are independent of each other, and MyISAM's query efficiency is higher than InnoDB's, so when setting up read-write separation we generally choose InnoDB as the master and MyISAM for the slaves.

Memory: has significant drawbacks and is rarely used. Table data is stored entirely in memory, so if the mysqld process crashes, or the machine is restarted or shut down, the data disappears.

The execution process of a SQL statement

Step 1: the client connects to the MySQL connector, which verifies permissions and manages the connection. Once connected, if no further commands are issued the connection sits idle, and if it stays idle for too long it is disconnected. The idle timeout defaults to 8 hours and is controlled by the wait_timeout parameter.
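A minimal sketch of checking and adjusting this setting on a running server (the default of 28800 seconds equals 8 hours; the session-level change is only an illustration):

-- Show the idle-connection timeout; 28800 seconds = 8 hours by default
SHOW VARIABLES LIKE 'wait_timeout';
-- Adjust it for the current session only
SET SESSION wait_timeout = 3600;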

Step 2: you send a SQL statement to MySQL. The query cache now checks whether the same statement has been executed before; if so, the cached result is returned to the client directly. The cache for a table is invalidated whenever that table is updated, so caching is only worth considering for tables that are rarely updated; for frequently updated tables it does more harm than good. Caching can be requested explicitly with SQL_CACHE, as in the following statement:

SELECT SQL_CACHE * FROM table WHERE xxx = xxx;

Step 3: when the cache is not hit, the parser takes over; it determines whether the statement is a SELECT, UPDATE, or INSERT and checks that the syntax is correct.

Step 4: the optimizer decides which index to use based on the table's indexes and the SQL statement, and determines the join order.

Step 5: the executor runs the SQL, calling the storage engine's interface to scan the table or to insert and update data.

MySQL log

Introduction to MySQL Log

MySQL has two important logs: the redo log and the binlog. The redo log belongs exclusively to InnoDB, while the binlog belongs to the server layer. Their purpose is this: when we update data in the database, both log files are also updated to record the update operation.

The redo log records the changes made by transaction operations, that is, the values after modification, regardless of whether the transaction has committed. It is used during restart recovery: InnoDB replays it to restore the database to the state it was in before going down, ensuring data integrity. The redo log is a physical log, recording what changes were made to a table's data, and it has a fixed size, so newer entries eventually overwrite older ones.

The binlog, also known as the archive log, records all operations that change the MySQL database, excluding statements such as SELECT and SHOW. It is a logical log, recording what operations were performed on a table, and it is append-only, so later entries never overwrite earlier ones.

Data update process

We perform an update operation as follows: read the corresponding data into memory -> update the data -> write the redo log -> mark the redo log as prepare -> write the binlog -> commit the transaction -> mark the redo log as commit, at which point the data is officially written to the log files. Notice that the redo log is committed in two stages, a "two-phase commit". The purpose is to guarantee correctness during data recovery: recovery relies on the backed-up binlog, so the redo log must be kept consistent with the binlog.
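As a minimal sketch, two standard server variables control when each of these logs is flushed to disk, which is where durability is traded against performance:

-- 1 = flush the redo log to disk at every transaction commit (the safest setting)
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- 1 = fsync the binlog at every commit; 0 leaves the timing to the operating system
SHOW VARIABLES LIKE 'sync_binlog';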

MVCC of MySQL

Transaction isolation levels are skipped here, since most readers already know them; this section only introduces the mechanism behind MySQL's transaction isolation: MVCC (multi-version concurrency control). Before learning MVCC, we first need to introduce snapshot reads and current reads.

Snapshot read and current read

A snapshot read is a plain SELECT statement such as:

SELECT * FROM table;

Under the REPEATABLE READ isolation level, a snapshot read returns data from a snapshot of the database: data that other transactions have not yet committed is not visible, and the snapshot read takes no locks on the data.

A current read places a pessimistic lock on the data it reads so that other current reads cannot modify it. The current-read statements are:

SELECT ... LOCK IN SHARE MODE
SELECT ... FOR UPDATE
INSERT
UPDATE
DELETE

The first statement places a shared lock (S lock) on the data, while the others place an exclusive lock (X lock). With an X lock, once one current read holds the lock, no other current read may read or write that data; the other current reads are blocked. With an S lock, once one current read puts an S lock on a piece of data, other current reads may also put S locks on it but not X locks, and none of the S-lock holders may change the data. Pessimistic and optimistic locking are introduced in the chapter on database concurrency.
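A minimal two-session sketch of the difference (the table and column names are illustrative):

-- Session A: take a shared (S) lock
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;

-- Session B: another S lock on the same row is still allowed...
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;
-- ...but a request for an exclusive (X) lock blocks until session A commits or rolls back
SELECT * FROM t WHERE id = 1 FOR UPDATE;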

MVCC principle

MVCC is the technique with which InnoDB implements snapshot reads alongside the pessimistic locking of current reads. When InnoDB inserts a row, two hidden columns are stored with it: one holds the system version number at the time the row was created, the other holds the system version number at which the row was deleted. Each time a new transaction starts, the system version number is incremented, and the version number at the start of the transaction serves as the transaction's ID. InnoDB updates a row by setting the deletion version number on the old row, inserting a new row and setting its creation version number, and deleting the old row later. So how is it guaranteed that snapshot reads do not see uncommitted data? Two conditions must hold:

InnoDB only returns rows whose creation version is no later than the current transaction's version, that is, the row's creation version number is less than or equal to the transaction's version number. This ensures that every row the transaction reads either existed before the transaction started or was inserted or modified by the transaction itself.

The row's deletion version number is either undefined or greater than the current transaction's version number. This ensures that the rows the transaction reads were not deleted before the transaction began. Only records satisfying both conditions are returned as query results.

The database's locking likewise compares version numbers when deciding whether an operation must block.

MySQL index

Index introduction

By data structure, indexes can be classified as hash tables, ordered arrays, search trees, and skip lists:

A hash table is suitable only for equality queries.

An ordered array suits both equality and range queries, but updating an ordered-array index is very expensive, so it is best used for static data tables.

A search tree gives stable lookup performance without large fluctuations, and when scanning sequentially along the index, the two-way pointers between nodes allow fast left-to-right traversal, which is very efficient.

A skip list can be understood as an optimized hash index.

InnoDB uses the B+ tree index model, which is a multi-way tree. Although a binary tree is the most efficient search structure in theory, the index must be written to disk, and with a binary tree the disk I/O would become far too frequent. InnoDB indexes are divided into the primary key index (the clustered index) and non-primary-key indexes (secondary indexes). The primary key index stores the entire row, while a secondary index stores only the row's primary key; so a lookup through a secondary index first finds the primary key value and then goes back to the table to fetch the data, whereas a lookup through the primary key index needs no such extra step.

For secondary indexes, covering indexes can be used to optimize SQL; compare the following two statements:

SELECT * FROM table WHERE key = 1;
SELECT id FROM table WHERE key = 1;

Here key is a secondary index. The first statement must first find the id and then go back to the table to fetch the full row, while the second can return its result directly after the index lookup, with no need to go back to the table. In the second statement the index on key already covers everything the query needs, which is what we call a covering index.
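As a sketch, a covering index can also be widened with extra columns so that a broader query avoids the table lookup (the table and column names here are illustrative; `key` is backquoted because it is a reserved word):

-- An index on (`key`, name): queries needing only `key`, name, and the primary key
-- can be answered entirely from the secondary index, with no lookup back to the table
CREATE INDEX idx_key_name ON t (`key`, name);

SELECT id, name FROM t WHERE `key` = 1;   -- covered: no table lookup
SELECT * FROM t WHERE `key` = 1;          -- not covered: must go back to the table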

General index and unique index

InnoDB reads and writes data in units of data pages. To read one record, the whole page containing it is first loaded into memory and the record is then located there, rather than being read from disk directly. The default page size is 16KB.

When a data page needs to be updated: if the page is already in memory, it is updated directly; if not, the update can still proceed without affecting data consistency, because the operation is first cached in the change buffer and applied to the page the next time a query needs to access it. Besides being applied on access, the change buffer is also written to disk by a background thread at regular intervals. For a unique index, every update must check whether it would violate the uniqueness constraint, so unique-index updates cannot use the change buffer, while ordinary-index updates can; a unique-index update therefore has one extra uniqueness check compared with an ordinary-index update.
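As a minimal sketch, the change buffer can be inspected through two standard server variables:

-- Maximum size of the change buffer as a percentage of the buffer pool (default 25)
SHOW VARIABLES LIKE 'innodb_change_buffer_max_size';
-- Which operations may be buffered (inserts, deletes, purges, or all)
SHOW VARIABLES LIKE 'innodb_change_buffering';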

Joint index

An index on two or more columns is called a composite (joint) index. A composite index reduces index overhead: taking an index on (a, b, c) as an example, building it is equivalent to building three indexes, a, ab, and abc, because MySQL uses the index columns from left to right. A query may use only part of the index, but only its leftmost part, and the index is most effective when the leftmost column is compared against a constant; this is the leftmost-prefix principle. Because of the leftmost-prefix principle, composite indexes are ordered, so which column comes first matters. Composite indexes bring another feature: index condition pushdown. Suppose there is a composite index on (a, b) and the following SQL:

SELECT * FROM table WHERE a = xxx AND b = xxx;

This statement filters twice: first find the rows where a = xxx, then among them find the rows where b = xxx. The difference index condition pushdown makes is this: without pushdown, MySQL first finds the primary keys of the rows with a = xxx, goes back to the table for each of those primary keys to fetch the full rows, and then filters those rows for b = xxx; with pushdown, MySQL finds the index entries with a = xxx, applies the b = xxx condition directly on those index entries, and only goes back to the table for the primary keys that pass both filters.
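A minimal sketch of what this looks like in practice (the table and index names are illustrative; with pushdown, EXPLAIN typically shows "Using index condition" in the Extra column):

CREATE INDEX idx_a_b ON t (a, b);

-- Without pushdown: all rows with a = 1 are fetched from the table and then filtered on b.
-- With pushdown: b = 2 is checked inside the index before any table lookup.
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;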

Features of index push-down:

For InnoDB tables, index condition pushdown can only be used on secondary indexes.

Index condition pushdown applies when the queried columns are not all covered by the composite index, the query has multiple conditions, and the columns in those conditions are all part of the composite index.

Optimizer and Index

Once indexes exist, a statement may be able to use several of them, and it is the optimizer's job to pick the right one. The optimizer's goal in choosing an index is to find an optimal execution plan and run the statement at the lowest cost. How does it decide? It favours the index that scans the fewest rows, while also weighing factors such as whether temporary tables or sorting would be needed. Before executing the SQL, MySQL does not know exactly how many records match the condition; it can only estimate from its statistics, which are gathered by sampling the data.

Other index knowledge points

Sometimes you need to index a long character column, which makes the index large, slow, and memory-hungry. Usually you can index just the first part of the string instead; this is a prefix index. It can save a lot of index space and improve index efficiency, but it also reduces the index's selectivity.
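A minimal sketch of creating a prefix index and estimating how much selectivity the prefix keeps (the users table and email column are illustrative names):

-- Index only the first 10 characters of the column
ALTER TABLE users ADD INDEX idx_email_prefix (email(10));

-- Selectivity of the prefix relative to the full column; values close to 1 mean
-- the prefix distinguishes rows almost as well as the whole string
SELECT COUNT(DISTINCT LEFT(email, 10)) / COUNT(DISTINCT email) AS prefix_selectivity
FROM users;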

Impact of dirty pages on the database:

When a page in memory differs from its copy on disk, we call it a dirty page; once the memory page has been written back to disk and the two agree, it is a clean page. When data must be read but no free memory is available, existing pages have to be evicted: clean pages can be evicted directly, while dirty pages must be flushed to disk first. If a query forces too many dirty pages to be evicted, it takes noticeably longer. To reduce the impact of dirty pages on database performance, InnoDB controls both the proportion of dirty pages and the timing of their flushing.
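As a minimal sketch, the relevant controls can be inspected through standard server variables:

-- Flushing becomes aggressive once dirty pages exceed this percentage of the buffer pool
SHOW VARIABLES LIKE 'innodb_max_dirty_pages_pct';
-- Tells InnoDB how many IOPS the disk can sustain, which paces background flushing
SHOW VARIABLES LIKE 'innodb_io_capacity';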

MySQL syntax analysis and optimization: count(*)

For InnoDB, count(*) has to read the data from disk and count the rows one by one; MyISAM, by contrast, stores a table's total row count on disk, so count(*) returns that number directly, although with a WHERE condition MyISAM has to scan just like InnoDB. So how can count(*) be optimized? One idea is to use a cache, though that raises the double-write consistency problem (covered in the caching chapter). Another is to maintain the count in a dedicated table, as sketched below.
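A minimal sketch of the counter-table idea, with the counter updated in the same transaction as the insert so the count stays consistent (the table and column names are illustrative):

CREATE TABLE row_counter (
  table_name VARCHAR(64) PRIMARY KEY,
  cnt        BIGINT NOT NULL
);

START TRANSACTION;
INSERT INTO orders (user_id, amount) VALUES (42, 9.99);
UPDATE row_counter SET cnt = cnt + 1 WHERE table_name = 'orders';
COMMIT;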

For count(primary key id), the InnoDB engine traverses the whole table, takes the id value of every row, and returns it to the server layer; the server layer checks that the id cannot be null and counts it row by row. For count(1), InnoDB traverses the table but fetches no values; for every row returned, the server layer puts in the number "1", checks that it cannot be null, and counts it row by row. Comparing just these two, count(1) executes faster than count(primary key id), because returning the id from the engine involves parsing the data row and copying the field value. For count(field): if the field is defined NOT NULL, it is read out row by row, checked to be non-null, and counted; if the field is allowed to be NULL, execution must also test each value for NULL and count only the non-null ones. count(*) is specially optimized: it takes no values at all and simply counts rows. So the efficiency ranking is:

count(*) = count(1) > count(primary key id) > count(field)

Order by

MySQL allocates each thread a piece of memory for sorting, called the sort_buffer. A SQL statement that involves sorting executes roughly as follows: allocate the sort_buffer, fetch the qualifying rows one by one, place the required column values into the sort buffer, quick-sort the data in the buffer, and return the result to the client. When there is too much data to fit in the sort buffer, temporary disk files are used to assist the sort. When the rows are too wide for the sort buffer, MySQL uses rowid sorting as an optimization: unlike full-field sorting, rowid sorting does not put all the columns into the sort_buffer, so after sorting in the sort buffer it has to go back to the table to fetch the remaining columns. In a few cases, ORDER BY can be optimized with a composite index plus a covering index.
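As a minimal sketch, the thresholds involved can be seen in two server variables (standard in MySQL 5.x; rows wider than max_length_for_sort_data fall back to rowid sorting):

-- Per-thread memory available for in-memory sorting
SHOW VARIABLES LIKE 'sort_buffer_size';
-- Rows wider than this number of bytes are sorted by rowid instead of by full fields
SHOW VARIABLES LIKE 'max_length_for_sort_data';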

Join

Before looking at joins we need the concepts of the driving table and the driven table: when two tables are joined, one acts as the driving table and the other as the driven table. The driving table is also called the outer table (R table), and the driven table is also called the inner table (S table). Generally we make the smaller table the driving table (when a join condition is specified, the table with fewer rows satisfying the condition is the driving table; when no join condition is specified, the table with fewer rows is the driving table; this is also what MySQL's internal optimizer does).

Suppose there is a SQL statement like this (xxx is indexed):

SELECT * FROM table1 LEFT JOIN table2 ON table1.xxx = table2.xxx;

This statement executes by traversing table1 and, for the xxx value in each row fetched, looking up the matching records in table2. The process resembles the nested loops we write in ordinary programs and can use the index on the driven table; it is called NLJ (nested-loop join). When xxx is not indexed, NLJ would perform a full scan of table2 for every row fetched from table1, and the number of scans would soar. In that case MySQL adopts a different strategy: it first reads table1's data into a memory area called the join_buffer, then fetches each row of table2 in turn, compares it with the rows in the join_buffer, and returns the matches as part of the result set.
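A minimal sketch of how to see which strategy is being used (the variable and the Extra text shown are the standard ones in MySQL 5.x; exact wording may vary by version):

-- Memory available per join for the block nested-loop buffer
SHOW VARIABLES LIKE 'join_buffer_size';

-- If table2.xxx is not indexed, Extra typically shows
-- "Using join buffer (Block Nested Loop)" for the driven table
EXPLAIN SELECT * FROM table1 LEFT JOIN table2 ON table1.xxx = table2.xxx;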

When using join, we should follow the following points:

Let the small table drive the big table.

Consider using a join only when the driven table's join column is indexed (that is, when the NLJ strategy applies).

Optimization of sql

1) In MySQL, if a function is applied to a column in the condition, the index on that column will not be used.

For example (data is the indexed column):

SELECT * FROM tradelog WHERE MONTH(data) = 1;

The optimizer abandons the index's search tree for such a statement because it cannot determine the range of data values the condition corresponds to.

2) implicit type conversion can cause the index to fail.

Such as the following sql:

SELECT * FROM table WHERE xxx = 110717;

Here xxx is a VARCHAR column. When MySQL compares a string with a number, the string is converted to a number first, so this condition is equivalent to using CAST(xxx AS SIGNED), and the index cannot be used.

3) If an indexed column participates in a calculation, the index will not be used.

4) LIKE '%xxx' will not use the index, while LIKE 'xxx%' will.

5) Using OR in the WHERE clause prevents index use in InnoDB, although MyISAM can still use the index.

Execution plan and slow query log

Execution plan

Add EXPLAIN in front of a query to view its execution plan, for example:

EXPLAIN SELECT * FROM table

This sql returns a table like this:

id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | extra

1 | SIMPLE | (remaining columns omitted here)

This table is the statement's execution plan, and by analyzing it we can understand how our SQL runs. The columns are explained below:

1) id: the order in which select clauses or action tables are executed in the query.

2) select_type: the type of each SELECT clause in the query, ranging from simple to complex, including:

SIMPLE: the query contains no subqueries or UNION

PRIMARY: the query contains complex subparts; the outermost SELECT is marked PRIMARY

SUBQUERY: a subquery in the SELECT list or WHERE clause is marked SUBQUERY

DERIVED: a subquery contained in the FROM clause is marked DERIVED

UNION: a SELECT that appears second or later after UNION is marked UNION

UNION RESULT: the SELECT that retrieves the result from the UNION's temporary table is marked UNION RESULT

3) type: indicates how MySQL finds the required rows in the table, also known as "access type", including:

ALL: full table scan; MySQL traverses the entire table to find matching rows

index: full index scan; it differs from ALL in that only the index tree is traversed

range: index range scan; the scan starts at some point in the index and returns rows matching the range. It is common with BETWEEN, <, > and similar conditions

ref: a non-unique index scan that returns all rows matching a single value. It is common when using a non-unique index or a non-unique prefix of a unique index

eq_ref: unique index scan; for each index key there is at most one matching record in the table. Common with primary key or unique index lookups

const and system: used when MySQL can optimize part of the query into a constant, for example when the primary key appears in the WHERE clause. system is a special case of const, used when the queried table has only one row

NULL: MySQL resolves the statement during optimization and executes it without accessing any table or index

4) possible_keys: indicates which index MySQL can use to find rows in the table. If there is an index on the field involved in the query, the index will be listed, but not necessarily used by the query.

5) key: displays the index actually used by MySQL in the query. If no index is used, it is displayed as NULL.

6) key_len: indicates the number of bytes used in the index, which can be used to calculate the length of the index used in the query.

7) ref: indicates the join matching condition of the above table, that is, which columns or constants are used to find the value on the index column.

8) rows: the number of rows MySQL estimates it must examine to execute the query.

9) Extra: other important information includes:

Using index: a covering index was used for the corresponding SELECT operation

Using where: MySQL uses the WHERE clause to filter the result set

Using temporary: MySQL needs a temporary table to store the result set, which is common in sorting and grouping queries

Using filesort: a sort that MySQL cannot perform with an index is called a "file sort"

Slow query log

MySQL supports a slow query log: statements that take too long to run are written to the log. The time threshold is specified by the long_query_time parameter, whose default value is 10, so queries running longer than 10 seconds are recorded in the slow query log. The slow query log is disabled by default and has to be enabled manually. It can write its records either to a file or to a database table.

You can check whether the slow log is enabled through the following sql:

SHOW VARIABLES LIKE '%slow_query_log%';

Enable the slow query log with:

SET GLOBAL slow_query_log = 1;

Changing the slow-log settings through SQL only affects the running instance and is lost when MySQL restarts. To make the change permanent, you must modify the my.cnf configuration file.
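A minimal my.cnf sketch for making these settings permanent (the file path and values are illustrative):

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 4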

View or modify the slow-query threshold with the following SQL:

SHOW VARIABLES LIKE 'long_query_time%';
SET GLOBAL long_query_time = 4;

Master-slave backup principle

Master-slave replication means that one server acts as the master database server and one or more servers act as slave database servers; data on the master is automatically replicated to the slaves. This lets us separate reads from writes, writing to the master and reading from the slaves, which improves the database's availability. MySQL master-slave replication involves three threads: one runs on the master node (the log dump thread) and the other two (the I/O thread and the SQL thread) run on the slave node.

Master node binlog dump thread:

When a slave node connects to the master, the master creates a log dump thread to send it the contents of the binlog. While reading an event from the binlog, this thread locks the binlog on the master; as soon as the read completes, even before the event is sent to the slave, the lock is released.

Slave node I/O thread: copies the master's binlog to the local relay log. The slave starts a worker thread, the I/O thread, which establishes an ordinary client connection with the master. Once it has caught up with the master, it goes to sleep until notified of a new event on the master, at which point it wakes up and writes the received events into the relay log.

Slave node SQL thread:

The SQL thread reads the relay log, parses its contents into concrete operations, and executes them, keeping the master and slave data consistent.
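A minimal sketch of pointing a slave at its master and checking the threads at work (the host, credentials, and binlog coordinates are placeholders):

CHANGE MASTER TO
  MASTER_HOST = '192.168.1.10',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;

-- Slave_IO_Running / Slave_SQL_Running show the two slave-side threads;
-- Seconds_Behind_Master indicates the replication delay
SHOW SLAVE STATUS\G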

Master-slave backup delay

The most direct manifestation of master-slave delay is that the slave consumes the relay log more slowly than the master produces the binlog. Possible causes include:

Large transactions: the master must wait for a transaction to finish executing before writing it to the binlog and passing it to the slave, so a long-running transaction delays the slave while it replays that transaction.

The slave is under heavy read load.

Master-slave delay is obviously undesirable, so how can it be minimized? There are several approaches:

One master, multiple slaves: attach several slave databases and let them share the read load. This suits scenarios where read pressure on the slaves is high.

Export the binlog to external systems such as Hadoop, so that those systems can serve statistical queries.

Distributed transaction

For reasons of space we will not cover the general concept of distributed transactions, but go straight to two kinds: XA distributed transactions and TCC distributed transactions.

XA distributed transaction

XA is a strongly consistent transaction committed in two phases. In MySQL 5.7.7, Oracle officially fixed a long-standing "bug" in MySQL's XA support so that its implementation conforms to the distributed transaction standard.

Roles in XA transactions:

Resource manager: manages system resources and provides access to transactional resources; a database is one kind of resource manager. A resource manager must also be able to commit or roll back its part of a transaction.

Transaction manager: the core coordinator of a distributed transaction. It communicates with each resource manager, coordinating and completing the transaction processing. Each transaction branch is identified by a unique name.

The XA specification is based on a two-phase commit protocol:

In the first phase, the transaction middleware asks all participating databases to prepare to commit (pre-commit) their transaction branches, to confirm whether each of them is able to do so. When a database receives the pre-commit request, if it can commit its branch it records its operations for that branch and replies to the middleware that it agrees to commit; from this point no further operations can be added to the branch, but the database does not actually commit yet, and its locks on shared resources are not released. If for some reason a database cannot commit its branch, it rolls back all of its operations, releases its locks on shared resources, and returns a failure response to the middleware.

In the second phase, the transaction middleware reviews the pre-commit results returned by all databases. If every database can commit, the middleware instructs them all to commit formally and the global transaction is committed. If any database's pre-commit failed, the middleware instructs all the other databases to roll back their operations and the global transaction is rolled back.

MySQL allows multiple database instances to participate in one global transaction. The MySQL XA command set is as follows:

XA START xid    -- Open a transaction and put it in the ACTIVE state; all SQL executed afterwards belongs to this transaction.
XA END xid      -- Put the transaction in the IDLE state, indicating that the SQL within the transaction is complete.
XA PREPARE xid  -- The preparatory step of the commit; the transaction enters the PREPARED state. If the pre-commit preparation fails, this statement fails.
XA COMMIT xid   -- Finally commit the transaction and make its changes durable.
XA ROLLBACK xid -- Roll back and terminate the transaction.
XA RECOVER      -- View the XA transactions in MySQL that are in the PREPARED state.

MySQL plays the role of a participant in an XA transaction and is driven by the transaction coordinator. An XA transaction has one more PREPARE state than an ordinary local transaction: an ordinary transaction goes begin -> commit, while a distributed transaction goes begin -> PREPARE, and only moves from PREPARE to commit once the other participating database transactions have also reached the PREPARE state. A distributed transaction SQL example:

XA START 'aaa';
INSERT INTO table (xxx) VALUES (xxx);
XA END 'aaa';
XA PREPARE 'aaa';
XA COMMIT 'aaa';

Problems with XA transactions:

Single point of failure: the transaction manager is critical to the whole process. If it goes down, for example after the first phase has completed or just as it is about to issue the second-phase commit, the resource managers stay blocked indefinitely and the database becomes unusable.

Synchronous blocking: once prepared, the resources held by a resource manager remain locked until the commit completes.

Data inconsistency: although the two-phase commit protocol aims at strong consistency of distributed data, inconsistency is still possible. For example, in the second phase, suppose the coordinator sends the commit notification but, because of a network problem, only some participants receive it and commit, while the rest never receive it and remain blocked; at that point the data is inconsistent.

TCC distributed transaction

TCC, also known as a flexible transaction, achieves eventual consistency through a compensation mechanism; it is not a strongly consistent transaction. TCC splits a transaction into two phases, effectively two local transactions, and has better concurrency than XA: XA is a single global transaction, while TCC is composed of two local transactions.

Suppose we buy an item and the backend needs to operate on two tables: the credit table, which gains points, and the inventory table, which loses stock. The two tables live in two different databases, and we execute the operation as a TCC transaction:

1) Phase 1 of TCC implementation: Try

In the Try phase we do not directly reduce inventory or add points; instead, we put the relevant data into a prepared state. The inventory table first locks one unit of stock, for example by reserving a lock field that is set to 1 to mean the item is locked. The points table inserts a row, locked in the same way as the inventory row. The SQL looks roughly like:

UPDATE stock SET `lock` = 1 WHERE id = 1;
INSERT INTO credits (`lock`, ...) VALUES (1, ...);

If both statements succeed, the transaction moves to the Confirm phase; if not, it moves to the Cancel phase.

2) Phase 2 of TCC implementation: Confirm

In this phase the inventory is formally deducted, the points are formally added, and the order status is changed to paid. The SQL executed here deducts the locked inventory, accumulates the points, and runs any other related logic.

3) Phase 3 of TCC implementation: Cancel

The Cancel phase runs when the Try phase fails: the locked inventory is released and the locked points record is deleted, returning everything to the state before the transaction started.
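A minimal sketch of the Confirm and Cancel statements under the same illustrative schema as the Try phase above (the quantity and order_id columns are assumptions added for the example):

-- Confirm: formally deduct the locked stock and activate the points
UPDATE stock SET quantity = quantity - 1, `lock` = 0 WHERE id = 1;
UPDATE credits SET `lock` = 0 WHERE order_id = 1;

-- Cancel: release the lock and remove the reserved points record
UPDATE stock SET `lock` = 0 WHERE id = 1;
DELETE FROM credits WHERE order_id = 1;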

The principle of TCC is simple, but it is not easy to use. First, TCC transactions are very intrusive to the system; second, they complicate the business logic. In practice we rely on TCC transaction middleware to implement them. A typical implementation looks like this: a service exposes an interface that callers invoke normally, unaware that a TCC transaction is involved; inside the service, the three interfaces Try, Confirm, and Cancel are implemented and registered with the TCC middleware, and when the service is invoked the transaction work is handled jointly by the service and the middleware.

The TCC middleware has other responsibilities too, such as guaranteeing that Confirm or Cancel eventually succeeds: if it finds that a service's Cancel or Confirm has not succeeded, it keeps retrying that Cancel or Confirm logic until it does, and if repeated attempts still fail it notifies the system so the problem can be handled manually. TCC transactions also have to handle abnormal situations; for example, if the order service suddenly crashes and restarts, the TCC distributed transaction framework must ensure that previously unfinished distributed transactions continue to execute. The framework also needs to log every stage and state of each distributed transaction, so that anomalies can be diagnosed and data recovered after the system goes live. Currently available open-source TCC frameworks include Seata, ByteTCC, tcc-transaction, and others.

At this point, I believe you have a deeper understanding of how to understand and master MySQL; you might as well try it out in practice. For more related content, check the relevant channels on this site, follow us, and keep learning!
