
What are the knowledge points of database principles?


This article mainly introduces "what are the knowledge points of database principles". Many people have questions about these topics in their daily work, so the editor has consulted various materials and organized them into simple, practical explanations. I hope it helps you answer the question "what are the knowledge points of database principles?" Now, please follow the editor and study it!

1. Transaction

(1) Definition

A transaction is a set of operations that satisfies the ACID properties: it either takes effect as a whole through Commit or is undone as a whole through Rollback.
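
A minimal sketch of this in MySQL, assuming a hypothetical table account(id, balance):

START TRANSACTION;
UPDATE account SET balance = balance - 100 WHERE id = 1;  -- transfer out
UPDATE account SET balance = balance + 100 WHERE id = 2;  -- transfer in
COMMIT;  -- make both changes take effect together
-- or, to undo both changes instead of committing:
-- ROLLBACK;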

(2) Characteristics

A. Atomicity

A transaction is treated as an indivisible minimum unit: either all of its operations are committed successfully, or all of them fail and are rolled back. Rollback can be implemented with a rollback (undo) log, which records the changes made by the transaction; during a rollback those changes are replayed in reverse.

B. Consistency

The database stays in a consistent state before and after a transaction executes, and in a consistent state all transactions read the same value for the same data item. If every update must be visible to all subsequent accesses, that is strong consistency; if it is tolerated that some or all subsequent accesses do not see the update, that is weak consistency; if the update only needs to become visible after some period of time, that is eventual consistency.

C. Isolation

Changes made by one transaction are not visible to other transactions until the transaction is finally committed.

D. Durability

Once a transaction commits, its changes are saved to the database permanently. Even if the system crashes, the results of a committed transaction must not be lost.

2. Concurrency consistency problems

(1) Lost updates

Transactions T1 and T2 both modify the same data item: T1 modifies it first, T2 modifies it afterwards, and T2's change overwrites T1's. A flight booking system makes this easy to understand: ticket office A (transaction A) reads the ticket balance A of a flight and gets A = 16; ticket office B (transaction B) reads the balance of the same flight and also gets 16; office A sells one ticket and updates the balance, A ← A − 1, so A becomes 15 and is written back to the database; office B also sells one ticket and updates the balance, A ← A − 1, so A is again 15 and is written back to the database. As a result, two tickets were sold, but the balance in the database decreased by only 1.
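
One hedged way to avoid this lost update in MySQL is to have each booking office lock the row before reading it, so the second office cannot read the stale balance; a sketch, assuming a hypothetical table ticket(flight_id, balance):

-- Office A
START TRANSACTION;
SELECT balance FROM ticket WHERE flight_id = 1 FOR UPDATE;    -- reads 16 and holds an X lock
UPDATE ticket SET balance = balance - 1 WHERE flight_id = 1;  -- balance becomes 15
COMMIT;

-- Office B: its FOR UPDATE blocks until A commits, then reads 15
START TRANSACTION;
SELECT balance FROM ticket WHERE flight_id = 1 FOR UPDATE;
UPDATE ticket SET balance = balance - 1 WHERE flight_id = 1;  -- balance ends at 14
COMMIT;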

(2) Non-repeatable reads

T2 reads a data item, T1 modifies it, and when T2 reads the same item again the result differs from the first read. In other words, the current transaction reads a data item once, and when it reads it again it gets the value that another transaction has since successfully modified, so the two reads do not match.
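
A sketch of the anomaly as it appears under the READ COMMITTED isolation level, assuming a hypothetical table t(id, val) where row id = 1 initially has val = 10:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;                -- this is T2
SELECT val FROM t WHERE id = 1;   -- returns 10
-- T1 now runs: UPDATE t SET val = 20 WHERE id = 1; COMMIT;
SELECT val FROM t WHERE id = 1;   -- returns 20: a non-repeatable read
COMMIT;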

(3) Phantom reads

T1 reads data in a certain range, T2 inserts new data into that range, and when T1 reads the range again the result differs from the first read. Put more plainly, transaction A first retrieves N rows according to some search condition; transaction B then inserts M new rows that satisfy A's search condition (or changes other rows so that they now satisfy it); when transaction A searches again it finds N + M rows, producing a phantom read. In other words, the current transaction reads fewer rows the first time than it reads later.
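
Similarly, a sketch of a phantom read under READ COMMITTED, again assuming a hypothetical table t(id, val):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;                                    -- this is T1
SELECT COUNT(*) FROM t WHERE id BETWEEN 10 AND 20;    -- returns N
-- T2 now runs: INSERT INTO t VALUES (15, 0); COMMIT;
SELECT COUNT(*) FROM t WHERE id BETWEEN 10 AND 20;    -- returns N + 1: a phantom read
COMMIT;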

3. Locking

(1) Lock granularity

MySQL provides two lock granularities: row-level locks and table-level locks. You should try to lock only the data that needs to be modified rather than all resources: the less data that is locked, the less likely lock contention becomes and the higher the system's concurrency. However, locking consumes resources, and every lock operation (acquiring a lock, releasing a lock, checking lock status) adds system overhead, so the finer the lock granularity, the greater the overhead. When choosing a lock granularity, you therefore need to trade off lock overhead against concurrency.

(2) Lock types

A. Read-write locks

An exclusive lock, abbreviated X lock, is also called a write lock; a shared lock, abbreviated S lock, is also called a read lock. There are two rules: a transaction that adds an X lock to data object A can both read and update A, and while the lock is held no other transaction can place any lock on A; a transaction that adds an S lock to data object A can read A but not update it, and while the lock is held other transactions can place S locks on A but not X locks.
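
These rules can be observed directly with InnoDB's explicit locking statements; a sketch, assuming a hypothetical table t with a row id = 1:

-- Session 1: takes an S (read) lock on the row and keeps the transaction open
START TRANSACTION;
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;

-- Session 2
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;  -- succeeds: S locks are compatible
SELECT * FROM t WHERE id = 1 FOR UPDATE;          -- blocks: an X lock must wait for the S lock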

B. Intention locks

Intention locks (Intention Locks) make multi-granularity locking easier to support. With both row-level and table-level locks in play, if transaction T wants to add an X lock to table A, it must first check whether any other transaction has locked table A or any row in table A; checking every row in table A one by one would be very time-consuming. Intention locks introduce IX and IS locks, which are table-level locks, on top of the original X and S locks, to indicate that a transaction intends to place an X lock or an S lock on some row in the table. There are two rules: before acquiring an S lock on a data row, a transaction must acquire an IS lock (or a stronger lock) on the table; before acquiring an X lock on a data row, a transaction must acquire an IX lock on the table.

(3) Locking protocols

Locking protocols fall into two categories: the three-level locking protocols and the two-phase locking protocol. MySQL's InnoDB storage engine uses the two-phase locking protocol: it acquires locks as needed according to the isolation level and releases all of them at the same time when the transaction ends, which is called implicit locking. InnoDB can also use specific statements for explicit locking:

SELECT ... LOCK IN SHARE MODE;
SELECT ... FOR UPDATE;

4. Isolation levels

To avoid lost updates, dirty reads, non-repeatable reads, and phantom reads, the standard SQL specification defines four transaction isolation levels, and different isolation levels handle concurrent transactions differently. The details are as follows:
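
The standard SQL-92 summary of which anomalies each level still permits is:

Isolation level       Dirty read      Non-repeatable read      Phantom read
Read Uncommitted      possible        possible                 possible
Read Committed        not possible    possible                 possible
Repeatable Read       not possible    not possible             possible
Serializable          not possible    not possible             not possible

(InnoDB's Repeatable Read additionally uses the next-key locks described below, so in practice it prevents many phantom reads as well.) In MySQL the level can be chosen per session or for the next transaction, for example:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- applies to the whole session
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;          -- applies to the next transaction only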

5. Multi-version concurrency control

Multi-version concurrency control (MVCC) is how MySQL's InnoDB storage engine implements two of the isolation levels: Read Committed and Repeatable Read. The Read Uncommitted level always reads the latest version of a row and does not use MVCC, while the Serializable level requires locks on every row read, which MVCC alone cannot provide. MVCC replaces row locks in most cases. In the earliest database systems only reads could run concurrently with reads; reads and writes blocked each other. After multiple versions were introduced, only writes block each other, and the other three combinations (read-read, read-write, write-read) can proceed in parallel, which greatly improves InnoDB's concurrency. The cost is that MVCC needs extra storage space for each row and more work to maintain and check row versions.
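
A sketch of MVCC at work under InnoDB's Repeatable Read level, assuming a hypothetical table t(id, val) where row id = 1 initially has val = 10; plain SELECTs are consistent (snapshot) reads and take no row locks:

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT val FROM t WHERE id = 1;   -- returns 10 and establishes the read snapshot
-- Another transaction runs: UPDATE t SET val = 20 WHERE id = 1; COMMIT;
SELECT val FROM t WHERE id = 1;   -- still returns 10, read from the old row version
COMMIT;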

6. Next-Key Lock

(1) Record Lock

Locks the index entry for a record rather than the record itself. If the table has no index defined, InnoDB automatically creates a hidden clustered index on the primary key, so a Record Lock can still be used.
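
For example, an exact-match query on the primary key takes a record lock on just that index entry (a sketch, assuming a hypothetical table t with primary key id):

START TRANSACTION;
SELECT * FROM t WHERE id = 10 FOR UPDATE;  -- locks only the index record for id = 10
COMMIT;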

(2) Gap Locks

Locks the gap between index records but not the index records themselves. For example, while one transaction is executing the following statement, other transactions cannot insert the value 15 into column c of table t.

SELECT c FROM t WHERE c BETWEEN 10 AND 20 FOR UPDATE;
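
While the transaction holding that gap lock is still open, an insert from another session into the locked gap blocks (a sketch, assuming c is an indexed column of t):

INSERT INTO t (c) VALUES (15);  -- blocks until the first transaction commits or rolls back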

(3) Next-Key Lock

It is a combination of Record Lock and Gap Lock: it locks not only the index record itself but also the gaps between index records. For example, if an index contains the values 10, 11, 13, and 20, then the following ranges need to be locked:

(-∞, 10], (10, 11], (11, 13], (13, 20], (20, +∞)
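
For example, an equality search on a non-unique index takes such locks (a sketch, assuming t.c is a non-unique index containing the values above):

START TRANSACTION;
SELECT * FROM t WHERE c = 13 FOR UPDATE;
-- next-key lock on (11, 13] plus a gap lock on (13, 20), so another session
-- cannot insert c = 12 or c = 15 until this transaction ends
COMMIT;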

At this point, the study of "what are the knowledge points of database principles" is over. I hope it has resolved your doubts. Combining theory with practice is the best way to learn, so go and try it out! If you want to continue learning more related knowledge, please keep following this site; the editor will keep working hard to bring you more practical articles!
