

What are the high-frequency MySQL interview questions?

2025-01-17 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

This article introduces high-frequency MySQL interview questions. Many people run into exactly these situations in real-world work, so let the editor walk you through how to handle them. I hope you read it carefully and get something out of it!

Tell me about the internal execution of a query statement by MySQL?

The client first connects to the MySQL server through a connector.

After the connector permission verification is passed, first query whether there is a query cache, if there is a cache (this statement has been executed before), then directly return the cached data, and if there is no cache, enter the parser.

The parser performs lexical and syntax analysis on the query statement to determine whether the SQL is valid. If the syntax is wrong, an error is returned directly to the client; if the syntax is correct, the statement enters the optimizer.

The optimizer optimizes the query statement. For example, if there are multiple indexes in a table, the optimizer will determine which index performs better.

After optimization, the statement enters the executor. The executor executes the statement, comparing rows until all the data that meets the conditions has been found and returned.

MySQL prompt "this column does not exist" is executed to which node is reported?

This error is reported in the parser/analyzer phase, because that is where MySQL checks the validity of the SQL statement, including whether the referenced columns exist.

What are the advantages and disadvantages of MySQL query caching?

MySQL query caching happens right after the connector. Its advantage is efficiency: if a result is already cached, it is returned directly. Its disadvantage is a low hit rate caused by frequent invalidation: any update to a table empties the query cache for that table, so cached entries are invalidated very easily.

How to turn off the query caching function of MySQL?

MySQL query caching is enabled by default (before 8.0). Setting the query_cache_type parameter to DEMAND (use on demand) effectively disables query caching for ordinary statements. In MySQL 8.0 the query cache feature was removed entirely.
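For pre-8.0 servers, the setting lives in the server configuration; a hedged sketch (whether it can also be changed at runtime varies by version):

```sql
-- Check the current setting (MySQL 5.7 and earlier):
show variables like 'query_cache_type';

-- In my.cnf, under [mysqld], set one of the following, then restart the server:
--   query_cache_type = DEMAND   -- cache only statements marked with SQL_CACHE
--   query_cache_type = OFF      -- disable the query cache entirely
```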

What are the common engines of MySQL?

The common engines of MySQL are InnoDB, MyISAM, Memory and so on. InnoDB has been the default storage engine since MySQL version 5.5.5.

Can MySQL set up the database engine at the table level? How to set it?

You can set different engines for different tables: use engine=EngineName (such as Memory) in the create table statement to set that table's storage engine. The complete code is as follows:

create table student (id int primary key auto_increment, username varchar(20), age int) ENGINE=Memory;

What is the difference between InnoDB and MyISAM, the commonly used storage engines?

The biggest difference between InnoDB and MyISAM is that InnoDB supports transactions, while MyISAM does not. The main differences are as follows:

InnoDB supports secure recovery after crash, while MyISAM does not support secure recovery after crash.

InnoDB supports row-level locks, while MyISAM does not; MyISAM supports only table-level locks

InnoDB supports foreign keys, but MyISAM does not support foreign keys

MyISAM generally offers higher performance than InnoDB for simple, read-heavy workloads.

MyISAM supports FULLTEXT full-text indexes, while InnoDB did not support them before MySQL 5.6; InnoDB can also be paired with the Sphinx plug-in to support full-text search, with better results.

InnoDB primary key query performance is higher than MyISAM.

What are the features of InnoDB?

1) Insert buffer: for inserts and updates of non-clustered (secondary) indexes, instead of writing each change directly into the index page, InnoDB first checks whether the target non-clustered index page is already in the buffer pool. If it is, the change is applied directly; otherwise the change is placed into an insert buffer first, as if tricking the database into believing the secondary index entry has already been inserted into its leaf node. The insert buffer is then merged into the real non-clustered index pages at a certain frequency, so multiple inserts can often be merged into a single operation, which greatly improves the performance of inserting into and modifying non-clustered indexes.

2) Doublewrite: writing twice brings reliability to InnoDB; it mainly solves partial page writes (partial page write). Doublewrite consists of two parts: an in-memory doublewrite buffer of 2MB, and 128 consecutive pages (two extents) in the shared tablespace on the physical disk, also 2MB in size. When dirty pages are flushed from the buffer pool, they are not written straight to disk; they are first copied to the in-memory doublewrite buffer via the memcpy function, then written from the doublewrite buffer in two passes of 1MB each to the doublewrite area of the shared tablespace, with the fsync function called immediately to sync to disk.

3) Adaptive hash index (adaptive hash index): InnoDB does not support user-defined hash indexes, but in some cases hash lookups are very efficient, hence the adaptive hash index feature. The InnoDB storage engine monitors index lookups on its tables and automatically builds a hash index when it observes that doing so would improve performance.

A table with an auto-increment primary key contains three rows. Two rows are deleted, the database is restarted, and then another row is inserted. What is the ID of the new row?

If the table's engine is MyISAM, ID=4; if it is InnoDB (versions prior to MySQL 8.0), ID=2.
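The scenario above can be sketched as follows (behavior annotated per engine and version; table and column names are illustrative):

```sql
create table t (id int primary key auto_increment, v int) engine=InnoDB;
insert into t (v) values (1), (2), (3);  -- rows get ids 1, 2, 3
delete from t where id in (2, 3);
-- ... restart the MySQL server here ...
insert into t (v) values (4);
-- MyISAM (counter stored in the data file):           new id = 4
-- InnoDB before 8.0 (counter rebuilt as max(id)+1):   new id = 2
-- InnoDB 8.0+ (counter persisted via the redo log):   new id = 4
```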

What will cause the self-increment primary key not to be contiguous in MySQL?

The following conditions can cause MySQL self-increment primary keys not to be contiguous:

A unique key conflict leaves a gap in the auto-increment sequence

A transaction rollback can also leave the auto-increment primary key non-contiguous.

Can self-increasing primary keys be persisted in InnoDB?

Whether the auto-increment primary key can be persisted refers to whether InnoDB can restore the auto-increment counter to its pre-restart value after MySQL restarts. Before 8.0, InnoDB could not persist it; since MySQL 8.0 the auto-increment counter is written to the redo log (a log type described in more detail below), and on restart MySQL recovers it from the redo log.

What are independent and shared tablespaces? What's the difference between them?

Shared tablespace: all table data and index files in the database are kept in one file; by default this shared tablespace file lives in the data directory. Independent tablespace: each table is stored as its own separate file. The biggest difference between them is how space is reclaimed: if a table lives in the shared tablespace, dropping the table does not shrink the shared file, so the file stays large, while with independent tablespaces, dropping the table removes its file and reclaims the space.

How to set up independent tablespaces?

Independent tablespaces are controlled by the parameter innodb_file_per_table; setting it to ON enables independent tablespaces. Since MySQL version 5.6.6 this value has defaulted to ON.
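A quick way to check and enable it (a sketch; existing tables only move into their own files after a rebuild):

```sql
show variables like 'innodb_file_per_table';
set global innodb_file_per_table = ON;
-- newly created tables now get their own .ibd file;
-- rebuild an existing table to move it: alter table t engine=InnoDB;
```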

How to shrink tablespaces?

You can shrink the table space by rebuilding the table. There are three ways to rebuild the table:

Alter table t engine=InnoDB

Optimize table t

Truncate table t

Tell me about the implementation process of rebuilding the table?

Create a temporary file and scan all the data pages of table t's primary key index

Use the records of table t in those data pages to generate a B+ tree and store it in the temporary file

During the generation of temporary files, all operations on t are recorded in a log file (row log)

After the temporary file is generated, the operations in the log file are applied to the temporary file to get a data file with the same logical data as table t

Replace the data file of table t with a temporary file.

Where is the structure information of the table stored?

Table structure definition occupies a relatively small storage space. Before MySQL 8, table structure definition information was stored in files with the suffix .frm. After MySQL 8, table structure definition information is allowed to be stored in system data tables.

What is a covering index?

A covering index means the index already contains all the columns the query needs, so there is no need to go back to the primary key index to fetch the row.
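A minimal sketch of a covering index (table, columns, and index name are illustrative):

```sql
create table student (
  id int primary key auto_increment,
  username varchar(20),
  age int,
  key idx_age_name (age, username)
);

-- Both the filter column and the selected column live in idx_age_name,
-- so InnoDB can answer from the index alone, with no primary-key lookup:
select username from student where age = 18;

-- EXPLAIN shows "Using index" in the Extra column when an index covers a query.
explain select username from student where age = 18;
```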

If you delete the primary key of an InnoDB table, does it have no primary key, and is there then no way to do back-to-table lookups?

Back-to-table lookups still work: if the primary key is deleted, InnoDB generates a hidden 6-byte row id to serve as the primary key.

After executing a update statement, I execute the hexdump command to view the contents of the ibd file directly. Why don't I see any change in the data?

Probably because after the update statement executed, InnoDB only guaranteed that the redo log and memory were written; it may not yet have flushed the data to disk.

What's the difference between a memory table and a temporary table?

A memory table is a table that uses the Memory engine; the table-creation syntax is create table … engine=memory. Its data is stored in memory and emptied when the system reboots, but the table structure remains. Apart from these two "odd" traits, it behaves like a normal table in every other respect.

Temporary tables, on the other hand, can use various engine types. If you are using the temporary table of the InnoDB engine or the MyISAM engine, the data is written to disk.

What are the problems caused by concurrent transactions?

Dirty reads

Lost updates

Non-repeatable reads

Phantom reads

What are dirty reads and phantom reads?

A dirty read means one transaction reads data that another transaction has not yet committed; a phantom read means multiple queries within the same transaction return different result sets (row records appear or disappear).

Why do phantom reads occur? What problems do they cause?

Because row locks can only lock rows that already exist and place no restriction on newly inserted rows, phantom reads can occur. The problems caused by phantom reads are as follows:

They break the semantics of row locks

They break data consistency.

How to avoid phantom reads?

Use gap locks. A gap lock is designed specifically to solve the phantom-read problem: it locks the gaps between rows and can block inserts into those gaps. Gap locks also introduce new problems, such as reduced concurrency and possible deadlocks.
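A sketch of how a gap lock blocks phantom inserts under repeatable read (table and ids are illustrative):

```sql
-- Session A:
begin;
select * from t where id between 10 and 20 for update;  -- locks the rows and the gaps between them

-- Session B (blocks until A commits, so no phantom row can appear):
-- insert into t (id) values (15);

-- Session A:
commit;
```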

How do I view free connections for MySQL?

From the MySQL command line, use show processlist; to view all connections; connections whose Command column shows Sleep are idle connections.

How to count the total number of rows after removing duplicates?

Use distinct to deduplicate and count to tally the total. The implementation script is as follows:

Select count(distinct f) from t

What does the last_insert_id() function do? What are its characteristics?

last_insert_id() queries the most recently generated auto-increment value. Its characteristic is that the query needs no table name; you simply run select last_insert_id(). Because no table is specified, it always returns the latest auto-increment value, which can be overwritten by an insert into another table. For example, if table A's largest auto-increment value is 10, last_insert_id() returns 10; if table B then inserts a row whose auto-increment value is 3, last_insert_id() now returns 3.
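The example above as a sketch (table names are hypothetical; the values in the comments are the assumed auto-increment results):

```sql
insert into table_a (name) values ('x');  -- suppose this row gets id 10
select last_insert_id();                  -- returns 10

insert into table_b (name) values ('y');  -- suppose this row gets id 3
select last_insert_id();                  -- now returns 3: the value was overwritten
```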

How many ways are there to delete data from a table? What's the difference between them?

There are two ways to delete data: delete and truncate. The differences between them are as follows:

Delete can add where conditions to delete part of the data, truncate can not add where conditions can only delete the entire table

Rows removed by delete are recorded in MySQL's logs, while rows removed by truncate are not, so data deleted with delete can be recovered, but data removed with truncate cannot.

Truncate performs faster than delete because it does not log.

The usage scripts for delete and truncate are as follows:

Delete from t where username='redis';

Truncate table t

How many fuzzy queries are supported in MySQL? What's the difference between them?

MySQL supports two kinds of fuzzy queries: like and regexp. like matches any number of characters (%) or any single character (_), while regexp supports regular-expression matching and provides more matching options than like. Examples of like and regexp:

select * from person where uname like '%SQL%';

select * from person where uname regexp '.*SQL.*';

Does MySQL support enumeration? How to achieve it? What is its use?

MySQL supports enumerations, which are implemented as follows:

Create table t (sex enum('boy','girl','unknown') default 'unknown')

An enumeration predefines the allowed values; when an inserted value is not within the enumeration's range, the insert fails with the error Data truncated for column 'xxx' at row n.

What is the difference between count(column) and count(*)?

The biggest difference between count(column) and count(*) is that their results may differ: count(column) does not count rows where the column's value is null, while count(*) counts all rows, so the final totals may not match.

Which of the following statements about count(*) is true?

A. count(*) query performance is the same under all storage engines.

B. count(*) performs worse in MyISAM than in InnoDB.

C. In InnoDB, count(*) reads rows one by one and accumulates the count.

D. InnoDB stores the total row count and returns it directly when queried.

A: C

Why doesn't InnoDB record the total number of entries and return them directly when querying?

Because InnoDB implements transactions, and its transaction design uses multi-version concurrency control, even queries issued at the same moment may see different results, so InnoDB cannot simply store one saved total: it would not be accurate.

Can I use the number of table rows in show table status as the total number of rows in the table? Why?

No. show table status estimates the row count through sampling statistics, and the official documentation says the error may be around 40%, so the row count from show table status cannot be used directly.
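A side-by-side sketch:

```sql
-- The Rows column here is a sampled estimate, not an exact count:
show table status like 't';

-- An exact count under InnoDB requires a scan:
select count(*) from t;
```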

Which of the following SQL has the highest query performance?

A. select count (*) from t where time > 1000 and time1000 and time1000 and time, 2018

No, because an operation is applied to the indexed column, which prevents the index from being used.

Can I create an index for the first 6 digits of a mobile phone number? How do I create it?

Yes, there are two ways to create it:

Alter table t add index index_phone(phone(6));

Create index index_phone on t(phone(6))

What is a prefix index?

A prefix index, also called a partial index, indexes only the leading part of a column's value, such as indexing the first 10 characters of an ID-card number.

Why use a prefix index?

A prefix index effectively reduces the size of the index file, so each index page can hold more index values, which speeds up index lookups. But prefix indexes also have drawbacks: they cannot be used for order by or group by, nor can they serve as covering indexes.

When is a prefix index appropriate?

A prefix index suits strings that may be long but whose first characters already differ; otherwise it is not suitable. For example, if the selectivity of the whole 20-character field is 0.9 but the selectivity of a 10-character prefix is only 0.5, we would have to keep lengthening the prefix, and at that point the prefix index's advantage is no longer obvious, so there is no need to create one.
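A common way to choose the prefix length is to compare selectivities first; a hedged sketch (the field name id_card and the lengths are illustrative):

```sql
select
  count(distinct left(id_card, 10)) / count(*) as sel10,
  count(distinct left(id_card, 14)) / count(*) as sel14,
  count(distinct id_card) / count(*)           as sel_full
from t;
-- Pick the shortest prefix whose selectivity is close to sel_full,
-- then: alter table t add index idx_id_card (id_card(14));
```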

What is a page?

A page is the logical block by which a computer manages memory. Hardware and operating systems often divide main memory and disk storage into contiguous blocks of equal size, each called a page, and main memory and disk exchange data in units of pages. Database designers cleverly exploit the disk read-ahead principle by setting a node's size equal to one page, so each node can be fully loaded with a single disk IO.

What are the common storage algorithms for indexes?

Hash storage: values are stored as key/value entries in an array, with the hash of the key determining each entry's position; hash collisions are resolved by chaining the colliding entries in linked lists

Ordered array storage method: stored sequentially, the advantage is that you can use dichotomy to find data quickly, but the disadvantage is update efficiency, which is suitable for static data storage.

Search tree: data is stored in a tree, giving good query performance and fast updates.

Why does InnoDB use a B+ tree rather than a B-tree, hash table, red-black tree, or binary tree?

Because B-trees, hash tables, red-black trees, and binary trees have the following problems:

B-tree: both leaf and non-leaf nodes store data, which leaves room for fewer pointers in non-leaf nodes (lower fan-out). With fewer pointers, the tree can only grow taller, causing more IO operations and lower query performance.

Hash: although it can locate a single value quickly, it keeps no order, so range access has high IO complexity

Binary tree: the height of the tree is uneven, it cannot be self-balanced, the search efficiency is related to the data (the height of the tree), and the IO cost is high.

Red-black tree: the tree's height grows with the amount of data, so the IO cost is high.

Why does InnoDB use a B+ tree to store indexes?

The B in B+Tree stands for Balance. The B+Tree optimizes the classic B-Tree by adding a sequential-access pointer to every leaf node, forming a B+Tree with sequential access pointers, which improves range-access performance: to query all records with keys from 18 to 49, once 18 is found you can traverse the leaf nodes and their pointers in order and visit all the matching data nodes in one pass, which greatly improves range-query efficiency (there is no need to climb back to parent nodes and repeat the traversal, reducing IO operations).

The index itself is also very large and cannot be stored entirely in memory, so indexes are often kept on disk as index files. Index lookups therefore incur disk IO, and IO access costs several orders of magnitude more than memory access, so the index structure should minimize the number of disk IOs during a search to improve index efficiency. In summary, only by adopting the B+ tree data structure for its indexes can InnoDB deliver good overall database performance.

Which is better, a unique index or an ordinary index?

For queries: an ordinary index and a unique index perform similarly; both search down the index tree

For updates: a unique index is slower than an ordinary index, because a unique index must first read the data into memory and verify uniqueness there before writing, so updates execute more slowly than with an ordinary index.

What factors affect which index the optimizer selects for a query?

The optimizer aims to select the execution plan with the lowest cost. The factors that influence its index choice are as follows:

The fewer rows are scanned, the less the execution cost will be and the higher the execution efficiency will be.

Whether temporary tables are used

Whether sorting is needed.

How does MySQL estimate the number of rows an index scan will touch?

MySQL derives a rough scan-row estimate from the index's statistics (cardinality), which can be viewed with the show index command; the estimated number of scanned rows is judged from this value.

How does MySQL obtain the index cardinality? Is it accurate?

MySQL's index cardinality is not exact, because it is obtained by sampling: by default InnoDB samples N data pages, counts the distinct values on those pages to get a per-page average, and multiplies it by the number of pages in the index to arrive at the cardinality.
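You can inspect and refresh these statistics directly (a sketch):

```sql
-- The Cardinality column shows the sampled estimate used for row-count judgments:
show index from t;

-- Re-sample the statistics if the estimates have drifted after heavy churn:
analyze table t;
```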

How do I make a query use a specific index?

You can use force index in MySQL to force an index choice, as in the following query statement:

Select * from t force index(index_t)

Why might an index specified with force index not take effect?

We know that force index can specify the index for a query in MySQL, but it does not always take effect, because MySQL still chooses indexes through the optimizer: if the index named in force index is among the candidate indexes, MySQL uses it directly instead of comparing estimated scan rows; if it is not among the candidates, the force index hint has no effect.

Is there any problem with the following or query? How can it be optimized?

Select * from t where num=10 or num=20

A: if using an or query causes MySQL to abandon the index and scan the full table, you can change it to:

Select * from t where num=10 union select * from t where num=20

How can the following query be optimized? The table contains these indexes:

KEY mid (mid)

KEY begintime (begintime)

KEY dg (day, group)

Query using the following SQL:

Select f from t where day='2010-12-31' and group=18 and begintime

mysql> set global transaction isolation level read committed; -- sets the global transaction isolation level to read committed

mysql> set session transaction isolation level read committed; -- sets the current session transaction isolation level to read committed

How does InnoDB start a manually committed transaction?

InnoDB auto-commits transactions by default: every SQL statement (non-select operations included) is committed as its own transaction automatically. To open a transaction manually, set autocommit=0 to disable automatic commit, which is equivalent to starting manually committed transactions.
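A sketch of a manually committed transaction via autocommit (the person table follows the example used elsewhere in this article):

```sql
set autocommit = 0;  -- disable automatic commit for this session
insert into person (uname, age) values ('laowang', 18);
-- under the default isolation level, other sessions cannot see this row yet
commit;              -- or rollback; to discard the change
set autocommit = 1;  -- restore the default behavior
```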

With autocommit=0 set in InnoDB, a row is inserted and no manual commit is performed. Can this row be queried?

autocommit=0 disables automatic transaction commit; if no manual commit follows the insert, then by default other connected clients cannot query this new row.

How do I manipulate transactions manually?

Use begin to start the transaction; rollback rolls back the transaction; commit commits the transaction. Specific examples are as follows:

Begin;

insert into person (uname, age) values ('laowang', 18);

rollback; -- or commit;

MySQL Locks

What is a lock? How many types of locks are available in MySQL?

Lock is an important means to realize database concurrency control, which can ensure that the database can run normally when multiple people operate at the same time. MySQL provides global locks, row-level locks, and table-level locks. InnoDB supports table-level locks and row-level locks, while MyISAM only supports table-level locks.

What is a deadlock?

A deadlock is a situation in which two or more processes, during execution, wait for each other because of competition over resources; without outside intervention, none of them can proceed. The system is then said to be in a deadlock state, and these processes that wait for each other forever are called deadlocked processes.

What are the common deadlock cases?

Splitting an invested sum and lending the parts to several borrowers, where the business logic locks all the borrower rows together with select * from xxx where id in (xx,xx,xx) for update; concurrent transactions locking the same rows in different orders then deadlock.

Bulk upsert: update the row if it exists, insert it if it does not. The solution is insert into tab (xx,xx) values (xx,xx) on duplicate key update xx='xx'.

How to deal with deadlocks?

Two common strategies for dealing with deadlocks:

Set a timeout via innodb_lock_wait_timeout, and simply wait until the timeout expires

Initiate deadlock detection, and after the deadlock is found, actively roll back one of the transactions in the deadlock and let other transactions continue to execute.

How do I view deadlocks?

Use the command show engine innodb status to view the most recent deadlock.

InnoDB Lock Monitor opens the lock monitoring and outputs the log every 15s. It is recommended to close it after use, otherwise it will affect the performance of the database.

How to avoid deadlocks?

To avoid deadlocks when performing multiple concurrent writes on a single InnoDB table, start the transaction by issuing a SELECT ... FOR UPDATE statement for every row (tuple) you expect to modify, obtaining the necessary locks up front, even if the statements that change those rows come later.

In a transaction, if you are going to update a record, request a lock of sufficient strength directly, that is, an exclusive lock, rather than requesting a shared lock first and upgrading to an exclusive lock at update time; by the time the upgrade is requested, other transactions may already hold a shared lock on the same record, causing lock conflicts or even deadlock

If a transaction needs to modify or lock multiple tables, lock statements should be used in the same order in each transaction. In the application, if different programs will access multiple tables concurrently, it should be agreed to access the tables in the same order as far as possible, which can greatly reduce the chance of deadlock.

After acquiring a row's read lock via SELECT ... LOCK IN SHARE MODE, if the current transaction then needs to update that record, a deadlock is likely.

Change the transaction isolation level.

How does InnoDB deal with deadlocks by default?

By default InnoDB relies on the lock-wait-timeout policy; the default innodb_lock_wait_timeout setting is 50s.

How to turn on deadlock detection?

Set innodb_deadlock_detect to on to detect deadlocks actively. In InnoDB this value defaults to on.

What is a global lock? What are its application scenarios?

A global lock locks the entire database instance; its typical use scenario is making a logical backup of the whole database. The global-lock command makes the entire instance read-only: once it is in effect, data-update statements, data-definition statements, commits of update transactions, and similar operations are all blocked.

What is a shared lock?

Shared locks, also known as read locks (read lock), are locks created by read operations. Other users can read data concurrently, but no transaction can modify the data (acquire exclusive locks on the data) until all shared locks have been released. When a transaction modifies a read lock, it is likely to cause a deadlock.

What is an exclusive lock?

An exclusive lock (exclusive lock, X lock) is also known as a write lock.

If a transaction adds an exclusive lock to a row, only that transaction can read and write the row; before the transaction ends, other transactions cannot add any lock to it. Other transactions can still read the row (via snapshot reads) but cannot write it, and must wait for the lock to be released.

Exclusive lock is an implementation of pessimistic lock, which is also introduced above.

If transaction 1 adds an X lock to data object A, transaction 1 can read A or modify A, and no other transaction can add any lock to A until transaction 1 releases its lock. This guarantees that other transactions cannot lock-read or modify A while transaction 1 holds the lock. An exclusive lock blocks all other exclusive and shared locks.

What problems can be caused by using global locks?

If the backup runs on the primary, no updates can be made during the backup, so the business's update operations stall and wait.

If the backup runs on a replica, the replica cannot apply the binlog synchronized from the primary during the backup, causing primary-replica lag.

How to deal with the situation where the entire database cannot be inserted when a logical backup occurs?

Using a global lock for a logical backup makes the whole instance read-only. Fortunately, the logical backup tool mysqldump solves this: run it with the --single-transaction parameter and it starts a transaction before exporting data, guaranteeing a consistent view, and this process still supports concurrent data updates (for transactional engines).

How do I set the database to a global read-only lock?

Using the command flush tables with read lock (FTWRL for short), you can set the database as a global read-only lock.
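A sketch of taking and releasing the global read lock:

```sql
flush tables with read lock;  -- the whole instance becomes read-only (FTWRL)
-- ... perform the logical backup here ...
unlock tables;                -- release the global read lock
-- (the lock is also released automatically if this client disconnects)
```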

Is there any other way other than that FTWRL can make the database read-only?

In addition to using FTWRL, you can use the command set global readonly=true to set the database to read-only.

What's the difference between FTWRL and set global readonly=true?

Both FTWRL and set global readonly=true set the entire database read-only, but the biggest difference between them is this: when the client that executed FTWRL disconnects, the global read lock is released and the database becomes writable again, whereas set global readonly=true keeps the database read-only until it is changed back.

How to implement a table lock?

There are two kinds of table-level locks in MySQL: table locks and metadata locks (meta data lock, MDL for short). The syntax for table locks is lock tables t read/write.

The lock can be released explicitly with unlock tables, or automatically when the client disconnects. Note that lock tables not only restricts other threads' reads and writes; it also limits which objects this thread itself may operate on next.

For InnoDB, an engine that supports row locks, the lock tables command is generally not used to control concurrency. After all, the impact of locking the entire table is still too great.

MDL: does not need to be used explicitly and is automatically added when accessing a table.

The function of MDL: to ensure the correctness of reading and writing.

An MDL read lock is taken when you run DML on a table (SELECT, INSERT, UPDATE, DELETE); an MDL write lock is taken when you change the table's structure (DDL).

MDL read locks are not mutually exclusive with each other; read locks and write locks are mutually exclusive, as are write locks with each other, which guarantees the safety of table-structure changes.

MDL is not released until the transaction commits, so when changing a table's structure, be careful not to leave online queries and updates blocked behind the DDL's lock request.
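One way to reduce that risk is to bound how long the DDL will wait for the MDL write lock via the lock_wait_timeout variable (in seconds); the column definition here is only an example:

```sql
-- Give up after 3 seconds instead of queueing indefinitely behind
-- long-running transactions (and blocking everything queued after us).
SET SESSION lock_wait_timeout = 3;

ALTER TABLE t ADD COLUMN note VARCHAR(64);
-- If the MDL write lock cannot be acquired within 3 seconds, the ALTER
-- fails with a lock wait timeout and can be retried later.
```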

What's the difference between a pessimistic lock and an optimistic lock?

A pessimistic lock, as the name implies, assumes the worst: every time it reads data it assumes someone else will modify it, so it locks the data on every access, and other threads that want the data block until the lock is released. Because of this, pessimistic locking costs a lot of time. In contrast to optimistic locks, pessimistic locks are implemented by the database itself; when we want to use one, we simply issue the corresponding database statements.

Pessimistic locking involves two further lock concepts: shared locks and exclusive locks. They are different implementations of pessimistic locking, and both belong to its category.

Optimistic locking is most commonly implemented with a data version (version) recording mechanism. What is a data version? It means attaching a version identifier to the data, usually by adding a numeric version field to the database table. When reading a row, the version value is read out along with the data, and every update of the data increments the version by 1. When submitting an update, we compare the version currently recorded in the table with the version we read out the first time: if they are equal, the update proceeds; otherwise the data is considered stale and the update is rejected.

For example, suppose table t has three fields: id, value, and version.

1. Read the row together with its version: select id, value, version from t where id = #{id}

2. Every update of the value field must carry the version check to avoid conflicts: update t set value = 2, version = version + 1 where id = #{id} and version = #{version}

What are the advantages and disadvantages of optimistic locks?

Because it takes no locks, the advantage of optimistic locking is high execution performance. Its disadvantage is that it may suffer from the ABA problem: a variable V holds value A when first read, and still holds A when we are ready to write, so we wrongly conclude that it was never modified; in fact, during that interval it may have been changed to some other value and then back to A. This is called the ABA problem.
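Putting the version-column flow together in concrete SQL (table and column names follow the earlier example; the literal values are illustrative):

```sql
-- 1. Read the row together with its current version
SELECT id, value, version FROM t WHERE id = 1;
-- suppose this returns version = 5

-- 2. Update only if nobody has changed the row in the meantime
UPDATE t
SET value = 2, version = version + 1
WHERE id = 1 AND version = 5;
-- If the affected-row count is 0, the data was stale:
-- re-read the row and retry the whole sequence.
```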

How many locking algorithms does the InnoDB storage engine have?

Record Lock: a lock on a single row record.

Gap Lock: a gap lock that locks a range between index records, excluding the records themselves.

Next-Key Lock: a combination of the two that locks a range, including the record itself.
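As a sketch of how the three combine under the default REPEATABLE READ isolation level (table and column names are illustrative; c is assumed to have a secondary index):

```sql
-- Range query with FOR UPDATE takes next-key locks
SELECT * FROM t WHERE c BETWEEN 10 AND 20 FOR UPDATE;
-- This locks the matching index records (record locks) plus the gaps
-- between and around them (gap locks), so concurrent inserts into the
-- locked range block; record lock + gap lock together form the
-- next-key lock.
```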

How does InnoDB implement row locks?

Row-level locks are the finest-grained locks in MySQL, and they can greatly reduce conflicts between database operations.

There are two row-level locks in InnoDB: shared locks (S locks) and exclusive locks (X locks). A shared lock allows a transaction to read a row while preventing other transactions from modifying it (other transactions may still take shared locks on the same row). An exclusive lock allows the current transaction to update or delete a row; other transactions can then take neither a shared nor an exclusive lock on that row.

Shared lock: SELECT ... LOCK IN SHARE MODE. MySQL adds a shared lock to every row in the query result set, provided that no other thread currently holds an exclusive lock on any row in the result set; otherwise the statement blocks.

Exclusive lock: select * from t where id=1 for update, where the id field must be indexed. MySQL adds an exclusive lock to every row in the query result set. In transaction operations, any update or delete of a record automatically takes an exclusive lock. The precondition is that no other thread currently holds an exclusive or shared lock on any row in the result set; otherwise the statement blocks.
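The two row-lock statements can be seen side by side (id must be an indexed column, otherwise InnoDB has to lock every row it scans):

```sql
-- Session A: shared lock; other sessions may still read the row
-- and take their own shared locks on it
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;

-- Session B: requesting an exclusive lock on the same row blocks
-- until session A's transaction commits or rolls back
SELECT * FROM t WHERE id = 1 FOR UPDATE;
```

MySQL 8.0 also accepts FOR SHARE as a synonym for LOCK IN SHARE MODE.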

Do you have any suggestions on optimizing locks?

Try to use a low isolation level.

Carefully design the index and use the index to access the data as far as possible to make the locking more accurate, thus reducing the chance of lock conflict.

Choose a reasonable transaction size; small transactions are less prone to lock conflicts.

When explicitly locking a record set, it is best to request a lock of sufficient strength in one step. For example, if you intend to modify the data, apply for an exclusive lock directly rather than acquiring a shared lock first and upgrading it to an exclusive lock at modification time, which can easily lead to deadlock.

When different programs access a set of tables, they should try to agree to access the tables in the same order, and for a table, access the rows in the table in a fixed order as much as possible. This greatly reduces the chances of deadlocks.

Try to access data with equality conditions, so as to avoid the effect of gap locks on concurrent inserts.

Do not apply for more than the lock level that is actually needed.

Do not lock explicitly in queries unless you have to. MySQL's MVCC lets transactions perform unlocked reads, which improves transaction performance; note that MVCC only works at the READ COMMITTED and REPEATABLE READ isolation levels.

For some specific transactions, table locks can be used to improve processing speed or reduce the possibility of deadlocks.

This is the end of the content of "what are the topics of high-frequency MySQL". Thank you for your reading. If you want to know more about the industry, you can follow the website, the editor will output more high-quality practical articles for you!
