How should you learn MySQL? Many beginners are at a loss when facing it. This article summarizes the common problem areas and how to work through them; hopefully it answers your questions.
MyISAM vs. InnoDB
- Foreign keys: MyISAM does not support them; InnoDB does.
- Transactions: MyISAM does not support them; InnoDB does.
- Row/table locks: MyISAM uses table locks, so modifying even one record locks the whole table, which is unsuitable for highly concurrent operation; InnoDB uses row locks, so an operation locks only the affected row and does not affect other rows, which suits high concurrency.
- Caching: MyISAM caches only indexes, not data; InnoDB caches both indexes and real data, so the amount of memory has a large impact on performance.
- Tablespace: MyISAM tables are small; InnoDB tables are larger.
- Focus: MyISAM is geared toward performance; InnoDB is geared toward transactions.
- Default installation: both are available by default.
Reasons why SQL performance degrades and queries run slowly:
The query is poorly written
Index failure
Too many join associated queries (design flaws or compelling requirements)
Server tuning and parameter settings (buffering, thread parameters)
MySQL execution order
Humans write a query starting with SELECT, but the machine reads (executes) it starting with FROM.
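A compact way to remember this is to compare the written order with the logical execution order. The sketch below is general SQL behavior, shown on the example tables used later in this article:
-- Written (human) order:
--   SELECT ... FROM ... JOIN ... ON ... WHERE ... GROUP BY ... HAVING ... ORDER BY ... LIMIT ...
-- Logical (machine) execution order:
--   FROM -> ON -> JOIN -> WHERE -> GROUP BY -> HAVING -> SELECT -> DISTINCT -> ORDER BY -> LIMIT
select e.deptId, count(*) as cnt      -- 7. select
from tbl_emp e                        -- 1. from
join tbl_dept d on e.deptId = d.id    -- 2. on, 3. join
where e.id > 0                        -- 4. where
group by e.deptId                     -- 5. group by
having count(*) > 1                   -- 6. having
order by cnt desc                     -- 8. order by
limit 10;                             -- 9. limit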
SQL Joins
Table a
mysql> select * from tbl_dept;
+----+----------+--------+
| id | deptName | locAdd |
+----+----------+--------+
|  1 | RD       | 11     |
|  2 | HR       | 12     |
|  3 | MK       | 13     |
|  4 | MIS      | 14     |
|  5 | FD       | 15     |
+----+----------+--------+
5 rows in set (0.00 sec)
Table b
+----+------+--------+
| id | name | deptId |
+----+------+--------+
|  1 | Z3   |      1 |
|  2 | Z4   |      1 |
|  3 | Z5   |      1 |
|  4 | w5   |      2 |
|  5 | w6   |      2 |
|  6 | S7   |      3 |
|  7 | S8   |      4 |
|  8 | S9   |     51 |
+----+------+--------+
8 rows in set (0.00 sec)
MySQL does not support FULL OUTER JOIN directly.
A full join can be emulated in the following way:
mysql> select * from tbl_dept a right join tbl_emp b on a.id = b.deptId
    -> union
    -> select * from tbl_dept a left join tbl_emp b on a.id = b.deptId;
+------+----------+--------+------+------+--------+
| id   | deptName | locAdd | id   | name | deptId |
+------+----------+--------+------+------+--------+
|    1 | RD       | 11     |    1 | Z3   |      1 |
|    1 | RD       | 11     |    2 | Z4   |      1 |
|    1 | RD       | 11     |    3 | Z5   |      1 |
|    2 | HR       | 12     |    4 | W5   |      2 |
|    2 | HR       | 12     |    5 | W6   |      2 |
|    3 | MK       | 13     |    6 | S7   |      3 |
|    4 | MIS      | 14     |    7 | S8   |      4 |
| NULL | NULL     | NULL   |    8 | S9   |     51 |
|    5 | FD       | 15     | NULL | NULL |   NULL |
+------+----------+--------+------+------+--------+
9 rows in set (0.00 sec)
Rows unique to a and to b (records in each table with no match in the other):
mysql> select * from tbl_dept a left join tbl_emp b on a.id = b.deptId where b.id is null
    -> union
    -> select * from tbl_dept a right join tbl_emp b on a.id = b.deptId where a.id is null;
+------+----------+--------+------+------+--------+
| id   | deptName | locAdd | id   | name | deptId |
+------+----------+--------+------+------+--------+
|    5 | FD       | 15     | NULL | NULL |   NULL |
| NULL | NULL     | NULL   |    8 | S9   |     51 |
+------+----------+--------+------+------+--------+
2 rows in set (0.01 sec)
Index
Definition of the index:
An index is a data structure that helps MySQL obtain data efficiently. The essence of an index is a data structure.
It can be simply understood as: a sorted, fast-lookup data structure.
In addition to the data itself, the database system maintains data structures that satisfy specific lookup algorithms; these structures reference the data in some way so that advanced lookup algorithms can run on top of them. Such a data structure is an index.
Generally speaking, an index is itself large, so indexes are often stored on disk in index files rather than kept entirely in memory.
Unless otherwise specified, the index we usually talk about refers to an index organized as a B-tree (a multi-way search tree, not necessarily binary).
Clustered indexes, secondary indexes, composite indexes, prefix indexes and unique indexes all use the B+ tree structure by default and are collectively referred to as indexes. Besides this type of index, there are also hash indexes.
Pros and cons of indexes
1. Advantages
Similar to a library catalogue of book numbers, an index improves the efficiency of data retrieval and reduces the database's IO cost.
Sorting data through the index reduces the cost of sorting and lowers CPU consumption.
2. Disadvantages
In fact, an index is also a table: it holds the primary key and the indexed fields and points to the records of the actual table, so index columns also take up space.
Although an index greatly improves query speed, it slows down updates to the table (UPDATE, INSERT, DELETE), because when updating the table MySQL must save not only the data but also the index file, and adjust the index information whenever indexed key values change.
An index is only one factor in improving efficiency. On a table with a large amount of data you need to build the best possible indexes and write good query statements, not just add an index and expect efficiency to improve.
Index classification
Single-valued index
Unique index
Composite index
Basic syntax:
Create:
  create [unique] index indexName on mytable (columnName(length));
  alter table mytable add [unique] index [indexName] (columnName(length));
Delete:
  drop index [indexName] on mytable;
View:
  show index from table_name\G
There are four ways to add an index to a table (a sketch follows below).
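The four ways are usually listed as the following ALTER TABLE variants; they are shown here on a hypothetical table tbl with column col, so adjust the names to your schema:
alter table tbl add primary key (col);            -- 1. primary key: values must be unique and not NULL
alter table tbl add unique idx_name (col);        -- 2. unique index: values must be unique (NULL allowed)
alter table tbl add index idx_name (col);         -- 3. ordinary index: duplicate values allowed
alter table tbl add fulltext idx_name (col);      -- 4. full-text index, for text columns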
Mysql index structure
BTree index
Hash index
Full-text index
R-Tree
Which cases call for building an index?
The primary key automatically establishes a unique index
Fields that are frequently used as query criteria should be indexed
The fields associated with other tables in the query are indexed by foreign key relationships.
Fields that are updated frequently are not suitable for index creation, because each update updates not only the record but also the index.
Do not create indexes for fields that are not used in where conditions.
Choosing between a single-column and a composite index: under high concurrency a composite index is usually preferred.
Fields used for sorting in queries: if sorting is done through the index, sorting speed improves greatly.
Fields used for statistics or grouping in queries.
Under what circumstances should you not build an index?
Tables with very few records.
Tables with frequent DML operations (insert, update, delete).
Columns whose values are heavily duplicated and evenly distributed: index only the columns most frequently used for querying and sorting, and note that an index on a column containing many duplicate values has little practical effect.
Performance analysis.
EXPLAIN key points
What can EXPLAIN tell you?
The reading order of the tables
The access type of each data read operation
Which indexes can be used
Which indexes are actually used
References between tables
How many rows per table are queried by the optimizer
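Usage is simply to prefix the statement with EXPLAIN; a minimal sketch with the tables from the join section above (the exact column values depend on your data and indexes):
explain select * from tbl_emp e left join tbl_dept d on e.deptId = d.id\G
-- the vertical (\G) output lists one field per line:
--   id, select_type, table, type, possible_keys, key, key_len, ref, rows, Extra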
Three cases of id
The id is the same and the execution order is from top to bottom
Id is different. If it is a subquery, the id sequence number is incremented, and the larger the id, the higher the priority.
Ids that are both the same and different coexist: rows with the same id execute from top to bottom, and across different ids the larger id has higher priority.
Select_type
SIMPLE simple query
PRIMARY main query (outermost query)
SUBQUERY subquery
DERIVED: a subquery in the FROM clause whose result is placed in a temporary (derived) table
UNION: a SELECT that is part of a UNION
UNION RESULT: the result of the UNION
Type:
Type shows the order of access types, which is a more important indicator.
From the best to the worst:
System > const > eq_ref > ref > range > index > ALL
In general, you need to ensure that the query is at least range level, preferably ref
Type values in detail:
system: the table has only one row (a system table). This is a special case of const; it rarely appears and can be ignored.
const: the row is found with at most one index lookup; const is used when a primary key or unique index is compared with a constant. Because only one row matches, it is very fast. If you place the primary key in the where list, MySQL can convert that part of the query into a constant.
eq_ref: unique index scan; for each row from the preceding table, at most one matching row is read from this table. It commonly appears when a primary key or unique index is used in a join (if two tables are many-to-one or one-to-one and the joined table is the "one" side, its access type is eq_ref).
ref: non-unique index scan that returns all rows matching a single value. It is essentially an index access that may find several rows satisfying the condition, so it is a combination of lookup and scan.
range: only rows in a given range are retrieved, using an index to select them. The key column shows which index is used. It typically appears when BETWEEN, <, >, IN and the like are used in the where clause. A range scan over an index is better than a full table scan.
index: the difference between index and ALL is that index traverses only the index tree; the index file is usually smaller than the data file.
ALL: full table scan
Other columns:
possible_keys: the indexes that could in theory be applied.
key: the index actually used. If a covering index is used in the query, that index appears only in key.
key_len: the number of bytes used in the index; as short as possible without losing precision. The value shown is the maximum possible length of the index fields, not the length actually used; key_len is calculated from the table definition, not retrieved from the table.
Example: key_len = 13 for a char(4) utf8 column, because char(4) × 3 bytes per utf8 character + 1 byte for allowing NULL = 13.
ref: shows which columns or constants are used to look up values on the index column, a constant (const) where possible.
rows: based on table statistics and index selection, a rough estimate of how many rows must be read to find the required records.
Example: query tables t1 and t2 before any index is created, joining t1 to t2 on id with a condition on t2's col1 (for example col1 = 'ac').
For the id field, t1 is one-to-many with respect to t2.
Before the index exists on col1, t1's type is eq_ref (unique index scan, only one record matches it), and t2 is looked up through its id: t1 reads 1 row while t2 reads 640 rows.
After the index is established
After the index, t1 reads 1 row and t2 reads 142 rows. t2's type is ref (a non-unique index scan returning all rows that match a single value, i.e. all of t2's rows whose col matches the id), while t1 has only one row per id, so its type remains eq_ref.
Extra
Contains important information that is not suitable for presentation in other columns
\G: display the result set vertically (one column per line)
Using filesort: MySQL sorts the data with an external sort instead of reading rows in index order. Any sort that cannot be completed through the index is called a file sort. For example, with a composite index built on (col1, col2, col3), sorting directly by col3 skips a column, and MySQL has no choice but to filesort.
Using temporary: a temporary table is used to hold intermediate results; MySQL uses one when sorting or grouping query results, commonly with ORDER BY and GROUP BY. For example, with a composite index on (col1, col2), grouping directly by col2 forces MySQL to filesort and build a temporary table.
Using index: the corresponding SELECT uses a covering index, so the data rows of the table are not accessed. If Using where appears at the same time, the index is used to look up key values; without Using where, the index is only used to read data, not to perform lookups.
Using where indicates that where filtering is used
Using join buffer: the join buffer (join cache) was used.
Impossible WHERE: the where clause is always false; no rows can be selected.
Select tables optimized away: without a GROUP BY clause, MIN/MAX is resolved through the index, or COUNT(*) is resolved by the MyISAM storage engine, without executing the query; the optimization is completed while the execution plan is generated.
Distinct: the distinct operation is optimized; MySQL stops searching for more rows with the same value as soon as the first matching row is found.
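As a quick illustration of the two "bad" Extra values above, the following sketch assumes a hypothetical table t with a composite index on (col1, col2, col3); the comments describe the Extra output you would typically see:
create index idx_t_c123 on t (col1, col2, col3);
explain select * from t where col1 = 'a' order by col3;  -- col2 skipped: Extra typically shows Using filesort
explain select col2 from t group by col2;                -- grouping not on the index prefix: typically Using temporary; Using filesort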
Case
Index optimization: single-table optimization

CREATE TABLE IF NOT EXISTS article (
  id INT(10) UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
  author_id INT(10) UNSIGNED NOT NULL,
  category_id INT(10) UNSIGNED NOT NULL,
  views INT(10) UNSIGNED NOT NULL,
  comments INT(10) UNSIGNED NOT NULL,
  title VARBINARY(255) NOT NULL,
  content TEXT NOT NULL
);

INSERT INTO article (author_id, category_id, views, comments, title, content)
VALUES (1, 1, 1, 1, '1', '1'), (2, 2, 2, 2, '2', '2'), (1, 1, 3, 3, '3', '3');

SELECT * FROM article;

mysql> select id, author_id from article where category_id = 1 and comments > 1 order by views desc limit 1;
+----+-----------+
| id | author_id |
+----+-----------+
|  3 |         1 |
+----+-----------+
1 row in set (0.00 sec)

mysql> explain select author_id from article where category_id = 1 and comments > 1 order by views desc limit 1;
+----+-------------+---------+------+---------------+------+---------+------+------+----------+-----------------------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                       |
+----+-------------+---------+------+---------------+------+---------+------+------+----------+-----------------------------+
|  1 | SIMPLE      | article | ALL  | NULL          | NULL | NULL    | NULL |    3 |    33.33 | Using where; Using filesort |
+----+-------------+---------+------+---------------+------+---------+------+------+----------+-----------------------------+
1 row in set, 1 warning (0.00 sec)
The query returns a result, but type is ALL and Extra shows Using filesort, which indicates the query is very inefficient.
Need to optimize
Build an index
create index idx_article_ccv on article (category_id, comments, views);
Query
mysql> explain select author_id from article where category_id = 1 and comments > 1 order by views desc limit 1;
+----+-------------+---------+-------+-----------------+-----------------+---------+------+------+----------+---------------------------------------+
| id | select_type | table   | type  | possible_keys   | key             | key_len | ref  | rows | filtered | Extra                                 |
+----+-------------+---------+-------+-----------------+-----------------+---------+------+------+----------+---------------------------------------+
|  1 | SIMPLE      | article | range | idx_article_ccv | idx_article_ccv | 8       | NULL |    1 |   100.00 | Using index condition; Using filesort |
+----+-------------+---------+-------+-----------------+-----------------+---------+------+------+----------+---------------------------------------+
1 row in set, 1 warning (0.00 sec)
Here type has become range: the full-table scan has turned into a range scan, which is some improvement.
But Extra still shows Using filesort, so the index optimization is not yet successful.
So we delete the index.
drop index idx_article_ccv on article;
Create a new index that leaves the range-filtered column (comments) out:
create index idx_article_cv on article (category_id, views);

mysql> explain select author_id from article where category_id = 1 and comments > 1 order by views desc limit 1;
+----+-------------+---------+------+----------------+----------------+---------+-------+------+----------+-------------+
| id | select_type | table   | type | possible_keys  | key            | key_len | ref   | rows | filtered | Extra       |
+----+-------------+---------+------+----------------+----------------+---------+-------+------+----------+-------------+
|  1 | SIMPLE      | article | ref  | idx_article_cv | idx_article_cv | 4       | const |    2 |    33.33 | Using where |
+----+-------------+---------+------+----------------+----------------+---------+-------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

This time the optimization succeeds: type has become ref and Extra is just Using where.

As an additional experiment, it also works to keep comments in the index as long as it is placed at the end:

mysql> create index idx_article_cvc on article (category_id, views, comments);
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> explain select author_id from article where category_id = 1 and comments > 1 order by views desc limit 1;
+----+-------------+---------+------+-----------------+-----------------+---------+-------+------+----------+-------------+
| id | select_type | table   | type | possible_keys   | key             | key_len | ref   | rows | filtered | Extra       |
+----+-------------+---------+------+-----------------+-----------------+---------+-------+------+----------+-------------+
|  1 | SIMPLE      | article | ref  | idx_article_cvc | idx_article_cvc | 4       | const |    2 |    33.33 | Using where |
+----+-------------+---------+------+-----------------+-----------------+---------+-------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

Type is still ref and Extra is still Using where; nothing changed except the position of the range-filtered column (comments), which was moved to the end of the index.
Two-table optimization

CREATE TABLE IF NOT EXISTS class (
  id INT(10) UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
  card INT(10) UNSIGNED NOT NULL
);

CREATE TABLE IF NOT EXISTS book (
  bookid INT(10) UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
  card INT(10) UNSIGNED NOT NULL
);

-- insert 20 random rows into each table (each statement below is repeated 20 times)
INSERT INTO class (card) VALUES (FLOOR(1 + (RAND() * 20)));
INSERT INTO book (card) VALUES (FLOOR(1 + (RAND() * 20)));

mysql> create index Y on book (card);

mysql> explain select * from book left join class on book.card = class.card;
+----+-------------+-------+-------+---------------+------+---------+------+------+----------+----------------------------------------------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | filtered | Extra                                              |
+----+-------------+-------+-------+---------------+------+---------+------+------+----------+----------------------------------------------------+
|  1 | SIMPLE      | book  | index | NULL          | Y    | 4       | NULL |   20 |   100.00 | Using index                                        |
|  1 | SIMPLE      | class | ALL   | NULL          | NULL | NULL    | NULL |   20 |   100.00 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+-------+-------+---------------+------+---------+------+------+----------+----------------------------------------------------+
2 rows in set, 1 warning (0.00 sec)
You will find this makes little difference: with a LEFT JOIN the left table must be read in full anyway, so an index is only really useful on the right table.
Conversely, with a RIGHT JOIN the index must be on the left table to be useful.
Index the right table
create index Y on class (card);

mysql> explain select * from book left join class on book.card = class.card;
+----+-------------+-------+-------+---------------+------+---------+----------------+------+----------+-------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref            | rows | filtered | Extra       |
+----+-------------+-------+-------+---------------+------+---------+----------------+------+----------+-------------+
|  1 | SIMPLE      | book  | index | NULL          | Y    | 4       | NULL           |   20 |   100.00 | Using index |
|  1 | SIMPLE      | class | ref   | Y             | Y    | 4       | db01.book.card |    1 |   100.00 | Using index |
+----+-------------+-------+-------+---------------+------+---------+----------------+------+----------+-------------+
2 rows in set, 1 warning (0.00 sec)
You will find that each matching row of the right table is now fetched with a single index lookup: its type is ref.
CREATE TABLE IF NOT EXISTS phone (
  phoneid INT(10) UNSIGNED NOT NULL PRIMARY KEY AUTO_INCREMENT,
  card INT(10) UNSIGNED NOT NULL
) ENGINE = INNODB;

-- insert 20 random rows (the statement below is repeated 20 times)
INSERT INTO phone (card) VALUES (FLOOR(1 + (RAND() * 20)));
Delete all indexes first
drop index Y on book;
drop index Y on class;

mysql> explain select * from class left join book on class.card = book.card left join phone on book.card = phone.card;
+----+-------------+-------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                                              |
+----+-------------+-------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
|  1 | SIMPLE      | class | ALL  | NULL          | NULL | NULL    | NULL |   20 |   100.00 | NULL                                               |
|  1 | SIMPLE      | book  | ALL  | NULL          | NULL | NULL    | NULL |   20 |   100.00 | Using where; Using join buffer (Block Nested Loop) |
|  1 | SIMPLE      | phone | ALL  | NULL          | NULL | NULL    | NULL |   20 |   100.00 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+-------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
3 rows in set, 1 warning (0.00 sec)
Build an index
create index y on book (card);
create index z on phone (card);

mysql> explain select * from class left join book on class.card = book.card left join phone on book.card = phone.card;
+----+-------------+-------+------+---------------+------+---------+-----------------+------+----------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref             | rows | filtered | Extra       |
+----+-------------+-------+------+---------------+------+---------+-----------------+------+----------+-------------+
|  1 | SIMPLE      | class | ALL  | NULL          | NULL | NULL    | NULL            |   20 |   100.00 | NULL        |
|  1 | SIMPLE      | book  | ref  | y             | y    | 4       | db01.class.card |    1 |   100.00 | Using index |
|  1 | SIMPLE      | phone | ref  | z             | z    | 4       | db01.book.card  |    1 |   100.00 | Using index |
+----+-------------+-------+------+---------------+------+---------+-----------------+------+----------+-------------+
3 rows in set, 1 warning (0.00 sec)
You will find the indexes work very well. But the leftmost table of the LEFT JOIN chain must still be read in full.
create index x on class (card);

mysql> explain select * from class left join book on class.card = book.card left join phone on book.card = phone.card;
+----+-------------+-------+-------+---------------+------+---------+-----------------+------+----------+-------------+
| id | select_type | table | type  | possible_keys | key  | key_len | ref             | rows | filtered | Extra       |
+----+-------------+-------+-------+---------------+------+---------+-----------------+------+----------+-------------+
|  1 | SIMPLE      | class | index | NULL          | x    | 4       | NULL            |   20 |   100.00 | Using index |
|  1 | SIMPLE      | book  | ref   | y             | y    | 4       | db01.class.card |    1 |   100.00 | Using index |
|  1 | SIMPLE      | phone | ref   | z             | z    | 4       | db01.book.card  |    1 |   100.00 | Using index |
+----+-------------+-------+-------+---------------+------+---------+-----------------+------+----------+-------------+
3 rows in set, 1 warning (0.00 sec)
The result is essentially the same: the leftmost table still reads all 20 rows, only now it does so through the index.
Create a table
CREATE TABLE staffs (
  id INT PRIMARY KEY AUTO_INCREMENT,
  `name` VARCHAR(24) NOT NULL DEFAULT '' COMMENT 'name',
  `age` INT NOT NULL DEFAULT 0 COMMENT 'age',
  `pos` VARCHAR(20) NOT NULL DEFAULT '' COMMENT 'position',
  `add_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'entry time'
) CHARSET utf8 COMMENT 'employee record table';

INSERT INTO staffs (`name`, `age`, `pos`, `add_time`) VALUES ('z3', 22, 'manager', NOW());
INSERT INTO staffs (`name`, `age`, `pos`, `add_time`) VALUES ('july', 23, 'dev', NOW());
INSERT INTO staffs (`name`, `age`, `pos`, `add_time`) VALUES ('2000', 23, 'dev', NOW());

-- build the composite index
ALTER TABLE staffs ADD INDEX index_staffs_nameAgePos (`name`, `age`, `pos`);
1. The leading column must not be missing, and no middle column may be skipped: with a composite index you must start from the leading (leftmost) column, must not skip a middle column and jump straight to a later one, and can only use a later column if every column before it is also present. (The order of the conditions inside the where clause does not matter, but you cannot skip a middle column and still use the columns after it.) This applies to where conditions.
Any query that skips name cannot use the index, as the EXPLAIN examples below show.
mysql> explain select * from staffs where name = 'july';
+----+-------------+--------+------+-------------------------+-------------------------+---------+-------+------+----------+-------+
| id | select_type | table  | type | possible_keys           | key                     | key_len | ref   | rows | filtered | Extra |
+----+-------------+--------+------+-------------------------+-------------------------+---------+-------+------+----------+-------+
|  1 | SIMPLE      | staffs | ref  | index_staffs_nameAgePos | index_staffs_nameAgePos | 74      | const |    1 |   100.00 | NULL  |
+----+-------------+--------+------+-------------------------+-------------------------+---------+-------+------+----------+-------+
1 row in set, 1 warning (0.00 sec)

mysql> explain select * from staffs where name = 'july' and pos = 'dev';
+----+-------------+--------+------+-------------------------+-------------------------+---------+-------+------+----------+-----------------------+
| id | select_type | table  | type | possible_keys           | key                     | key_len | ref   | rows | filtered | Extra                 |
+----+-------------+--------+------+-------------------------+-------------------------+---------+-------+------+----------+-----------------------+
|  1 | SIMPLE      | staffs | ref  | index_staffs_nameAgePos | index_staffs_nameAgePos | 74      | const |    1 |    33.33 | Using index condition |
+----+-------------+--------+------+-------------------------+-------------------------+---------+-------+------+----------+-----------------------+
1 row in set, 1 warning (0.00 sec)
You can see that skipping the middle column (age) leaves key_len unchanged at 74, the length of name alone, which proves the pos part of the index is not used.
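To see the other half of the rule, here is a sketch against the same staffs table (the commented result is what you would typically observe; exact numbers depend on your data): if the leading column name is skipped entirely, the composite index cannot be used at all.
explain select * from staffs where age = 23 and pos = 'dev';
-- expected result: type = ALL, key = NULL (full table scan, the index is not used)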
2. Do not perform operations on index columns (calculations, functions, type conversions, etc.); doing so causes the index to fail.
3. The storage engine cannot use the columns to the right of the first range condition in the index.
4. Try to use covering indexes (queries that access only the index) and reduce the use of select *.
5. Avoid != (<>), IS NULL and IS NOT NULL where possible; they can prevent the index from being used.
6. LIKE patterns starting with '%' cause the index to fail (a covering index can avoid the failure). Covering index: the index contains the queried fields, in as close to the queried order as possible.
7. A string written without single quotes causes the index to fail (MySQL implicitly casts the value, which invalidates the index).
8. Avoid OR; joining conditions with it can cause the index to fail. An illustration of rules 6 and 7 follows below.
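A short sketch of rules 6 and 7 against the staffs table above; the commented results are what you would typically observe, and they can vary with data volume and MySQL version:
explain select * from staffs where name like '%july%';    -- leading %: index fails, type ALL
explain select * from staffs where name like 'july%';     -- constant prefix: range scan, index used
explain select name from staffs where name like '%july%'; -- covering index: type index, Using index
explain select * from staffs where name = 2000;            -- unquoted string: implicit cast, index fails
explain select * from staffs where name = '2000';          -- quoted: type ref, index used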
Index case
Suppose index(a, b, c).
Y means the index is fully used and N means it is not used.
- where a=3 and c=5: uses a but not c (the middle column b is broken)
- where a=3 and b=4 and c=5: Y
- where a=3 and c=5 and b=4: Y (MySQL automatically reorders the conditions)
- where a=3 and b>4 and c=5: a and b are used; c after the range on b is not
- where a=3 and b like 'k%' and c=5: Y (LIKE with a constant prefix keeps the index usable)
- where b=3 and c=4: N
- where a=3 and c>5 and b=4: Y (MySQL reorders the conditions, so the range column c ends up last and nothing after it is invalidated)
- where c=5 and b=4 and a=2: Y (MySQL automatically reorders the conditions)
- where c=5 and b=4 and a=3: Y (MySQL automatically reorders the conditions)
Suppose index(a, b, c, d).
create table test03 (
  id int primary key not null auto_increment,
  a int(10),
  b int(10),
  c int(10),
  d int(10)
);

insert into test03 (a, b, c, d) values (3, 4, 5, 6);  -- repeated a few times with sample values

create index idx_test03_abcd on test03 (a, b, c, d);
- where a=3 and b>4 and c=5: uses a and b; the index columns after the range on b are invalid
- where a=3 and b=4 and d=6 order by c: actually uses a and b; c is also used for the sort (its use does not show in the key statistics), so no filesort
- where a=3 and b=4 order by c: uses a and b; c is used for the sort
- where a=3 and b=4 order by d: uses a and b; skipping c causes Using filesort
- where a=3 and d=6 order by b, c: uses a; b and c are used for the sort
- where a=3 and d=6 order by c, b: uses a; skipping b causes Using filesort
- where a=3 and b=4 order by b, c: Y, fully used
- where a=3 and b=4 order by c, b: uses a and b and does not produce Using filesort, because b is already fixed to a constant by the where clause before sorting; ordering by c, b is then equivalent to ordering by c alone (the same as the third case above)
GROUP BY is stricter still: it groups first and then sorts, so replacing ORDER BY with GROUP BY in the cases above produces similar effects and can additionally generate Using temporary.
ORDER BY index optimization (conditions and the resulting Extra, assuming index(a, b, c)):
- where a > 4 order by a: Using where; Using index
- where a > 4 order by a, b: Using where; Using index
- where a > 4 order by b: Using where; Using index; Using filesort (order by does not start with the leading index column)
- where a > 4 order by b, a: Using where; Using index; Using filesort (the leading column does not come first after order by)
- where a = const order by b, c: if where fixes the leftmost prefix of the index to a constant, order by can use the rest of the index
- where a = const and b = const order by c: the leftmost prefix is constant, so order by can use the index
- where a = const and b > 3 order by b, c: Using where; Using index
- order by a asc, b desc, c desc: Using filesort (the sort directions are inconsistent)
EXISTS

select a.* from A a where exists (select 1 from B b where a.id = b.id);

The query above uses EXISTS. The exists() subquery is executed A.length times, and its result set is not cached, because the contents of the result set do not matter: what matters is whether it contains any row at all. If it does, the outer row is kept (true); if not, it is discarded (false). The query process is similar to the following pseudocode:

List resultSet = [];
Array A = (select * from A);
for (int i = 0; i < A.length; i++) {
    if (exists(A[i].id)) {    // i.e. select 1 from B b where b.id = A[i].id
        resultSet.add(A[i]);
    }
}
return resultSet;

Show Profile

mysql> show variables like 'profiling';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| profiling     | OFF   |
+---------------+-------+
1 row in set (0.00 sec)

mysql> set profiling=on;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show variables like 'profiling';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| profiling     | ON    |
+---------------+-------+
1 row in set (0.01 sec)
Write a few insert statements at random.
Displays the speed of the query operation statement
mysql> show profiles;
+----------+------------+-----------------------------------------------------------------+
| Query_ID | Duration   | Query                                                           |
+----------+------------+-----------------------------------------------------------------+
|        1 | 0.00125325 | show variables like 'profiling'                                 |
|        2 | 0.00018850 | select * from dept                                              |
|        3 | 0.00016825 | select * from tb1_emp e inner join tbl_dept d on e.deptId=d.id  |
|        4 | 0.00023900 | show tables                                                     |
|        5 | 0.00031125 | select * from tbl_emp e inner join tbl_dept d on e.deptId=d.id  |
|        6 | 0.00024775 | select * from tbl_emp e inner join tbl_dept d on e.deptId=d.id  |
|        7 | 0.00023725 | select * from tbl_emp e inner join tbl_dept d on e.deptId=d.id  |
|        8 | 0.00023825 | select * from tbl_emp e left join tbl_dept d on e.deptId=d.id   |
|        9 | 0.35058075 | select * from emp group by id limit 15000                       |
|       10 | 0.35542250 | select * from emp group by id limit 15000                       |
|       11 | 0.00024550 | select * from tbl_emp e left join tbl_dept d on e.deptId=d.id   |
|       12 | 0.36441850 | select * from emp group by id%20 order by 5                     |
+----------+------------+-----------------------------------------------------------------+
12 rows in set, 1 warning (0.00 sec)
Display the query process sql lifecycle
mysql> show profile cpu, block io for query 3;
+----------------------+----------+----------+------------+--------------+---------------+
| Status               | Duration | CPU_user | CPU_system | Block_ops_in | Block_ops_out |
+----------------------+----------+----------+------------+--------------+---------------+
| starting             | 0.000062 | 0.000040 | 0.000021   |            0 |             0 |
| checking permissions | 0.000004 | 0.000003 | 0.000001   |            0 |             0 |
| checking permissions | 0.000015 | 0.000006 | 0.000003   |            0 |             0 |
| Opening tables       | 0.000059 | 0.000039 | 0.000020   |            0 |             0 |
| query end            | 0.000004 | 0.000002 | 0.000001   |            0 |             0 |
| closing tables       | 0.000002 | 0.000001 | 0.000000   |            0 |             0 |
| freeing items        | 0.000014 | 0.000010 | 0.000005   |            0 |             0 |
| cleaning up          | 0.000009 | 0.000006 | 0.000003   |            0 |             0 |
+----------------------+----------+----------+------------+--------------+---------------+
8 rows in set, 1 warning (0.00 sec)

mysql> show profile cpu, block io for query 12;
+----------------------+----------+----------+------------+--------------+---------------+
| Status               | Duration | CPU_user | CPU_system | Block_ops_in | Block_ops_out |
+----------------------+----------+----------+------------+--------------+---------------+
| starting             | 0.000063 | 0.000042 | 0.000021   |            0 |             0 |
| checking permissions | 0.000006 | 0.000003 | 0.000002   |            0 |             0 |
| Opening tables       | 0.000013 | 0.000009 | 0.000004   |            0 |             0 |
| init                 | 0.000028 | 0.000017 | 0.000008   |            0 |             0 |
| System lock          | 0.000007 | 0.000004 | 0.000002   |            0 |             0 |
| optimizing           | 0.000004 | 0.000002 | 0.000002   |            0 |             0 |
| statistics           | 0.000014 | 0.000010 | 0.000004   |            0 |             0 |
| preparing            | 0.000008 | 0.000005 | 0.000003   |            0 |             0 |
| Creating tmp table   | 0.000028 | 0.000018 | 0.000009   |            0 |             0 |
| Sorting result       | 0.000003 | 0.000002 | 0.000001   |            0 |             0 |
| executing            | 0.000002 | 0.000002 | 0.000001   |            0 |             0 |
| Sending data         | 0.364132 | 0.360529 | 0.002426   |            0 |             0 |
| Creating sort index  | 0.000053 | 0.000034 | 0.000017   |            0 |             0 |
| end                  | 0.000004 | 0.000002 | 0.000002   |            0 |             0 |
| query end            | 0.000007 | 0.000005 | 0.000002   |            0 |             0 |
| removing tmp table   | 0.000005 | 0.000003 | 0.000002   |            0 |             0 |
| query end            | 0.000003 | 0.000002 | 0.000001   |            0 |             0 |
| closing tables       | 0.000006 | 0.000004 | 0.000002   |            0 |             0 |
| freeing items        | 0.000023 | 0.000016 | 0.000007   |            0 |             0 |
| cleaning up          | 0.000012 | 0.000007 | 0.000004   |            0 |             0 |
+----------------------+----------+----------+------------+--------------+---------------+
20 rows in set, 1 warning (0.00 sec)
Statuses such as converting HEAP to MyISAM, Creating tmp table, Copying to tmp table on disk and locked are warning signs; if any of them appears in the profile, the query statement needs to be optimized.
Global query log

set global general_log=1;
set global log_output='TABLE';
After that, the sql statements you write will be recorded in the general_log table in the mysql library, which can be viewed with the following command
mysql> select * from mysql.general_log;
+----------------------------+---------------------------+-----------+-----------+--------------+----------------------------------+
| event_time                 | user_host                 | thread_id | server_id | command_type | argument                         |
+----------------------------+---------------------------+-----------+-----------+--------------+----------------------------------+
| 2021-12-06 11:53:53.457242 | root[root] @ localhost [] |        68 |         1 | Query        | select * from mysql.general_log  |
+----------------------------+---------------------------+-----------+-----------+--------------+----------------------------------+
1 row in set (0.00 sec)

MySQL locks
Read lock (shared lock): for the same data, multiple reads can be performed simultaneously without affecting each other.
Write lock (exclusive lock): when the current write operation is not completed, it blocks other write and read locks
Row lock: favored by the InnoDB engine; high overhead, slow to acquire, deadlocks can occur; the smallest lock granularity, the lowest probability of lock conflicts, and the highest concurrency.
Table lock: favored by the MyISAM engine; low overhead, fast to acquire, no deadlocks; large lock granularity, the highest probability of lock conflicts, and the lowest concurrency.
Test the table lock below
use big_data;

create table mylock (
  id int not null primary key auto_increment,
  name varchar(20) default ''
) engine myisam;

insert into mylock (name) values ('a');
insert into mylock (name) values ('b');
insert into mylock (name) values ('c');
insert into mylock (name) values ('d');
insert into mylock (name) values ('e');

select * from mylock;

Lock commands:
lock table mylock read, book write;   -- read lock on mylock, write lock on book
show open tables;                      -- shows which tables are locked
unlock tables;                         -- releases the table locks

Table lock: read lock (the locked table cannot be modified)

mysql> lock table mylock read;   ##1
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mylock;     ##1
+----+------+
| id | name |
+----+------+
|  1 | a    |
|  2 | b    |
|  3 | c    |
|  4 | d    |
|  5 | e    |
+----+------+
5 rows in set (0.00 sec)

mysql> update mylock set name='a2' where id=1;   ##1
ERROR 1099 (HY000): Table 'mylock' was locked with a READ lock and can't be updated
-- the session holding the read lock cannot modify the locked table

mysql> select * from book;   ##1
ERROR 1100 (HY000): Table 'book' was not locked with LOCK TABLES
-- the session holding the lock cannot read other (unlocked) tables either
To distinguish the two sessions, ##1 marks operations in the original mysql terminal and ##2 marks operations in a new mysql terminal.
Create a new mysql terminal command operation
-- in the new mysql terminal:
mysql> update mylock set name='a3' where id=1;   ##2
You will find that this operation blocks.
Unlock the original mysql command terminal
mysql> unlock tables;   ##1
Query OK, 0 rows affected (0.00 sec)

-- in terminal 2, the blocked update now completes:
Query OK, 1 row affected (2 min 1.46 sec)   ##2
Rows matched: 1  Changed: 1  Warnings: 0   ##2
You will find it was blocked for more than two minutes.
Summary for a read lock on table mylock:
1. Query operations: the current client (terminal 1) can still query mylock, and other clients (terminal 2) can also query it.
2. DML operations (insert, delete, update): the current client fails immediately with ERROR 1099 (HY000): Table 'mylock' was locked with a READ lock and can't be updated; other clients block until the current session releases the lock.
Table lock: write lock

mysql> lock table mylock write;
Query OK, 0 rows affected (0.00 sec)

-- the current session holds a write lock on mylock and can still modify and read it:
mysql> update mylock set name='a4' where id=1;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> select * from mylock;
(the query succeeds)

-- but it cannot touch other tables:
mysql> select * from book;
ERROR 1100 (HY000): Table 'book' was not locked with LOCK TABLES
You will find that you cannot manipulate other tables, but you can manipulate locked tables.
Open a new client to test the locked table
mysql> select * from mylock;   ##2
5 rows in set (2 min 30.92 sec)
You will find that any operation from the new client (insert, delete, update or select) on the table locked by the write lock blocks.
Analyzing table locks
mysql> show status like 'table%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Table_locks_immediate      | 194   |
| Table_locks_waited         | 0     |
| Table_open_cache_hits      | 18    |
| Table_open_cache_misses    | 2     |
| Table_open_cache_overflows | 0     |
+----------------------------+-------+
5 rows in set (0.00 sec)
Row lock
Row locking mode of InnoDB
InnoDB implements the following two types of row locks.
Shared lock (S), also called a read lock: multiple transactions can share the lock on the same data; all of them can read it, but none can modify it.
Exclusive lock (X), also called a write lock: it cannot coexist with other locks. If a transaction acquires an exclusive lock on a row, no other transaction can acquire any other lock on that row, shared or exclusive; the transaction holding the exclusive lock can both read and modify the data.
For UPDATE, DELETE, and INSERT statements, InnoDB automatically adds an exclusive lock (X) to the dataset involved
For ordinary SELECT statements, InnoDB does not add any locks
You can add a shared lock or an exclusive lock to the recordset through the following statement.
Shared lock (S):    SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE;
Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE;
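A minimal sketch of how the exclusive form is typically used inside a transaction (assuming a hypothetical InnoDB table t with an indexed id column):
start transaction;
select * from t where id = 1 for update;   -- takes an exclusive (X) lock on the matching row
-- other sessions that try to update or lock id = 1 now block
update t set name = 'new' where id = 1;
commit;                                    -- commits and releases the row lock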
Since row locks work together with transactions, here is a brief review.
Transactions
A transaction is a logical unit of work composed of a group of SQL statements. A transaction has four properties, abbreviated ACID:
Atomicity: a transaction is an atomic operation unit that either performs all or none of the operations on the data.
Consistency: data must be in a consistent state both when a transaction begins and when it completes. This means all related data must reflect the transaction's modifications to preserve data integrity, and at the end of the transaction all internal data structures (such as B-tree indexes or doubly linked lists) must also be correct.
Isolation: the database provides an isolation mechanism to ensure that transactions run in an "independent" environment unaffected by external concurrent operations. This means the intermediate states of a transaction are invisible to the outside, and vice versa.
Durability: after a transaction completes, its changes to the data are permanent and are preserved even in the event of a system failure.
Problems caused by concurrent transactions:
Lost updates, dirty reads, non-repeatable reads, phantom reads.
To restate the ACID properties briefly: Atomicity — a transaction is an atomic unit of work whose modifications to the data either all succeed or all fail. Consistency — data must be in a consistent state when the transaction begins and when it completes. Isolation — the database provides an isolation mechanism so that a transaction runs in an "independent" environment unaffected by concurrent operations. Durability — once the transaction completes, its modifications to the data are permanent.
Problems caused by concurrent transaction processing
Problems and their meaning:
- Lost update (Lost Update): when two or more transactions select the same row, the value written by an earlier transaction is overwritten by the value written by a later transaction.
- Dirty read (Dirty Reads): one transaction is modifying data that has not yet been committed to the database, while another transaction reads and then uses that data.
- Non-repeatable read (Non-Repeatable Reads): a transaction re-reads data it has read before and finds it is no longer consistent with what it read earlier.
- Phantom read (Phantom Reads): a transaction re-runs a query with the same conditions and finds new rows inserted by other transactions that now match its criteria.
Transaction isolation level
In order to solve the transaction concurrency problem mentioned above, the database provides a certain transaction isolation mechanism to solve this problem. The stricter the transaction isolation of the database, the less the side effects, but the greater the cost, because transaction isolation is essentially the use of transaction "serialization" to a certain extent, which is obviously contradictory to "concurrency".
There are four database isolation levels, from low to high: Read uncommitted, Read committed, Repeatable read and Serializable. They progressively eliminate dirty writes, dirty reads, non-repeatable reads and phantom reads.
Isolation level            | Dirty write | Dirty read | Non-repeatable read | Phantom read
Read uncommitted           | ×           | √          | √                   | √
Read committed             | ×           | ×          | √                   | √
Repeatable read (default)  | ×           | ×          | ×                   | √
Serializable               | ×           | ×          | ×                   | ×
Note: √ means the problem may occur; × means it cannot.
The default isolation level for Mysql's database is Repeatable read, which can be viewed as follows:
show variables like 'tx_isolation';
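Note that the variable name depends on the server version; the sketch below shows both spellings (the second applies to MySQL 8.0 and later, where tx_isolation was removed):
show variables like 'tx_isolation';           -- MySQL 5.7 and earlier
show variables like 'transaction_isolation';  -- MySQL 8.0 and later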
Row lock test: table creation and case preparation
create table test_innodb_lock (
  id int(11),
  name varchar(16),
  sex varchar(1)
) engine = innodb default charset=utf8;

-- sample rows, one insert per row
insert into test_innodb_lock values (1, '100', '1');
insert into test_innodb_lock values (3, '300', '1');
insert into test_innodb_lock values (4, '400', '0');
insert into test_innodb_lock values (5, '500', '1');
insert into test_innodb_lock values (6, '600', '0');
insert into test_innodb_lock values (7, '700', '3');
insert into test_innodb_lock values (8, '800', '1');
insert into test_innodb_lock values (9, '900', '2');
insert into test_innodb_lock values (1, '200', '0');

create index idx_test_innodb_lock_id on test_innodb_lock (id);
create index idx_test_innodb_lock_name on test_innodb_lock (name);

Row lock test
Again open two terminals and turn off automatic transaction commit in both, because autocommit would otherwise acquire and release the lock automatically for every statement.

-- run in both sessions:
mysql> set autocommit=0;
Update the row in the left (first) session:

mysql> update test_innodb_lock set name='100' where id=3;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0

If you query the same row from the right (second) session, the change is not visible: the first session has not committed yet.

Now try to update the same row from the right session:

mysql> update test_innodb_lock set name='340' where id=3;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

The right session blocks until the first session releases the row lock by committing the transaction (commit), or until the lock wait times out as shown above.
With the InnoDB engine, a DML operation on a row of data adds an exclusive lock to that row.
Other transactions cannot modify that row, but they can still operate on the data of other rows, as the sketch below illustrates.
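A minimal two-session sketch of that behavior (autocommit off in both sessions, using the test_innodb_lock table above):
-- session 1
update test_innodb_lock set name='a1' where id=3;   -- X lock on the row with id = 3
-- session 2
update test_innodb_lock set name='b1' where id=4;   -- different row: succeeds immediately
update test_innodb_lock set name='b2' where id=3;   -- same row: blocks until session 1 commits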
Row locks upgrade to table locks when the index fails: if InnoDB cannot retrieve the data through an index condition, it locks every record in the table, with the same effect as a table lock.
So remember to use indexes in your conditions: when an InnoDB index fails, the row lock is upgraded to a table lock.
Mysql > update test_innodb_lock set sex='2' where name=400; Query OK, 0 rows affected (0.00 sec) Rows matched: 2 Changed: 0 Warnings: 0
Notice that name is written here without single quotes, which makes the index fail.
Mysql > update test_innodb_lock set sex='3' where id=3; Query OK, 1 row affected (23.20 sec) Rows matched: 1 Changed: 1 Warnings: 0
You will find that operations on other rows also block: because the index failed, the row lock has been upgraded to a table lock.
Only one row of data was meant to be locked, but forgetting the single quotes around the name value invalidated the index, and the whole table was locked.
Gap lock
When we use a range condition rather than an equality condition to retrieve data and request a shared or exclusive lock, key values inside that range that do not correspond to existing records are called gaps, and InnoDB locks those gaps as well. This is called a gap lock.
mysql> select * from test_innodb_lock;
+------+------+------+
| id   | name | sex  |
+------+------+------+
|    1 | 100  | 2    |
|    3 | 100  | 3    |
|    4 | 400  | 0    |
|    5 | 500  | 1    |
|    6 | 600  | 0    |
|    7 | 700  | 3    |
|    8 | 800  | 1    |
|    9 | 900  | 2    |
|    1 | 200  | 0    |
+------+------+------+
There is no row with id = 2.
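A sketch of the gap lock in action with the data above (two sessions, autocommit off; the blocking behavior described assumes the default Repeatable Read isolation level):
-- session 1: range condition that covers the missing id = 2
update test_innodb_lock set name='x' where id > 1 and id < 4;
-- session 2: inserting into the locked gap blocks until session 1 commits
insert into test_innodb_lock values (2, '200', '0');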
Checking row lock contention
mysql> show status like 'innodb_row_lock%';
+-------------------------------+--------+
| Variable_name                 | Value  |
+-------------------------------+--------+
| Innodb_row_lock_current_waits | 0      |
| Innodb_row_lock_time          | 284387 |
| Innodb_row_lock_time_avg      | 21875  |
| Innodb_row_lock_time_max      | 51003  |
| Innodb_row_lock_waits         | 13     |
+-------------------------------+--------+
5 rows in set (0.00 sec)

Innodb_row_lock_current_waits: the number of lock waits currently pending
Innodb_row_lock_time: the total lock wait time since the server started
Innodb_row_lock_time_avg: the average time spent per wait
Innodb_row_lock_time_max: the longest single wait since the server started
Innodb_row_lock_waits: the total number of waits since the server started

Row lock summary
Because the InnoDB storage engine implements row-level locking, its locking mechanism may cost more than table locks, but in overall concurrent processing capacity it is far ahead of MyISAM's table locking. When system concurrency is high, InnoDB's overall performance has a clear advantage over MyISAM.
However, InnoDB's row-level locking also has a fragile side: when used improperly, InnoDB's overall performance can be not just no better than MyISAM's, but even worse.
Optimization recommendations:
Wherever possible, retrieve data through an index so that row locks are not upgraded to table locks.
Design indexes sensibly to minimize the scope of each lock.
Use index conditions and ranges that are as narrow as possible to avoid gap locks.
Try to control the transaction size and reduce the amount of locked resources and time length.
Use the lowest transaction isolation level the business requirements allow.