2025-04-01 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/01 Report --
This article analyzes the topics of this competition purely from a personal point of view, drawing on some personal experience; it is meant for learning and exchange.
The official git link:
https://github.com/DBbrain/Diagnosis
The data volume of the order table: 2000

CREATE TABLE `order` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(32) COLLATE utf8_bin NOT NULL,
  `creator` varchar(24) COLLATE utf8_bin NOT NULL,
  `price` varchar(64) COLLATE utf8_bin NOT NULL,
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `status` tinyint(1) NOT NULL,
  PRIMARY KEY (`id`)
);

The data volume of the order_item table: 499760

CREATE TABLE `order_item` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(32) COLLATE utf8_bin NOT NULL,
  `parent` bigint(20) NOT NULL,
  `status` int(11) NOT NULL,
  `type` varchar(12) COLLATE utf8_bin NOT NULL DEFAULT '0',
  `quantity` int(11) NOT NULL DEFAULT '1',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
);

1. Select analysis

SELECT * FROM `order` o INNER JOIN order_item i ON i.parent = o.id
ORDER BY o.status ASC, i.update_time DESC LIMIT 0, 20;

# execution plan
mysql> explain SELECT * FROM `order` o INNER JOIN order_item i
    -> ON i.parent = o.id ORDER BY o.status ASC, i.update_time DESC LIMIT 0, 20;
+----+-------------+-------+--------+---------------+---------+---------+---------------------------------+--------+----------+---------------------------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref                             | rows   | filtered | Extra                           |
+----+-------------+-------+--------+---------------+---------+---------+---------------------------------+--------+----------+---------------------------------+
|  1 | SIMPLE      | i     | ALL    | NULL          | NULL    | NULL    | NULL                            | 497839 |   100.00 | Using temporary; Using filesort |
|  1 | SIMPLE      | o     | eq_ref | PRIMARY       | PRIMARY | 8       | sql_optimization_match.i.parent |      1 |   100.00 | NULL                            |
+----+-------------+-------+--------+---------------+---------+---------+---------------------------------+--------+----------+---------------------------------+
The two tables are inner joined, and according to the execution plan order_item is chosen as the driving table. Because the sort fields come from two different tables and sort in different directions, a temporary table is written and no index can be used for the sort, which causes an external (file) sort.
Analyze the join fields: order_item.parent = order.id

mysql> select count(distinct parent) from order_item;  -- low distinctness, ignore
+------------------------+
| count(distinct parent) |
+------------------------+
|                    300 |
+------------------------+

mysql> select count(distinct id) from `order`;  -- highly distinct, but id already has a primary key index
+--------------------+
| count(distinct id) |
+--------------------+
|               2000 |
+--------------------+
order_item.parent has no index; order.id already has the primary key index.
Analyze the sort fields: ORDER BY o.status ASC, i.update_time DESC

mysql> select count(distinct status) from `order`;  -- low distinctness
+------------------------+
| count(distinct status) |
+------------------------+
|                      2 |
+------------------------+

mysql> select count(distinct update_time) from `order_item`;  -- moderate distinctness
+-----------------------------+
| count(distinct update_time) |
+-----------------------------+
|                       32768 |
+-----------------------------+
The ORDER BY fields come from two different tables and sort in different directions, so an external sort may be needed, and neither table has an index on its sort field.
Optimize
As the previous query plan shows, a full table scan of order_item is performed, followed by an external sort. Since the SQL semantics require sorting on two columns, the goal is to reduce by other means the amount of data that has to be sorted, and thus the time spent sorting.
The sort field status in the order table has only two distinct values. After experimentally removing status from the sort, the query is noticeably faster. At that point an index can be added on the remaining ORDER BY field update_time, whose distinctness is adequate.
Since status has only two values, the query can be rewritten with UNION ALL to avoid the situation where ORDER BY fields from different tables, sorted in different directions, cannot use an index.
This is also the official suggestion (it was presumably not auto-generated by ML), but this kind of SQL rewrite has limitations and is scenario-specific: if status were not tinyint(1), or if new status values are added in the future, the SQL would have to be rewritten again and again.
In that case it is worth pushing for a business-side change and re-optimizing the indexes.
In addition, after the SQL rewrite, the index suggestion is to add the composite index (update_time, parent). As the analysis above shows, parent has low distinctness, so there is little performance difference between the composite index and an index on update_time alone.
# SQL rewrite
SELECT o.*, i.*
FROM (
        (SELECT o.id, i.id item_id
         FROM `order` o INNER JOIN order_item i ON i.parent = o.id
         WHERE o.status = 0
         ORDER BY i.update_time DESC LIMIT 0, 20)
        UNION ALL
        (SELECT o.id, i.id item_id
         FROM `order` o INNER JOIN order_item i ON i.parent = o.id
         WHERE o.status = 1
         ORDER BY i.update_time DESC LIMIT 0, 20)
     ) tmp
INNER JOIN `order` o ON tmp.id = o.id
INNER JOIN order_item i ON tmp.item_id = i.id
ORDER BY o.status ASC, i.update_time DESC LIMIT 0, 20;

# add the index
alter table order_item add index `item_idx_1` (`update_time`, `parent`);

# execution plan
+----+-------------+------------+--------+---------------+------------+---------+---------------------------------+------+----------+---------------------------------+
| id | select_type | table      | type   | possible_keys | key        | key_len | ref                             | rows | filtered | Extra                           |
+----+-------------+------------+--------+---------------+------------+---------+---------------------------------+------+----------+---------------------------------+
|  1 | PRIMARY     | <derived2> | ALL    | NULL          | NULL       | NULL    | NULL                            |   40 |   100.00 | Using temporary; Using filesort |
|  1 | PRIMARY     | o          | eq_ref | PRIMARY       | PRIMARY    | 8       | tmp.id                          |    1 |   100.00 | NULL                            |
|  1 | PRIMARY     | i          | eq_ref | PRIMARY       | PRIMARY    | 8       | tmp.item_id                     |    1 |   100.00 | NULL                            |
|  2 | DERIVED     | i          | index  | NULL          | item_idx_1 | 12      | NULL                            |   20 |   100.00 | Using index                     |
|  2 | DERIVED     | o          | eq_ref | PRIMARY       | PRIMARY    | 8       | sql_optimization_match.i.parent |    1 |    10.00 | Using where                     |
|  3 | UNION       | i          | index  | NULL          | item_idx_1 | 12      | NULL                            |   20 |   100.00 | Using index                     |
|  3 | UNION       | o          | eq_ref | PRIMARY       | PRIMARY    | 8       | sql_optimization_match.i.parent |    1 |    10.00 | Using where                     |
+----+-------------+------------+--------+---------------+------------+---------+---------------------------------+------+----------+---------------------------------+

Summary:
1. If a field with very low distinctness is involved in sorting or range comparison, the query can be converted to UNION ALL.
2. For sorted fields, try to use an index to avoid a filesort; if a filesort is unavoidable, try to reduce the amount of data to be sorted before it happens.
2. Update analysis

update `order` set create_time = now() where id in (select parent from order_item where type = 2);

# execution plan
mysql> explain update `order` set create_time = now() where id in (select parent from order_item where type = 2);
+----+--------------------+------------+-------+---------------+---------+---------+------+--------+----------+-------------+
| id | select_type        | table      | type  | possible_keys | key     | key_len | ref  | rows   | filtered | Extra       |
+----+--------------------+------------+-------+---------------+---------+---------+------+--------+----------+-------------+
|  1 | UPDATE             | order      | index | NULL          | PRIMARY | 8       | NULL |   2000 |   100.00 | Using where |
|  2 | DEPENDENT SUBQUERY | order_item | ALL   | NULL          | NULL    | NULL    | NULL | 496836 |   100.00 | Using where |
+----+--------------------+------------+-------+---------------+---------+---------+------+--------+----------+-------------+
The UPDATE condition uses an IN subquery. In the execution plan the subquery's select_type is marked DEPENDENT SUBQUERY, which means the outer query runs first; if the outer query matches N rows, the subquery is then executed N times, which is very inefficient.
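To make this concrete, here is a sketch of the correlated form the optimizer effectively executes in this case. This is an assumption consistent with the plan above (MySQL before 8.0 cannot apply semijoin optimizations to subqueries in UPDATE statements and tends to turn the IN into a correlated EXISTS), not output from the contest environment:

```sql
-- the IN subquery is effectively re-evaluated once per row of `order`,
-- hence DEPENDENT SUBQUERY in the plan:
UPDATE `order`
SET create_time = NOW()
WHERE EXISTS (SELECT 1
              FROM order_item
              WHERE order_item.parent = `order`.id
                AND order_item.type = 2);
```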
The common fix for such subqueries is to convert them into a join.
Optimization 1: the simplest subquery-to-join rewrite

update `order` o inner join (select parent from `order_item` where type = 2) tmp
on o.id = tmp.parent set create_time = now();

mysql> explain update `order` o inner join (select parent from `order_item` where type = 2) tmp on o.id = tmp.parent set create_time = now()\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: order_item
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 497839
     filtered: 10.00
        Extra: Using where
*************************** 2. row ***************************
           id: 1
  select_type: UPDATE
        table: o
   partitions: NULL
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 8
          ref: sql_optimization_match.order_item.parent
         rows: 1
     filtered: 100.00
        Extra: NULL
2 rows in set (0.00 sec)
The converted statement is much faster and the dependent subquery is gone, but it still takes seconds. order_item is chosen as the driving table, but it is essentially fully scanned, meaning roughly 490,000 rows are joined against the order table — still very expensive.
Looking back at the original slow UPDATE, only the order table is modified, with the condition that id falls within the result of the order_item subquery. Duplicate parent values are meaningless for the update, so the subquery can GROUP BY the parent field; and since the subquery has the condition order_item.type = 2, the type field can be included in the grouping at the same time.
Because order_item only has its primary key index, and both the equality condition and the aggregation on order_item should use an index, a composite index can be created, with the equality-condition column placed first in the index order.
One additional note: when creating the index, check that the indexed field's type is consistent with the type of the value used in the SQL's equality condition.
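As an illustration of that point (a sketch against the schema above, not output from the contest environment): order_item.type is varchar(12), so comparing it with a numeric literal forces an implicit cast on every row and can defeat an index on type:

```sql
-- type is varchar(12); comparing it with a number casts every row's value,
-- so an index with type as the leading column may be skipped:
EXPLAIN SELECT parent FROM order_item WHERE type = 2;
-- a string literal matches the column type and lets an index on (type, parent) be used:
EXPLAIN SELECT parent FROM order_item WHERE type = '2';
```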
Optimization 2: optimize the join query

Add the index:
alter table `order_item` add index idx_1 (type, parent);

Optimized SQL:
update `order` o inner join
(select parent from `order_item` where type = '2' group by type, parent) i
on o.id = i.parent set create_time = now();

mysql> explain update `order` o inner join (select parent from `order_item` where type = '2' group by type, parent) i on o.id = i.parent set create_time = now()\G
*************************** 1. row ***************************
           id: 1
  select_type: PRIMARY
        table: <derived2>
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 571
     filtered: 100.00
        Extra: NULL
*************************** 2. row ***************************
           id: 1
  select_type: UPDATE
        table: o
   partitions: NULL
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 8
          ref: i.parent
         rows: 1
     filtered: 100.00
        Extra: NULL
*************************** 3. row ***************************
           id: 2
  select_type: DERIVED
        table: order_item
   partitions: NULL
         type: range
possible_keys: idx_1
          key: idx_1
      key_len: 46
          ref: NULL
         rows: 571
     filtered: 100.00
        Extra: Using where; Using index for group-by
After optimization, order_item acts as the driving table and uses the idx_1 index created above; the resulting temporary table is then joined with the order table. Thanks to the GROUP BY, order_item produces a much smaller result set, which is why it is chosen as the driving table.
Note also that GROUP BY uses two columns here specifically so that the idx_1 index can be used (although GROUP BY parent and GROUP BY type, parent return the same number of rows, their execution plans differ considerably).
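The two grouping variants can be compared directly with EXPLAIN. This is a sketch — the actual plans depend on the MySQL version and table statistics:

```sql
-- grouping on the full prefix of idx_1 (type, parent) can be answered from
-- the index ("Using index for group-by" in the plan above):
EXPLAIN SELECT parent FROM order_item WHERE type = '2' GROUP BY type, parent;
-- grouping on parent alone returns the same rows here, but the optimizer
-- may produce a noticeably different (worse) plan:
EXPLAIN SELECT parent FROM order_item WHERE type = '2' GROUP BY parent;
```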
The execution time after optimization is in milliseconds.
Summary:
1. When choosing the driving table, the small table should drive the large one; the driven table would otherwise be scanned in full, so indexes are usually added on the driven table.
2. If DEPENDENT SUBQUERY appears in the execution plan, it will certainly hurt execution efficiency (and can also potentially amplify locking to some degree). The IN + subquery pattern tends to cause it, and the subquery can be rewritten as a join.
3. For join queries, the less data joined, the higher the efficiency; reduce the amount of data participating in the join as much as possible without changing the SQL's semantics.
4. On index column order: equality condition > group by > order by.
5. Make sure the indexed field's type is consistent with the data type used in the SQL condition.

Final data
The process of computing distinctness is omitted; the results are given directly here.
Because some tables have many rows, distinctness is estimated by counting distinct values over the first 5000 rows (this can also be done in a production environment, since it reduces the extra overhead of the calculation). In extreme cases this may misjudge some small tables, but tables with very few rows matter little anyway.
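The sampling described above can be done with a derived-table query like the following sketch (table and column names come from the schemas in this section; the 5000-row cap keeps the cost low on large tables):

```sql
-- estimate the distinctness (selectivity) of customer.nationkey from a
-- 5000-row sample instead of scanning the full 1.2M-row table:
SELECT COUNT(DISTINCT nationkey) / COUNT(*) AS selectivity
FROM (SELECT nationkey FROM customer LIMIT 5000) sample;
```

A value near 1 means the column is highly distinct and a good index candidate; a value near 0 means indexing it is rarely worthwhile.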
# customer, data volume 1200000
CREATE TABLE `customer` (
  `custkey` int(11) NOT NULL,                                      -- distinctness OK
  `name` varchar(25) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL, -- distinctness OK
  `address` varchar(40) NOT NULL,
  `nationkey` int(11) NOT NULL,                                    -- low distinctness
  `phone` char(15) NOT NULL,                                       -- distinctness OK
  `acctbal` decimal(15,2) NOT NULL,
  `mktsegment` char(10) NOT NULL,                                  -- low distinctness
  `comment` varchar(117) NOT NULL,
  PRIMARY KEY (`custkey`),
  KEY `idx_nationkey` (`nationkey`)
);

# nation, data volume 25
CREATE TABLE `nation` (
  `nationkey` int(11) NOT NULL,                                    -- distinctness OK
  `name` char(25) NOT NULL,
  `regionkey` int(11) NOT NULL,
  `comment` varchar(152) DEFAULT NULL,
  PRIMARY KEY (`nationkey`),
  KEY `nationkey` (`name`)
);

# orders, data volume 12000000
CREATE TABLE `orders` (
  `orderkey` int(11) NOT NULL,
  `custkey` int(11) NOT NULL,                                      -- distinctness OK
  `orderstatus` varchar(1) NOT NULL,
  `totalprice` decimal(15,2) NOT NULL,                             -- distinctness OK
  `orderdate` date NOT NULL,
  `orderpriority` char(15) NOT NULL,
  `clerk` char(15) NOT NULL,                                       -- distinctness OK
  `shippriority` int(11) NOT NULL,
  `comment` varchar(79) NOT NULL,
  PRIMARY KEY (`orderkey`)
);

# region, data volume 5
CREATE TABLE `region` (
  `regionkey` int(11) NOT NULL,
  `name` varchar(25) NOT NULL,
  `comment` varchar(152) DEFAULT NULL,
  PRIMARY KEY (`regionkey`)
);
1. Select analysis

select c.custkey, c.phone, sum(o.totalprice) totalprice
from nation n
inner join customer c on c.nationkey = n.nationkey
inner join orders o on o.clerk = c.name
where n.name = "CHINA" and c.mktsegment = "HOUSEHOLD" and c.phone like "28-520%"
group by c.custkey, c.phone;

# execution plan
+----+-------------+-------+------+---------------+------+---------+------+----------+----------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows     | filtered | Extra                                              |
+----+-------------+-------+------+---------------+------+---------+------+----------+----------+----------------------------------------------------+
|  1 | SIMPLE      | n     | ALL  | PRIMARY       | NULL | NULL    | NULL |       25 |    10.00 | Using where; Using temporary; Using filesort       |
|  1 | SIMPLE      | c     | ALL  | NULL          | NULL | NULL    | NULL |  1189853 |     0.11 | Using where; Using join buffer (Block Nested Loop) |
|  1 | SIMPLE      | o     | ALL  | NULL          | NULL | NULL    | NULL | 10963843 |    10.00 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+-------+------+---------------+------+---------+------+----------+----------+----------------------------------------------------+
Three tables are involved: customer c, nation n, orders o.
customer
  where conditions:
    c.mktsegment = "HOUSEHOLD": low distinctness, skip
    c.phone like "28-520%": better distinctness, consider an index
  aggregation conditions:
    group by c.custkey: good distinctness, but already the primary key, skip
    group by c.phone: same as the where analysis, consider an index
  join conditions:
    c.nationkey = n.nationkey: low distinctness, skip
    o.clerk = c.name: higher distinctness, consider an index
  advice: add index `idx_1_0` (name); add index `idx_1_1` (phone)

nation
  The nation table has only 25 rows. One could consider add index `idx_1_0` (name), but it makes little difference.

orders
  join condition:
    o.clerk = c.name: high distinctness, consider an index
  advice: add index `idx_1_0` (clerk)

After adding the three indexes per the analysis above, the execution plan is as follows:

+----+-------------+-------+--------+-----------------+---------+---------+-------------------+----------+----------+---------------------------------------------------------------------+
| id | select_type | table | type   | possible_keys   | key     | key_len | ref               | rows     | filtered | Extra                                                               |
+----+-------------+-------+--------+-----------------+---------+---------+-------------------+----------+----------+---------------------------------------------------------------------+
|  1 | SIMPLE      | c     | range  | idx_1_0,idx_1_1 | idx_1_1 | 45      | NULL              |       46 |    10.00 | Using index condition; Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | n     | eq_ref | PRIMARY         | PRIMARY | 4       | dbaas.c.nationkey |        1 |    10.00 | Using where                                                         |
|  1 | SIMPLE      | o     | ALL    | idx_1_0         | NULL    | NULL    | NULL              | 10963843 |    10.00 | Range checked for each record (index map: 0x2)                      |
+----+-------------+-------+--------+-----------------+---------+---------+-------------------+----------+----------+---------------------------------------------------------------------+

Summary:
1. With an inner join, the driving table cannot be predetermined, so indexes are added on all suitable fields of both tables.
2. When the SQL has many kinds of conditions, the equality and aggregation conditions are combined into one composite index, while the join condition gets its own index.
3. If a table has very little data, adding an index is of little significance and can be skipped.
4. DBbrain suggests a composite index; the performance of the two approaches is basically the same.
2. Select analysis

select * from (
  select custkey, orderdate, sum(totalprice) as totalprice
  from orders
  group by custkey, orderdate
) o where orderdate = "2019-08-01";

# execution plan
+----+-------------+------------+------+---------------+-------------+---------+-------+----------+----------+---------------------------------+
| id | select_type | table      | type | possible_keys | key         | key_len | ref   | rows     | filtered | Extra                           |
+----+-------------+------------+------+---------------+-------------+---------+-------+----------+----------+---------------------------------+
|  1 | PRIMARY     | <derived2> | ref  | <auto_key0>   | <auto_key0> | 3       | const |       10 |   100.00 | NULL                            |
|  2 | DERIVED     | orders     | ALL  | NULL          | NULL        | NULL    | NULL  | 10963843 |   100.00 | Using temporary; Using filesort |
+----+-------------+------------+------+---------------+-------------+---------+-------+----------+----------+---------------------------------+
Only one table is involved, yet the GROUP BY triggers a filesort; the SQL does not look complex, but it produces a materialized derived table.
Inspecting the SQL, the select * from (subquery) layer is an unnecessary extra level of nesting, so the SQL can be rewritten as:

select custkey, orderdate, sum(totalprice) as totalprice
from orders
where orderdate = "2019-08-01"
group by custkey, orderdate;
Index analysis
where condition:
  orderdate = "2019-08-01": high distinctness, consider an index
aggregation condition:
  group by custkey, orderdate: both fields have high distinctness, consider an index
advice: the equality condition takes precedence over the aggregation condition, so add index `idx_2_0` (orderdate, custkey)

Optimization: using the rewritten SQL plus the composite index, the execution plan is:

+----+-------------+--------+------+---------------+---------+---------+-------+------+----------+-----------------------+
| id | select_type | table  | type | possible_keys | key     | key_len | ref   | rows | filtered | Extra                 |
+----+-------------+--------+------+---------------+---------+---------+-------+------+----------+-----------------------+
|  1 | SIMPLE      | orders | ref  | idx_2_0       | idx_2_0 | 3       | const |    1 |   100.00 | Using index condition |
+----+-------------+--------+------+---------------+---------+---------+-------+------+----------+-----------------------+

If instead two separate indexes are added:
add index `idx_2_1` (custkey);
add index `idx_2_2` (orderdate);

the execution plan is:

+----+-------------+--------+------+---------------+---------+---------+-------+------+----------+--------------------------------------------------------+
| id | select_type | table  | type | possible_keys | key     | key_len | ref   | rows | filtered | Extra                                                  |
+----+-------------+--------+------+---------------+---------+---------+-------+------+----------+--------------------------------------------------------+
|  1 | SIMPLE      | orders | ref  | idx_2_2       | idx_2_2 | 3       | const |    1 |   100.00 | Using index condition; Using temporary; Using filesort |
+----+-------------+--------+------+---------------+---------+---------+-------+------+----------+--------------------------------------------------------+

A filesort appears, because only the orderdate condition can use an index; the composite index is therefore recommended.

Summary:
1. If a single-table query shows warning signs such as filesort in its execution plan, and unnecessary nested subqueries are hurting performance, consider rewriting the SQL.
2. For indexing, the equality condition should take precedence over aggregation, join, and so on.
3. Select analysis

select c.custkey, sum(o.totalprice) totalprice
from customer c left join orders o on o.custkey = c.custkey
where c.phone like "33-64%" and c.name like concat("Customer#00003", "%")
group by c.custkey;

With the indexes from the first two statements already in place, the execution plan is:

+----+-------------+-------+-------+-------------------------+---------+---------+------+----------+----------+---------------------------------------------------------------------+
| id | select_type | table | type  | possible_keys           | key     | key_len | ref  | rows     | filtered | Extra                                                               |
+----+-------------+-------+-------+-------------------------+---------+---------+------+----------+----------+---------------------------------------------------------------------+
|  1 | SIMPLE      | c     | range | PRIMARY,idx_1_0,idx_1_1 | idx_1_1 | 45      | NULL |      552 |     1.63 | Using index condition; Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | o     | ALL   | NULL                    | NULL    | NULL    | NULL | 10963843 |   100.00 | Using where; Using join buffer (Block Nested Loop)                  |
+----+-------------+-------+-------+-------------------------+---------+---------+------+----------+----------+---------------------------------------------------------------------+
The customer table already uses an index; whether other indexes are needed is analyzed below.
The orders table is fully scanned — 12,000,000 rows — and likely lacks an index.
Two tables are involved: customer c, orders o.
customer
  where conditions:
    c.phone like "33-64%": already indexed for the first select
    c.name like concat("Customer#00003", "%"): already indexed for the first select
  aggregation condition:
    group by c.custkey: good distinctness, but already the primary key, skip
  join condition:
    o.custkey = c.custkey: good distinctness, but c.custkey is already the primary key, skip
  advice: none
orders
  join condition:
    o.custkey = c.custkey: high distinctness, consider an index
  advice: add index `idx_3_0` (custkey)

Optimization: after adding the index, the execution plan is:

+----+-------------+-------+-------+-------------------------+---------+---------+-----------------+----------+----------+---------------------------------------------------------------------+
| id | select_type | table | type  | possible_keys           | key     | key_len | ref             | rows     | filtered | Extra                                                               |
+----+-------------+-------+-------+-------------------------+---------+---------+-----------------+----------+----------+---------------------------------------------------------------------+
|  1 | SIMPLE      | c     | range | PRIMARY,idx_1_0,idx_1_1 | idx_1_1 | 45      | NULL            |      552 |     1.63 | Using index condition; Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | o     | ref   | idx_3_0                 | idx_3_0 | 4       | dbaas.c.custkey |       13 |   100.00 | NULL                                                                |
+----+-------------+-------+-------+-------------------------+---------+---------+-----------------+----------+----------+---------------------------------------------------------------------+

After adding the index, the number of rows scanned in orders drops dramatically and execution efficiency improves greatly.

A note on the difference between [Using where], [Using index], and [Using index condition]. (Why summarize this? I built two databases with identical data and table structures; in one the indexes were added according to my own analysis, in the other according to the official suggestions. Although both executed very efficiently, the Extra column of the two execution plans differed, and I tried to resolve it via Google — but of the top three blog posts, the first two had identical content while the third explained it completely differently. I tested it myself; the conclusions are given here, and the test procedure is attached at the end.)
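A minimal way to observe the [Using index] difference yourself. This is a sketch assuming an index on orders(custkey) such as the idx_3_0 above: a query satisfied entirely from the index shows Using index in Extra, while one that must read the row does not:

```sql
-- covered by the (custkey) index: only custkey is needed,
-- so Extra shows "Using index"
EXPLAIN SELECT custkey FROM orders WHERE custkey = 1;
-- totalprice is not in the index, so each match needs a row lookup;
-- "Using index" disappears from Extra
EXPLAIN SELECT totalprice FROM orders WHERE custkey = 1;
```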
4. Select analysis

select c.custkey, c.phone
from nation n
inner join customer c on c.nationkey = n.nationkey
where n.name = "CHINA"
and exists (select 1 from orders o where o.custkey = c.custkey and o.orderdate = "1998-08-11");

With the existing indexes in place, the execution plan is:

+----+--------------------+-------+------+-----------------+---------+---------+-----------------------+----------+----------+----------------------------------------------------+
| id | select_type        | table | type | possible_keys   | key     | key_len | ref                   | rows     | filtered | Extra                                              |
+----+--------------------+-------+------+-----------------+---------+---------+-----------------------+----------+----------+----------------------------------------------------+
|  1 | PRIMARY            | n     | ref  | PRIMARY,idx_1_0 | idx_1_0 | 75      | const                 |        1 |   100.00 | Using index                                        |
|  1 | PRIMARY            | c     | ALL  | NULL            | NULL    | NULL    | NULL                  |  1189853 |    10.00 | Using where; Using join buffer (Block Nested Loop) |
|  2 | DEPENDENT SUBQUERY | o     | ref  | idx_2_0,idx_3_0 | idx_2_0 | 7       | const,dbaas.c.custkey |        1 |   100.00 | Using index                                        |
+----+--------------------+-------+------+-----------------+---------+---------+-----------------------+----------+----------+----------------------------------------------------+
DEPENDENT SUBQUERY appears again — it often shows up with in/exists + subquery conditions, and its harm was explained above. Whenever it appears, a way to rewrite the SQL should be found. Since this is an exists + subquery, the optimization strategy is again to rewrite it as a join.
The most common rewrite: inner join everything first, then apply the where conditions:

select c.custkey, c.phone
from nation n
inner join customer c on c.nationkey = n.nationkey
inner join orders o on o.custkey = c.custkey
where n.name = "CHINA" and o.orderdate = "1998-08-11";

The official SQL is more elaborate, but it does much the same thing. It goes one step further by using GROUP BY to reduce the amount of data in the join. The official answer is given here without much commentary (though it would arguably be better without the GROUP BY):

SELECT `t1`.`custkey`, `t1`.`phone`
FROM (SELECT * FROM `dbaas`.`nation` AS `t` WHERE `t`.`name` = 'CHINA') AS `t0`
INNER JOIN `dbaas`.`customer` AS `t1` ON `t0`.`nationkey` = `t1`.`nationkey`
INNER JOIN (SELECT `t2`.`custkey`
            FROM `dbaas`.`orders` AS `t2`
            WHERE `t2`.`orderdate` = '1998-08-11'
            GROUP BY `t2`.`custkey`) AS `t5`
ON `t1`.`custkey` = `t5`.`custkey`;
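One caveat worth noting about this rewrite (my own observation, not from the official answer): a plain join returns one row per matching order, whereas EXISTS returns each customer at most once. Adding DISTINCT — or the official GROUP BY on custkey — restores the original semantics:

```sql
-- equivalent to the EXISTS version even when a customer has several
-- orders dated 1998-08-11:
SELECT DISTINCT c.custkey, c.phone
FROM nation n
INNER JOIN customer c ON c.nationkey = n.nationkey
INNER JOIN orders o ON o.custkey = c.custkey
WHERE n.name = 'CHINA' AND o.orderdate = '1998-08-11';
```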
As for index suggestions, there is not much to add here: the condition fields already have corresponding indexes.
The execution plan after optimization is as follows:

+----+-------------+-------+--------+-----------------+---------+---------+-----------------+------+----------+-------------+
| id | select_type | table | type   | possible_keys   | key     | key_len | ref             | rows | filtered | Extra       |
+----+-------------+-------+--------+-----------------+---------+---------+-----------------+------+----------+-------------+
|  1 | SIMPLE      | n     | ref    | PRIMARY,idx_1_0 | idx_1_0 | 75      | const           |    1 |   100.00 | Using index |
|  1 | SIMPLE      | o     | ref    | idx_2_0,idx_3_0 | idx_2_0 | 3       | const           |   20 |   100.00 | Using index |
|  1 | SIMPLE      | c     | eq_ref | PRIMARY         | PRIMARY | 4       | dbaas.o.custkey |    1 |   100.00 | Using where |
+----+-------------+-------+--------+-----------------+---------+---------+-----------------+------+----------+-------------+
You can see that the join now leads with n and o via const ref lookups, and its performance is much better than that of the DEPENDENT SUBQUERY plan.
Summary:
1. Not every complex join benefits from GROUP BY — it depends on the data distribution. If the GROUP BY does not significantly reduce the number of joined rows, it is unnecessary.
MySQL's query optimizer is fairly complex logic, and the premise for it to work well is that the SQL is written reasonably and has proper indexes.
When optimizing, we usually first consider rewriting the SQL, then add the required indexes based on the rewritten SQL. (In real database development, especially for B2B service providers, we give priority to adding the required indexes — which needs no business change — to try to fix slow queries; only if indexes cannot solve the problem does the business side need to change accordingly.)
First, SQL rewriting needs careful thought: MySQL's optimizer and executor already do so much that it is hard to imagine AI rewriting and optimizing automatically (DBAs would face another wave of layoffs). In practice, improvements come from following the outliers in the execution plan — rewriting subqueries to joins, or, as in the preliminary-round problem, rewriting the ORDER BY on status into UNION ALL. Such rewrites do help, but they are not general methods; this area is too open-ended.
Second, by contrast, given known SQL and table structures, it is more realistic to rely on AI for index suggestions. There are some general rules for indexing, widely covered online.
1. Find all the condition fields and compute their distinctness. Fields with very low distinctness need no index, and indexing tables with very little data is not meaningful.
2. Condition priority is: equality > group/order by > join; within the same priority, order the composite index columns by distinctness.
3. For aggregation conditions, if too many rows remain after aggregation and the row-lookup volume is too large, MySQL may decline to use these indexes.
4. The driving table needs no index for the join itself, since its data must appear in the join result set anyway. For an inner join, where the driving table cannot be determined in advance, consider adding indexes on the suitable fields of both tables.
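Rule 2 can be illustrated with a hypothetical query shape against the orders table (the query and the index name idx_demo are illustrative only, not from the contest):

```sql
-- hypothetical query:
--   SELECT ... FROM orders WHERE clerk = ? GROUP BY custkey ORDER BY orderdate;
-- the rule "equality > group/order by" suggests putting the equality column
-- first, then the group-by column, then the order-by column:
ALTER TABLE orders ADD INDEX idx_demo (clerk, custkey, orderdate);
```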
But I digress.
Nowadays everyone is moving their business to the cloud, and intelligent database diagnosis in the cloud is bound to become a hard requirement. Exactly what AI can do there is not yet clear — who knows what the future holds.