From sorting principle to sorting method in MYSQL

2025-01-28 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/01 Report--

This article draws on the MySQL official documentation and algorithm textbooks; some of my own views may be wrong. If you spot a mistake, please point it out so we can discuss it.

Please credit the source when reprinting, thank you!

I. Sorting algorithms MySQL may use

Judging from the MySQL official documentation and the source-code interfaces, MySQL uses an in-buffer quicksort plus an external multi-way merge sort. The entry point is the filesort() function. Note that filesort() is a general interface: the internal sort actually calls std::stable_sort/std::sort under save_index(), and the external merge goes through merge_many_buff(). "Using filesort" in the execution plan only tells you that a sort was used; whether tempfiles were used for the sort is not visible there. It can, however, be seen in the optimizer trace, which I will demonstrate later.

Also note that "Using temporary" only states that a temporary table was used to store intermediate results, so we will not discuss it here; we look only at sorting.

The following is a brief introduction to the principles of the two algorithms

1. Buffer internal quick sorting algorithm

Quick sort is an algorithm of exchange sort and an upgraded version of bubble sort. Its principle is to adopt the idea of divide and conquer and to set a reference point for each exchange.

Put the data larger than this base point to one side, the larger data to the other side, and recursively complete it constantly. It is generally necessary to sort a large amount of data quickly.

Excellent efficiency, at the end of the article is a simple quick sorting implementation, if you are interested in the algorithm can refer to. But at least you can get in.

Three kinds of optimization

Small data optimization, because quick sorting is not optimal when the amount of data is small, other sorting algorithms such as insert sorting can be used.

Tail recursive optimization to reduce the use of stack

-- Datum point optimization, taking the intermediate value in the data as the datum point as far as possible, which can make quick sorting more optimized.

2. External disk merge sort

The ordered run files produced by the internal quicksort are repeatedly merged by comparison until a single ordered file remains; the number of files merged at a time is the merge fan-in (the number of "ways").

[Figures omitted: merge-sort illustrations taken from the C-language edition of Data Structures; each R in the figures represents an ordered run file on disk.]

II. Related MySQL parameters

Sort_buffer_size:

The buffer used by each internal quicksort. The larger the buffer, the fewer run files are produced on disk; but the parameter is session-level, so setting it too large can exhaust memory. The default is 256K. Note (from the manual): on Linux, there are thresholds of 256KB and 2MB where larger values may significantly slow down memory allocation, so you should consider staying below one of those values.

Read_rnd_buffer_size:

Besides MRR, this buffer is also used by the two-pass sort: after the sort, the rowids (primary keys) are processed in blocks through this buffer and sorted before the table rows are fetched, so that the lookups against the table are as sequential as possible.

Max_length_for_sort_data:

In bytes. If the length of the columns a sorted row must return exceeds this value, the two-pass sort is used instead of the one-pass sort. The default is 1024 and the minimum is 4. Increasing this value makes the one-pass sort more likely, which can lead to high tempfile space usage with comparatively low CPU usage; why this is so is explained later.

Max_sort_length:

In bytes. If a sort field is longer than this value, only the first max_sort_length bytes of it are used for comparison. The default is 1024. Text fields often exceed this; increasing the value improves sorting accuracy, but it also means higher buffer and tempfile usage.

III. Monitoring disk sorts

Sort_merge_passes: the number of merge passes performed during disk sorts. Increasing the sort_buffer_size will significantly reduce the Sort_merge_passes value, and fewer temporary files will be produced; part V proves this.

The sys.statements_with_sorting view is also useful here.

IV. MySQL two-pass sort (the original algorithm) and one-pass sort (the modified algorithm)

1. Two-pass sort

-- Read only the sort key values and the rowid (primary key) into the sort buffer

-- Quicksort inside the buffer; whenever the buffer fills up, write the sorted data in memory out to a tempfile

-- Repeat the above until the internal sort is complete, producing multiple tempfiles

-- Multi-way merge sort the runs. The source interface is merge_many_buff(), where two macros, MERGEBUFF and MERGEBUFF2, are defined. They appear in the official documentation, so they are worth pointing out:

/* Defines used by filesort and uniques */
#define MERGEBUFF 7
#define MERGEBUFF2 15

If you are interested, take a closer look at the source.

-- During the final merge, only the rowids (primary keys) go into the last file

-- Fetch the table rows by rowid (primary key) in the order of that final file, which yields the sorted data

There is an MRR-like optimization here: the rowids are read in blocks into read_rnd_buffer_size and sorted before the table rows are fetched, in order to reduce the impact of random reads. But because this is done block by block, random access can only be reduced, not eliminated, especially when the data volume is very large, since read_rnd_buffer_size defaults to only 256K.

The problem is that the table data is accessed twice: once when the rows are first read, and again through the sorted rowids (primary keys), which causes a large number of random accesses.

2. Sort one visit

This is simple. The second access sort is to put the sort key value and rowid (primary key) into sort buffer.

This is about putting all the required data fields in sort buffer, for example:

Select id,name1,name2 from test order by id

Here the id,name1,name2 is stored in sort buffer. Just sort it in this way. You don't need to go back to the table to get it.

Data, of course, the disadvantage is a larger sort buffer footprint, a larger tempfile footprint. So

Mysql uses max_length_for_sort_data to limit the default 1024, which refers to the id,name1,name2 field

The bytes length of the

Because you don't need to go back to the table, you only need to access the data once.

3. The 5.7.3 optimization of the one-pass sort

A method called packing is used. It aims to compress NULLs and shorten variable-length values, reducing the one-pass sort's heavy use of sort buffer and tempfile. The original text:

Without packing, a VARCHAR(255) column value containing only 3 characters takes 255 characters in the sort buffer. With packing, the value requires only 3 characters plus a two-byte length indicator. NULL values require only a bitmask.

However, while tracing MYSQLD I found that there is also an unpack step, and that every field of every row must be packed and then unpacked; this is proved below.

Which sort method was used is not easy to figure out from "Using filesort" in the execution plan, but we can use the trace method. The official documentation describes it too; I used the MYSQLD trace instead, and the effect is the same. See part V for details.

V. Proving the points

1. First, prove whether the two-pass or the one-pass sort is used, and whether pack is enabled.

The official documentation shows:

"filesort_summary": {

"rows":

"examined_rows":

"number_of_tmp_files": 0

"sort_buffer_size": 25192

"sort_mode":

}

Sort_mode values:

<sort_key, rowid>: This indicates use of the original algorithm.

<sort_key, additional_fields>: This indicates use of the modified algorithm.

<sort_key, packed_additional_fields>: This indicates use of the modified algorithm. Sort buffer tuples contain the sort key value and packed columns referenced by the query.

That is to say:

<sort_key, rowid> is the two-pass sort

<sort_key, additional_fields> is the one-pass sort

<sort_key, packed_additional_fields> is the one-pass sort + pack mode

All right, let's prove it. Use the test table.

mysql> show create table testmer\G
*************************** 1. row ***************************
       Table: testmer
Create Table: CREATE TABLE `testmer` (
  `seq` int(11) NOT NULL,
  `id1` int(11) DEFAULT NULL,
  `id2` int(11) DEFAULT NULL,
  `id3` int(11) DEFAULT NULL,
  `id4` int(11) DEFAULT NULL,
  PRIMARY KEY (`seq`),
  KEY `id1` (`id1`,`id2`),
  KEY `id3` (`id3`,`id4`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.01 sec)

mysql> select * from testmer;
+-----+------+------+------+------+
| seq | id1  | id2  | id3  | id4  |
+-----+------+------+------+------+
|   1 |    1 |    2 |    4 |    4 |
|   2 |    1 |    3 |    4 |    5 |
|   3 |    1 |    2 |    4 |    4 |
|   4 |    2 |    4 |    5 |    6 |
|   5 |    2 |    6 |    5 |    8 |
|   6 |    2 |   10 |    5 |    3 |
|   7 |    4 |    5 |    8 |   10 |
|   8 |    0 |    1 |    3 |    4 |
+-----+------+------+------+------+
8 rows in set (0.01 sec)

With max_length_for_sort_data set to 1024 and then to 4, run:

select * from testmer order by id1

and generate the trace file for each run. The intent is to trigger the one-pass sort and the two-pass sort respectively; with this small amount of data, the whole sort happens inside the sort_buffer.

One-pass sort:

opt: filesort_execution: ending struct
opt: filesort_summary: starting struct
opt: rows: 8
opt: examined_rows: 8
opt: number_of_tmp_files: 0
opt: sort_buffer_size: 34440
opt: sort_mode: "<sort_key, packed_additional_fields>"
opt: filesort_summary: ending struct

This is the one-pass sort + pack mode.

Two-pass sort:

opt: filesort_execution: ending struct
opt: filesort_summary: starting struct
opt: rows: 8
opt: examined_rows: 8
opt: number_of_tmp_files: 0
opt: sort_buffer_size: 18480
opt: sort_mode: "<sort_key, rowid>"
opt: filesort_summary: ending struct

This is the two-pass sort.

You can see the difference, which proves the role of max_length_for_sort_data.

This output is produced by a call inside the filesort() function; you can also see it with a gdb breakpoint:

Opt_trace_object(trace, "filesort_summary")
    .add("rows", num_rows)
    .add("examined_rows", param.examined_rows)
    .add("number_of_tmp_files", num_chunks)
    .add("sort_buffer_size", table_sort.sort_buffer_size())
    .add_alnum("sort_mode",
               param.using_packed_addons() ?
               "<sort_key, packed_additional_fields>" :
               param.using_addon_fields() ?
               "<sort_key, additional_fields>" : "<sort_key, rowid>");

2. Prove that increasing the sort_buffer_size significantly reduces the Sort_merge_passes value and the number of temporary files.

To complete this proof I created a large table, first shrinking sort_buffer_size so that the sort has to use more tempfiles.

mysql> select count(*) from testshared3;
+----------+
| count(*) |
+----------+
|  1048576 |
+----------+
1 row in set (28.31 sec)

mysql> set sort_buffer_size=50000;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like '%sort_buffer_size%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| sort_buffer_size | 50000 |
+------------------+-------+

mysql> show status like '%Sort_merge%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Sort_merge_passes | 0     |
+-------------------+-------+
1 row in set (0.00 sec)

mysql> explain select * from testshared3 order by id limit 1048570;
+----+-------------+-------------+------------+------+---------------+------+---------+------+---------+----------+----------------+
| id | select_type | table       | partitions | type | possible_keys | key  | key_len | ref  | rows    | filtered | Extra          |
+----+-------------+-------------+------------+------+---------------+------+---------+------+---------+----------+----------------+
|  1 | SIMPLE      | testshared3 | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 1023820 |   100.00 | Using filesort |
+----+-------------+-------------+------------+------+---------------+------+---------+------+---------+----------+----------------+

mysql> select * from testshared3 order by id limit 1048570;
+------+
| id   |
+------+
|    1 |
+------+
1 row in set (5 min 4.76 sec)

After it completes:

mysql> show status like '%Sort_merge%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Sort_merge_passes | 63    |
+-------------------+-------+
1 row in set (0.21 sec)

And in the trace:

opt: number_of_tmp_files: 378

The number of temporary files is 378.

Then increase the sort_buffer_size:

mysql> show variables like '%sort_buffer_size%';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| sort_buffer_size | 262144 |
+------------------+--------+

mysql> show status like '%Sort_merge%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Sort_merge_passes | 0     |
+-------------------+-------+
1 row in set (0.04 sec)

Run the same statement again:

mysql> select * from testshared3 order by id limit 1048570;
+------+
| id   |
+------+
|    1 |
+------+
1 row in set (5 min 4.76 sec)

mysql> show status like '%Sort_merge%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Sort_merge_passes | 11    |
+-------------------+-------+

opt: number_of_tmp_files: 73

The number of temporary files is 73.

This proves the point.

3. Prove that there are pack and unpack operations, and that every field of every row is packed and unpacked.

It is enough to grep the trace file for the interfaces directly. For the 8-row table you can see output like this:

[root@testmy tmp]# cat sortys2.trace | grep ">Field::unpack" | wc -l
40
[root@testmy tmp]# cat sortys2.trace | grep ">Field::pack" | wc -l
40

That is exactly 5 (fields) * 8 (rows) = 40.

I then ran the same test on a large table with only one column. Since the one-pass sort stores all the sorted columns, and there is only this one column, the number of pack and unpack calls should be close to the number of rows.

mysql> select count(*) from testshared3;
+----------+
| count(*) |
+----------+
|  1048576 |
+----------+
1 row in set (28.31 sec)

mysql> desc testshared3;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| id    | int(11) | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+
1 row in set (0.26 sec)

[root@testmy tmp]# cat mysqld11.trace | grep ">Field::unpack" | wc -l
1048571

We get essentially the same ratio, which proves that in the one-pass sort every field of every row needs pack and unpack operations. You can in fact see many related calls throughout the trace, for example the columns being extracted for the one-pass sort; I will not go into more detail here.

VI. GDB on the source: the internal sort calls std::stable_sort/std::sort from the standard library

(gdb) n
250       if (param->sort_length < 10)
(gdb) list
245         than quicksort seems to be somewhere around 10 to 40 records.
246         So we're a bit conservative, and stay with quicksort up to 100 records.
247       */
248       if (count <= 100)
249       {
250         if (param->sort_length < 10)
251         {
252           std::sort(m_sort_keys, m_sort_keys + count,
253                     Mem_compare(param->sort_length));
254           return;

The corresponding MySQL source:

/*
  std::stable_sort has some extra overhead in allocating the temp buffer,
  which takes some time. The cutover point where it starts to get faster
  than quicksort seems to be somewhere around 10 to 40 records.
  So we're a bit conservative, and stay with quicksort up to 100 records.
*/
if (count <= 100)
{
  if (param->sort_length < 10)
  {
    std::sort(m_sort_keys, m_sort_keys + count,
              Mem_compare(param->sort_length));
    return;
  }
  std::sort(m_sort_keys, m_sort_keys + count,
            Mem_compare_longkey(param->sort_length));
  return;
}
// Heuristics here: avoid function overhead call for short keys.
if (param->sort_length < 10)
{
  std::stable_sort(m_sort_keys, m_sort_keys + count,
                   Mem_compare(param->sort_length));
  return;
}
std::stable_sort(m_sort_keys, m_sort_keys + count,
                 Mem_compare_longkey(param->sort_length));

Finally, the code for a simple quicksort is attached.

The unsorted data is 13, 9, 3, 2, 34, 5, 102, 90, 20, 2. After sorting it looks like this:

gaopeng@bogon:~/datas$ ./a.out
sort result:2 2 3 5 9 13 20 34 90 102


/*
  > File Name: qsort.c
  > Author: gaopeng QQ:22389860
  > Mail: gaopp_200217@163.com
  > Created Time: Fri 06 Jan 2017 03:04:08 AM CST
*/
#include <stdio.h>
#include <stdlib.h>

int partition(int *k, int low, int high)
{
    int point;
    point = k[low]; /* pivot: the first value of the array; this can be optimized */
    while (low < high)
    {
        while (low < high && k[high] >= point) high--;
        k[low] = k[high];
        while (low < high && k[low] <= point) low++;
        k[high] = k[low];
    }
    k[low] = point; /* the pivot lands in its final position */
    return low;
}

void quick_sort(int *k, int low, int high)
{
    if (low < high)
    {
        int point = partition(k, low, high);
        quick_sort(k, low, point - 1);
        quick_sort(k, point + 1, high);
    }
}

int main(void)
{
    int a[10] = {13, 9, 3, 2, 34, 5, 102, 90, 20, 2};
    quick_sort(a, 0, 9);
    printf("sort result:");
    for (int i = 0; i < 10; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}