
How to Deduplicate Data in MySQL

2025-03-30 Update From: SLTechnology News&Howtos

This article shares several ways to deduplicate data in MySQL. The techniques are practical, so they are shared here for reference.

Problem statement

The source table t_source is structured as follows:

item_id int, created_time datetime, modified_time datetime, item_name varchar(20), other varchar(20)

Requirements:

1. The source table has 1 million rows, of which 500,000 have duplicate (created_time, item_name) values.

2. Write the 500,000 deduplicated rows to the target table.

3. Of the rows that duplicate on (created_time, item_name), any one may be kept; there is no rule restricting which.
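Outside SQL, the required semantics, keep exactly one arbitrary row per (created_time, item_name) pair, can be illustrated with a minimal Python sketch (sample rows are hypothetical):

```python
# Minimal sketch of the deduplication requirement: keep one arbitrary row
# per (created_time, item_name) key. Sample rows are hypothetical.
rows = [
    (1, "2017-01-01 00:00:01", "a1"),       # (item_id, created_time, item_name)
    (500001, "2017-01-01 00:00:01", "a1"),  # duplicate key
    (2, "2017-01-01 00:00:02", "a2"),
]

seen = {}
for item_id, created_time, item_name in rows:
    key = (created_time, item_name)
    if key not in seen:  # first row wins; any row would satisfy the requirement
        seen[key] = (item_id, created_time, item_name)

deduped = list(seen.values())
print(len(deduped))  # 2
```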

Experimental environment

Linux virtual machine: CentOS release 6.4; 8GB physical memory (4GB allocated to MySQL); 100GB mechanical hard disk; two dual-core physical CPUs, four processors in total; MySQL 8.0.16.

Set up test tables and data

-- create the source table
create table t_source
(item_id int,
 created_time datetime,
 modified_time datetime,
 item_name varchar(20),
 other varchar(20));

-- create the target table
create table t_target like t_source;

-- generate 1 million rows of test data, of which 500,000 duplicate on created_time and item_name
delimiter //
create procedure sp_generate_data()
begin
    set @i := 1;
    while @i <= 500000 do
        set @created_time := date_add('2017-01-01', interval @i second);
        set @modified_time := @created_time;
        set @item_name := concat('a', @i);
        insert into t_source values (@i, @created_time, @modified_time, @item_name, 'other');
        set @i := @i + 1;
    end while;
    commit;
    set @last_insert_id := 500000;
    insert into t_source
    select item_id + @last_insert_id,
           created_time,
           date_add(modified_time, interval @last_insert_id second),
           item_name,
           'other'
      from t_source;
    commit;
end
//
delimiter ;

call sp_generate_data();

-- the source table has no primary key or unique constraint, so two completely
-- identical rows may exist; insert one more row to simulate that case
insert into t_source select * from t_source where item_id = 1;
commit;

select count(*), count(distinct created_time, item_name) from t_source;
+----------+-----------------------------------------+
| count(*) | count(distinct created_time, item_name) |
+----------+-----------------------------------------+
|  1000001 |                                  500000 |
+----------+-----------------------------------------+
1 row in set (1.92 sec)

I. Skillful use of indexes and variables

1. Comparison tests without an index

(1) Using a correlated subquery

truncate t_target;
insert into t_target
select distinct t1.* from t_source t1 where item_id in
(select min(item_id) from t_source t2 where t1.created_time=t2.created_time and t1.item_name=t2.item_name);

This statement ran for a long time without finishing, so only its execution plan is examined.

mysql> explain select distinct t1.* from t_source t1 where item_id in
    -> (select min(item_id) from t_source t2 where t1.created_time=t2.created_time and t1.item_name=t2.item_name);
+----+--------------------+-------+------------+------+---------------+------+---------+------+--------+----------+------------------------------+
| id | select_type        | table | partitions | type | possible_keys | key  | key_len | ref  | rows   | filtered | Extra                        |
+----+--------------------+-------+------------+------+---------------+------+---------+------+--------+----------+------------------------------+
|  1 | PRIMARY            | t1    | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 997282 |   100.00 | Using where; Using temporary |
|  2 | DEPENDENT SUBQUERY | t2    | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 997282 |    10.00 | Using where                  |
+----+--------------------+-------+------------+------+---------------+------+---------+------+--------+----------+------------------------------+
2 rows in set, 3 warnings (0.00 sec)

Both the outer query and the correlated subquery do full table scans, so on the order of 1 million times 1 million rows would have to be scanned. No wonder no result could be obtained.

(2) Using a table join

truncate t_target;
insert into t_target
select distinct t1.* from t_source t1,
(select min(item_id) item_id, created_time, item_name from t_source group by created_time, item_name) t2
where t1.item_id = t2.item_id;

This method takes 14 seconds, and the query plan is as follows:

mysql> explain select distinct t1.* from t_source t1,
    -> (select min(item_id) item_id, created_time, item_name from t_source group by created_time, item_name) t2
    -> where t1.item_id = t2.item_id;
+----+-------------+------------+------------+------+---------------+-------------+---------+-----------------+--------+----------+------------------------------+
| id | select_type | table      | partitions | type | possible_keys | key         | key_len | ref             | rows   | filtered | Extra                        |
+----+-------------+------------+------------+------+---------------+-------------+---------+-----------------+--------+----------+------------------------------+
|  1 | PRIMARY     | t1         | NULL       | ALL  | NULL          | NULL        | NULL    | NULL            | 997282 |   100.00 | Using where; Using temporary |
|  1 | PRIMARY     | <derived2> | NULL       | ref  | <auto_key0>   | <auto_key0> | 5       | test.t1.item_id |     10 |   100.00 | Distinct                     |
|  2 | DERIVED     | t_source   | NULL       | ALL  | NULL          | NULL        | NULL    | NULL            | 997282 |   100.00 | Using temporary              |
+----+-------------+------------+------------+------+---------------+-------------+---------+-----------------+--------+----------+------------------------------+
3 rows in set, 1 warning (0.00 sec)

The inner query scans the 1 million rows of the t_source table, creates a temporary table, finds the minimum item_id of each duplicate group, and generates the derived table derived2, which has 500,000 rows.

MySQL automatically creates an index auto_key0 on the item_id column of the derived table derived2.

The outer query also scans the 1 million rows of t_source. When joining with the derived table, for the item_id of each t_source row it uses the auto_key0 index to look up matching rows in the derived table; the distinct operation is optimized here, so scanning stops after the first matching row is found.
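The logic of this join method, computing min(item_id) per key and then filtering on it, can be sketched in Python (hypothetical sample rows):

```python
# Sketch of the derived-table join method: first compute min(item_id) per
# (created_time, item_name) group, then keep only rows whose item_id matches.
# Sample rows are hypothetical.
rows = [
    (1, "t1", "a1"), (3, "t1", "a1"),  # (item_id, created_time, item_name)
    (2, "t2", "a2"), (4, "t2", "a2"),
]

# inner query: select min(item_id) ... group by created_time, item_name
min_ids = {}
for item_id, created_time, item_name in rows:
    key = (created_time, item_name)
    min_ids[key] = min(min_ids.get(key, item_id), item_id)

# outer query: join back on item_id
keep = set(min_ids.values())
deduped = [r for r in rows if r[0] in keep]
print([r[0] for r in deduped])  # [1, 2]
```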

(3) Using variables

set @a:='1000-01-01 00:00:00';
set @b:='';
set @f:=0;
truncate t_target;
insert into t_target
select item_id,created_time,modified_time,item_name,other
  from
(select t0.*, if(@a=created_time and @b=item_name, @f:=0, @f:=1) f, @a:=created_time, @b:=item_name
   from
(select * from t_source order by created_time,item_name) t0) t1 where f=1;

This method takes 13 seconds, and the query plan is as follows:

mysql> explain select item_id,created_time,modified_time,item_name,other
    -> from
    -> (select t0.*, if(@a=created_time and @b=item_name, @f:=0, @f:=1) f, @a:=created_time, @b:=item_name
    -> from
    -> (select * from t_source order by created_time,item_name) t0) t1 where f=1;
+----+-------------+------------+------------+------+---------------+-------------+---------+-------+--------+----------+----------------+
| id | select_type | table      | partitions | type | possible_keys | key         | key_len | ref   | rows   | filtered | Extra          |
+----+-------------+------------+------------+------+---------------+-------------+---------+-------+--------+----------+----------------+
|  1 | PRIMARY     | <derived2> | NULL       | ref  | <auto_key0>   | <auto_key0> | 4       | const |     10 |   100.00 | NULL           |
|  2 | DERIVED     | <derived3> | NULL       | ALL  | NULL          | NULL        | NULL    | NULL  | 997282 |   100.00 | NULL           |
|  3 | DERIVED     | t_source   | NULL       | ALL  | NULL          | NULL        | NULL    | NULL  | 997282 |   100.00 | Using filesort |
+----+-------------+------------+------------+------+---------------+-------------+---------+-------+--------+----------+----------------+
3 rows in set, 5 warnings (0.00 sec)

The innermost query scans the 1 million rows of the t_source table and uses filesort to generate the derived table derived3.

The second-level query scans the 1 million rows of derived3, completes the variable comparison and assignment, generates the derived table derived2, and automatically creates an index auto_key0 on the derived column f.

The outermost query uses the auto_key0 index to scan derived2 and obtain the deduplicated result rows.

Compared with method (2) above, the total number of scanned rows is unchanged at 2 million. The only slight difference is that this time the automatically generated index is on the constant column f, whereas with the table join it is on the item_id column, so the query times are almost the same.

So far, no index has been created on the source table. Whichever way the statement is written, duplicate checking requires sorting on the created_time and item_name fields, so the natural idea is that a composite index on these two fields could exploit the index's own ordering to eliminate the extra sort and thereby improve query performance.

2. Comparison tests with a composite index on created_time and item_name

-- create a composite index on the created_time and item_name fields
create index idx_sort on t_source(created_time,item_name,item_id);
analyze table t_source;

(1) Using a correlated subquery

truncate t_target;
insert into t_target
select distinct t1.* from t_source t1 where item_id in
(select min(item_id) from t_source t2 where t1.created_time=t2.created_time and t1.item_name=t2.item_name);

It takes 19 seconds, and the query plan is as follows:

mysql> explain select distinct t1.* from t_source t1 where item_id in
    -> (select min(item_id) from t_source t2 where t1.created_time=t2.created_time and t1.item_name=t2.item_name);
+----+--------------------+-------+------------+------+---------------+----------+---------+----------------------------------------+--------+----------+------------------------------+
| id | select_type        | table | partitions | type | possible_keys | key      | key_len | ref                                    | rows   | filtered | Extra                        |
+----+--------------------+-------+------------+------+---------------+----------+---------+----------------------------------------+--------+----------+------------------------------+
|  1 | PRIMARY            | t1    | NULL       | ALL  | NULL          | NULL     | NULL    | NULL                                   | 997281 |   100.00 | Using where; Using temporary |
|  2 | DEPENDENT SUBQUERY | t2    | NULL       | ref  | idx_sort      | idx_sort | 89      | test.t1.created_time,test.t1.item_name |      2 |   100.00 | Using index                  |
+----+--------------------+-------+------------+------+---------------+----------+---------+----------------------------------------+--------+----------+------------------------------+
2 rows in set, 3 warnings (0.00 sec)

The t_source table of the outer query is the driving table and requires a scan of 1 million rows.

For the item_id of each row of the driving table, two rows of data are looked up through the idx_sort index.

(2) Using a table join

truncate t_target;
insert into t_target
select distinct t1.* from t_source t1,
(select min(item_id) item_id, created_time, item_name from t_source group by created_time, item_name) t2
where t1.item_id = t2.item_id;

It takes 13 seconds, and the query plan is as follows:

mysql> explain select distinct t1.* from t_source t1,
    -> (select min(item_id) item_id, created_time, item_name from t_source group by created_time, item_name) t2
    -> where t1.item_id = t2.item_id;
+----+-------------+------------+------------+-------+---------------+-------------+---------+-----------------+--------+----------+------------------------------+
| id | select_type | table      | partitions | type  | possible_keys | key         | key_len | ref             | rows   | filtered | Extra                        |
+----+-------------+------------+------------+-------+---------------+-------------+---------+-----------------+--------+----------+------------------------------+
|  1 | PRIMARY     | t1         | NULL       | ALL   | NULL          | NULL        | NULL    | NULL            | 997281 |   100.00 | Using where; Using temporary |
|  1 | PRIMARY     | <derived2> | NULL       | ref   | <auto_key0>   | <auto_key0> | 5       | test.t1.item_id |     10 |   100.00 | Distinct                     |
|  2 | DERIVED     | t_source   | NULL       | index | idx_sort      | idx_sort    | 94      | NULL            | 997281 |   100.00 | Using index                  |
+----+-------------+------------+------------+-------+---------------+-------------+---------+-----------------+--------+----------+------------------------------+
3 rows in set, 1 warning (0.00 sec)

Compared with having no index, although the subquery changes from a full table scan to a full index scan, it still has to scan 1 million rows, so the improvement in query performance is not obvious.

(3) Using variables

set @a:='1000-01-01 00:00:00';
set @b:='';
set @f:=0;
truncate t_target;
insert into t_target
select item_id,created_time,modified_time,item_name,other
  from
(select t0.*, if(@a=created_time and @b=item_name, @f:=0, @f:=1) f, @a:=created_time, @b:=item_name
   from
(select * from t_source order by created_time,item_name) t0) t1 where f=1;

This takes 13 seconds, and the query plan is exactly the same as without the index; evidently the index has no effect on this way of writing. Could the nesting be eliminated, getting the result with only one level of query?

(4) Using variables and eliminating the nested query

set @a:='1000-01-01 00:00:00';
set @b:='';
truncate t_target;
insert into t_target
select * from t_source force index (idx_sort)
 where (@a!=created_time or @b!=item_name)
   and (@a:=created_time) is not null
   and (@b:=item_name) is not null
 order by created_time,item_name;

It takes 12 seconds, and the query plan is as follows:

mysql> explain select * from t_source force index (idx_sort)
    -> where (@a!=created_time or @b!=item_name) and (@a:=created_time) is not null and (@b:=item_name) is not null
    -> order by created_time,item_name;
+----+-------------+----------+------------+-------+---------------+----------+---------+------+--------+----------+-------------+
| id | select_type | table    | partitions | type  | possible_keys | key      | key_len | ref  | rows   | filtered | Extra       |
+----+-------------+----------+------------+-------+---------------+----------+---------+------+--------+----------+-------------+
|  1 | SIMPLE      | t_source | NULL       | index | NULL          | idx_sort | 94      | NULL | 997281 |    99.00 | Using where |
+----+-------------+----------+------------+-------+---------------+----------+---------+------+--------+----------+-------------+
1 row in set, 3 warnings (0.00 sec)

This statement has the following characteristics:

The nested subquery is eliminated; only one full index scan of the t_source table is needed, and the query plan is optimal.

No distinct is needed for a second duplicate check.

Variable judgment and assignment only appear in the where clause.

The index is used to eliminate filesort.
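The single-pass logic of this statement can be sketched in Python (hypothetical rows, assuming the input is already in index order, as force index plus order by guarantees in the SQL version):

```python
# Sketch of the variable-based single pass: scan rows in (created_time,
# item_name) order and emit a row only when its key differs from the previous
# one -- the same logic as (@a!=created_time or @b!=item_name) followed by the
# assignments (@a:=created_time) and (@b:=item_name).
rows = [  # hypothetical rows, already sorted by (created_time, item_name)
    (1, "t1", "a1"), (3, "t1", "a1"),
    (2, "t2", "a2"), (4, "t2", "a2"), (5, "t3", "a3"),
]

a, b = "1000-01-01 00:00:00", ""  # initialized to values impossible in the data
target = []
for item_id, created_time, item_name in rows:
    if a != created_time or b != item_name:  # key changed: keep this row
        target.append(item_id)
    a, b = created_time, item_name           # assignment happens on every row
print(target)  # [1, 2, 5]
```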

Before MySQL 8, this statement was the best single-threaded deduplication solution. Careful analysis shows that it skillfully exploits the logical query processing order of SQL statements and the characteristics of the index. The logical steps of a SQL query are:

1. Perform the Cartesian product (cross join)

2. Apply the on filter (join condition)

3. Add outer rows (outer join)

4. Apply the where filter

5. Grouping

6. Apply cube or rollup

7. Apply the having filter

8. Process the select list

9. Apply the distinct clause

10. Apply the order by clause

11. Apply the limit clause

The logical execution of any query statement is a subset of these 11 steps. For example, this query executes in the order: find data rows via the forced index idx_sort -> apply the where filter -> process the select list -> apply the order by clause.

For the variables to be assigned and compared in the sort order of created_time and item_name, the data rows must be read in index order. The force index (idx_sort) hint serves exactly this purpose and must be present for the whole deduplication statement to be valid; otherwise the table would be scanned before sorting, the order of variable assignment could not be guaranteed, and neither could the correctness of the result. The order by clause is equally indispensable; without it MySQL would use a full table scan rather than a full index scan even with the force index hint, again producing wrong results. The index also guarantees the ordering of (created_time, item_name) and avoids a filesort. The force index (idx_sort) hint and the order by clause are both essential, and the idx_sort index conveniently kills two birds with one stone.

Before the query starts, the variables are initialized to values that cannot occur in the data. The where clause is then evaluated left to right: first the variables are compared with the column values, then the values of created_time and item_name are assigned to the variables, processing row by row in (created_time, item_name) order. item_name is a string type, and (@b:=item_name) by itself is not a valid Boolean expression, so it is written as (@b:=item_name) is not null.

Finally, note that the simpler "insert into t_target select * from t_source group by created_time,item_name;" is not considered here because it is rejected under "sql_mode='ONLY_FULL_GROUP_BY'".

II. Using a window function

The window functions new in MySQL 8 make this troublesome deduplication operation easy.

truncate t_target;
insert into t_target
select item_id, created_time, modified_time, item_name, other
  from (select *, row_number() over (partition by created_time,item_name) as rn
          from t_source) t
 where rn = 1;

This statement takes only 12 seconds to execute, and the writing is clear and easy to understand. The query plan is as follows:

mysql> explain select item_id, created_time, modified_time, item_name, other
    -> from (select *, row_number() over (partition by created_time,item_name) as rn
    -> from t_source) t where rn=1;
+----+-------------+------------+------------+------+---------------+-------------+---------+-------+--------+----------+----------------+
| id | select_type | table      | partitions | type | possible_keys | key         | key_len | ref   | rows   | filtered | Extra          |
+----+-------------+------------+------------+------+---------------+-------------+---------+-------+--------+----------+----------------+
|  1 | PRIMARY     | <derived2> | NULL       | ref  | <auto_key0>   | <auto_key0> | 8       | const |     10 |   100.00 | NULL           |
|  2 | DERIVED     | t_source   | NULL       | ALL  | NULL          | NULL        | NULL    | NULL  | 997281 |   100.00 | Using filesort |
+----+-------------+------------+------------+------+---------------+-------------+---------+-------+--------+----------+----------------+
2 rows in set, 2 warnings (0.00 sec)

The query does a full table scan of t_source and sorts the table on the partitioning fields created_time and item_name with filesort. The outer query then keeps one row from each partition. Because any one of the rows duplicating on created_time and item_name may be kept, no order by clause is needed inside over().

Judging from the execution plan alone, the window-function statement does not seem better than the variable approach that eliminates nested queries, yet in practice this method executes fastest.
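For illustration, the row_number() partition logic can be mimicked with a small Python sketch (hypothetical rows; the real work of course happens inside MySQL):

```python
# Sketch of row_number() over (partition by created_time, item_name):
# number rows within each key partition and keep only rn == 1.
from collections import defaultdict

rows = [  # hypothetical rows
    (1, "t1", "a1"), (3, "t1", "a1"), (2, "t2", "a2"),
]

rn = defaultdict(int)
target = []
for item_id, created_time, item_name in rows:
    key = (created_time, item_name)
    rn[key] += 1          # row_number within the partition
    if rn[key] == 1:      # where rn = 1
        target.append(item_id)
print(target)  # [1, 2]
```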

For a description of MySQL window functions, see https://dev.mysql.com/doc/refman/8.0/en/window-functions.html.

III. Multithreaded parallel execution

The single deduplication statement has now been optimized, but it still executes single-threaded. Can multiple processors be used to run the deduplication in parallel across several threads and improve speed further? My test environment has four processors; if four threads execute the SQL simultaneously, a nearly fourfold speedup should be possible in theory.

1. Data sharding

When the test data was generated, created_time was incremented by one second per record, so the difference between the maximum and minimum time is 500,000 seconds and the data is evenly distributed; the data is therefore divided evenly into four shards.
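Since created_time increases by one second per row, the shard boundaries every 125,000 seconds can be cross-checked with a short Python sketch:

```python
# Compute the four shard boundaries: created_time spans 500,000 consecutive
# seconds starting from 2017-01-01, so equal shards fall every 125,000 seconds.
from datetime import datetime, timedelta

base = datetime(2017, 1, 1)
boundaries = [base + timedelta(seconds=i * 125000) for i in range(1, 5)]
for dt in boundaries:
    print(dt)
# 2017-01-02 10:43:20
# 2017-01-03 21:26:40
# 2017-01-05 08:10:00
# 2017-01-06 18:53:20
```

The printed values match the dt1 through dt4 boundary values queried from the table below.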

(1) Query the created_time boundary values of the four shards.

mysql> select date_add('2017-01-01',interval 125000 second) dt1,
    ->        date_add('2017-01-01',interval 2*125000 second) dt2,
    ->        date_add('2017-01-01',interval 3*125000 second) dt3,
    ->        max(created_time) dt4
    ->   from t_source;
+---------------------+---------------------+---------------------+---------------------+
| dt1                 | dt2                 | dt3                 | dt4                 |
+---------------------+---------------------+---------------------+---------------------+
| 2017-01-02 10:43:20 | 2017-01-03 21:26:40 | 2017-01-05 08:10:00 | 2017-01-06 18:53:20 |
+---------------------+---------------------+---------------------+---------------------+
1 row in set (0.00 sec)

(2) Check the number of records in each shard to confirm the data is evenly distributed.

mysql> select case when created_time >= '2017-01-01'
    ->              and created_time < '2017-01-02 10:43:20' then '2017-01-01'
    ->             when created_time >= '2017-01-02 10:43:20'
    ->              and created_time < '2017-01-03 21:26:40' then '2017-01-02 10:43:20'
    ->             when created_time >= '2017-01-03 21:26:40'
    ->              and created_time < '2017-01-05 08:10:00' then '2017-01-03 21:26:40'
    ->             else '2017-01-05 08:10:00' end min_dt,
    ->        case when created_time >= '2017-01-01'
    ->              and created_time < '2017-01-02 10:43:20' then '2017-01-02 10:43:20'
    ->             when created_time >= '2017-01-02 10:43:20'
    ->              and created_time < '2017-01-03 21:26:40' then '2017-01-03 21:26:40'
    ->             when created_time >= '2017-01-03 21:26:40'
    ->              and created_time < '2017-01-05 08:10:00' then '2017-01-05 08:10:00'
    ->             else '2017-01-06 18:53:20' end max_dt,
    ->        count(*)
    ->   from t_source
    ->  group by case when created_time >= '2017-01-01'
    ->                 and created_time < '2017-01-02 10:43:20' then '2017-01-01'
    ->                when created_time >= '2017-01-02 10:43:20'
    ->                 and created_time < '2017-01-03 21:26:40' then '2017-01-02 10:43:20'
    ->                when created_time >= '2017-01-03 21:26:40'
    ->                 and created_time < '2017-01-05 08:10:00' then '2017-01-03 21:26:40'
    ->                else '2017-01-05 08:10:00' end,
    ->           case when created_time >= '2017-01-01'
    ->                 and created_time < '2017-01-02 10:43:20' then '2017-01-02 10:43:20'
    ->                when created_time >= '2017-01-02 10:43:20'
    ->                 and created_time < '2017-01-03 21:26:40' then '2017-01-03 21:26:40'
    ->                when created_time >= '2017-01-03 21:26:40'
    ->                 and created_time < '2017-01-05 08:10:00' then '2017-01-05 08:10:00'
    ->                else '2017-01-06 18:53:20' end;
+---------------------+---------------------+----------+
| min_dt              | max_dt              | count(*) |
+---------------------+---------------------+----------+
| 2017-01-01          | 2017-01-02 10:43:20 |   249999 |
| 2017-01-02 10:43:20 | 2017-01-03 21:26:40 |   250000 |
| 2017-01-03 21:26:40 | 2017-01-05 08:10:00 |   250000 |
| 2017-01-05 08:10:00 | 2017-01-06 18:53:20 |   250002 |
+---------------------+---------------------+----------+
4 rows in set (4.86 sec)

The union of the four shards must cover the entire source data set, and the shards must not overlap. In other words, the created_time ranges of the four shards must be contiguous and mutually exclusive: contiguity ensures all data is processed, and mutual exclusivity ensures no second round of duplicate checking is needed. This is essentially the idea of range partitioning by time; a partitioned table might actually work better, except that the table would have to be rebuilt.

2. Create a deduplication stored procedure

With the above information, four statements can be written to process all the data. To keep the calling interface as simple as possible, the following stored procedure is created.

delimiter //
create procedure sp_unique(i smallint)
begin
    set @a:='1000-01-01 00:00:00';
    set @b:='';
    if (i < 4) then
        insert into t_target
        select * from t_source force index (idx_sort)
         where created_time >= date_add('2017-01-01',interval (i-1)*125000 second)
           and created_time < date_add('2017-01-01',interval i*125000 second)
           and (@a!=created_time or @b!=item_name)
           and (@a:=created_time) is not null
           and (@b:=item_name) is not null
         order by created_time,item_name;
    else
        insert into t_target
        select * from t_source force index (idx_sort)
         where created_time >= date_add('2017-01-01',interval (i-1)*125000 second)
           and created_time <= date_add('2017-01-01',interval i*125000 second)
           and (@a!=created_time or @b!=item_name)
           and (@a:=created_time) is not null
           and (@b:=item_name) is not null
         order by created_time,item_name;
    end if;
end
//
delimiter ;

Taking the first shard's deduplication statement as an example, the execution plan is as follows:

mysql> explain select * from t_source force index (idx_sort)
    -> where created_time >= date_add('2017-01-01',interval (1-1)*125000 second)
    -> and created_time < date_add('2017-01-01',interval 1*125000 second)
    -> and (@a!=created_time or @b!=item_name)
    -> and (@a:=created_time) is not null
    -> and (@b:=item_name) is not null
    -> order by created_time,item_name;
+----+-------------+----------+------------+-------+---------------+----------+---------+------+--------+----------+-----------------------+
| id | select_type | table    | partitions | type  | possible_keys | key      | key_len | ref  | rows   | filtered | Extra                 |
+----+-------------+----------+------------+-------+---------------+----------+---------+------+--------+----------+-----------------------+
|  1 | SIMPLE      | t_source | NULL       | range | idx_sort      | idx_sort | 6       | NULL | 498640 |   100.00 | Using index condition |
+----+-------------+----------+------------+-------+---------------+----------+---------+------+--------+----------+-----------------------+
1 row in set, 3 warnings (0.00 sec)

The MySQL optimizer does an index range scan and uses index condition pushdown (ICP) to optimize the query.

3. Parallel execution

The following implements parallelism using shell background processes and MySQL scheduled events, respectively.

(1) shell background process

Create a duplicate_removal.sh file with the following contents:

#!/bin/bash
mysql -vvv -u root -p123456 test -e "truncate t_target" &>/dev/null
date '+%H:%M:%S'
for y in {1..4}
do
    sql="call sp_unique($y)"
    mysql -vvv -u root -p123456 test -e "$sql" &>par_sql1_$y.log &
done
wait
date '+%H:%M:%S'

Execute the script file:

./duplicate_removal.sh

The execution output is as follows:

[mysql@hdp2~]$ ./duplicate_removal.sh
14:27:30
14:27:35

This method takes 5 seconds in total; the four procedure calls executed in parallel take 4.87, 4.88, 4.91, and 4.73 seconds respectively.

[mysql@hdp2~]$ cat par_sql1_1.log | sed '/^$/d'
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
call sp_unique(1)
--------------
Query OK, 124999 rows affected (4.87 sec)
Bye
[mysql@hdp2~]$ cat par_sql1_2.log | sed '/^$/d'
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
call sp_unique(2)
--------------
Query OK, 125000 rows affected (4.88 sec)
Bye
[mysql@hdp2~]$ cat par_sql1_3.log | sed '/^$/d'
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
call sp_unique(3)
--------------
Query OK, 125000 rows affected (4.91 sec)
Bye
[mysql@hdp2~]$ cat par_sql1_4.log | sed '/^$/d'
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
call sp_unique(4)
--------------
Query OK, 125001 rows affected (4.73 sec)
Bye
[mysql@hdp2~]$

As you can see, the average execution time of the four procedure calls is about 4.85 seconds. Because they run in parallel, the total elapsed time is that of the slowest call, 4.91 seconds, roughly 2.5 times faster than the 12 seconds of single-threaded execution.
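As a language-neutral illustration of the same divide-and-parallelize idea, here is a minimal Python multiprocessing sketch (hypothetical in-memory shards standing in for the four mysql client processes):

```python
# Sketch of parallel shard deduplication: each worker deduplicates one
# contiguous, mutually exclusive shard, so no second pass over the merged
# result is needed. Shard contents are hypothetical.
from multiprocessing import Pool

def dedup_shard(rows):
    # single-pass dedup within one shard; rows are sorted by key
    out, prev = [], None
    for key, item_id in rows:
        if key != prev:
            out.append(item_id)
        prev = key
    return out

if __name__ == "__main__":
    shards = [
        [("t1", 1), ("t1", 3)],
        [("t2", 2)],
        [("t3", 5), ("t3", 6)],
        [("t4", 7)],
    ]
    with Pool(4) as p:               # one worker per shard, like the 4 clients
        results = p.map(dedup_shard, shards)
    merged = [i for part in results for i in part]
    print(merged)  # [1, 2, 5, 7]
```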

(2) MySQL scheduled events

Create an event history log table:

-- used to record event execution time and other information
create table t_event_history (
    dbname varchar(128) not null default '',
    eventname varchar(128) not null default '',
    starttime datetime(3) not null default '1000-01-01 00:00:00',
    endtime datetime(3) default null,
    issuccess int(11) default null,
    duration int(11) default null,
    errormessage varchar(512) default null,
    randno int(11) default null
);

Create an event for each concurrent thread:

delimiter //
create event ev1 on schedule at current_timestamp + interval 1 hour on completion preserve disable
do begin
    declare r_code char(5) default '00000';
    declare r_msg text;
    declare v_error integer;
    declare v_starttime datetime default now(3);
    declare v_randno integer default floor(rand()*100001);

    insert into t_event_history (dbname,eventname,starttime,randno)
    # job name
    values (database(),'ev1',v_starttime,v_randno);

    begin
        # exception-handling block
        declare continue handler for sqlexception
        begin
            set v_error = 1;
            get diagnostics condition 1 r_code = returned_sqlstate, r_msg = message_text;
        end;
        # the actual user procedure called here
        call sp_unique(1);
    end;

    update t_event_history
       set endtime=now(3),
           issuccess=isnull(v_error),
           duration=timestampdiff(microsecond,starttime,now(3)),
           errormessage=concat('error=',r_code,', message=',r_msg),
           randno=null
     where starttime=v_starttime and randno=v_randno;
end
//
delimiter ;

Events ev2, ev3, and ev4 are defined in exactly the same way, differing only in the event name and in calling sp_unique(2), sp_unique(3), and sp_unique(4) respectively.

To record the execution time of each event, logic for writing the log table is added to the event definitions. Each event executes only one extra insert and one update, so the four events execute a total of eight very simple extra statements, and the impact on the test is negligible. The execution time is recorded to the millisecond.

Trigger event execution:

mysql -vvv -u root -p123456 test -e "truncate t_target;alter event ev1 on schedule at current_timestamp enable;alter event ev2 on schedule at current_timestamp enable;alter event ev3 on schedule at current_timestamp enable;alter event ev4 on schedule at current_timestamp enable;"

This command line triggers the four events in sequence, but it does not wait for one to finish before starting the next; each statement returns immediately. This can be seen from the command output:

[mysql@hdp2~]$ mysql -vvv -u root -p123456 test -e "truncate t_target;alter event ev1 on schedule at current_timestamp enable;alter event ev2 on schedule at current_timestamp enable;alter event ev3 on schedule at current_timestamp enable;alter event ev4 on schedule at current_timestamp enable;"
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
truncate t_target
--------------
Query OK, 0 rows affected (0.06 sec)
--------------
alter event ev1 on schedule at current_timestamp enable
--------------
Query OK, 0 rows affected (0.02 sec)
--------------
alter event ev2 on schedule at current_timestamp enable
--------------
Query OK, 0 rows affected (0.00 sec)
--------------
alter event ev3 on schedule at current_timestamp enable
--------------
Query OK, 0 rows affected (0.02 sec)
--------------
alter event ev4 on schedule at current_timestamp enable
--------------
Query OK, 0 rows affected (0.00 sec)
Bye
[mysql@hdp2~]$

View the event execution log:

mysql> select * from test.t_event_history;
+--------+-----------+-------------------------+-------------------------+-----------+----------+--------------+--------+
| dbname | eventname | starttime               | endtime                 | issuccess | duration | errormessage | randno |
+--------+-----------+-------------------------+-------------------------+-----------+----------+--------------+--------+
| test   | ev1       | 2019-07-31 14:38:04.000 | 2019-07-31 14:38:09.389 |         1 |  5389000 | NULL         |   NULL |
| test   | ev2       | 2019-07-31 14:38:04.000 | 2019-07-31 14:38:09.344 |         1 |  5344000 | NULL         |   NULL |
| test   | ev3       | 2019-07-31 14:38:05.000 | 2019-07-31 14:38:09.230 |         1 |  4230000 | NULL         |   NULL |
| test   | ev4       | 2019-07-31 14:38:05.000 | 2019-07-31 14:38:09.344 |         1 |  4344000 | NULL         |   NULL |
+--------+-----------+-------------------------+-------------------------+-----------+----------+--------------+--------+
4 rows in set (0.00 sec)

As you can see, each procedure call runs for about 4.83 seconds on average, and because they run in parallel, the total elapsed time is determined by the slowest path, about 5.3 seconds. The optimization effect is almost the same as with shell background processes.
