
Using sysbench to stress test MySQL (2)


Yesterday's first test was only the beginning.

The main idea of my next round of sysbench stress testing is to adjust based on the existing configuration and keep alternating between optimization and testing, rather than simply comparing connection counts at different orders of magnitude. So you will see some troubleshooting, some problem solving, and a few things that may not be directly related to the stress test itself.

The stress test can't get up to 300 connections

I set max_connections to 3000, but during the stress test 300 threads would not run. This is a very typical problem.

The error thrown by sysbench is as follows:

FATAL: mysql_stmt_prepare() failed

FATAL: MySQL error: 1461 "Can't create more than max_prepared_stmt_count statements (current value: 16382)"

FATAL: `thread_init' function failed: /usr/local/share/sysbench/oltp_common.lua:282: SQL API error

The error log message for MySQL is as follows:

2017-03-14T15:01:57.839154Z 348 [Note] Aborted connection 348 to db: 'sysbenchtest' user: 'root' host: 'localhost' (Got an error reading communication packets)

2017-03-14T15:01:57.839185Z 346 [Note] Aborted connection 346 to db: 'sysbenchtest' user: 'root' host: 'localhost' (Got an error reading communication packets)

The two messages don't seem to have much to do with each other, so this kind of information can be misleading.

According to the error message, the current parameter max_prepared_stmt_count setting is 16382, which is the default value after installation.

mysql> show variables like 'max_prepared_stmt_count';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| max_prepared_stmt_count | 16382 |
+-------------------------+-------+

The max_allowed_packet parameter is set to 4M, which is also the default value.

mysql> show variables like '%pack%';
+--------------------------+------------+
| Variable_name            | Value      |
+--------------------------+------------+
| max_allowed_packet       | 4194304    |
| slave_max_allowed_packet | 1073741824 |
+--------------------------+------------+

To see whether the problem is caused by max_allowed_packet, we can simply test it by raising the value:

set global max_allowed_packet=33554432;

Then continue to run the sysbench script:

sysbench /home/sysbench/sysbench-1.0.3/src/lua/oltp_read_write.lua --mysql-user=root --mysql-port=3306 --mysql-socket=/home/mysql/s1/s1.sock --mysql-host=localhost --mysql-db=sysbenchtest --tables=10 --table-size=5000000 --threads=200 --report-interval=5 --time=10 run

The same error was thrown, which indirectly proves that the problem has nothing to do with this parameter, so I restored the original setting.

Next I adjusted max_prepared_stmt_count from 16382 to 30000:

set global max_prepared_stmt_count=30000;

Then I ran 200 threads again, and there was no problem. During the run, show global status can be used to watch the value change. The point of this parameter is to cap the number of prepared statements that can exist at once, which matters when a large number of them are created in a burst; exactly how the count grows is worth digging into.

mysql> show global status like 'Prepared_stmt_count';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| Prepared_stmt_count | 18200 |
+---------------------+-------+
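If you want to watch the count climb while sysbench ramps up, a simple polling loop is enough; this is just a sketch, with the socket path taken from the sysbench command above and an arbitrary 5-second interval:

# poll the global prepared-statement count every 5 seconds during the run
while true; do mysql -S /home/mysql/s1/s1.sock -e "show global status like 'Prepared_stmt_count'"; sleep 5; done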

After the 200-thread run finished, I continued with 300, and so on, up to 1000 threads.

While running 300 threads, I grabbed the value again and found it was already close to the limit.

mysql> show global status like '%stmt%';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| Prepared_stmt_count | 27300 |
+---------------------+-------+
10 rows in set (0.00 sec)

So I made a simple calculation.

At 200 threads the value was 18200.

At 300 threads the value was 27300.

A simple calculation shows that every 100 threads correspond to roughly 9100 prepared statements. With the parameter at 30000, a 500-thread run is certainly not going to fit, and I quickly verified that idea when it threw the same error. So I raised the parameter to 100000, and the problem did not come back.
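The per-thread figure can be checked directly from the two samples above; with this 10-table oltp_read_write workload each thread appears to hold roughly 91 prepared statements, so the required setting scales linearly with the thread count. A quick sanity check in the client:

mysql> select 18200/200 as per_thread_200, 27300/300 as per_thread_300, 1000*91 as needed_for_1000_threads;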

These are the TPS results for 50, 300, and 500 threads respectively; QPS is basically 20 times TPS.

The stress test can't get up to 1000 connections

Then, when I went on to test 1000 threads, I found the test would not run.

When it reached 1000, the following error was thrown.

FATAL: unable to connect to MySQL server on socket '/home/mysql/s1/s1.sock', aborting...

FATAL: error 1135: Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug

At this point, the resource limits looked like this:

# ulimit -a

...

open files (-n) 1024

pipe size (512 bytes, -p) 8

max user processes (-u) 1024

virtual memory (kbytes, -v) unlimited

file locks (-x) unlimited
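Before touching any configuration files, one quick way to confirm that nproc is the bottleneck is to raise the limits only for the shell that starts mysqld, since child processes inherit the shell's limits; a sketch for a one-off test, run as root:

# raise the per-user process and open-file limits for this shell and its children
ulimit -u 65535
ulimit -n 65535
# then restart mysqld from this same shell so it inherits the new limits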

It is worth mentioning that on RedHat 6, setting the user to * in /etc/security/limits.conf does not take effect at all for this, so you need to name a specific user.

For example, the following configuration will not take effect and needs to be specified for root:

* soft nproc 65535

* hard nproc 65535
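A version that does take effect names the user explicitly; for example, assuming MySQL is being run as root here, as in the commands above:

root soft nproc 65535
root hard nproc 65535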

Two more points can be extended from this. First, after modifying the resource limits, the running MySQL service needs to be restarted, because the running process still holds the old values. To check what a live process is actually using, you can use pidof and look under /proc:

# cat /proc/`pidof mysqld`/limits | egrep "(processes|files)"

Max processes 1024 256589 processes

Max open files 15000 15000 files

The second point is that on RedHat 6 there is actually another file to set, /etc/security/limits.d/90-nproc.conf.

Make the following configuration in this file, which is effective for all users.

* soft nproc 65535

Restart the MySQL service after the modification and the test runs without problems, which shows the error here really was tied to the nproc limit. The open files limit of 1024, on the other hand, has not caused problems so far.

This is a preliminary result from the test at the time. You can see the TPS at each thread count tested, and at 1000 threads there seems to be no improvement.

Baseline optimization at 1000 threads

With 1000 threads reached, the benchmark has a phased result: at least 1000 connections are supported without problems. But the numbers themselves are still not satisfactory.

What is the problem? First of all, the machine configuration is poor, which is an indisputable fact, so the results here look quite different from the high-end numbers posted online.

Another point is tuning beyond the default parameters. We use the read-write Lua template, so a large amount of binlog is generated during the stress test. For InnoDB there are several I/O-related areas to clarify: the efficiency of flushing dirty pages, the size of the redo log, and some existing optimization improvements. Let's go through them briefly.

MySQL has a log-writer mechanism similar to LGWR in Oracle. The log buffer, innodb_log_buffer_size, defaults to 16M in 5.7 and 8M in earlier versions. Because it is flushed very frequently, this parameter generally does not need to be adjusted.

mysql> show variables like 'innodb_log_buffer_size';
+------------------------+----------+
| Variable_name          | Value    |
+------------------------+----------+
| innodb_log_buffer_size | 16777216 |
+------------------------+----------+
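If you do want to verify that the log buffer is not a bottleneck, the Innodb_log_waits status counter records how many times a transaction had to wait for space in it; a value that keeps growing under load would argue for a larger buffer. A quick check:

mysql> show global status like 'Innodb_log_waits';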

As for the redo size, it is currently about 50M, and the related parameter innodb_log_files_in_group defaults to 2, i.e. two files, which can loosely be understood as two redo log groups in Oracle.

mysql> show variables like 'innodb_log%';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| innodb_log_buffer_size      | 16777216 |
| innodb_log_checksums        | ON       |
| innodb_log_compressed_pages | ON       |
| innodb_log_file_size        | 50331648 |
| innodb_log_files_in_group   | 2        |
| innodb_log_group_home_dir   | ./       |
| innodb_log_write_ahead_size | 8192     |
+-----------------------------+----------+

In this stress test, the heavy binlog writes inevitably drive redo flushing at a very high frequency.
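How hard each commit hits the disk is controlled by innodb_flush_log_at_trx_commit and sync_binlog, which both default to the fully durable setting of 1 in 5.7. It is worth at least confirming the current values before blaming the redo size, keeping in mind that relaxing them trades durability for throughput. A sketch:

mysql> show variables where Variable_name in ('innodb_flush_log_at_trx_commit','sync_binlog');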

We capture the InnoDB status during the test:

mysql -e "show engine innodb status\G" | grep -A12 "Log sequence"

Log sequence number 39640783274

Log flushed up to 39640782426

Pages flushed up to 39564300915

Last checkpoint at 39562272220

0 pending log flushes, 0 pending chkp writes

93807 log i/o's done, 198.50 log i/o's/second

--

BUFFER POOL AND MEMORY

--

Total large memory allocated 26386366464

Dictionary memory allocated 555380

Buffer pool size 1572768

Free buffers 1048599

Doing a simple calculation, the value of Log sequence number minus the value of Last checkpoint at is about 75M.

mysql> select 39640783274-39562272220;
+-------------------------+
| 39640783274-39562272220 |
+-------------------------+
|                78511054 |
+-------------------------+

In fact, there is no absolute rule for how large the redo log should be. Percona's suggestion for stress testing is 1G or 2G, and at most 4G, because the size directly affects how long crash recovery takes.
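One common way to pick a concrete number, described on the Percona blog, is to measure how far the log sequence number advances in a minute under the real workload and size the redo files to hold roughly an hour of writes. A sketch of that measurement in the mysql client:

mysql> pager grep sequence
mysql> show engine innodb status\G select sleep(60); show engine innodb status\G
mysql> nopager

The difference between the two Log sequence number values is the redo generated per minute; multiply by 60 and divide by innodb_log_files_in_group to get a per-file size.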

Resizing the redo log

Resizing the redo log still needs special care. MySQL may well improve this later, and the way Oracle handles redo changes is worth learning from; the undo improvements in 5.7, where undo can be truncated and is gradually being stripped out of the system tablespace, are small steps in that direction.

To change the redo size: stop the instance cleanly, delete the two default redo files (to be safe, back them up first), change the redo parameters in the configuration file, and then start MySQL. On startup it will not recognize the old redo configuration and will automatically create two new files.
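A minimal sketch of that procedure, assuming the instance from the commands above (socket /home/mysql/s1/s1.sock, datadir /home/mysql/s1) and a target size of 1G; the exact paths and startup command are illustrative:

# stop the instance cleanly so everything is checkpointed into the data files
mysqladmin -uroot -S /home/mysql/s1/s1.sock shutdown
# keep the old redo files until the new ones are confirmed to work
mv /home/mysql/s1/ib_logfile0 /home/mysql/s1/ib_logfile0.bak
mv /home/mysql/s1/ib_logfile1 /home/mysql/s1/ib_logfile1.bak
# in my.cnf: set innodb_log_file_size=1G (innodb_log_files_in_group stays at 2)
# start MySQL; it will create two new 1G redo files automatically
mysqld_safe --defaults-file=/home/mysql/s1/my.cnf &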

The following result is representative: it shows TPS after temporarily raising the redo to 100M. The blue section in the lower left corner is the original 10-minute TPS curve, and the red section is TPS over more than two hours; it is basically stable and clearly higher than the original TPS at the same thread count.

As for how to optimize further, there is more to share in a follow-up.

References

http://www.ceus-now.com/mysql-innodb-page-cleaner-settings-might-not-be-optimal/

https://www.percona.com/blog/2013/02/04/cant_create_thread_errno_11/

https://www.percona.com/blog/2006/07/03/choosing-proper-innodb_log_file_size/

https://www.percona.com/blog/2011/07/09/how-to-change-innodb_log_file_size-safely/
