This article walks through the whole process of a stress test of two MySQL parameters, sync_binlog and innodb_flush_log_at_trx_commit. Hopefully the content is of practical use; without further ado, let's get into it.
1. Test background
First of all, let's take a look at the official explanations of these two parameters:
sync_binlog
Controls how often the MySQL server synchronizes the binary log to disk.

sync_binlog=0: Disables synchronization of the binary log to disk by the MySQL server. Instead, the MySQL server relies on the operating system to flush the binary log to disk from time to time as it does for any other file. This setting provides the best performance, but in the event of a power failure or operating system crash, it is possible that the server has committed transactions that have not been synchronized to the binary log.

sync_binlog=1: Enables synchronization of the binary log to disk before transactions are committed. This is the safest setting but can have a negative impact on performance due to the increased number of disk writes. In the event of a power failure or operating system crash, transactions that are missing from the binary log are only in a prepared state. This permits the automatic recovery routine to roll back the transactions, which guarantees that no transaction is lost from the binary log.

sync_binlog=N, where N is a value other than 0 or 1: The binary log is synchronized to disk after N binary log commit groups have been collected. In the event of a power failure or operating system crash, it is possible that the server has committed transactions that have not been flushed to the binary log. This setting can have a negative impact on performance due to the increased number of disk writes. A higher value improves performance, but with an increased risk of data loss.

For the greatest possible durability and consistency in a replication setup that uses InnoDB with transactions, use sync_binlog=1 together with innodb_flush_log_at_trx_commit=1.

Caution: Many operating systems and some disk hardware fool the flush-to-disk operation. They may tell mysqld that the flush has taken place, even though it has not. In this case, the durability of transactions is not guaranteed even with the recommended settings, and in the worst case, a power outage can corrupt InnoDB data. Using a battery-backed disk cache in the SCSI disk controller or in the disk itself speeds up file flushes and makes the operation safer. You can also try to disable the caching of disk writes in hardware caches.
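For reference, sync_binlog is a dynamic global variable in MySQL 5.7, so it can be inspected and switched between test rounds without restarting the server. A minimal sketch run from a shell (the credentials below are placeholders, not the ones used in this test):

#!/bin/bash
# Inspect the current binary log sync setting (placeholder credentials).
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'sync_binlog';"
# Switch to the "double 1" durability setting for one test round.
mysql -uroot -p -e "SET GLOBAL sync_binlog = 1;"
# Relax it for the "double 0" round, accepting possible binlog loss on a crash.
mysql -uroot -p -e "SET GLOBAL sync_binlog = 0;"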
innodb_flush_log_at_trx_commit
Controls the balance between strict ACID compliance for commit operations and the higher performance that is possible when commit-related I/O operations are rearranged and done in batches. You can achieve better performance by changing the default value, but then you can lose up to a second of transactions in a crash.

The default value of 1 is required for full ACID compliance. With this value, the contents of the InnoDB log buffer are written out to the log file at each transaction commit and the log file is flushed to disk.

With a value of 0, the contents of the InnoDB log buffer are written to the log file approximately once per second and the log file is flushed to disk. No writes from the log buffer to the log file are performed at transaction commit. Once-per-second flushing is not guaranteed to happen every second due to process scheduling issues. Because the flush-to-disk operation only occurs approximately once per second, you can lose up to a second of transactions with any mysqld process crash.

With a value of 2, the contents of the InnoDB log buffer are written to the log file after each transaction commit and the log file is flushed to disk approximately once per second. Once-per-second flushing is not 100% guaranteed to happen every second, due to process scheduling issues. Because the flush-to-disk operation only occurs approximately once per second, you can lose up to a second of transactions in an operating system crash or a power outage.

InnoDB log flushing frequency is controlled by innodb_flush_log_at_timeout, which allows you to set the log flushing frequency to N seconds (where N is 1 ... 2700, with a default value of 1). However, any mysqld process crash can erase up to N seconds of transactions.

DDL changes and other internal InnoDB activities flush the InnoDB log independent of the innodb_flush_log_at_trx_commit setting. InnoDB crash recovery works regardless of the innodb_flush_log_at_trx_commit setting: transactions are either applied entirely or erased entirely.

For durability and consistency in a replication setup that uses InnoDB with transactions: if binary logging is enabled, set sync_binlog=1, and always set innodb_flush_log_at_trx_commit=1. The same caution about operating systems and disk hardware fooling the flush-to-disk operation applies here as well.
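Like sync_binlog, innodb_flush_log_at_trx_commit can be changed at runtime, which makes it easy to flip between the two test cases. A minimal sketch with placeholder credentials:

#!/bin/bash
# Check how InnoDB currently flushes its redo log at commit time.
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"
# Full ACID durability: flush the redo log to disk on every commit.
mysql -uroot -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1;"
# Best performance: up to about one second of transactions can be lost on a crash.
mysql -uroot -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 0;"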
2. Test configuration
Here, tpcc-mysql and MySQL run on two separate machines, so that the resources consumed by the TPCC client do not affect the MySQL server while the benchmark is running.
TPCC client machine:
IP: 172.16.101.54
CPU: 2 cores
Memory: 8G
Software: tpcc-mysql-master
MySQL server machine:
IP: 172.16.101.55
CPU: 2 cores
Memory: 6G
Software: MySQL 5.7.21
Other MySQL parameter settings:
key_buffer_size = 8M
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 4M
query_cache_size = 0M
query_cache_type = off
max_allowed_packet = 64M
myisam_sort_buffer_size = 128M
tmp_table_size = 32M
table_open_cache = 2000
thread_cache_size = 8
wait_timeout = 300
max_connections = 8000
max_user_connections = 0
innodb_thread_concurrency = 4
transaction_isolation = READ-COMMITTED
gtid-mode = on
enforce-gtid-consistency = true
log_slave_updates = on
innodb_buffer_pool_size = 5120M
innodb_log_buffer_size = 16M
innodb_lock_wait_timeout = 100
3. Testing process
The test covers two cases. The TPS and QPS of the MySQL server are measured with tpcc-mysql; each case is run three times, the results are averaged, and the averages are compiled into a table and a line chart. A sketch of the tpcc-mysql commands used to load and drive the benchmark follows the two cases below.
innodb_flush_log_at_trx_commit = 1 && sync_binlog = 1
or
innodb_flush_log_at_trx_commit = 0 && sync_binlog = 0
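The benchmark was driven from the client machine (172.16.101.54) against the MySQL server (172.16.101.55). The database name, warehouse count, and run times below are illustrative assumptions rather than the values used in the original test, and the flag-style tpcc_load/tpcc_start invocation assumes a recent tpcc-mysql build:

#!/bin/bash
# Create the test schema and load the TPC-C table definitions (run inside the tpcc-mysql directory).
mysql -h 172.16.101.55 -uroot -p -e "CREATE DATABASE IF NOT EXISTS tpcc;"
mysql -h 172.16.101.55 -uroot -p tpcc < create_table.sql
mysql -h 172.16.101.55 -uroot -p tpcc < add_fkey_idx.sql

# Populate the data set; 10 warehouses here is an assumed size.
./tpcc_load -h 172.16.101.55 -d tpcc -u root -p "password" -w 10

# Run the benchmark: -c connections (varied per run), -r warm-up seconds, -l run seconds.
./tpcc_start -h 172.16.101.55 -d tpcc -u root -p "password" -w 10 -c 64 -r 60 -l 600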
4. Test results
To be clear up front: only a low-spec CVM was available for this test, so the results are not especially reliable. Treat this as a reference for the test ideas and process rather than putting too much weight on the numbers themselves.
(1) TpmC
That is, the number of transactions processed per minute.
Number of threads vs. TpmC (i = innodb_flush_log_at_trx_commit, s = sync_binlog):

Threads | i=1 && s=1 | i=0 && s=0
4       | 864.433    | 2761.383
8       | 1411.367   | 3160.05
16      | 2020.667   | 2835.817
32      | 2465.867   | 2510.683
64      | 2666.533   | 2366.017
128     | 2666.633   | 2303.25
256     | 2614.333   | 2367.517
512     | 2431.267   | 2251.817
1024    | 2011.233   | 1834.583
1536    | 1507.733   | 1439.684
2048    | 1266.1     | 1166.3
From the chart, we can see:
At concurrency up to about 32 threads, the TpmC of MySQL with double 0 is much higher than with double 1.
Once concurrency exceeds 32, the TpmC of the two settings is about the same.
(2) Slow queries, open tables, queries-per-second average, and other indicators under double 1
From the chart, we can see:
As concurrency increases, the number of slow queries also increases.
When concurrency reaches about 512, Open_tables also reaches the configured value of table_open_cache = 2000.
As concurrency keeps growing, QPS gradually falls; the drop is not obvious at first and QPS stays roughly level for a while, until concurrency becomes so high that QPS performance clearly degrades. A sketch of how these status counters can be sampled during a run follows below.
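A minimal sketch for sampling those indicators from the server's status counters while the benchmark runs (Slow_queries, Open_tables, and Questions are standard MySQL status variables; the connection settings are placeholders):

#!/bin/bash
# Placeholder connection settings; adjust for the real environment.
MYSQL="mysql -h 172.16.101.55 -uroot -pPASSWORD -N -e"

# Counters discussed above: slow queries and currently open tables.
$MYSQL "SHOW GLOBAL STATUS LIKE 'Slow_queries';"
$MYSQL "SHOW GLOBAL STATUS LIKE 'Open_tables';"

# Rough queries-per-second average over a 10-second window, from the Questions counter.
q1=$($MYSQL "SHOW GLOBAL STATUS LIKE 'Questions';" | awk '{print $2}')
sleep 10
q2=$($MYSQL "SHOW GLOBAL STATUS LIKE 'Questions';" | awk '{print $2}')
echo "QPS approx $(( (q2 - q1) / 10 ))"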
That concludes the whole process of this MySQL parameter stress test; hopefully it is of some help. If you want to learn more, please keep following our updates.