Parallel replication has existed for many years, yet it is still uncommon in real-world deployments. This time, as luck would have it, we had a customer whose write workload was so heavy that the slave could barely keep up, so I recommended enabling parallel slave threads.
So how do we measure whether parallel replication actually makes a difference in this customer's scenario, and how much benefit it brings to their business? Let's take a look.
In this customer's environment, slave_parallel_workers was 0. Obviously it should be increased, but by how much? Whether 1 or 10 is the right value is something we will cover in another article; for the scenario in this article, we raised slave_parallel_workers to 40.
At the same time, we made the following changes on the slave:
slave_parallel_type = LOGICAL_CLOCK
slave_parallel_workers = 40
slave_preserve_commit_order = ON
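If you prefer not to restart the slave, the same settings can also be applied at runtime. Below is a minimal sketch, assuming MySQL 5.7/8.0, where these variables are dynamic but require the replication SQL thread to be stopped first (on 5.7, slave_preserve_commit_order additionally requires the binary log and log_slave_updates to be enabled):

-- Stop only the applier; the IO thread keeps receiving binlog events.
STOP SLAVE SQL_THREAD;
-- Use logical clock (group commit) based parallelism with 40 applier workers.
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
SET GLOBAL slave_parallel_workers = 40;
-- Keep the commit order on the slave identical to the master.
SET GLOBAL slave_preserve_commit_order = ON;
START SLAVE SQL_THREAD;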
Forty threads may sound like a lot, but whether they help depends on the specific workload: if the transactions are independent of each other, the extra threads can actually be put to work.
Next, let's look at which threads are working:
mysql> SELECT performance_schema.events_transactions_summary_by_thread_by_event_name.THREAD_ID AS THREAD_ID,
    ->        performance_schema.events_transactions_summary_by_thread_by_event_name.COUNT_STAR AS COUNT_STAR
    -> FROM performance_schema.events_transactions_summary_by_thread_by_event_name
    -> WHERE performance_schema.events_transactions_summary_by_thread_by_event_name.THREAD_ID IN
    ->       (SELECT performance_schema.replication_applier_status_by_worker.THREAD_ID
    ->        FROM performance_schema.replication_applier_status_by_worker);
+-----------+------------+
| THREAD_ID | COUNT_STAR |
+-----------+------------+
|     25882 |     442481 |
|     25883 |     433200 |
|     25884 |     426460 |
|     25885 |     419772 |
|     25886 |     413751 |
|     25887 |     407511 |
|     25888 |     401592 |
|     25889 |     395169 |
|     25891 |     380657 |
|     25892 |     371923 |
|     25893 |     362482 |
|     25894 |     351601 |
|     25895 |     339282 |
|     25896 |     325148 |
|     25897 |     310051 |
|     25898 |     292187 |
|     25899 |     272990 |
|     25900 |     252843 |
|     25901 |     232424 |
+-----------+------------+
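For readability, the same check can be written more compactly with table aliases. This is just an equivalent rewrite of the query above (ordered by the busiest worker first), assuming the same Performance Schema tables available in MySQL 5.7 and later:

-- Transactions applied per replication worker thread, busiest first.
SELECT t.THREAD_ID, t.COUNT_STAR
FROM performance_schema.events_transactions_summary_by_thread_by_event_name t
JOIN performance_schema.replication_applier_status_by_worker w
  ON w.THREAD_ID = t.THREAD_ID
ORDER BY t.COUNT_STAR DESC;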
From the output above we can see which threads are doing work, but do these threads really speed up replication? Can the slave apply more writes in the same amount of time?
Let's take a look at replication lag:
We can see that the large lag comes down very quickly, but is that because the number of threads increased, or because the job that generated all those inserts finished and there are simply no more writes to apply? (The replication delay does not drop to 0 because the slave is intentionally delayed by one hour.)
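As an illustration only (the article does not show the exact commands), such an intentional one-hour delay is normally configured with MASTER_DELAY, and the lag itself is the Seconds_Behind_Master value from the replication status:

-- Check how far behind the slave is; look at Seconds_Behind_Master.
SHOW SLAVE STATUS\G
-- A deliberately delayed slave, as described above, is typically set up like this:
STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY = 3600;
START SLAVE SQL_THREAD;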
Fortunately, we have other charts to look at in PMM, such as the one showing InnoDB row operations:
The slave is inserting more rows than before, but how many rows are actually being inserted? Let's create a new chart showing how many rows are inserted per hour. PMM already has all of this information; we just need to create a new chart based on the following query:
increase(mysql_global_status_innodb_row_ops_total{instance="$host", operation="read"}[1h])
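Since the goal here is specifically the number of rows inserted per hour, note that mysqld_exporter exposes this counter once per operation type (read, inserted, updated, deleted). Presumably the chart for inserts would filter on the "inserted" label value; this is an assumption based on the metric's standard labels, not something stated explicitly in the text above:

# rows inserted per hour on the selected host ("inserted" label value assumed)
increase(mysql_global_status_innodb_row_ops_total{instance="$host", operation="inserted"}[1h])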
The resulting chart shows a significant increase in the number of rows inserted per hour, from roughly 50 million to 200-400 million. We can conclude that increasing slave_parallel_workers really does help in this case.