2025-01-20 Update From: SLTechnology News&Howtos
Shulou(Shulou.com) 05/31 Report--
This article walks through a real-world record of SQL optimization in MySQL. Many people have questions about how SQL optimization plays out in practice, so the editor has collected the relevant material into a simple, practical walkthrough. Hopefully it answers those questions; please read on.
Background
This SQL optimization was done for the table queries of a Java web application.
Partial network architecture diagram
Business brief description
N machines send business data to the server, where a program stores the data in the MySQL database; the Java web application on the server then displays the data on a web page for users to view.
Original database design
Master-slave separation on a single Windows machine
Sharding by database (per year) and by table (per day)
About 200,000 rows of data per table
Original query efficiency
A query over 3 days of data took 70-80s
Target
3-5s
Business defect
SQL-level paging cannot be used; paging can only be done in Java.
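Since paging has to happen in Java after the per-table results are merged, it reduces to taking a subList of the merged rows. A minimal sketch of that idea; the class and method names here are illustrative, not from the original project:

```java
import java.util.List;

public class JavaPager {
    // returns the given 1-based page of an already-merged result list
    public static <T> List<T> page(List<T> rows, int pageNo, int pageSize) {
        int from = Math.min((pageNo - 1) * pageSize, rows.size());
        int to = Math.min(from + pageSize, rows.size());
        return rows.subList(from, to); // a view over the merged list, no copy
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 3, 4, 5);
        System.out.println(page(rows, 2, 2)); // prints [3, 4]
    }
}
```

Note that, unlike SQL `LIMIT`, every page request still pays the full cost of producing and merging all rows first, which is exactly the business defect described above.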
Problem troubleshooting
Is it slow in the frontend or in the backend?
If Druid is configured, the SQL execution time and URI request time can be viewed directly on the Druid monitoring page.
Otherwise, use System.currentTimeMillis in the backend code to compute time differences.
Conclusion: the backend is slow, and specifically the query SQL is slow.
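The System.currentTimeMillis measurement mentioned above can be wrapped in a small helper. This is only a sketch of the technique; the sleep stands in for the actual query call:

```java
public class QueryTimer {
    // runs the given task and returns its elapsed wall-clock time in milliseconds
    public static long timeMillis(Runnable task) {
        long start = System.currentTimeMillis();
        task.run();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(() -> {
            try {
                Thread.sleep(100); // stand-in for the slow query call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println("query took " + elapsed + " ms");
    }
}
```

Timing each layer this way (controller, service, DAO) is how you localize whether the frontend, the backend code, or the SQL itself is slow.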
What's wrong with sql?
The spliced SQL is far too long, up to 3,000 lines and in some cases 8,000, consisting mostly of UNION ALL operations, with unnecessary nested queries and unnecessary fields in the select list.
Viewing the execution plan with EXPLAIN shows that, apart from the time column, only one field in the WHERE condition uses an index.
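For reference, per-statement index usage can be checked with EXPLAIN. A sketch with illustrative names (`create_time` is an assumed column name, not from the original system):

```sql
EXPLAIN
SELECT aa
  FROM bb_2018_10_01
 WHERE b = 'xxx'
   AND create_time >= '2018-10-01 00:00:00';
-- key = NULL in the EXPLAIN output means the statement is not using any index;
-- type = ALL means a full table scan.
```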
Note: because of the optimization work, the original SQL can no longer be recovered; the statements below are reconstructed from memory.
Query optimization
Remove unnecessary fields
The improvement was not significant.
Remove unnecessary nested queries
The improvement was not significant.
Decompose sql
Break up the UNION ALL operation. For example (each individual UNION ALL branch is itself very long):
select aa from bb_2018_10_01 left join ... on .. left join ... on .. where ..
union all
select aa from bb_2018_10_02 left join ... on .. left join ... on .. where ..
union all
select aa from bb_2018_10_03 left join ... on .. left join ... on .. where ..
union all
select aa from bb_2018_10_04 left join ... on .. left join ... on .. where ..
Splitting the SQL above into several statements executed separately, then merging the data at the end, made it about 20s faster. Each decomposed statement looks like:
select aa from bb_2018_10_01 left join ... on .. left join ... on .. where ..
Asynchronously execute the decomposed sql
Using Java asynchronous programming, the decomposed SQL statements are executed concurrently and the data is merged at the end. CountDownLatch and ExecutorService are used here. Sample code:
// get all days in the time period
List<String> days = MyDateUtils.getDays(requestParams.getStartTime(), requestParams.getEndTime());
// number of days
int length = days.size();
// initialize the merge collection with an explicit capacity to prevent resizing
List list = Lists.newArrayListWithCapacity(length);
// initialize the thread pool
ExecutorService pool = Executors.newFixedThreadPool(length);
// initialize the counter
CountDownLatch latch = new CountDownLatch(length);
// query each day and merge
for (String day : days) {
    Map param = Maps.newHashMap();
    // assemble the query criteria into param
    pool.submit(new Runnable() {
        @Override
        public void run() {
            try {
                // run the mybatis query sql for this day
                // and merge the results
                list.addAll(queryResults);
            } catch (Exception e) {
                logger.error("getTime exception", e);
            } finally {
                latch.countDown();
            }
        }
    });
}
try {
    // wait for all queries to finish
    latch.await();
} catch (InterruptedException e) {
    e.printStackTrace();
}
// list is the merged collection; if needed, assemble or compute
// the business data you want from it here
The result is 20-30s faster.
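The same fan-out/merge pattern, as a self-contained runnable sketch with the per-day mybatis query replaced by a stub (queryDay and every name here are illustrative). One detail worth noting: because multiple threads append concurrently, the sketch collects into a synchronized list rather than a plain ArrayList:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelDayQuery {
    // stand-in for the per-day database query
    static List<String> queryDay(String day) {
        return List.of(day + "-row1", day + "-row2");
    }

    public static List<String> queryAll(List<String> days) {
        // synchronized wrapper: several threads call addAll concurrently
        List<String> merged = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(days.size());
        CountDownLatch latch = new CountDownLatch(days.size());
        for (String day : days) {
            pool.submit(() -> {
                try {
                    merged.addAll(queryDay(day)); // one day's results
                } finally {
                    latch.countDown();
                }
            });
        }
        try {
            latch.await(); // wait for all per-day queries to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
        return merged;
    }

    public static void main(String[] args) {
        List<String> result = queryAll(List.of("2018-10-01", "2018-10-02"));
        System.out.println(result.size()); // prints 4
    }
}
```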
Optimize MySQL configuration
Here is my configuration as an example. skip-name-resolve alone made it 4-5s faster; decide on the other settings yourself.
[client]
port=3306

[mysql]
no-beep
default-character-set=utf8

[mysqld]
server-id=2
relay-log-index=slave-relay-bin.index
relay-log=slave-relay-bin
# skip all replication errors
slave-skip-errors=all
skip-name-resolve
port=3306
datadir="D:/mysql-slave/data"
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
log-output=FILE
general-log=0
general_log_file="WINDOWS-8E8V2OD.log"
slow-query-log=1
slow_query_log_file="WINDOWS-8E8V2OD-slow.log"
long_query_time=10
# Binary Logging.
# log-bin
# Error Logging.
log-error="WINDOWS-8E8V2OD.err"
# maximum number of connections (users) for the whole server
max_connections=1000
# maximum number of connection errors allowed per client
max_connect_errors=100
# table descriptor cache size; reduces the number of file open/close operations
table_open_cache=2000
# largest request packet the server can handle
# (necessary when working with large BLOB fields); grows dynamically per connection
max_allowed_packet=64M
# buffer allocated per thread when a sort is needed
sort_buffer_size=8M
# buffer allocated per thread when a full join is needed
join_buffer_size=8M
# how many threads are kept cached for reuse
thread_cache_size=128
# hint to the thread system about how many threads should run at the same time
thread_concurrency=64
# query cache size
query_cache_size=128M
# only result sets smaller than this are cached; this protects the query cache
# from one large result set evicting all other cached results
query_cache_limit=2M
# InnoDB uses a buffer pool to hold indexes and row data;
# the larger it is, the less disk access is needed to reach table data.
# On a dedicated database server it can be set to up to 80% of physical memory,
# but not higher, or competition for physical memory may cause OS paging.
innodb_buffer_pool_size=1G
# number of background threads for read/write IO;
# hard-coded to 4 on Unix, but larger values may perform better on Windows disk IO
innodb_read_io_threads=16
innodb_write_io_threads=16
# number of threads allowed inside the InnoDB kernel; the best value depends on
# the application, hardware, and OS scheduling; too high a value may cause mutex thrashing
innodb_thread_concurrency=9
# 0: the log is written and flushed to disk about once per second
# 1: InnoDB fsyncs the transaction log to disk after every commit
# 2: the log is written after every commit but flushed to disk only about once per second
innodb_flush_log_at_trx_commit=2
# size of the buffer InnoDB uses to hold log data before writing it to disk
innodb_log_buffer_size=16M
# size of each log file in the log group
innodb_log_file_size=48M
# number of files in the log group
innodb_log_files_in_group=3
# how long an InnoDB transaction waits for a lock before it is rolled back.
# InnoDB automatically detects deadlocks on its own lock tables and rolls back.
# If you use LOCK TABLES, or another transaction-safe storage engine in the same
# transaction, a deadlock can occur that InnoDB does not notice;
# in that case this timeout value is what resolves it.
innodb_lock_wait_timeout=30
# enable the event scheduler
event_scheduler=ON
Add business-specific filter conditions
4-5s faster.
Create a composite index on the fields in the WHERE condition other than the time condition
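A sketch of the composite index; the columns b, c, d are illustrative stand-ins for the non-time filter columns, whose real names are not given in the original text:

```sql
-- composite index over the non-time filter columns (illustrative names)
ALTER TABLE bb_2018_10_01 ADD INDEX idx_b_c_d (b, c, d);
-- leftmost-prefix rule: this one index serves filters on (b), (b, c), and (b, c, d)
EXPLAIN SELECT aa FROM bb_2018_10_01 WHERE b = 'xxx' AND c = 1 AND d = 2;
```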
The improvement was not significant.
Use an inner join to express the indexed condition from the WHERE clause
This one really surprised me. In the original SQL, b is the indexed column:
select aa from bb_2018_10_02 left join ... on .. left join ... on .. where b = 'xxx'
Previously this was executed once per b value and the results merged with UNION ALL. It was changed to:
select aa from bb_2018_10_02
  left join ... on ..
  left join ... on ..
  inner join (select 'xxx1' as b2
              union all select 'xxx2' as b2
              union all select 'xxx3' as b2
              union all select 'xxx3' as b2) t
          on b = t.b2
The result is 3-4s faster.
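For comparison, the derived-table join above plays the same role as an IN list over the indexed column. Whether the optimizer treats the two forms identically depends on the MySQL version, so this equivalent is only a sketch:

```sql
select aa from bb_2018_10_02
  left join ... on ..
  left join ... on ..
 where b in ('xxx1', 'xxx2', 'xxx3')
```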
Performance bottleneck
After the steps above, the 3-day query was down to about 8s and would not go faster. MySQL's CPU and memory usage were both low, so why was the query still so slow? Three days is at most 600,000 rows, joined with a few dictionary tables; that should not take this long. I kept applying a series of fancy tricks found online, basically to no effect, and was stuck.
Environmental comparison
Since the SQL-level analysis looked fine, I wondered whether it was a disk read/write problem. The optimized program was deployed to two different field environments, one with an SSD and one without, and the query efficiency differed hugely. Benchmarking with a disk-test tool showed the SSD's read/write speed was 700-800 MB/s, versus 70-80 MB/s for the ordinary mechanical hard disk.
Optimization results and conclusions
Optimization result: the target was met.
Optimization conclusion: SQL optimization is not just about the SQL itself; it also depends on the hardware, the impact of other applications, and the quality of your own code.
This concludes the walkthrough of SQL optimization in MySQL. Hopefully it has answered your questions; pairing theory with practice is the best way to learn, so go and try it!