Step 1: enable the mysql slow query log
Method 1: modify the configuration file
On Windows, the configuration file is my.ini, which is usually located in the MySQL installation directory or in C:\Windows.
On Linux, the configuration file is my.cnf, which is usually located in /etc.
Add a few lines to my.ini:
[mysqld]
long_query_time=2
# 5.0, 5.1 and similar versions use the following option
log-slow-queries="mysql_slow_query.log"
# 5.5 and later use the following options
slow-query-log=On
slow_query_log_file="mysql_slow_query.log"
log-queries-not-using-indexes
The first line defines how many seconds a query has to run before it counts as a slow query; here it is set to 2 seconds.
The second line defines the path of the slow query log (on Windows this does not involve permission issues).
The third line tells MySQL to also record queries that do not use an index.
Method 2: enable the slow query log from within MySQL
The above configuration requires a restart of the mysqld server process to take effect. But in many cases, especially in a production environment, you do not want to restart the server for every change, and you may also want to record slow queries only during specific periods. MySQL 5.1 provides more flexible runtime control, so you can selectively record (or not record) slow queries without restarting the server.
MySQL 5.1 provides the global variables slow_query_log and slow_query_log_file to enable or disable the slow query log at runtime, and long_query_time to set the time threshold.
# enable slow query recording
# Note: setting the slow_query_log global variable also implicitly changes log_slow_queries
mysql> set global slow_query_log=ON;
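As a fuller runtime sketch for MySQL 5.1 and later, the three variables mentioned above can be set together in one session; the file path below is only an example, not a value from this article:

mysql> set global slow_query_log = ON;
mysql> set global slow_query_log_file = '/var/log/mysql/mysql_slow_query.log';
mysql> set global long_query_time = 2;
mysql> show variables like 'slow_query%';

Keep in mind that a new global long_query_time only applies to connections opened afterwards; an already open session keeps its old value unless the session variable is changed as well.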
Unfortunately, MySQL 5.0 does not provide similar global variables for flexible control, but we can avoid recording certain query statements by setting long_query_time large enough, for example:
mysql> set global long_query_time = 3600;
mysql> set global log_queries_not_using_indexes = OFF;
In MySQL 5.0, the way to stop logging without restarting the service is to turn the log file into a symbolic link to /dev/null. Note: after the change you only need to run FLUSH LOGS to make sure MySQL releases the current log file descriptor and starts writing the log to /dev/null.
Unlike MySQL 5.0, MySQL 5.1 can change the logging behavior at runtime and write logs to database tables instead: simply set the global variable log_output to TABLE, and MySQL will record the general log and the slow log in the tables mysql.general_log and mysql.slow_log respectively. Even so, we recommend logging to a file.
mysql> show variables like 'log_output'\G
Variable_name: log_output
        Value: FILE
mysql> set global log_output='table';
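If you do switch log_output to TABLE, the slow log can then be read back with ordinary SQL. A small sketch against the standard columns of mysql.slow_log:

mysql> select start_time, query_time, rows_examined, sql_text from mysql.slow_log order by start_time desc limit 10;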
Defects and Review
Although the slow query log can help you optimize your product, the current versions of MySQL still have several deficiencies.
1. In MySQL 5.0 the granularity of long_query_time is not fine enough: the minimum value is 1 second. For high-concurrency web scripts a 1-second threshold does not mean much, since very few queries take a full second. A finer-grained long_query_time setting only appeared in MySQL 5.1.21.
2. Not all queries executed by the server can be logged to the slow log. Although the MySQL general log records all queries, they are recorded before the query is parsed, so the general log cannot contain information such as execution time, table lock time, or the number of rows examined.
3. If the log_queries_not_using_indexes option is turned on, the slow query log can be filled with too many junk records: fast, efficient full table scans on small tables (for example, select * from category) drown out the really useful slow query records.
The microslow-patch allows a finer time granularity and can record every executed SQL statement, but you have to compile MySQL yourself to use it. For the sake of stability, we recommend applying this patch only in a development or test environment to enjoy its convenience, and avoiding it in production.
Step 2: verify whether the slow query log is enabled
Execute the following SQL statements to check the status of the mysql slow query log.
The results show whether slow query logging is enabled, the slow query threshold in seconds, the slow query log path, and so on.
/* View the slow query time threshold (default 10s) */
show variables like "long_query_time";
/* View slow query status */
show status like "%slow_queries%";
/* View the slow query log path */
show variables like "%slow%";
Step 3: perform a slow query operation
In fact, it is hard to produce a meaningful slow query when testing by yourself: even a query over a table with 200,000 rows usually takes only a few tenths of a second. We can use the following statement instead:
SELECT SLEEP(10);
Step 4: check the number of slow queries
Use the following sql statement to see how many slow queries have been executed:
show global status like '%slow%';
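Putting steps 3 and 4 together, a quick end-to-end check could look like the sketch below; assuming long_query_time is lower than 10 seconds, the Slow_queries counter should increase by one after the deliberately slow statement:

mysql> show global status like 'Slow_queries';
mysql> select sleep(10);
mysql> show global status like 'Slow_queries';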
Configuration of mysql logs:
Note: these log files are only generated after mysql is restarted.
# record all sql statements
log=E:/mysqllog/mysql.log
# record database startup and shutdown information as well as errors
log-error=E:/mysqllog/myerror.log
# record all sql statements except select statements; can be used to recover data
log-bin=E:/mysqllog/bin
# record slow query sql statements
log-slow-queries=E:/mysqllog/slow.log
# slow query time threshold
long_query_time=2
Step 5: analyze the slow query log
Method 1: analyze with tools
MySQL ships with the mysqldumpslow tool for analyzing slow query logs. In addition, there are some useful open source tools, such as MyProfi (download: http://sourceforge.net/projects/myprofi/), mysql-log-filter and, of course, mysqlsla.
The common parameters of mysqldumpslow are described below; for more information, run mysqldumpslow --help.
-s indicates how to sort: c, t, l and r sort by record count, query time, lock time and number of rows returned respectively (from largest to smallest), while at, al and ar sort by the corresponding average values.
-t means top n, i.e. how many of the leading entries to return.
-g can be followed by a regular-expression pattern; matching is case-insensitive.
The next step is to analyze the log with mysqldumpslow, the slow query tool that ships with mysql (located in mysql's bin directory). The log file in this example is named host-slow.log.
List the 10 sql statements with the most records
mysqldumpslow -s c -t 10 host-slow.log
List the 10 sql statements that return the most recordsets
mysqldumpslow -s r -t 10 host-slow.log
List, sorted by query time, the first 10 sql statements that contain a left join
mysqldumpslow -s t -t 10 -g "left join" host-slow.log
With the mysqldumpslow command we can easily extract the query statements we need, which is a great help in monitoring, analyzing and optimizing MySQL queries.
Method 2: directly analyze the mysql slow query log
The log section is as follows:
# Time: 121017 17:38:54
# User@Host: root[root] @ localhost [127.0.0.1]
# Query_time: 3.794217  Lock_time: 0.000000  Rows_sent: 1  Rows_examined: 4194304
SET timestamp=1350466734;
select * from wei where text='orange';
# Time: 121017 17:46:22
# User@Host: root[root] @ localhost [127.0.0.1]
# Query_time: 3.819219  Lock_time: 0.000000  Rows_sent: 0  Rows_examined: 4194304
SET timestamp=1350467182;
select * from wei where text='xishizhaohua';
In fact, once you have located the slow statement you are nearly done. Run the explain or desc command on the slow query statement, as shown below:
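A minimal sketch of that check (the exact output depends on the MySQL version, but with no index on the text column the type field reports ALL and rows is close to the full table size):

mysql> explain select * from wei where text='orange'\G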
The problem is obvious, and so is the solution: add an index.
mysql> create index text_index on wei (text);
Query OK, 4194304 rows affected (1 min 58.07 sec)
Records: 4194304  Duplicates: 0  Warnings: 0
Then it takes a lot less time to perform the query operation.
mysql> select * from wei where text='orange';
+---------+--------+
| id      | text   |
+---------+--------+
| 4103519 | orange |
+---------+--------+
1 row in set (0.33 sec)
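To double-check that the new index is in place and actually chosen by the optimizer, two quick commands can be run (a sketch; the explain output should now report text_index in the key column instead of a full scan):

mysql> show index from wei;
mysql> explain select * from wei where text='orange'\G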
The slow query log helps you record the SQL statements that ran slowly, but it is not everything, and it may not be as meaningful as you think. It only tells you which statements are slow; why they are slow is something you still have to analyze and debug yourself. Maybe you just need a more efficient SQL statement, maybe you just need to add an index, but it is also possible that you need to adjust your application's design. For example, the statement above is obvious: it examines more than four million rows of data. Unfortunately, not every statement is so obvious. There may be other reasons, such as:
* The table is locked, so the query is waiting; Lock_time shows how long the query waited for the lock to be released.
* The data or indexes are not cached, which is common when the server has just started or has not been tuned.
* A database backup is running, slowing down I/O.
* Other queries running at the same time are slowing down the current query.
Therefore, do not get too anxious about a single record in the log file; review it rationally to find the real cause. If slow queries appear frequently, they deserve special attention; if one appears only occasionally, some routine checks are enough. We suggest collecting statistics and building a benchmark report for comparison and elimination, which is more useful than poking around at random. Hopefully you will not waste too much time and energy on this part.
Summary
The above is the whole content of this article. I hope it has some reference and learning value for your study or work. Thank you for your support.