If you're just completely itching to start using Anemometer, here's what you need:
1. A MySQL database to store query analysis data in.
2. pt-query-digest. You may as well just get the whole Percona Toolkit while you're at it :)
3. A slow query log from a MySQL server (see The Slow Query Log for info on getting one; a minimal sketch of enabling it follows this list).
4. A web server with PHP.
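If you don't yet have a slow query log to feed in, here is a minimal sketch of turning one on, assuming /etc/my.cnf and the slow-log path used later in this article; adjust the paths and the long_query_time threshold for your environment:
cat >> /etc/my.cnf <<'EOF'
[mysqld]
slow_query_log      = 1
slow_query_log_file = /usr/local/mysql/var/slow.log
long_query_time     = 1   # log anything slower than one second
EOF
service mysqld restart
Restart MySQL however your system manages services; lower long_query_time if you want more data to digest.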
First, install Percona Toolkit:
[root@xiaoya ~]# wget percona.com/get/percona-toolkit.rpm
[root@xiaoya ~]# yum install -y percona-toolkit.rpm
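A quick sanity check that is not part of the original steps, but confirms the toolkit landed on your PATH:
pt-query-digest --version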
Next, a reference pt-query-digest configuration is given:
pt-query-digest --user=<database user> --password=<database password> \
  --review h=<database host>,D=<database>,t=<review table> \
  --history h=<database host>,D=<database>,t=<history table> \
  --no-report --limit=0% \
  --filter=" \$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\"" \
  <slow query log file path>
Next, the syntax and important options:
pt-query-digest [OPTIONS] [FILES] [DSN]
--create-review-table: when the --review parameter is used to write analysis results to a table, the table is created automatically if it does not exist.
--create-history-table: when the --history parameter is used to write analysis results to a table, the table is created automatically if it does not exist.
--filter: matches and filters the input slow queries against the specified expression before analyzing them.
--limit: limits the percentage or number of results output. The default is 20, i.e. the 20 slowest statements are output. If set to 50%, results are sorted by total response time from largest to smallest and output until the cumulative total reaches the 50% mark.
--host: MySQL server address.
--user: MySQL user name.
--password: MySQL user password.
--history: saves the analysis results to a table; these results are more detailed. The next time --history is used, if the same statement exists but its query time interval differs from what is in the history table, it is recorded again, so you can compare the historical changes of a given query type by querying on the same CHECKSUM.
--review: saves the analysis results to a table. This analysis only parameterizes the query conditions and is relatively simple: one record per query type. The next time --review is used, a statement that has already been parsed and recorded is not written to the table again.
--output: the output type of the analysis results. Values can be report (standard analysis report), slowlog (MySQL slow log), json, or json-anon; report is generally used for readability.
--since: when to start analyzing from. The value is a string: either a point in time in "yyyy-mm-dd [hh:mm:ss]" format, or a simple time value with a unit of s (seconds), h (hours), m (minutes), or d (days). For example, 12h means statistics start from 12 hours ago.
--until: end time. Combined with --since, you can analyze slow queries over a given period of time.
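As a quick illustration of the report-only options above (the log path is just the one assumed elsewhere in this article), this would produce a plain report of the 10 slowest query classes from the last 12 hours:
pt-query-digest --since=12h --limit=10 --output=report /usr/local/mysql/var/slow.log > /tmp/slow_report_last_12h.txt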
Install anemometer
First up, grab the anemometer code from github. Navigate to the document root of your web server and snag a copy of the Box Anemometer code.
[root@xiaoya ~]# git clone git://github.com/box/Anemometer.git anemometer
Or, if port 9418 is blocked on your network:
[root@xiaoya ~]# git clone https://github.com/box/Anemometer.git anemometer
Then change your current working directory to the anemometer directory:
[root@xiaoya ~]# cd anemometer
Next, you should connect to the MySQL database you're looking to store the analysis data in and issue the following command:
[root@xiaoya anemometer]# mysql -uroot -p123456 < install.sql
[root@xiaoya anemometer]# mysql -uroot -p123456 -e "grant ALL ON slow_query_log.* to 'anemometer'@'localhost' IDENTIFIED BY '123456';"
Next, grab that slow query log file you have (mine's called "slow.log"!), and run pt-query-digest on it:
NOTE: I'm using a BASH 3.0 shell here on my MySQL database server! This is so the "$HOSTNAME" variable properly replaces with "db.example.com".
[root@xiaoya ~]# rpm -qa | grep percona-toolkit
percona-toolkit-2.2.16-1.noarch
For pt-query-digest version < 2.2:
pt-query-digest --user=anemometer --password=superSecurePass \
  --review h=db.example.com,D=slow_query_log,t=global_query_review \
  --review-history h=db.example.com,D=slow_query_log,t=global_query_review_history \
  --no-report --limit=0% \
  --filter=" \$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\"" \
  /var/lib/mysql/db.example.com-slow.log
For pt-query-digest version >= 2.2:
[root@xiaoya ~]# pt-query-digest --user=anemometer --password=123456 \
  --review h=localhost,D=slow_query_log,t=global_query_review \
  --history h=localhost,D=slow_query_log,t=global_query_review_history \
  --no-report --limit=0% \
  --filter=" \$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\"" \
  /usr/local/mysql/var/slow.log
You may see an error like the one below; that's okay! TODO: explain what the options above are doing.
Pipeline process 11 (aggregate fingerprint) caused an error: Argument "57A" isn't numeric in numeric gt (>) at (eval 40) line 6, line 27.
Pipeline process 11 (aggregate fingerprint) caused an error: Argument "57B" isn't numeric in numeric gt (>) at (eval 40) line 6, line 28.
Pipeline process 11 (aggregate fingerprint) caused an error: Argument "57C" isn't numeric in numeric gt (>) at (eval 40) line 6, line 29.
View the data!
Now, navigate to the document root of your web server and copy the sample config so you can edit it:
[root@xiaoya ~]# cd /home/wwwroot/anemometer/conf
[root@xiaoya conf]# cp sample.config.inc.php config.inc.php
The sample config explains every setting you may want to change in it. At the very least, make sure you set the Datasource to the MySQL database you're storing the analyzed digest information in:
$conf['datasources']['localhost'] = array(
    'host'     => 'db.example.com',
    'port'     => 3306,
    'db'       => 'slow_query_log',
    'user'     => 'anemometer',
    'password' => 'superSecurePass',
    'tables'   => array(
        'global_query_review'         => 'fact',
        'global_query_review_history' => 'dimension'
    )
);
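Before moving on, it may be worth confirming from the web server host that those datasource credentials actually work; this one-liner is only an assumed sanity check built from the values above:
mysql -h db.example.com -P 3306 -u anemometer -psuperSecurePass -e "SELECT COUNT(*) FROM slow_query_log.global_query_review;"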
In addition, the "explain" plugin is enabled by default in the current release, and you'll need to set the username and password it uses to an account that has privileges to explain queries on a given schema on a host. For example, if you're digesting slow logs that primarily contain queries from the "world" database on db.example.com, you'll need to ensure that the user account you put into the following section of the config has the necessary privileges on the "world" database on db.example.com. To do this, scroll down in the sample config to the section containing the plugins configuration and change the 'user' and 'password' parameters to an appropriate account:
$conf['plugins'] = array(
    ...
    'explain' => function ($sample) {
        $conn['user']     = 'anemometer';
        $conn['password'] = 'superSecurePass';
        return $conn;
    },
);
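As a sketch of what that implies on the database side (the host pattern, the privilege level, and the MySQL 5.x-style GRANT ... IDENTIFIED BY are assumptions; tighten or adapt them for your environment and MySQL version), you might grant the explain account read access to the schema it will be explaining against:
mysql -uroot -p -e "GRANT SELECT ON world.* TO 'anemometer'@'%' IDENTIFIED BY 'superSecurePass';"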
Finally, you can browse to:
http://10.0.0.5/anemometer/
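If nothing comes up in the browser, a quick assumed troubleshooting step is to request the page with curl and check the HTTP status returned by the web server:
curl -I http://10.0.0.5/anemometer/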