2025-01-19 Update From: SLTechnology News&Howtos > Database
Shulou (Shulou.com) 06/01 Report
Reference article:
http://ourmysql.com/archives/1359?utm_source=tuicool&utm_medium=referral
Official: https://github.com/box/Anemometer
Single node Anemometer monitoring
1 Install Anemometer

# cd /data/www/web3
# git clone https://github.com/box/Anemometer.git anemometer && cd anemometer
2 Create the tables and a MySQL user
# mysql -uroot -proot ...

3 Edit the Anemometer configuration (conf/config.inc.php) and define a data source for each monitored node:

$conf['datasources']['192.168.2.11'] = array(
    'host'        => '192.168.2.11',
    'port'        => 3306,
    'db'          => 'slow_query_log',
    'user'        => 'anemometer',
    'password'    => '123456',
    'tables'      => array(
        'global_query_review'         => 'fact',
        'global_query_review_history' => 'dimension',
    ),
    'source_type' => 'slow_query_log',
);

$conf['datasources']['192.168.2.12'] = array(
    'host'        => '192.168.2.12',
    'port'        => 3306,
    'db'          => 'slow_query_log',
    'user'        => 'anemometer',
    'password'    => '123456',
    'tables'      => array(
        'global_query_review'         => 'fact',
        'global_query_review_history' => 'dimension',
    ),
    'source_type' => 'slow_query_log',
);

$conf['plugins'] = array(
    // ... code omitted ...
);

$conn['user']     = 'anemometer';
$conn['password'] = '123456';
// ... code omitted ...
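For the anemometer user and slow_query_log tables referenced in the configuration above, a minimal sketch looks like the following. The exact GRANT statements are my assumption (only the database name, user, and password come from the configuration); Anemometer itself ships an install.sql that creates the slow_query_log database and its two tables.

```sql
-- Sketch (assumed statements): objects the config above expects.
-- Anemometer's bundled install.sql creates the slow_query_log database
-- with global_query_review and global_query_review_history.
CREATE DATABASE IF NOT EXISTS slow_query_log;

-- User and grants are assumptions matching the config values above:
GRANT ALL ON slow_query_log.* TO 'anemometer'@'%' IDENTIFIED BY '123456';
FLUSH PRIVILEGES;
```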
4 Restart Nginx

# /etc/init.d/nginx restart
Open http://192.168.2.11/ in Chrome; the Anemometer report page should appear (screenshot omitted).
5. The following is my own script for analyzing slow query logs with pt-query-digest
(I never got used to the one anemometer provides, so I wrote a simpler one.)
The content of /home/scripts/pt-digest.sh is as follows:
#!/bin/bash
# The connection settings are hard-coded here; if you prefer, see other
# articles for how to split the database configuration out separately.

# Directory where the slow log is stored
SQL_DATADIR="/usr/local/mariadb/var"

# Slow log file name (basename)
SLOW_LOG_FILE=$(mysql -uroot -proot -e "show global variables like 'slow_query_log_file'" | tail -n1 | awk '{print $2}')

# Get the local machine's IP address
IP_ADDR=$(/sbin/ifconfig | grep 'inet addr' | egrep '172.|192.' | awk '{print $2}' | awk -F ":" '{print $2}')

cp $SQL_DATADIR/$SLOW_LOG_FILE /tmp/

# Analyze the log and store the results in the slow_query_log database
/usr/local/bin/pt-query-digest --user=anemometer --password=123456 --host=$IP_ADDR \
    --review h=$IP_ADDR,D=slow_query_log,t=global_query_review \
    --history h=$IP_ADDR,D=slow_query_log,t=global_query_review_history \
    --no-report --limit=0% \
    --filter="\$event->{Bytes} = length(\$event->{arg}) and \$event->{hostname}=\"$HOSTNAME\"" \
    /tmp/$SLOW_LOG_FILE

rm -f /tmp/$SLOW_LOG_FILE
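The SLOW_LOG_FILE parsing pipeline in the script can be checked without a running server by feeding it simulated output. The sample text below is my assumption of what `mysql -e` prints (a header row, then variable name and value):

```shell
# Simulated output of:
#   mysql -uroot -proot -e "show global variables like 'slow_query_log_file'"
sample="Variable_name Value
slow_query_log_file localhost-slow.log"

# Same pipeline as in pt-digest.sh: keep the last line, print column 2
SLOW_LOG_FILE=$(printf '%s\n' "$sample" | tail -n1 | awk '{print $2}')
echo "$SLOW_LOG_FILE"    # -> localhost-slow.log
```

If your MySQL build stores the slow log under a different variable format (for example, a full path instead of a basename), adjust the pipeline accordingly.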
After debugging, add the following entry to crontab to collect slow query logs regularly and store them in the database:
59 23 * * * /bin/bash /home/scripts/pt-digest.sh > /dev/null
In this way, you can automatically analyze and collect slow query logs every day.
In addition, it is recommended to rotate the slow log daily, so that pt-query-digest does not repeatedly analyze entries it has already processed. The following script (/home/scripts/mysql_log_rotate) rotates the MySQL slow query log and error log:
"/usr/local/mariadb/var/localhost-slow.log" "/usr/local/mariadb/var/localhost_err" {
    # Adjust the file permissions and owner/group to your setup
    create 660 mariadb mariadb
    dateext
    notifempty
    daily
    maxage 60
    rotate 30
    missingok
    # Adjust the directory to your setup; if it does not exist, create it
    # first and change its owner to mariadb
    olddir /usr/local/mariadb/var/oldlogs
    postrotate
        if /usr/local/mariadb/bin/mysqladmin ping -uroot -proot &> /dev/null; then
            /usr/local/mariadb/bin/mysqladmin flush-logs -uroot -proot
        fi
    endscript
}
Configure another crontab entry:
00 00 * * * (/usr/sbin/logrotate -f /home/scripts/mysql_log_rotate > /dev/null 2>&1)
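Before relying on the cron job, a logrotate configuration can be dry-run with -d, which prints the planned actions without rotating anything. A self-contained sketch (the scratch paths are made up for illustration):

```shell
# Create a scratch log and a minimal rotate config, then dry-run it.
# -d = debug mode: logrotate reports what it would do, changes nothing.
tmpdir=$(mktemp -d)
echo "test entry" > "$tmpdir/scratch.log"
cat > "$tmpdir/rotate.conf" <<EOF
"$tmpdir/scratch.log" {
    daily
    rotate 3
    missingok
}
EOF

if command -v logrotate >/dev/null 2>&1; then
    # -s points at a scratch state file so the real one is untouched
    logrotate -d -s "$tmpdir/state" "$tmpdir/rotate.conf"
else
    echo "logrotate not installed; skipping dry run"
fi

rm -rf "$tmpdir"
```

The same -d dry run works against /home/scripts/mysql_log_rotate before its first scheduled run.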
In this way, daily slow query logs and error logs are automatically archived under /usr/local/mariadb/var/oldlogs/.