Preface
Log files record the various activities that affect the MySQL database. The common log files in MySQL are the error log, the binary log, the slow query log, and the query log. Let's look at each of them in turn.
Error log
The error log file records the startup, operation, and shutdown process of MySQL.
mysql> show variables like 'log_error';
+---------------+---------------------+
| Variable_name | Value               |
+---------------+---------------------+
| log_error     | /var/log/mysqld.log |
+---------------+---------------------+
1 row in set (0.03 sec)
You can see the path and name of the error log file. By default, the error file is named after the server's hostname, i.e. hostname.err. Here it has been set to /var/log/mysqld.log instead; the error log location can be configured in /etc/my.cnf:
# Recommended in standard MySQL setups
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
When the MySQL database does not start properly, the first file to check is the error log, which records the error information and can help us locate the problem.
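For a quick look at recent problems, the error log can be inspected with ordinary shell tools (assuming the /var/log/mysqld.log path shown above):

# Show the last 50 lines of the error log
tail -n 50 /var/log/mysqld.log

# Search for explicit error entries
grep -i 'error' /var/log/mysqld.log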
Slow query log
The slow query log records SQL statements whose execution time exceeds a threshold: we set a threshold, and every SQL statement that runs longer than this value is written to the slow query log file. The threshold is set by the parameter long_query_time and defaults to 10 seconds.
Start slow query log
By default, MySQL does not enable the slow query log; you need to set the slow_query_log parameter to ON manually.
Mysql > show variables like "slow%" +-- +-- + | Variable_name | Value | +- -- +-- + | log_slow_admin_statements | OFF | | log_slow_slave_statements | OFF | | slow_launch_time | 2 | | slow_query_log | OFF | | slow_query_log_file | / var/lib/mysql/iz2zeaf3cg1099kiidi06mz-slow.log | +-- | -+-+ 5 rows in set (0.00 sec) mysql > set global slow_query_log='ON' Query OK, 0 rows affected (0.00 sec) mysql > show variables like "slow_query_log" +-- +-- + | Variable_name | Value | +- -+ | slow_query_log | ON | | slow_query_log_file | / var/lib/mysql/iz2zeaf3cg1099kiidi06mz-slow.log | +-+ 2 rows in set (0.00 sec) |
However, enabling the slow query log with set global slow_query_log='ON' only takes effect for the running instance; it is lost when MySQL is restarted. To make it permanent, modify the configuration file my.cnf (the same applies to other system variables), as follows:
[mysqld]
slow_query_log=1
Then restart MySQL to enable slow query logging; the log is written to the path shown by slow_query_log_file above.
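To also pin down the log file location and the threshold across restarts, they can be set in the same section (a my.cnf sketch; the file path is illustrative):

[mysqld]
slow_query_log=1
slow_query_log_file=/var/lib/mysql/mysql-slow.log
long_query_time=1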
Set threshold
mysql> show variables like 'long_query_time';
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
1 row in set (0.00 sec)
The threshold defaults to 10 seconds, and we can change it, for example (again, this only takes effect for the running instance):
mysql> set global long_query_time=0.05;
Query OK, 0 rows affected (0.00 sec)
Once long_query_time is set, MySQL records every SQL statement whose execution time exceeds that value; a statement whose execution time is exactly equal to long_query_time is not recorded. Set long_query_time to 0 to capture all queries.
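A quick way to verify the setting is to lower the threshold for the current session and run a deliberately slow statement (a test sketch; SLEEP() simply pauses for the given number of seconds):

mysql> set session long_query_time=1;
mysql> select sleep(2);
-- the statement runs for about 2 seconds and should now appear in the slow query log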
Parameter log_queries_not_using_indexes
Another parameter related to the slow query log is log_queries_not_using_indexes.
When it is enabled, SQL statements that do not use any index are also written to the slow query log file. First make sure that log_queries_not_using_indexes is turned on:
mysql> show variables like 'log_queries_not_using_indexes';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| log_queries_not_using_indexes | ON    |
+-------------------------------+-------+
1 row in set (0.12 sec)
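If the variable shows OFF, it can be switched on at runtime with a standard SET GLOBAL (and set in my.cnf to survive a restart):

mysql> set global log_queries_not_using_indexes='ON';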
For example, here is a query that does not use an index:
mysql> explain select * from vote_record_memory where vote_id = 323;
+----+-------------+--------------------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table              | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+--------------------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | vote_record_memory | ALL  | NULL          | NULL | NULL    | NULL | 149272 | Using where |
+----+-------------+--------------------+------+---------------+------+---------+------+--------+-------------+
1 row in set (1.56 sec)
You can see that a full table scan was performed. Looking at the slow query log file, the SQL statement has been recorded as a slow query because it does not use an index:
# Time: 180817 11:42:59
# User@Host: root[root] @ [117.136.86.151]  Id:  2625
# Query_time: 0.016542  Lock_time: 0.000112  Rows_sent: 142  Rows_examined: 149272
SET timestamp=1534477379;
select * from vote_record_memory where vote_id = 323;
Put log records in a table
Starting with MySQL 5.1, slow query records can be written to the slow_log table in the mysql database:
CREATE TABLE `slow_log` (
  `start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `query_time` time NOT NULL,
  `lock_time` time NOT NULL,
  `rows_sent` int(11) NOT NULL,
  `rows_examined` int(11) NOT NULL,
  `db` varchar(512) NOT NULL,
  `last_insert_id` int(11) NOT NULL,
  `insert_id` int(11) NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `sql_text` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'
The parameter log_output controls where the slow query log is written. The default is FILE; setting it to TABLE sends the records to the slow_log table above.
Mysql > show variables like "log_output"; +-+-+ | Variable_name | Value | +-+-+ | log_output | FILE | +-+-+ 1 row in set (0.19 sec)
In most cases, however, this is unnecessary: it has a noticeable impact on performance, and while MySQL 5.1 records microsecond-precision times when logging slow queries to a file, logging to a table only keeps second-level granularity, which makes the slow query log much less useful.
Slow query log analysis tool
Mysqldumpslow command
As more and more SQL statements accumulate in the slow query log file, reading the file directly becomes impractical. MySQL provides the mysqldumpslow command for this:
[root@iz2zeaf3cg1099kiidi06mz mysql]# mysqldumpslow iz2zeaf3cg1099kiidi06mz-slow.log

Reading mysql slow query log from iz2zeaf3cg1099kiidi06mz-slow.log
Count: 1  Time=60.02s (60s)  Lock=0.00s (0s)  Rows=149272.0 (149272), root[root]@[117.136.86.151]
  select * from vote_record_memory

Count: 1  Time=14.85s (14s)  Lock=0.00s (0s)  Rows=0.0 (0), root[root]@[117.136.86.151]
  CALL add_vote_memory(N)

Count: 1  Time=1.72s (1s)  Lock=0.00s (0s)  Rows=0.0 (0), root[root]@[117.136.86.151]
  INSERT into vote_record SELECT * from vote_record_memory

Count: 1  Time=0.02s (0s)  Lock=0.00s (0s)  Rows=142.0 (142), root[root]@[117.136.86.151]
  select * from vote_record_memory where vote_id = N
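mysqldumpslow can also sort and filter its summary; these are standard options of the tool (-s chooses the sort order, -t keeps the top N entries, -g filters by a pattern):

# Top 10 statements sorted by total query time
mysqldumpslow -s t -t 10 iz2zeaf3cg1099kiidi06mz-slow.log

# Only statements that mention a particular table
mysqldumpslow -g 'vote_record' iz2zeaf3cg1099kiidi06mz-slow.log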
For more information about the mysqldumpslow command, see:
https://github.com/luisedware/Archives/issues/7
Pt-query-digest tool
pt-query-digest is the most powerful tool for analyzing MySQL query logs. It can analyze the binlog, the general log, and the slow log, as well as MySQL protocol data captured via show processlist or tcpdump. Its analysis is more detailed and complete than mysqldumpslow's. Here is an example of using pt-query-digest:
# Analyze a slow query log file directly
pt-query-digest slow.log > slow_report.log
The tool prints an analysis report of the queries and can write the results to a file. During the analysis it first parameterizes the conditions of the query statements, then groups the parameterized queries and computes each group's execution time, count, proportion, and so on. With the help of this report we can find problems and optimize them.
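A couple of further invocations as a sketch of typical usage (--since limits the time window and --type selects the input format; both are documented pt-query-digest options, and the capture command follows the usual tcpdump example for MySQL traffic):

# Analyze only entries from the last 12 hours
pt-query-digest --since=12h slow.log > slow_report_12h.log

# Analyze MySQL traffic captured with tcpdump
tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt > tcpdump_report.log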
For more information about the installation and use of pt-query-digest, please refer to:
www.ywnds.com/?p=8179
Query log
The query log (general log) records all requests made to the MySQL database, regardless of whether they executed correctly. The default file name is hostname.log.
Mysql > show variables like "general_log%" +-+-- + | Variable_name | Value | +-+- -- + | general_log | OFF | | general_log_file | / var/lib/mysql/iz2zeaf3cg1099kiidi06mz.log | +- -- + 2 rows in set (0.24 sec)
The query log is not enabled by default and must be turned on first.
mysql> set global general_log='ON';
Query OK, 0 rows affected (0.05 sec)

mysql> show variables like "general_log%";
+------------------+--------------------------------------------+
| Variable_name    | Value                                      |
+------------------+--------------------------------------------+
| general_log      | ON                                         |
| general_log_file | /var/lib/mysql/iz2zeaf3cg1099kiidi06mz.log |
+------------------+--------------------------------------------+
2 rows in set (0.11 sec)
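Because the general log records every request, it grows quickly and adds overhead, so it is usually enabled only briefly for troubleshooting and then turned off again:

mysql> set global general_log='OFF';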
Binary log
The binary log records all operations that change the database, but not statements such as select and show, because those do not modify the data itself. If you also want to record select and show statements, you have to use the query log rather than the binary log.
In addition, the binary log records additional information such as the time at which each database change was performed. The binary log serves three main purposes:
Recovery: some data recovery scenarios require the binary log. For example, after a database has been restored from a full backup, the binary log can be used for point-in-time recovery (see the sketch after this list).
Replication: a remote MySQL database (usually a slave or standby) is kept synchronized with another MySQL database (usually the master or primary) in real time by copying and replaying the binary log.
Audit: the information in the binary log can be audited to determine whether, for example, an injection attack was run against the database.
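As an illustration of the recovery use case, binary log events can be replayed against the server with mysqlbinlog (a hedged sketch; the file name and stop time are placeholders):

# Replay changes from a binlog file up to a given point in time
mysqlbinlog --stop-datetime="2018-08-21 16:40:00" mysqld-bin.000001 | mysql -u root -p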
Open binary log
The binary log is enabled by configuring the parameter log-bin[=name]. If no name is given, the binary log files are named after the hostname, with the binary log sequence number as the suffix.
[mysqld]
log-bin

mysql> show variables like 'datadir';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| datadir       | /var/lib/mysql/ |
+---------------+-----------------+
1 row in set (0.00 sec)
mysqld-bin.000001 is a binary log file, while mysqld-bin.index is the binary log index file. To manage all the binlog files, MySQL creates this additional index file, which records, in order, all the binlog files that MySQL has used. If you want to customize the name of the index file, set the log_bin_index=file parameter.
-rw-rw---- 1 mysql mysql 120 Aug 21 16:42 mysqld-bin.000001
-rw-rw---- 1 mysql mysql  20 Aug 21 16:42 mysqld-bin.index
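Besides listing the files in the data directory, the server itself can report its binary logs; show binary logs is a standard statement, and the output below is illustrative:

mysql> show binary logs;
+-------------------+-----------+
| Log_name          | File_size |
+-------------------+-----------+
| mysqld-bin.000001 |       120 |
+-------------------+-----------+
1 row in set (0.00 sec)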
View binary log files
Unlike the error log and the slow query log, which can be viewed with commands such as cat, head, or tail, the binary log has to be viewed with the mysqlbinlog tool that ships with MySQL. For example:
[root@iz2zeaf3cg1099kiidi06mz mysql]# mysqlbinlog mysqld-bin.000001
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#180821 16:42:53 server id 1  end_log_pos 120 CRC32 0x3e55be40  Start: binlog v 4, server v 5.6.39-log created 180821 16:42:53 at startup
# Warning: this binlog is either in use or was not closed properly.
BINLOG '
jdB7Ww8BAAAAdAAAAHgAAAABAAQANS42LjM5LWxvZwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAACN0HtbEzgNAAgAEgAEBAQEEgAAXAAEGggAAAAICAgCAAAACgoKGRkAAUC+
VT4=
'/*!*/;
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
Binary log file configuration parameters
The following is a brief introduction to several important configuration parameters of the binary log.
max_binlog_size
You can limit the size of a single binlog file with the max_binlog_size parameter (the default is 1GB).
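For example, to cap each binlog file at 256MB (a my.cnf sketch; when a file reaches the limit the server rolls over to the next sequence number):

[mysqld]
max_binlog_size=256M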
binlog_cache_size
When a transactional storage engine (such as InnoDB) is used, all uncommitted binary log records are first written to a buffer; when the transaction commits, the buffered binary log records are written to the binary log file. The size of this buffer is determined by binlog_cache_size and defaults to 32KB.
In addition, binlog_cache_size is allocated per session: when a thread starts a transaction, MySQL automatically allocates a buffer of binlog_cache_size bytes for it.
mysql> show variables like 'binlog_cache_size';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| binlog_cache_size | 32768 |
+-------------------+-------+
1 row in set (0.00 sec)
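Whether the default is large enough can be judged from two standard status counters: Binlog_cache_use counts transactions that used the in-memory cache, and Binlog_cache_disk_use counts transactions that spilled to a temporary file on disk (the values below are illustrative):

mysql> show global status like 'binlog_cache%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Binlog_cache_disk_use | 0     |
| Binlog_cache_use      | 120   |
+-----------------------+-------+
2 rows in set (0.00 sec)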
sync_binlog
By default, the binary log is not synchronized to disk on every write. The parameter sync_binlog=[N] means the binary log buffer is synchronized to disk after every N writes. Setting N to 1, i.e. sync_binlog=1, writes the binary log to disk synchronously, so a committed write no longer sits only in the operating system buffer mentioned above.
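A my.cnf sketch for the safest, and also slowest, setting; the trade-off is durability against write throughput:

[mysqld]
sync_binlog=1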
binlog_format
The binlog_format parameter is very important because it determines the format in which binary log records are written; a sketch for checking and changing it follows the list below. There are three formats:
1. STATEMENT: records the logical SQL statement in the log.
2. ROW: records the row changes of the table.
3. MIXED: MySQL records the binary log in STATEMENT format by default, but switches to ROW format in certain cases, including the following:
The table's storage engine is NDB; all DML operations on the table are then recorded in ROW format.
Nondeterministic functions such as UUID(), USER(), CURRENT_USER(), FOUND_ROWS(), ROW_COUNT() and so on are used.
The INSERT DELAYED statement is used.
User-defined functions (UDF) are used.
A temporary table is used.
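To check or change the format at runtime (a sketch; the value shown is the MySQL 5.6 default, and on a replicated server the format should be changed with care):

mysql> show variables like 'binlog_format';
+---------------+-----------+
| Variable_name | Value     |
+---------------+-----------+
| binlog_format | STATEMENT |
+---------------+-----------+
1 row in set (0.00 sec)

mysql> set global binlog_format='ROW';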