This article introduces the log information record tables of the MySQL system library. Many people run into questions about these tables in day-to-day operation, so this article pulls the relevant material together into simple, practical usage notes. Hopefully it answers your questions about the MySQL system library's log tables; let's get started.
1. Log information record table
1.1. Overview of log information
MySQL's log system includes: the general query log, the slow query log, the error log (which records messages while MySQL Server starts, runs, and stops), the binary log (a logical log of data changes made while MySQL Server is running), the relay log (which records the master's data-change log fetched from the master by the slave's I/O thread), and the DDL log (which records metadata changes made while DDL statements execute; in 5.7 it can only be written to a file, while 8.0 supports writing it to the innodb_ddl_log table — note that the DDL log is different from online DDL's alter log, so do not confuse the two). In MySQL 5.7, only the general query log and the slow query log can be written to tables (as well as to files); the other log types can only be written to files, so the sections below focus on the general query log and slow query log tables. Except for the error log on Windows, none of these logs is enabled by default (the DDL log is created only when needed and has no user-configurable option). By default all logs are written under the datadir directory, but each log's path can be changed with its corresponding parameter, for example:
general query log: general_log_file=/home/mysql/data/mysqldata1/mydata/localhost.log
error log: log_error=/home/mysql/data/mysqldata1/log/error.log
slow query log: slow_query_log_file=/home/mysql/data/mysqldata1/slowlog/slow-query.log
binary log: log_bin_basename=/home/mysql/data/mysqldata1/binlog/mysql-bin, log_bin_index=/home/mysql/data/mysqldata1/binlog/mysql-bin.index
relay log: relay_log_basename=/home/mysql/data/mysqldata1/relaylog/mysql-relay-bin, relay_log_index=/home/mysql/data/mysqldata1/relaylog/mysql-relay-bin.index
By default all logs are written to disk files, but the general query log and the slow query log can be saved to the mysql.general_log and mysql.slow_log tables by setting log_output=TABLE (in 8.0 the DDL log can also be configured to be printed to the error log or saved in the innodb_ddl_log table).
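A quick way to check whether each log is enabled and where it is written is to query the corresponding system variables. The sketch below is only an illustration (standard MySQL 5.7 variable names; the values returned will of course differ on your instance):

# Which destinations the general/slow query logs use (FILE, TABLE, or both)
SHOW GLOBAL VARIABLES LIKE 'log_output';
# Whether each log is enabled and where its file lives
SHOW GLOBAL VARIABLES LIKE 'general_log%';
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'log_error';
SHOW GLOBAL VARIABLES LIKE 'log_bin%';
SHOW GLOBAL VARIABLES LIKE 'relay_log%';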
By default, the binary log rotates automatically when it reaches the size set by the max_binlog_size parameter, and the relay log rotates according to max_relay_log_size or max_binlog_size (if max_relay_log_size is not set, max_binlog_size is used). The other log types do not rotate and always use the same file, so when they grow too large you need to rotate them yourself.
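For example, the rotation threshold can be inspected and a rotation forced as follows; this is only a minimal sketch (the 1 GB value is illustrative):

# Current rotation threshold for the binary log
SHOW GLOBAL VARIABLES LIKE 'max_binlog_size';
# Change the threshold (1 GB here); applies to binlog files written from now on
SET GLOBAL max_binlog_size = 1073741824;
# Force rotation to a new binary log file and list the resulting files
FLUSH BINARY LOGS;
SHOW BINARY LOGS;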
The usual approach is mv file file.bak; followed by a refresh command. To refresh, log in to the instance and run flush logs, which regenerates new log files, but this command refreshes all log types. For a specific log type you can use: flush binary logs to refresh the binary log, flush error logs to refresh the error log, flush general logs to refresh the general query log, flush slow logs to refresh the slow query log, flush relay logs to refresh the relay log, and flush engine logs to flush any flushable logs belonging to the storage engine.
You can also use Server's FLUSH TABLES statement or FLUSH TABLES WITH READ LOCK statement.
Refresh operations can also be performed with the options of some command-line tools, such as the flush-logs option of mysqladmin, or the --flush-logs and --master-data options of mysqldump. The log table implementation has the following characteristics: usually the main purpose of the log tables is to give programs an interface for inspecting the SQL executed in Server, so storing log records in a table is more convenient than storing them in disk files, because the records can be accessed remotely through the table without logging in to the operating system to read the files. The log tables support CREATE TABLE, ALTER TABLE, and DROP TABLE statements, but only when the corresponding log is switched off and the table is not in use (for example, set global general_log=0, and then operate on the general_log table). The general_log and slow_log tables use the CSV engine by default and store records in comma-separated format, so the CSV data files can easily be imported into other programs, such as an Excel spreadsheet, for processing. A log table can be changed to MyISAM, but use of the table must be stopped before the change. The only supported engines are CSV and MyISAM; other engines are not supported.
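A minimal sketch of switching the log destination, and of where the CSV data behind the table typically lives (the exact path depends on your datadir; the file name follows the usual CSV-engine convention, so treat it as an assumption to verify on your instance):

# Write general/slow query log records to mysql.general_log and mysql.slow_log
SET GLOBAL log_output = 'TABLE';
# Or keep both destinations at once
SET GLOBAL log_output = 'FILE,TABLE';
# The CSV data file backing the table normally sits under the datadir,
# e.g. <datadir>/mysql/general_log.CSV, and can be opened in a spreadsheet directly
SHOW GLOBAL VARIABLES LIKE 'datadir';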
To run DDL statements against a log table, you must first disable the corresponding log with its switch and make sure the table is not in use. You can use steps like the following (taking the general query log table as an example; the slow_log and general_log tables are handled the same way).
SET @old_log_state = @@global.general_log;
SET GLOBAL general_log = 'OFF';
ALTER TABLE mysql.general_log ENGINE = MyISAM;
SET GLOBAL general_log = @old_log_state;
You can use TRUNCATE TABLE to clear the records in the log tables.
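For example (a minimal sketch):

# Clear accumulated records from the query log tables
TRUNCATE TABLE mysql.general_log;
TRUNCATE TABLE mysql.slow_log;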
You can use RENAME TABLE to archive a log table; the old and new tables swap names in one atomic operation, as follows:
USE mysql;
DROP TABLE IF EXISTS general_log2;
CREATE TABLE general_log2 LIKE general_log;
RENAME TABLE general_log TO general_log_backup, general_log2 TO general_log;
Notes:
You can use the CHECK TABLE statement.
You cannot use LOCK TABLE statements.
INSERT, DELETE, and UPDATE statements cannot be used; changes to the log table records are maintained internally by Server and cannot be made manually.
FLUSH TABLES WITH READ LOCK and the read_only system variable have no effect on the log tables; Server can always write to them.
Data changes to the log tables are not written to the binlog, so they are not replicated to slaves.
You can use FLUSH TABLES or FLUSH LOGS statements to refresh log tables or log files.
Log tables do not support partitioned tables.
A mysqldump dump contains the statements needed to recreate these tables, so the log table structure can be restored after the dump file is reloaded, but the records in the log tables are not dumped.
PS: MySQL query logs, error logs, and so on are recorded in plain text, so users' plaintext password information may end up in these logs. Statements can also be rewritten before they are recorded by means of a query rewrite plugin; for more information, see the links:
https://dev.mysql.com/doc/refman/5.7/en/plugin-types.html#query-rewrite-plugin-type
https://dev.mysql.com/doc/refman/5.7/en/rewriter-query-rewrite-plugin.html
1.2. Detailed explanation of log table
1.2.1. General_log
This table records the execution of ordinary SQL statements, so you can find out exactly what SQL clients executed on the server (you could also use the Enterprise Edition audit log plugin to record this, which is not covered in this article; interested readers can research it themselves). Records are written to this table when SQL execution starts, not when it finishes.
The following is the content of the information stored in the table.
root@localhost : (none) 07:25:50> set global log_output='TABLE';
Query OK, 0 rows affected (0.00 sec)

root@localhost : (none) 07:26:20> set global general_log=1;
Query OK, 0 rows affected (0.01 sec)

root@localhost : (none) 07:26:32> select * from mysql.general_log;
+----------------------------+---------------------------+-----------+-----------+--------------+----------------------------------+
| event_time                 | user_host                 | thread_id | server_id | command_type | argument                         |
+----------------------------+---------------------------+-----------+-----------+--------------+----------------------------------+
| 2018-06-19 19:26:32.891371 | root[root] @ localhost [] |         3 |   3306102 | Query        | show databases                   |
| 2018-06-19 19:26:42.012064 | root[root] @ localhost [] |         3 |   3306102 | Query        | select * from mysql.general_log  |
+----------------------------+---------------------------+-----------+-----------+--------------+----------------------------------+
2 rows in set (0.00 sec)

root@localhost : (none) 07:26:42> select connection_id();
+-----------------+
| connection_id() |
+-----------------+
|               3 |
+-----------------+
1 row in set (0.00 sec)
Table field meaning.
Event_time: the timestamp of when the query log record was written to the table, marking when the record entered the database.
User_host: indicates the source of the query log record, with user name and hostname information.
Thread_id: the process ID (connection ID) of the thread that executed the logged statement.
Server_id: represents the database instance ID that executed the query.
Command_type: represents the command type of the query, usually query.
Argument: represents the text of the SQL statement that executes the query.
Mysqld writes statements to the query log in the order in which it receives requests, which may differ from the order in which they are executed.
In a master-slave replication architecture:
When the statement-based log format is used on the master, the slave records these statements in its own query log after replaying them (query logging must be enabled on the slave). Likewise, when a binlog recorded in statement format is parsed with the mysqlbinlog command and imported into an instance, the parsed statements are also recorded in that instance's query log if query logging is enabled.
When the row-based log format is used on the master, the data changes replayed on the slave are not recorded in the slave's query log.
When the mixed log format is used on the master: if an event was logged in statement format, the slave records the statement in its own query log after replaying it (query logging must be enabled on the slave); if the master converted the event to row format when writing the binlog, it is replicated in row format, and the replayed data changes are not recorded in the slave's query log.
Query logging can be turned off dynamically for the current session or globally with the sql_log_off system variable (similar in spirit to the sql_log_bin system variable). The query log switch general_log and the query log file path general_log_file can both be modified dynamically (if the query log is already open, changing general_log_file closes the old query log file and opens the new one). When the query log is enabled, records are written to the destination specified by the log_output system variable.
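A minimal sketch of these switches (the file path is only an illustrative value):

# Suppress general query logging for the current session only
SET SESSION sql_log_off = ON;
SET SESSION sql_log_off = OFF;
# Toggle the general query log and point it at a new file
SET GLOBAL general_log = OFF;
SET GLOBAL general_log_file = '/home/mysql/data/mysqldata1/mydata/localhost-new.log';
SET GLOBAL general_log = ON;
# Choose where records go: FILE, TABLE, or both
SET GLOBAL log_output = 'FILE,TABLE';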
If the query log is enabled, the query log file is reopened when Server restarts: if the file exists it is reopened directly, and if it does not exist it is recreated. If you need to archive the query log while Server is running, you can use commands like the following (Linux or Unix platforms).
shell> mv host_name.log host_name-old.log
shell> mysqladmin flush-logs
shell> mv host_name-old.log backup-directory
# On Windows, use rename instead of the mv command
You can also disable the query log with a statement while Server is running, archive the file with external commands, and then re-enable the query log; this way you do not need flush-logs to refresh the log file. This method works on any platform, as shown below:
SET GLOBAL general_log = 'OFF';
# Rename the log file externally while logging is disabled, for example from the command line.
# Then enable logging again:
SET GLOBAL general_log = 'ON';
# This method works on any platform and does not require a server restart.
By default, when a statement containing a user password is executed in Server, the password is rewritten by Server before the statement is written to the query log. If you need plaintext passwords recorded, start Server with the --log-raw option (which bypasses password rewriting). Recording plaintext passwords in the query log is generally not recommended because it is insecure, but if necessary, for example when you need the original statement text to troubleshoot a problem, use your own judgment.
If the password in such a statement is specified as a hash value, the string is not rewritten; for example, CREATE USER 'user1'@'localhost' IDENTIFIED BY PASSWORD 'not-so-secret' is recorded as is. But if the PASSWORD keyword is removed, as in CREATE USER 'user1'@'localhost' IDENTIFIED BY 'not-so-secret', the statement is rewritten in the query log to a form like CREATE USER 'user1'@'localhost' IDENTIFIED WITH 'mysql_native_password' AS '...' so that the plaintext password does not appear.
Statements with certain syntax errors are not written to the query log by default; starting Server with the --log-raw option records all original SQL statements as received.
The time zone of the timestamps in the query log is controlled by the log_timestamps system variable (the same applies to timestamps in the slow query log file and the error log file). Timestamp values can be converted from the local system time zone to any desired time zone by using the CONVERT_TZ() function or by setting the session time_zone variable.
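A minimal sketch of both approaches (the '+08:00' offset is only an example; numeric offsets and 'SYSTEM' work without loading the named time zone tables):

# Convert the stored timestamps to another time zone for display
SELECT CONVERT_TZ(event_time, @@session.time_zone, '+08:00') AS event_time_utc8,
       user_host, argument
FROM mysql.general_log;
# Or change the session time zone so TIMESTAMP columns are displayed in it
SET SESSION time_zone = '+08:00';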
1.2.2. Slow_log
This table records SQL statements whose execution time exceeds the long_query_time setting, statements that do not use an index (the parameter log_queries_not_using_indexes=ON must be enabled), and administrative statements (the parameter log_slow_admin_statements=ON must be enabled).
The following is the content of the information stored in the table.
root@localhost : test 08:46:04> set global long_query_time=0;
Query OK, 0 rows affected (0.01 sec)

root@localhost : test 08:55:14> set global slow_query_log=1;
Query OK, 0 rows affected (0.01 sec)

# disconnect the session and reconnect
root@localhost : (none) 08:56:12> use test
Database changed
root@localhost : test 08:56:13> show tables;
+----------------+
| Tables_in_test |
+----------------+
| customer       |
| product        |
| shares         |
| test           |
| transreq       |
+----------------+
5 rows in set (0.00 sec)

root@localhost : test 08:56:16> select * from test;
+------+------+------+------+------+
| a    | b    | c    | d    | f    |
+------+------+------+------+------+
|    1 |    1 |    1 |    1 |    1 |
|    2 |    2 |    2 |    2 |    2 |
|    3 |    3 |    3 |    3 |    3 |
|    4 |    4 |    4 |    4 |    4 |
|    5 |    5 |    4 |    4 |    5 |
+------+------+------+------+------+
5 rows in set (0.01 sec)

root@localhost : test 08:56:18> select * from mysql.slow_log;
(6 rows in set: one row per statement executed since reconnecting -- select @@version_comment limit 1, select USER(), SELECT DATABASE(), Init DB, show tables, and select * from test -- each with its start_time, user_host root[root] @ localhost [], query_time, lock_time, rows_sent, rows_examined, db, last_insert_id, insert_id, server_id 3306102, sql_text, and thread_id 4)
6 rows in set (0.00 sec)
Table field meaning.
Start_time: the timestamp of when the slow query log record was written to the table.
User_host: a value with username and hostname (IP) format, used to mark the access source.
Query_time: the total execution time of the slow query statement.
Lock_time: the time the slow query statement spent acquiring locks.
Rows_sent: the number of data records that the slow query statement finally returns to the client.
Rows_examined: the number of records checked by the slow query statement in the storage engine.
Db: the default library name when the slow query statement is executed.
Last_insert_id: usually 0.
Insert_id: usually 0.
Server_id: the server id that produces the slow query statement.
Sql_text: the statement text of the slow query log.
Thread_id: the thread process_id that generates the slow query log.
The slow query log contains SQL statements that took longer to execute than the number of seconds set by the long_query_time system variable and that examined at least as many rows as the value set by the min_examined_row_limit system variable (0 by default, meaning no limit on the number of rows examined). The minimum and default values of long_query_time are 0 and 10 seconds, respectively. The value can be specified with microsecond precision (using a decimal fraction), but microseconds are only honored when logging to a file; for slow queries recorded in the table, microseconds are ignored.
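A minimal sketch (the threshold values are only illustrative):

# Log statements slower than 0.5 seconds
SET GLOBAL long_query_time = 0.5;
# Only treat statements as slow if they also examined at least 1000 rows
SET GLOBAL min_examined_row_limit = 1000;
# Global changes apply to sessions opened afterwards; set the session variable
# as well if the current session should use the new threshold immediately
SET SESSION long_query_time = 0.5;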
By default, the slow log does not record administrative statements or statements that do not use indexes, but you can use the log_slow_admin_statements and log_queries_not_using_indexes system variables to change that, so that MySQL Server also records administrative statements and statements that do not use indexes in the slow log, as shown in the sketch below.
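A minimal sketch of enabling both behaviors (both are dynamic global variables in MySQL 5.7):

# Also log administrative statements (ALTER TABLE, ANALYZE TABLE, and so on)
SET GLOBAL log_slow_admin_statements = ON;
# Also log queries that do not use an index
SET GLOBAL log_queries_not_using_indexes = ON;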
The time a statement spends acquiring its initial locks is not counted in its execution time, and mysqld writes the statement to the slow query log only after it has been executed and all of its locks have been released. Therefore, the order of records in the slow query log may differ from the order in which MySQL Server received or executed the statements: a statement that started executing first may release its locks later than a statement that started executing afterwards.
The slow query log is not enabled by default. To enable it, set --slow_query_log=1. To specify the slow log file name, use --slow_query_log_file=file_name. To specify the slow log output destination, use --log-output=FILE|TABLE|NONE.
If slow logging is enabled but no file name is specified, the file is named host_name-slow.log under datadir by default; if the log is written to a table with --log-output=TABLE, the path set with slow_query_log_file=file_name has no effect.
To change the slow log file name dynamically, set slow_query_log=0 to close the slow log, set slow_query_log_file=new_file_name to specify the new file name, and then set slow_query_log=1 to re-enable the slow log.
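A minimal sketch (the file path is only an illustrative value):

# Rotate the slow log to a new file without restarting the server
SET GLOBAL slow_query_log = 0;
SET GLOBAL slow_query_log_file = '/home/mysql/data/mysqldata1/slowlog/slow-query-new.log';
SET GLOBAL slow_query_log = 1;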
If mysqld is started with the --log-short-format option, MySQL Server writes less information about each slow query to the slow query log.
If the log_slow_admin_statements=1 setting is used, MySQL Server records the following administrative statements in the slow query log:
ALTER TABLE, ANALYZE TABLE, CHECK TABLE, CREATE INDEX, DROP INDEX, OPTIMIZE TABLE, and REPAIR TABLE.
If the log_queries_not_using_indexes=1 setting is used, MySQL Server records any query statements that do not use indexes in the slow query log.
When such queries are logged, the slow query log may grow rapidly. In that case you can set the log_throttle_queries_not_using_indexes system variable to limit the rate at which these non-indexed statements are written to the slow query log (note: this variable limits the number of such statements logged per 60 seconds; it is not a time limit). The default value 0 means no rate limit. When the limit is enabled, a 60-second window opens after the first query that does not use an index is logged; further non-indexed queries beyond the limit are not written to the slow log within that window, and when the window ends Server writes a summary record indicating how many such queries were suppressed and the total time they spent executing. The next 60-second window then begins.
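A minimal sketch (the limit of 10 per window is only an example):

# Allow at most 10 queries not using indexes per 60-second window in the slow log
SET GLOBAL log_throttle_queries_not_using_indexes = 10;
# Setting it back to 0 disables the rate limit again
SET GLOBAL log_throttle_queries_not_using_indexes = 0;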
MySQL Server decides whether a statement should be recorded in the slow query log in the following order:
It first checks whether log_slow_admin_statements is enabled. If it is, and the statement is an administrative statement, the statement is recorded as a slow query; if the statement is not an administrative statement, the next check is made. If the parameter is not enabled, it also proceeds to the next check.
It then checks whether the statement's execution time exceeds long_query_time seconds. If it does, the statement is recorded as a slow query; if not, it checks whether log_queries_not_using_indexes is enabled, and if that parameter is enabled and the statement did not use an index, the statement is also recorded as a slow query; otherwise it proceeds to the next check.
If min_examined_row_limit is set to a non-zero value, it checks whether the number of rows examined by the statement exceeds that value; if it does, the statement is recorded as a slow query, and if not, it is not recorded.
The timestamp of slow log records is controlled by the log_timestamps system variable.
By default, a slave in a replication topology does not write slow queries generated by replaying the binlog to its own slow query log. If you need slow statements replayed from the binlog on the slave to be recorded in its slow query log, enable the variable log_slow_slave_statements=1.
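A minimal sketch (a dynamic global variable, set on the slave):

# On the slave: also record slow statements executed by the replication SQL thread
SET GLOBAL log_slow_slave_statements = ON;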
Passwords in statements written to the slow query log are rewritten by the server and do not appear in plain text. If you need the original statement recorded, use the --log-raw option.
2. Miscellaneous tables
Since this series does not cover the Enterprise Edition audit plugin tables audit_log_filter and audit_log_user, or the firewall plugin tables firewall_users and firewall_whitelist, only the servers table is left among the miscellaneous tables, and it is not long enough to warrant its own installment, so we squeeze it into this one. It mainly holds connection information used by the federated engine; if you are not interested, you can skip the rest of this installment.
2.1. Servers
This table stores connection combinations (the IP, port, account, password, database name, and other information of a remote instance; see the examples below for details). These connection combinations are usually used by the federated engine (of course, they can also simply serve as a way to keep connection information in the database, which is easy to maintain). Records in this table are created with create server.
Before going through the meaning of each field, let's look at the two ways of defining connections for the federated engine.
# Syntax:
CREATE SERVER server_name
    FOREIGN DATA WRAPPER wrapper_name
    OPTIONS (option [, option] ...)

option: {
    HOST character-literal
  | DATABASE character-literal
  | USER character-literal
  | PASSWORD character-literal
  | SOCKET character-literal
  | OWNER character-literal
  | PORT numeric-literal
}

# Alternatively, specify the complete connection combination directly with the CONNECTION option:
CONNECTION=scheme://user_name[:password]@host_name[:port_num]/db_name/tbl_name
The following is the content of the information stored in the table.
root@localhost Tue Jun 5 01:12:05 2018 01:12:05 [(none)]> CREATE SERVER fedlink_ip
    -> FOREIGN DATA WRAPPER mysql
    -> OPTIONS (USER 'test', PASSWORD 'test', HOST '127.0.0.1', PORT 3306, DATABASE 'test_table', OWNER 'test_table1');
Query OK, 1 row affected (0.00 sec)

root@localhost Tue Jun 5 01:12:10 2018 01:12:10 [(none)]> CREATE SERVER fedlink_socket
    -> FOREIGN DATA WRAPPER mysql
    -> OPTIONS (USER 'test', PASSWORD 'test', SOCKET '/data/mysql/mysql3306/data/mysql.sock', PORT 3306, DATABASE 'test_table', OWNER 'test_table2');
Query OK, 1 row affected (0.00 sec)

root@localhost Tue Jun 5 01:12:10 2018 01:12:10 [(none)]> CREATE SERVER fedlink_socket_ip
    -> FOREIGN DATA WRAPPER mysql
    -> OPTIONS (USER 'test', PASSWORD 'test', HOST '127.0.0.1', SOCKET '/data/mysql/mysql3306/data/mysql.sock', PORT 3306, DATABASE 'test_table', OWNER 'test_table3');
Query OK, 1 row affected (0.00 sec)

root@localhost Tue Jun 5 01:12:10 2018 01:12:10 [(none)]> select * from mysql.servers;
+-------------------+-----------+------------+----------+----------+------+----------------------------------------+---------+-------------+
| Server_name       | Host      | Db         | Username | Password | Port | Socket                                 | Wrapper | Owner       |
+-------------------+-----------+------------+----------+----------+------+----------------------------------------+---------+-------------+
| fedlink_socket_ip | 127.0.0.1 | test_table | test     | test     | 3306 | /data/mysql/mysql3306/data/mysql.sock  | mysql   | test_table3 |
| fedlink_socket    |           | test_table | test     | test     | 3306 | /data/mysql/mysql3306/data/mysql.sock  | mysql   | test_table2 |
| fedlink_ip        | 127.0.0.1 | test_table | test     | test     | 3306 |                                        | mysql   | test_table1 |
+-------------------+-----------+------------+----------+----------+------+----------------------------------------+---------+-------------+
3 rows in set (0.00 sec)

# To delete a connection combination record, use drop server:
root@localhost Tue Jun 5 01:10:41 2018 01:10:41 [(none)]> drop SERVER fedlink;
Query OK, 1 row affected (0.00 sec)

root@localhost Tue Jun 5 01:11:30 2018 01:11:30 [(none)]> drop SERVER fedlink_socket;
Query OK, 1 row affected (0.00 sec)

root@localhost Tue Jun 5 01:11:55 2018 01:11:55 [(none)]> drop SERVER fedlink_socket_ip;
Query OK, 1 row affected (0.00 sec)
Examples of two ways to use the federated engine to read and write remote instance data.
# Create a user on the remote instance
root@localhost Tue Jun 5 00:23:45 2018 00:23:45 [(none)]> grant all on *.* to test@'%' identified by 'test';
Query OK, 0 rows affected (0.00 sec)

# Create a library for the remote instance tables
root@localhost Tue Jun 5 00:24:06 2018 00:24:06 [(none)]> create database test_table;
Query OK, 1 row affected (0.00 sec)

root@localhost Tue Jun 5 00:30:50 2018 00:30:50 [(none)]> use test_table
Database changed

# Create the remote instance tables test_table1 and test_table2
root@localhost Tue Jun 5 00:31:03 2018 00:31:03 [test_table]> CREATE TABLE test_table1 (
    -> id INT(20) NOT NULL AUTO_INCREMENT,
    -> name VARCHAR(32) NOT NULL DEFAULT '',
    -> other INT(20) NOT NULL DEFAULT '0',
    -> PRIMARY KEY (id),
    -> INDEX name (name),
    -> INDEX other_key (other)
    -> );
Query OK, 0 rows affected (0.06 sec)

root@localhost Tue Jun 5 00:31:09 2018 00:31:09 [test_table]> CREATE TABLE test_table2 (
    -> id INT(20) NOT NULL AUTO_INCREMENT,
    -> name VARCHAR(32) NOT NULL DEFAULT '',
    -> other INT(20) NOT NULL DEFAULT '0',
    -> PRIMARY KEY (id),
    -> INDEX name (name),
    -> INDEX other_key (other)
    -> );
Query OK, 0 rows affected (0.00 sec)

# Create a library for the federated engine tables
root@localhost Tue Jun 5 00:31:16 2018 00:31:16 [test_table]> create database federated;
Query OK, 1 row affected (0.00 sec)

root@localhost Tue Jun 5 00:31:22 2018 00:31:22 [test_table]> use federated
Database changed

# Create a connection combination with create server; the record is saved to the mysql.servers table
root@localhost Tue Jun 5 00:31:25 2018 00:31:25 [federated]> CREATE SERVER fedlink
    -> FOREIGN DATA WRAPPER mysql
    -> OPTIONS (USER 'test', PASSWORD 'test', HOST '127.0.0.1', PORT 3306, DATABASE 'test_table');
Query OK, 1 row affected (0.03 sec)

# View the record in the mysql.servers table
root@localhost Tue Jun 5 00:31:37 2018 00:31:37 [federated]> select * from mysql.servers;
+-------------+-----------+------------+----------+----------+------+--------+---------+-------+
| Server_name | Host      | Db         | Username | Password | Port | Socket | Wrapper | Owner |
+-------------+-----------+------------+----------+----------+------+--------+---------+-------+
| fedlink     | 127.0.0.1 | test_table | test     | test     | 3306 |        | mysql   |       |
+-------------+-----------+------------+----------+----------+------+--------+---------+-------+
1 row in set (0.00 sec)

# Create a federated engine table using the create server connection combination
root@localhost Tue Jun 5 00:32:12 2018 00:32:12 [federated]> CREATE TABLE federated1 (
    -> id INT(20) NOT NULL AUTO_INCREMENT,
    -> name VARCHAR(32) NOT NULL DEFAULT '',
    -> other INT(20) NOT NULL DEFAULT '0',
    -> PRIMARY KEY (id),
    -> INDEX name (name),
    -> INDEX other_key (other)
    -> )
    -> ENGINE=FEDERATED
    -> CONNECTION='fedlink/test_table1';
Query OK, 0 rows affected (0.04 sec)

root@localhost Tue Jun 5 00:32:17 2018 00:32:17 [federated]> show create table federated1;
...
| federated1 | CREATE TABLE `federated1` (
  `id` int(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(32) NOT NULL DEFAULT '',
  `other` int(20) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `name` (`name`),
  KEY `other_key` (`other`)
) ENGINE=FEDERATED DEFAULT CHARSET=utf8 CONNECTION='fedlink/test_table1' |
...
1 row in set (0.00 sec)

# Insert data into the federated engine table federated1;
# the same data can then be queried in both the federated table and the remote instance table
root@localhost Tue Jun 5 00:32:58 2018 00:32:58 [federated]> insert into federated1 (name) values ('federated1');
Query OK, 1 row affected (0.00 sec)

root@localhost Tue Jun 5 00:33:42 2018 00:33:42 [federated]> select * from federated1;
+----+------------+-------+
| id | name       | other |
+----+------------+-------+
|  1 | federated1 |     0 |
+----+------------+-------+
1 row in set (0.00 sec)

root@localhost Tue Jun 5 00:33:49 2018 00:33:49 [federated]> select * from test_table.test_table1;
+----+------------+-------+
| id | name       | other |
+----+------------+-------+
|  1 | federated1 |     0 |
+----+------------+-------+
1 row in set (0.00 sec)

# Create a federated engine table using a complete connection string in the CONNECTION option
root@localhost Tue Jun 5 00:32:32 2018 00:32:32 [federated]> CREATE TABLE federated2 (
    -> id INT(20) NOT NULL AUTO_INCREMENT,
    -> name VARCHAR(32) NOT NULL DEFAULT '',
    -> other INT(20) NOT NULL DEFAULT '0',
    -> PRIMARY KEY (id),
    -> INDEX name (name),
    -> INDEX other_key (other)
    -> )
    -> ENGINE=FEDERATED
    -> CONNECTION='mysql://test:test@127.0.0.1:3306/test_table/test_table2';
Query OK, 0 rows affected (0.00 sec)

# Insert data into the federated engine table federated2,
# then query the same data in both the federated table and the remote instance table
root@localhost Tue Jun 5 00:34:08 2018 00:34:08 [federated]> insert into federated2 (name) values ('federated2');
Query OK, 1 row affected (0.00 sec)

root@localhost Tue Jun 5 00:34:16 2018 00:34:16 [federated]> select * from test_table.test_table2;
+----+------------+-------+
| id | name       | other |
+----+------------+-------+
|  1 | federated2 |     0 |
+----+------------+-------+
1 row in set (0.00 sec)

root@localhost Tue Jun 5 00:34:22 2018 00:34:22 [federated]> select * from federated2;
+----+------------+-------+
| id | name       | other |
+----+------------+-------+
|  1 | federated2 |     0 |
+----+------------+-------+
1 row in set (0.00 sec)

root@localhost Tue Jun 5 00:34:28 2018 00:34:28 [federated]> select * from mysql.servers;
+-------------+-----------+------------+----------+----------+------+--------+---------+-------+
| Server_name | Host      | Db         | Username | Password | Port | Socket | Wrapper | Owner |
+-------------+-----------+------------+----------+----------+------+--------+---------+-------+
| fedlink     | 127.0.0.1 | test_table | test     | test     | 3306 |        | mysql   |       |
+-------------+-----------+------------+----------+----------+------+--------+---------+-------+
1 row in set (0.00 sec)

# Using a socket is similar; for creating a connection combination with a socket,
# refer to the sample table record contents above
Table field meaning.
Server_name: the unique identifier (name) of the connection combination. When deleting a record with drop server, you specify the server_name that exists in the table, for example drop server server_name;.
Host: the remote hostname (IP or domain name) in the connection combination, which corresponds to the HOST in create server and the host_name in the CONNECTION connection combination string.
Db: the database name of the remote instance in the connection combination, corresponding to the DATABASE in create server and the db_name in the CONNECTION connection combination string.
Username: the user name of the remote instance of the connection combination, corresponding to the USER in create server and the user_name in the CONNECTION connection combination string.
Password: the remote instance user password of the connection combination, corresponding to the PASSWORD in create server and the password in the CONNECTION connection combination string.
Port: the remote instance port of the connection combination, corresponding to the PORT in create server and the port_num in the CONNECTION connection combination string.
Socket: the socket file path used to connect to the instance in the connection combination, corresponding to SOCKET in create server.
Wrapper: similar to a protocol name, corresponding to WRAPPER in create server and scheme in CONNECTION connection combination string.
PS:
CONNECTION string mode does not add records to the mysql.servers table.
This concludes the study of the log information record tables of the MySQL system library. I hope it has answered your questions; combining theory with practice is the best way to learn, so go and try it out. If you want to keep learning, stay tuned for more practical articles in this series.