MySQL Database Learning Series 5
5. MySQL operation and maintenance practice
5.1-MySQL log system
What is a log?
A log is a file that records events sequentially.
It records what happens while a computer program is running.
Uses of logs
Analyzing the characteristics of service requests, traffic, and so on.
Server logs
Record notable events during process startup and operation, to help analyze problems with the MySQL service
Capture specific SQL statements on demand, to track business SQL that may have performance problems
Transaction logs
Record all changes the application makes to the data
Can be used for data synchronization between instances
Server log: service error log
Transaction log: binary log
Service error log
Records important messages during instance startup and operation
Configuration parameter:
log_error = /data/mysql_data/node-1/mysql.log
The content is not limited to error messages.
If the mysqld process cannot start properly, check the error log first.
Slow query log
Records SQL statements whose execution time exceeds a given threshold
Configuration parameters:
slow_query_log = 1
slow_query_log_file = /data/mysql_data/node-1/mysql-slow.log
long_query_time = 5
Used to analyze SQL that may cause performance problems in the system.
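As a minimal sketch (the file path and the 5-second threshold are just illustrative values), the slow query log can also be switched on at runtime, without a restart:
SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/data/mysql_data/node-1/mysql-slow.log';
SET GLOBAL long_query_time = 5;
SHOW VARIABLES LIKE 'slow_query%';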
General query log
If enabled, all SQL statements executed on the system are recorded.
Configuration parameters:
general_log = 1
general_log_file = /data/mysql_data/node-1/mysql-general.log
Occasionally used to help analyze system problems; it has a noticeable impact on performance.
Query log output and file switching
Log output parameter:
log_output = {FILE | TABLE | NONE}
If a log file grows too large, it can be truncated and switched to a new file periodically.
FLUSH LOGS
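A rough rotation sketch for the slow query log (paths are illustrative; FLUSH SLOW LOGS reopens the log file so that the renamed copy can be archived):
mv /data/mysql_data/node-1/mysql-slow.log /data/mysql_data/node-1/mysql-slow.log.1
mysql -uroot -p -e "FLUSH SLOW LOGS;"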
Storage engine transaction log
Some storage engines have a redo log.
WAL (Write-Ahead Log) mechanism; used by storage engines such as InnoDB, TokuDB, etc.
The log is persisted first when a transaction commits, ensuring that crash recovery does not lose data.
Sequential log writes give better performance.
InnoDB transaction logs are reused alternately across two files.
Binary log binlog
binlog (binary log)
Records the SQL statements, or the logical data changes, caused by data modifications.
Recorded at the MySQL service layer, independent of the storage engine.
The main roles of binlog are:
Restoring data on top of a backup
Mining and analyzing SQL statements
Turn on binlog
Main parameters:
log_bin = c:/tmp/mylog/mysql-bin
sql_log_bin = 1
sync_binlog = 1
View binlog
SHOW BINARY LOGS;
Binlog management
Main parameters:
max_binlog_size = 100M
expire_logs_days = 7
Binlog always creates new files and never reuses old ones.
Manually clean up binlog:
PURGE BINARY LOGS TO 'mysql-bin.000009';
PURGE BINARY LOGS BEFORE '2016-04-02 21:00:40';
View binlog content
SHOW BINLOG EVENTS IN 'mysql-bin.000011';
SHOW BINLOG EVENTS IN 'mysql-bin.000011' FROM 60 LIMIT 3;
mysqlbinlog tool
mysqlbinlog c:/tmp/mylog/mysql-bin.000001
--start-position | --stop-position
Binlog format
Main parameters:
binlog_format = {ROW | STATEMENT | MIXED}
mysqlbinlog --base64-output=decode-rows -v c:/tmp/mylog/mysql-bin.000001
5.2-MySQL data backup
Basics-backup purpose
Data disaster recovery
Deal with data loss caused by hardware failure
Build a mirror library for other services
Data migration, statistical analysis, etc.
Establishing a mirror of the online data
Basics-backup content
Data
Data files, or data in text format
Basics-cold backup and hot backup
Cold backup
Shut down the database service and make a full copy of the data files
Hot backup
Back up the database without affecting its read and write service
Basics-physical backup and logical backup
physical backup
Copy data in the form of data pages
Logical backup
Export as raw data or as SQL (INSERT) statements
Basics-Local backup and remote backup
local backup
Back up locally on the database server
Remote backup
Connect to the database remotely to take the backup
Basics-full backup and incremental backup
Full backup
Back up the complete database
Incremental backup
Back up only the data modified since the last backup
Basics-backup cycle
Factors to consider:
Database size (determines backup time)
Recovery speed requirements (fast or slow)
Backup method (full or incremental)
Common tools and usage
mysqldump-logical backup, hot backup
xtrabackup-physical backup, hot backup
LVM/ZFS snapshot-physical backup
mydumper-logical backup, hot backup
cp-physical backup, cold backup
Common tools and usage-mysqldump
The official command-line tool that ships with MySQL
Main examples:
Demonstrates using mysqldump to back up tables, databases, and whole instances
Back up all databases:
mysqldump -uroot -p123456 --socket=/var/run/mysqld/mysqld.sock --all-databases > /dbbackup/all_db.sql
Back up a specified database:
mysqldump -uroot -p123456 --socket=/var/run/mysqld/mysqld.sock --databases db2 > /dbbackup/db2.sql
Back up a single table:
mysqldump -uroot -p123456 --socket=/var/run/mysqld/mysqld.sock db2 t1 > /dbbackup/db2_t1.sql
Restore the table:
mysql> source /dbbackup/db2_t1.sql
Demonstrates using mysqldump to take a consistent backup
mysqldump --single-transaction -uroot -p123456 --all-databases > /dbbackup/all_db_2.sql
Demonstrates using mysqldump to back up a remote database
mysqldump -utest -ptest -h192.168.0.68 -P3306 --all-databases > /dbbackup/remote_bakall.sql
Demonstrates using mysqldump to export data in CSV format
mysqldump -uroot -p123456 --single-transaction --fields-terminated-by=, db1 -T /tmp
Common tools and usage-xtrabackup
Features:
Supports rate-limited backup to avoid impacting the business
Supports streaming backup
Supports compression and encryption of backup files
Principle of xtrabackup backup
Based on InnoDB's crash-recovery capability
Reads and writes are allowed during the backup; write requests keep generating redo logs
Copies the data files from disk
Copies, in real time, all redo logs generated during the backup from the InnoDB redo log files
At recovery time: data files + redo logs = consistent data
Practical script innobackupex
An open-source Perl script that wraps calls to xtrabackup plus a series of related tools and OS operations, and completes the whole backup process
Supports backing up InnoDB tables as well as tables of other engines
Basic process of innobackupex backup
Start xtrabackup_log -> copy .ibd and ibdata1 -> FLUSH TABLES WITH READ LOCK -> copy .frm, .MYD, .MYI and misc files -> get binary log position -> UNLOCK TABLES -> stop and copy xtrabackup_log
Innobackupex usage
Full backup
innobackupex --user=root --password=123456 --defaults-file=/etc/mysql/my.cnf /dbbackup
Incremental backup
innobackupex --user=root --password=123456 --defaults-file=/etc/mysql/my.cnf --incremental --incremental-basedir=/dbbackup/2016-4-3-3-13-14-24-24-32 /dbbackup
Streaming backup
innobackupex --user=root --password=123456 --defaults-file=/etc/mysql/my.cnf --stream=xbstream /dbbackup/ > /dbbackup/stream.bak
Parallel backup
innobackupex --user=root --password=123456 --defaults-file=/etc/mysql/my.cnf --parallel=4 /dbbackup/
Rate-limited backup
innobackupex --user=root --password=123456 --defaults-file=/etc/mysql/my.cnf --throttle=10 /dbbackup/
Compressed backup
innobackupex --user=root --password=123456 --defaults-file=/etc/mysql/my.cnf --compress --compress-threads=4 /dbbackup/
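A rough sketch of restoring from a full innobackupex backup (the backup directory name and datadir are illustrative; the MySQL service must be stopped and the datadir emptied before copy-back):
innobackupex --apply-log /dbbackup/2016-04-03_13-14-32/
innobackupex --defaults-file=/etc/mysql/my.cnf --copy-back /dbbackup/2016-04-03_13-14-32/
chown -R mysql:mysql /data/mysql_data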
How to make backup strategy
Factors to consider:
Are all tables in the database on the InnoDB engine? -> backup mode: hot or cold backup
Data size -> logical or physical backup, full or incremental
Is there enough local disk space on the database server? -> back up locally or to a remote host
How fast does recovery need to be? -> backup frequency: hourly or daily
5.3-MySQL data recovery
Scenarios that require data recovery:
Hardware failure (such as disk damage)
Human error (such as mistakenly deleted data, or being hacked)
Business rollback (for example, a game bug requires rolling back)
Normal requirements (such as deploying a mirror library, or viewing data as of some point in history)
Necessary conditions for data recovery
valid backup
Approaches to data recovery
Latest backup + binlog, replayed up to the failure point in time (applicable to all kinds of data-loss scenarios)
Mine the binlog between the last backup and the point of failure for the relevant SQL statements, construct the reverse SQL statements and apply them to the database (only suitable for lost records, and the binlog must be in ROW format)
Reverse SQL statement
Example:
t1 (id primary key, a int)
Reverse SQL statements:
insert into t1 (id, a) values (1, 1)  ->  delete from t1 where id = 1 and a = 1
update t1 set a = 5 where id = 1  ->  update t1 set a = 1 where id = 1
delete from t1 where id = 1  ->  insert into t1 (id, a) values (1, 1)
Database recovery tools and commands
mysqldump backup -> restore with source
binlog backup -> restore with mysqlbinlog
Detailed examples:
Recovering mistakenly deleted data
Recovering mistakenly deleted tables and databases
Example: recovering mistakenly deleted data
Case: misoperation - a DELETE was executed without the complete condition: delete from user where age > 30 [and sex='male'] (the bracketed part was forgotten)
Requirement: restore the deleted data
Recovery prerequisite: a complete log of the database operations (binlog)
delete from user where sex='female'
First, find the relevant entries in the binlog.
mysqlbinlog -vv mysql-bin.000001   # find the SQL statement, then write the reverse SQL statement
Example: recovering mistakenly deleted tables and databases
Case: the business is hacked and a table is dropped (drop table user)
Requirement: restore the table
Prerequisite: a backup plus the full binlog taken since that backup
innobackupex --apply-log /dbbackup/filename
cat xtrabackup_binlog_info   # view the binlog position recorded at backup time (the start point)
mysqlbinlog -vv filename     # view the end point
mysqlbinlog -vv --start-position=2556990 --stop-position=2776338
mysqlbinlog -vv --start-position=2556990 --stop-position=2776338 | mysql -uroot -p123456 --socket=/dbbackup/mysql_3309/mysqld.sock
Course summary
Recovery is always a difficult job; try to avoid ever having to do it. We should be data guardians, not firefighters. (Strictly control online permissions, test data-change operations in advance, and take a backup before the operation.)
An effective backup (plus binlog) is the top priority; regular database backups are essential.
Backup is the foundation of all data recovery.
5.4-MySQL online deployment
MySQL online deployment
Factors to consider:
Version selection: 5.1, 5.5, or 5.6?
Branch selection: official Community Edition? Percona Server? MariaDB?
Installation method: package installation? Binary package? Compile from source?
Path and parameter configuration (use templates and standardize as far as possible)
One instance with multiple databases, or multiple instances each with a single database?
Binary installation MySQL
Download the binary package
Extract it and put it in the specified directory (for example, /usr/local)
Add the MySQL bin directory to PATH
Initialize the instance, edit the configuration file, and start it
Set up account security
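A rough shell sketch of these steps, assuming a 5.6 generic Linux tarball (the file name, paths and directory layout are illustrative):
tar zxf mysql-5.6.30-linux-glibc2.5-x86_64.tar.gz -C /usr/local
ln -s /usr/local/mysql-5.6.30-linux-glibc2.5-x86_64 /usr/local/mysql
echo 'export PATH=/usr/local/mysql/bin:$PATH' >> /etc/profile && source /etc/profile
useradd -r -s /sbin/nologin mysql
/usr/local/mysql/scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/data/mysql_data/node-1
cp /usr/local/mysql/support-files/my-default.cnf /etc/my.cnf   # then edit paths and parameters
mysqld_safe --defaults-file=/etc/my.cnf --user=mysql &
mysql_secure_installation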
Compile and install MySQL
Download the MySQL source package
Install the necessary packages (make, cmake, bison-devel, ncurses-devel, build-essential)
Run cmake to configure the MySQL compile options and customize the features to be installed
make && make install
Initialize the instance, edit the configuration file, and start it
Set up account security
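A minimal cmake sketch for such a source build (the install prefix, datadir, charset and engine options are illustrative choices):
cd mysql-5.6.30
cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
        -DMYSQL_DATADIR=/data/mysql_data/node-1 \
        -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci \
        -DWITH_INNOBASE_STORAGE_ENGINE=1
make && make install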
MySQL upgrade
Download the MySQL 5.6 installation package and configure its installation path
Shut down the MySQL 5.5 instance, adjust some parameters, and start it with the MySQL 5.6 binaries
Run the mysql_upgrade script from the MySQL 5.6 installation
Verify that the upgrade succeeded
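Sketched as commands (the install paths, socket and credentials are illustrative):
/usr/local/mysql-5.5/bin/mysqladmin -uroot -p shutdown
/usr/local/mysql-5.6/bin/mysqld_safe --defaults-file=/etc/my.cnf --user=mysql &
/usr/local/mysql-5.6/bin/mysql_upgrade -uroot -p --socket=/tmp/mysql.sock
mysql -uroot -p -e "SELECT VERSION();"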
MySQL multi-instance installation
Deploy the MySQL software
Edit multiple configuration files and initialize multiple instances
Start each MySQL instance
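A sketch of running two instances on one host (ports, sockets and data directories are illustrative; each instance needs its own server-id, port, socket and datadir in its configuration file):
mysql_install_db --user=mysql --datadir=/data/mysql_data/node-1
mysql_install_db --user=mysql --datadir=/data/mysql_data/node-2
mysqld_safe --defaults-file=/etc/mysql/node-1.cnf &   # port=3306, socket=/tmp/mysql3306.sock
mysqld_safe --defaults-file=/etc/mysql/node-2.cnf &   # port=3307, socket=/tmp/mysql3307.sock
mysql -uroot -p --socket=/tmp/mysql3307.sock          # connect to the second instance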
MySQL multi-instance deployment
Why multi-instance deployment?
Make full use of system resources
Resource isolation
Isolation between services and modules
Summary of MySQL online installation
Choose the appropriate version and branch according to requirements; it is recommended to use, or upgrade to, a newer version such as 5.5 or 5.6.
If you need to customize MySQL features, consider compiling from source; otherwise installing the binary package is recommended, as it is more convenient.
Choose between a single instance and multiple MySQL instances according to the machine configuration; if the machine is very well provisioned, deploying multiple instances is recommended.
5.5-MySQL master-slave replication
MySQL master-slave replication
One master and one slave
Master-master replication
One master and multiple slaves
Multiple masters and one slave
Cascade replication
MySQL master-slave replication purpose
Real-time disaster recovery, for failover
Read/write splitting, providing query service
Backups, taken without affecting the business
MySQL master-slave replication deployment
Necessary conditions for master-slave deployment
The master has binlog enabled (set the log-bin parameter)
The master and slave have different server-id values
The slave server can connect to the master
Master-slave deployment steps:
Backup and restore (mysqldump or xtrabackup)
Authorization (grant replication slave on ...)
Configure replication and start it (change master to ...)
View master-slave replication information (show slave status\G); a minimal sketch of these steps follows
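A minimal sketch of the deployment steps (host, user, password and binlog coordinates are illustrative; the file and position come from the backup or from SHOW MASTER STATUS):
-- on the master
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.0.%' IDENTIFIED BY 'repl123';
SHOW MASTER STATUS;
-- on the slave
CHANGE MASTER TO
  MASTER_HOST='192.168.0.68',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl123',
  MASTER_LOG_FILE='mysql-bin.000011',
  MASTER_LOG_POS=120;
START SLAVE;
SHOW SLAVE STATUS\G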
Problems with MySQL replication
Existing problems
Data may be lost after the master crashes
The slave has only one SQL thread; when the master is under heavy write pressure, replication is likely to lag
Solutions:
semi-synchronous replication
parallel replication
MySQL semi-sync (semi-synchronous replication)
Semi-synchronous replication
Integrated into MySQL since 5.5; it exists as a plugin and needs to be installed separately
Ensures that, after a transaction commits, its binlog has been transferred to at least one slave
Does not guarantee that the slave has finished applying the binlog of that transaction
Performance drops somewhat, and response times become longer
On a network exception or slave crash, the master blocks until the timeout expires or the slave recovers
(Diagrams: MySQL asynchronous replication vs. MySQL semi-synchronous replication)
Configure MySQL semi-synchronous replication
Install once:
On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
On the slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
Dynamic settings:
On the master:
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = N;   # after N ms without an acknowledgment, the master falls back to asynchronous replication
On the slave:
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
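To verify that semi-synchronous replication is actually in effect, the plugin's status variables can be checked on the master (a sketch; Rpl_semi_sync_master_status should be ON and Rpl_semi_sync_master_clients should be non-zero):
SHOW STATUS LIKE 'Rpl_semi_sync_master_status';
SHOW STATUS LIKE 'Rpl_semi_sync_master_clients';
SHOW STATUS LIKE 'Rpl_semi_sync_master_yes_tx';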
Configure MySQL parallel replication
Parallel replication
Added in Community Edition 5.6
"Parallel" means the slave applies the binlog with multiple threads
The binlog is applied in parallel at the database level; changes to the same database are still applied serially (version 5.7 adds parallel replication based on transaction groups)
Setting:
SET GLOBAL slave_parallel_workers = 10;   # sets the number of SQL (applier) threads to 10
Cascade replication
A -> B -> C
Add the log_slave_updates parameter on B, so that B records the changes replicated from A in its own binlog.
Replication monitoring
Query the slave status:
SHOW SLAVE STATUS\G
Handling replication errors
Common errors: 1062 (duplicate key / primary key conflict), 1032 (record does not exist). Resolution: handle manually, or skip the replication error: set global sql_slave_skip_counter=1
Summary
MySQL master-slave replication is the foundation of MySQL high availability and high performance (load balancing).
It is simple and flexible and can be deployed in many ways; different replication topologies can be used for different business scenarios.
MySQL master-slave replication also has some problems; replication enhancements can be deployed as needed to address them.
Replication status should be monitored at all times; replication errors or lag may affect the system.
MySQL replication is a basic skill that MySQL database engineers must know.
5.6-MySQL daily operation and maintenance
DBA operation and maintenance work
Daily work
Data import, data modification, table structure changes
Granting permissions, problem handling, etc.
Database selection, deployment, design, monitoring, backup, optimization, etc.
Exporting data and precautions
The final form of the data (CSV, SQL text, or loaded directly into another database)
Methods of exporting data (mysqldump, select into outfile)
Precautions when exporting data
Exporting to CSV format requires the FILE privilege, and the file is written locally on the database server.
Avoid locking databases and tables (mysqldump with the --single-transaction option does not lock tables)
Avoid impacting the business; do it on a mirror library whenever possible (a sketch of the CSV export/import pair follows)
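A sketch of the SELECT ... INTO OUTFILE / LOAD DATA pair mentioned above (table, columns and path are illustrative; the account needs the FILE privilege and the path must be writable by mysqld):
SELECT id, name, age
  INTO OUTFILE '/tmp/user.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM db1.user;
LOAD DATA INFILE '/tmp/user.csv'
  INTO TABLE db1.user_copy
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n';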
Data modification and matters needing attention
Remember to take a backup before modifying anything
Do the change inside a transaction, check the result, then commit
Avoid modifying a large amount of data at once; modify it in batches (as sketched below)
Avoid doing it during peak business hours
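A sketch of a batched modification following these guidelines (table, condition and batch size are illustrative; repeat the batch until it reports 0 rows affected):
START TRANSACTION;
UPDATE db1.user SET status = 0 WHERE age > 30 LIMIT 1000;
SELECT ROW_COUNT();          -- check how many rows this batch touched
COMMIT;                      -- or ROLLBACK if the result looks wrong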
Matters needing attention for table structure change
Do it during off-peak hours
Will the table structure change take locks? (5.6 includes an online DDL feature)
Use pt-online-schema-change to perform table structure changes (see the sketch below)
It can avoid master-slave lag
It can avoid excessive load, via rate limiting
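A sketch of a pt-online-schema-change invocation (database, table, column and connection details are illustrative; run with --dry-run first, then --execute):
pt-online-schema-change --alter "ADD COLUMN age INT NOT NULL DEFAULT 0" \
  D=db1,t=user --user=root --password=123456 --host=127.0.0.1 \
  --max-lag=5 --critical-load Threads_running=50 --dry-run
pt-online-schema-change --alter "ADD COLUMN age INT NOT NULL DEFAULT 0" \
  D=db1,t=user --user=root --password=123456 --host=127.0.0.1 \
  --max-lag=5 --critical-load Threads_running=50 --execute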
Add authority and precautions
Grant only the minimum permissions that meet the requirement
Avoid changing passwords while granting privileges
Avoid giving SUPER privileges to application accounts (a minimal grant example follows)
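A sketch of a minimal grant for an application account (database, host range and password are illustrative):
GRANT SELECT, INSERT, UPDATE, DELETE ON db1.* TO 'app'@'192.168.0.%' IDENTIFIED BY 'app_pass';
SHOW GRANTS FOR 'app'@'192.168.0.%';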
Problem handling (is the database slow?)
Where is the database slow?
show processlist to view MySQL connection information
View system status (iostat, top, vmstat)
Summary
The daily work is relatively simple, but any operation may affect online services.
Choose the most appropriate approach for each environment and each requirement.
Daily work should favor being steady over being fast; keeping the online service stable is the DBA's greatest responsibility.
5.7-MySQL parameter tuning
Why adjust the parameters?
Different servers have different configurations and performance.
Different business scenarios have different requirements for the data.
MySQL's default parameters are only reference values and are not suitable for every application.
What do we need to know before optimizing?
Server-related configuration
Business-related characteristics
MySQL-related configuration
What needs to be paid attention to on the server
hardware condition
operating system version
CPU and NIC power-saving modes
Server NUMA settings
RAID card cache
Disk scheduling policy-write back
Data is written to the cache and reported as successful; the cache is then flushed to the storage medium asynchronously.
Disk scheduling policy - write through
Data is written to both the cache and the storage medium before the write returns success.
Write Back VS Write Through
Write Back performs better than Write Through
Write Through is safer than Write Back
RAID
RAID: Redundant Array of Independent Disks
Bare devices are rarely used in production environments; a RAID card is usually used to build RAID over one or more disks.
The RAID card reserves a piece of memory (cache) to make data writes and reads efficient.
The common RAID levels are: RAID1, RAID0, RAID10 and RAID5
RAID0 VS RAID1
RAID 0-Block Striped. No Mirror. No Parity.
RAID 1-Block Mirrored. No Stripe. No Parity.
RAID5 VS RAID10
RAID 5 - Block striped. Distributed parity. (At least three disks; with three disks, each stripe has two data blocks and one parity block.)
RAID 10 - Block mirrored and striped. (Every two disks form a RAID1 pair, and the pairs are combined with RAID0; at least four disks.)
How to ensure data Security in RAID
BBU (Backup Battery Unit)
Under the write-back policy, the BBU ensures that the cached data can still be written to disk even if the server loses power or goes down, thus ensuring data safety.
What are the considerations for MySQL?
Deployment and installation of MySQL
Monitoring of MySQL
Parameter tuning of MySQL
Requirements for deploying MySQL
Recommended MySQL version: >= MySQL 5.5
Recommended MySQL storage engine: InnoDB
The basis of system tuning: monitoring
Monitor MySQL's slow query log in real time
Monitor the database server's load in real time
Monitor MySQL's internal status values in real time
Which MySQL status values are usually worth watching?
Com_Select/Update/Delete/Insert
Bytes_received/Bytes_sent
Buffer Pool Hit Rate
Threads_connected/Threads_created/Threads_running
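A sketch of pulling a few of these counters (the names are real status variables; the buffer pool hit rate is derived from the Innodb_buffer_pool_read* counters):
SHOW GLOBAL STATUS LIKE 'Com_select';
SHOW GLOBAL STATUS LIKE 'Com_insert';
SHOW GLOBAL STATUS LIKE 'Threads_%';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- hit rate ~ 1 - Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests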
MySQL parameter tuning
Why adjust MySQL's parameters?
MySQL is a general-purpose database, but workloads vary, and the default parameters cannot satisfy every business requirement.
Some of MySQL's internal parameters were set for rate limiting and protection in very old versions; as machine performance has improved, these protective parameters can become performance bottlenecks.
Read optimization
Making good use of indexes is critical to MySQL query performance.
Properly adjusting parameters can also improve query performance.
innodb_buffer_pool_size
The InnoDB storage engine maintains a memory area for caching data, replacing old pages with new ones.
The more memory, the more data can be cached.
innodb_thread_concurrency
InnoDB's internal concurrency control parameter; setting it to 0 means no limit.
If there are many concurrent requests and this parameter is set low, incoming requests will queue up.
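A my.cnf sketch for these two read-related parameters (the values are illustrative; on a dedicated server the buffer pool is often set to a large share of RAM):
[mysqld]
innodb_buffer_pool_size = 8G
innodb_thread_concurrency = 0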
Writing optimization
Table structure design: use an auto-increment field as the table's primary key.
Index only the appropriate fields; too many indexes hurt write performance.
Monitor the server's disk I/O; if write latency is high, capacity needs to be expanded.
Choose the correct MySQL version and set parameters reasonably.
Which parameters help improve write performance?
innodb_flush_log_at_trx_commit && sync_binlog
innodb log file size
innodb_io_capacity
innodb insert buffer
Two parameters that mainly affect the write performance of MySQL
innodb_flush_log_at_trx_commit
sync_binlog
innodb_flush_log_at_trx_commit
Controls how InnoDB flushes the transaction log. It has three values: 0, 1, 2.
= 0 - every second, write the transaction log from the log buffer to the log file and flush the log file to disk (efficient but not safe)
= 1 - on every transaction commit, write the transaction log from the buffer to the log file and flush the log file to disk; this mode is preferred when data safety matters (less efficient, very safe)
= 2 - on every transaction commit, write the transaction log from the buffer to the log file; flushing to disk happens roughly once per second but is not guaranteed, depending on the operating system's scheduling (efficient, but not safe)
Sync_binlog
Controls whether each write to the binlog must be persisted (fsynced) to disk.
How to ensure the safety of transactions
Set both innodb_flush_log_at_trx_commit and sync_binlog to 1
Transactions must be kept consistent with the binlog
(lock) -> XA prepare, fsync -> write and fsync binlog -> InnoDB commit, fsync -> (release lock)
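The "double 1" setting as a my.cnf sketch (this is the safe-but-slower configuration discussed above):
[mysqld]
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1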
What are the problems with serial?
Generally speaking, SAS disks can only perform about 150-200 fsyncs per second.
Translated to the database, that is only about 50 to 60 transactions per second.
Community and official improvements
MariaDB proposed an improvement (group commit) so that commits can be merged even when both parameters are set to 1, greatly improving performance.
The official version absorbed MariaDB's idea and improved on it, raising performance again.
Tips:
The official version includes this optimization as of MySQL 5.6.
The Percona and MariaDB branches already included this optimization in their 5.5 versions.
InnoDB Redo log
Write-Ahead Log
The role of the redo log
The redo log is used for recovery after a database crash.
What problems can the redo log cause?
If writes are so frequent that the oldest dirty pages covered by the redo log have not yet been flushed to disk, the database stalls and dirty pages are forcibly flushed to disk.
MySQL's default configuration of two files totals only about 10 MB, which fills up very easily; production environments should resize it appropriately.
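A my.cnf sketch for enlarging the redo log (the sizes are illustrative; on older versions, changing innodb_log_file_size requires a clean shutdown and removal of the old ib_logfile* files before restart):
[mysqld]
innodb_log_file_size = 1G
innodb_log_files_in_group = 2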
Innodb_io_capacity
How many dirty pages InnoDB flushes at a time determines the InnoDB storage engine's throughput.
On SSDs and other high-performance storage media, this parameter should be raised to improve database performance.
Insert Buffer
sequential read and write VS random read and write
The performance of random requests is much lower than that of sequential requests.
Merging as many random requests as possible into sequential requests is the key to improving database performance.
MySQL has supported the insert buffer since version 5.1
Since MySQL 5.5, merging is also supported for update and delete operations
The insert buffer is effective only for secondary, non-unique indexes
Summary
Server configuration should be reasonable (kernel version, disk scheduling policy, RAID card cache)
A complete monitoring system, to find problems in advance
Keep up with database versions: not too new, not too old
Database performance optimization:
Query optimization: index optimization first, parameter optimization second
Write optimization: business optimization first, parameter optimization second