MySQL architecture:
MySQL is composed of connection pool components, management services and utility components, the SQL interface, the query parser, the query optimizer, caches and buffers, pluggable storage engines, and physical files.
MySQL has a distinctive pluggable storage engine architecture, and each storage engine has its own characteristics.
Overview of the MySQL storage engines:
(1) InnoDB storage engine: oriented to OLTP (online transaction processing); row-level locks; foreign key support; non-locking reads; its default REPEATABLE READ isolation level avoids phantom reads through next-key locking; it also provides the insert buffer, doublewrite, adaptive hash index, and read-ahead.
(2) MyISAM storage engine: does not support transactions; table-level locks; full-text indexing; suitable for OLAP (online analytical processing). Each table's .MYD file stores the data and its .MYI file stores the indexes.
(3) NDB storage engine: a cluster storage engine with a share-nothing architecture, which improves availability.
(4) Memory storage engine: data is stored in memory; table-level locks; limited concurrency performance; hash indexes by default.
(5) Archive storage engine: supports only INSERT and SELECT; uses the zlib algorithm to compress data at roughly 1:10; row-level locks; suitable for storing archived data such as logs.
(6) Maria storage engine: aims to replace MyISAM; caches data and indexes; row-level locks; MVCC.
InnoDB features:
(1) Main architecture: there are 7 background threads by default: 4 I/O threads (insert buffer, log, read, write), 1 master thread (highest priority), 1 lock-monitor thread, and 1 error-monitor thread. They can be inspected through SHOW ENGINE INNODB STATUS. Newer versions raise the default number of read threads and write threads to 4 each, which can be checked through show variables like 'innodb_%io_threads'.
(2) The storage engine's memory consists of the buffer pool, the redo log buffer, and an additional memory pool. Their sizes can be checked through show variables like 'innodb_buffer_pool_size', show variables like 'innodb_log_buffer_size', and show variables like 'innodb_additional_mem_pool_size' (see the sketch after this list).
(3) Buffer pool: occupies the largest block of memory; it caches all kinds of data, including index pages, data pages, undo pages, the insert buffer, the adaptive hash index, InnoDB's lock information, data dictionary information, and so on. It works by always reading database files into the buffer pool page by page (16KB per page), and then retaining cached pages according to a least-recently-used (LRU) algorithm. When a database file needs to be modified, the page is always modified in the buffer pool first (pages modified this way are dirty pages), and the dirty pages are then flushed to the files at a certain frequency. It can be inspected with SHOW ENGINE INNODB STATUS.
(4) Redo log buffer: redo log information is first put into this buffer and then flushed to the redo log files at a certain frequency.
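As a minimal sketch of the checks above (the my.cnf values are purely illustrative; in these older versions innodb_buffer_pool_size is a static parameter that must be changed in the configuration file and applied with a restart):

show variables like 'innodb_buffer_pool_size';
show variables like 'innodb_log_buffer_size';
show variables like 'innodb_additional_mem_pool_size';
-- In my.cnf (illustrative values, take effect after a restart):
-- [mysqld]
-- innodb_buffer_pool_size = 512M
-- innodb_log_buffer_size  = 8M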
Master thread (main thread):
(1) Operations in the main loop once per second:
Flush the log buffer to disk, even if the transaction has not yet committed (always executed; this is why even large transactions commit quickly).
Merge the insert buffer (executed if the number of I/O operations in InnoDB during the last second was less than 5).
Flush at most 100 dirty pages in the buffer pool to disk (executed if the proportion of dirty pages in the buffer pool exceeds the threshold set by innodb_max_dirty_pages_pct in the configuration file; the default used to be 90, newer versions use 75, and Google recommends 80; see the sketch after this section).
If there is no current user activity, switch to the background loop.
(2) Operations in the main loop every 10 seconds:
Flush 100 dirty pages to disk (executed if fewer than 200 I/O operations occurred in the past 10 seconds).
Merge at most 5 insert buffers (always).
Flush the log buffer to disk (always).
Delete useless undo pages (always).
Flush 100 or 10 dirty pages to disk (if more than 70% of the pages are dirty, flush 100 dirty pages; otherwise flush 10).
Generate a checkpoint.
(3) Background loop: if there is no user activity (the database is idle) or the database is being shut down, the master thread switches to this loop:
Delete useless undo pages (always).
Merge 20 insert buffers (always).
Jump back to the main loop (always).
Keep flushing 100 pages until the criterion is met (possibly done in the flush loop).
If there is nothing left to do in the flush loop, InnoDB switches to the suspend loop, suspends the master thread, and waits for events. If the InnoDB storage engine is enabled but no InnoDB tables are used, the master thread always stays in the suspended state.
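As referenced above, the dirty-page threshold can be inspected and, since it is a dynamic variable, adjusted at runtime (75 is simply the newer default, used here as an example):

show variables like 'innodb_max_dirty_pages_pct';
set global innodb_max_dirty_pages_pct = 75;  -- dynamic; takes effect immediately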
Insert buffer: the insert buffer is not part of the buffer pool; like data pages, it is a component of physical pages, and it brings a performance improvement to InnoDB. According to the characteristics of the B+ tree algorithm (discussed below), inserts on the primary key are sequential and do not cause random reads of the database, while for non-clustered indexes (that is, secondary indexes) leaf-node insertion is no longer sequential: the non-clustered index must be accessed discretely, and insert performance suffers there. InnoDB therefore introduces the insert buffer: it first checks whether the non-clustered index page is in the buffer pool; if it is, the entry is inserted directly; if not, it is put into the insert buffer first. Then, as described for the master thread above, the insert buffer is merged at a certain frequency. One further condition is that the secondary index must not be unique, because the insert buffer does not look up the index page when buffering an insert; otherwise random reads would still occur and the insert buffer would lose its meaning. The insert buffer can take up memory in the buffer pool, by default up to 1/2 of it, so this can be reduced, for example to 1/3, through IBUF_POOL_SIZE_PER_MAX_SIZE (2 means 1/2, 3 means 1/3).
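A small sketch of a table whose secondary-index inserts can be buffered; the table and column names are hypothetical. Note that the secondary index is non-unique, which is the precondition described above, and that merge activity appears under the INSERT BUFFER AND ADAPTIVE HASH INDEX section of SHOW ENGINE INNODB STATUS:

-- Hypothetical table: primary key inserts are sequential, while the
-- non-unique idx_uid leaf pages are visited discretely, so those
-- inserts are eligible for the insert buffer.
create table t_order (
    id  int auto_increment primary key,
    uid int,
    key idx_uid (uid)          -- non-unique: eligible for the insert buffer
) engine = innodb;
show engine innodb status\G    -- look for the Ibuf merge counters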
Doublewrite: it brings reliability to InnoDB data. If a write fails, you can recover through the redo log, but the redo log records the physical operations on a page; if the page itself is already corrupted, replaying them is pointless. Therefore, before applying the redo log, a copy of the page is needed: when a write failure occurs, the page is first restored from its copy and then the redo log is replayed. This is called doublewrite.
Restore data = page copy + redo log
Adaptive hash index: the InnoDB storage engine provides an adaptive hash index. The storage engine monitors index lookups on its tables, and if it observes that building a hash index would bring a speed-up, it builds one; hence "adaptive". The adaptive hash index can only be used for equality lookups, such as select * from table where index_col = 'xxx'. In addition, it is managed entirely by the InnoDB storage engine; we can only enable or disable it through innodb_adaptive_hash_index. It is enabled by default.
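For reference, a minimal sketch of toggling it and of the kind of predicate it can serve:

show variables like 'innodb_adaptive_hash_index';  -- ON by default
set global innodb_adaptive_hash_index = off;       -- disable
set global innodb_adaptive_hash_index = on;        -- re-enable
-- Only equality predicates qualify, e.g. where index_col = 'xxx';
-- range predicates (index_col > 'xxx') cannot use the hash index.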
MySQL files:
Parameter file: tells the MySQL instance where to find the database files when it starts, and specifies initialization parameters that define settings such as the sizes of certain memory structures. It is stored as an editable file; if it cannot be loaded at startup, the instance fails to start (unlike some other databases). Parameters are divided into dynamic and static: static parameters are effectively read-only, while dynamic parameters can be set at runtime. For example, a key/value pair found through show variables like '%...%' can be modified directly with set key = value. Such changes also have scope, session-level or global, expressed by prefixing the key with session or global, e.g. select @@session.read_buffer_size, set @@global.read_buffer_size.
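A corrected sketch of those commands, using read_buffer_size as the example variable (the byte values are illustrative):

show variables like 'read_buffer_size';
select @@session.read_buffer_size;        -- session scope
select @@global.read_buffer_size;         -- global scope
set @@session.read_buffer_size = 524288;  -- this connection only
set @@global.read_buffer_size = 1048576;  -- new connections from now on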
Log files: record the logs that the MySQL instance writes when it responds to certain conditions, such as the error log, the binary log, the slow query log, and the query log.
Error log: find where the error log is stored through show variables like 'log_error'.
Slow query log: the threshold for recording slow queries is set through show variables like '%long%' (long_query_time, for example 0.05 seconds); whether the log is enabled is shown by show variables like 'log_slow_queries' (off by default); and whether queries that use no index are recorded in the slow log is controlled by log_queries_not_using_indexes. Slow logs can be summarized directly with the mysqldumpslow command that ships with MySQL.
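A sketch of those settings (variable names as in MySQL 5.1; newer versions use slow_query_log instead of log_slow_queries, and the log path below is a placeholder):

show variables like '%long%';                         -- long_query_time threshold
set global long_query_time = 0.05;                    -- record queries slower than 0.05s
show variables like 'log_slow_queries';               -- enabled? OFF by default
show variables like 'log_queries_not_using_indexes';
-- Summarize from the shell, e.g. top 10 by total time:
-- mysqldumpslow -s t -t 10 /path/to/host-slow.log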
Binary log: does not record queries; it records all modifications made to the database. Its purposes are recovery (point-in-time recovery) and replication. View the storage path through show variables like 'datadir'. The binary log supports three formats, STATEMENT, ROW, and MIXED, selected by the binlog_format parameter. It is usually set to ROW, which brings better reliability for database recovery and replication, but it increases the size of the binary log files and the network overhead during replication. View the contents of a binary log file with the mysqlbinlog utility that ships with MySQL.
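A sketch of enabling and inspecting it (the mysql-bin base name is an illustrative choice):

-- In my.cnf (illustrative):
-- [mysqld]
-- log-bin       = mysql-bin
-- binlog_format = ROW
show variables like 'binlog_format';
show binary logs;                    -- list the binary log files
-- Decode a log file from the shell:
-- mysqlbinlog --base64-output=decode-rows -v mysql-bin.000001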
Socket files: files needed when connecting using Unix domain sockets.
Pid file: the process ID file of the MySQL instance.
Table structure files: store the table structure definitions. Because of MySQL's pluggable storage engine architecture, every table has a corresponding definition file ending with the .frm suffix.
Storage engine files: each storage engine also has its own files that hold the actual data, indexes, and related information. The following mainly describes the tablespace files and redo log files of the InnoDB storage engine.
Tablespace files: the default tablespace file for InnoDB is ibdata1. You can use show variables like 'innodb_file_per_table' to see whether each table gets its own .ibd tablespace file. Note that a per-table tablespace file stores only that table's data, indexes, and insert buffer; the rest of the information is still stored in the default tablespace.
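A short sketch (the table name is hypothetical; the variable is dynamic under the InnoDB Plugin):

show variables like 'innodb_file_per_table';
set global innodb_file_per_table = on;  -- applies to tables created afterwards
create table t_demo (id int primary key) engine = innodb;
-- t_demo now gets its own t_demo.ibd under the database directory,
-- while undo information and the like stay in the shared ibdata1.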
Redo log files: when an instance or media failure occurs, the redo log files come into play. For example, if the host loses power, the InnoDB storage engine uses the redo log to recover the data to the moment before the outage, ensuring data integrity. The parameter innodb_log_file_size specifies the size of each redo log file; innodb_log_files_in_group specifies the number of redo log files in a log group (default 2); innodb_mirrored_log_groups specifies the number of mirrored log groups (default 1, meaning there is only one log group, with no mirroring); innodb_log_group_home_dir specifies the path of the log group, which defaults to the database data directory.
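For reference, a my.cnf sketch with illustrative sizes:

show variables like 'innodb_log%';
-- In my.cnf (illustrative values):
-- [mysqld]
-- innodb_log_file_size      = 256M
-- innodb_log_files_in_group = 2
-- innodb_log_group_home_dir = ./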
The difference between the binary log and the redo log: first, the binary log records all MySQL-level logging, including logs for other storage engines such as InnoDB, MyISAM, and Heap, whereas the InnoDB redo log stores only the transaction log of InnoDB itself. Second, the content is different: whether the binary log format is STATEMENT, ROW, or MIXED, it records the logical operations of a transaction, while the InnoDB redo log records the physical changes to each page. Third, the write time is different: the binary log is written once, before the transaction is committed, while redo log entries are written continuously to the redo log files during the course of the transaction.
MySQL InnoDB tables
Tablespace: the tablespace can be seen as the highest level of the logical structure of the InnoDB storage engine.
Segment: a tablespace consists of segments; common segments include the data segment, index segment, and rollback segment.
Extent: an extent consists of 64 consecutive pages; each page is 16KB, so an extent is 1MB in size.
Page: 16KB per page, and this cannot be changed. Common page types: data pages, undo pages, system pages, transaction data pages, insert buffer bitmap pages, insert buffer free list pages, uncompressed binary large object (BLOB) pages, and compressed BLOB pages.
Rows: the InnoDB storage engine is row-oriented, allowing up to 7992 rows of data per page.
Row record format: there are two common row record formats, Compact and Redundant; since MySQL 5.1, Compact is the dominant row record format. In Compact, NULL values occupy no storage space, whether the column is CHAR or VARCHAR; in Redundant, a NULL VARCHAR occupies no storage space, while a NULL CHAR does occupy storage space.
The length limit of the VARCHAR type is 65535 bytes, but that value cannot actually be reached; there is other overhead, so the practical limit is about 65530 bytes, and it also depends on the chosen character set. Moreover, this limit applies to the row as a whole. For example: create table test (a varchar(22000), b varchar(22000), c varchar(22000)) charset=latin1 engine=innodb also reports an error.
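The limit is easy to reproduce; this sketch assumes the latin1 character set (one byte per character):

-- Fails: 3 x 22000 = 66000 bytes exceeds the ~65535-byte row limit
create table test (
    a varchar(22000),
    b varchar(22000),
    c varchar(22000)
) charset = latin1 engine = innodb;
-- ERROR 1118 (42000): Row size too large ...
-- Works: a single nullable column just under the practical limit
create table test_ok (a varchar(65532)) charset = latin1 engine = innodb;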
For BLOB-type data (and for rows that exceed the limit above), only the first 768 bytes of prefix data are stored in the data page, followed by an offset pointing to the row overflow page, that is, an Uncompressed BLOB Page. The newer InnoDB Plugin introduces a file format called Barracuda, which has two new row record formats, Compressed and Dynamic; they use complete overflow for BLOB fields, storing only a 20-byte pointer in the data page and the actual data in the BLOB page.
Data page structure: the data page structure consists of the following seven parts:
File Header (file header): records some header information of a page, such as page offset, previous page, next page, page type, etc., with a fixed length of 38 bytes.
Page Header (page header): records the status information of the page, such as the number of records in the heap, the pointer to the free list, the number of bytes of deleted records, and the last insert position; fixed length of 56 bytes.
Infimum + Supremum records: in the InnoDB storage engine, each data page has two virtual row records that delimit the boundaries of the records. The Infimum record is smaller than any primary key value in the page, and the Supremum record is larger than any possible value. These two records are created when the page is created and are never deleted. They consume different numbers of bytes in the Compact and Redundant row formats.
User Records (row records): the actual contents of the records. Once again, InnoDB tables are always organized by a B+ tree index.
Free Space: free space, organized as a linked list data structure; when a record is deleted, its space is added to the free list.
Page Directory (page directory): the page directory stores the relative positions (not offsets) of records; these entries are sometimes called slots. InnoDB does not give every record a slot; the page directory is sparse, that is, one slot may cover multiple records, at least 4 and at most 8. Keep in mind that the B+ tree index itself cannot locate a specific record; it can only locate the page the record is in. The database loads the page into memory and then performs a binary search through the Page Directory. The time complexity of binary search is low and in-memory search is very fast, so the time spent on this part of the lookup can generally be ignored.
File Trailer (end-of-file information): used to ensure that the page was written to disk in full (detecting, for example, disk corruption or a machine crash during the write); fixed length of 8 bytes.
Views: views in MySQL are always virtual tables; materialized views are not natively supported. But through other techniques (such as triggers), some simple materialized view functionality can be achieved.
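As a sketch of that trigger technique (all table and column names are hypothetical), a summary table can be kept current on every insert, which is the essence of a simple materialized view:

create table orders (id int auto_increment primary key, amount decimal(10,2)) engine = innodb;
create table orders_total (total decimal(14,2)) engine = innodb;
insert into orders_total values (0);
delimiter //
create trigger trg_orders_ai after insert on orders
for each row
begin
    -- keep the aggregate in step with the base table
    update orders_total set total = total + new.amount;
end//
delimiter ;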
Partitioning: MySQL supports RANGE, LIST, HASH, KEY, and COLUMNS partitioning, and subpartitioning with HASH or KEY.
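A minimal RANGE-partitioning sketch (table name and boundaries are illustrative):

create table t_range (id int not null) engine = innodb
partition by range (id) (
    partition p0 values less than (10),
    partition p1 values less than (20),
    partition p2 values less than maxvalue
);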
Common indexes and algorithms in MySQL InnoDB:
B+ tree index: the data structure of the B+ tree is relatively complex; the B stands for balance, as it first evolved from the balanced binary tree, but a B+ tree is not a binary tree. A more detailed introduction can be found in this article: http://blog.csdn.net/v_JULY_v/article/details/6530142. Because of the high fan-out of B+ tree indexes, the height of a B+ tree in a database is generally only 2 to 3 levels, that is, looking up the row for a given key value takes at most 2 to 3 I/Os. An ordinary disk today can do at least 100 I/Os per second, so the lookup takes only 0.02 to 0.03 seconds.
The B+ tree indexes in a database can be divided into clustered indexes and secondary indexes. Both are internally B+ trees, that is, highly balanced, with data stored in the leaf nodes.
Clustered index: because the clustered index is organized by the primary key, each table can have only one clustered index. Each data page is linked by a doubly linked list, and the leaf nodes store entire rows of data, so the query optimizer prefers to walk the clustered index. In addition, the storage of the clustered index is logically continuous. Therefore, sorted lookups and range lookups on the primary key through the clustered index are very fast.
Secondary index: also known as a non-clustered index. Its leaf nodes do not store the whole row; they store the key values plus a bookmark (in InnoDB, the clustered index key of the row) that tells InnoDB where to find the row data. For example, with a secondary index of height 3 and a clustered index of height 3, looking up a row through the secondary index costs 6 I/Os in total. A table can have multiple secondary indexes.
The principle for using an index: high selectivity, that is, the query extracts only a small portion of the table's rows (the extreme case being a unique index). Generally, when the amount of data fetched exceeds about 20% of the table, the optimizer uses a full table scan instead of the index. Indexing low-selectivity fields such as gender is meaningless.
Composite index: also known as a multi-column index, an index built on multiple (>= 2) columns. A composite index in InnoDB is also a B+ tree structure whose entries contain multiple columns (col1, col2, col3, ...), sorted by col1, then col2, then col3, such as (1,1), (1,2), (2,1). Use of composite indexes should make full use of the leftmost-prefix principle, which, as the name implies, gives the leftmost column priority. For example, with an index ind_col1_col2(col1, col2), the queries where col1 = xxx and col2 = xxx and where col1 = xxx can use ind_col1_col2, but where col2 = xxx cannot. When creating a multi-column index, place the column most frequently used for filtering in the WHERE clause on the far left, according to the business requirements. A sketch follows below.
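A sketch of the leftmost-prefix rule (the table is hypothetical):

create table t_user (
    id   int primary key,
    col1 int,
    col2 int,
    key ind_col1_col2 (col1, col2)
) engine = innodb;
explain select * from t_user where col1 = 1 and col2 = 2;  -- can use ind_col1_col2
explain select * from t_user where col1 = 1;               -- can use ind_col1_col2
explain select * from t_user where col2 = 2;               -- cannot: leftmost column missing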
Hash index: hashing is a common algorithm; MySQL InnoDB uses the common chaining method to resolve collisions. In addition, as mentioned above, the hash index in InnoDB is adaptive: when to use it is decided by the system and cannot be configured manually.
Binary search: this algorithm is common enough that it is not covered here. In InnoDB, the slots in each page's Page Directory are stored in primary key order, and a specific record is found by binary search over the Page Directory.
Locks in MySQL InnoDB
The InnoDB storage engine's lock implementation is very similar to Oracle's: it provides consistent non-locking reads and row-level lock support, row-level locking carries no extra overhead, and concurrency and consistency are achieved at the same time.
The InnoDB storage engine implements the following two standard row-level locks:
Shared lock (S Lock): allows a transaction to read a row of data.
Exclusive lock (X Lock): allows transactions to delete or update a row of data.
When a transaction has acquired a shared lock on row r, another transaction can immediately acquire a shared lock on row r as well, because reading does not change row r's data; this is called lock compatibility. But if another transaction wants to acquire an exclusive lock on row r, it must wait until the shared locks on row r are released; this is called lock incompatibility.
Before the InnoDB Plugin, you could only view the current database requests through commands such as SHOW FULL PROCESSLIST and SHOW ENGINE INNODB STATUS, and then judge the lock situation in the current transactions. The newer InnoDB Plugin adds the INNODB_TRX, INNODB_LOCKS, and INNODB_LOCK_WAITS tables under the INFORMATION_SCHEMA schema. With these three tables, it is easier to monitor current transactions and analyze possible lock problems.
INNODB_TRX consists of 8 fields:
Trx_id: the unique transaction ID within the InnoDB storage engine.
Trx_state: status of the current transaction.
Trx_started: the start time of the transaction.
Trx_requested_lock_id: the ID of the lock the transaction is waiting for. If trx_state is LOCK WAIT, this value is the ID of that lock resource; otherwise it is NULL.
Trx_wait_started: the time at which the transaction started waiting.
Trx_weight: the weight of the transaction, reflecting the number of rows modified and locked by the transaction. When a deadlock occurs and a rollback is needed, the InnoDB storage engine selects the transaction with the smallest weight to roll back.
Trx_mysql_thread_id: the thread ID in MySQL, as displayed by SHOW PROCESSLIST.
Trx_query: the sql statement that the transaction runs.
The table can be viewed through select * from information_schema.INNODB_TRX;
INNODB_LOCKS table, which consists of the following fields:
Lock_id: the ID of the lock.
Lock_trx_id: transaction ID.
Lock_mode: lock mode.
Lock_type: the type of lock, table lock or row lock.
Lock_table: the table to lock.
Lock_index: the index of the lock.
Lock_space: the ID of the InnoDB tablespace where the lock resides.
Lock_page: the page number of the locked page; NULL for a table lock.
Lock_rec: the number of the locked row within the page; NULL for a table lock.
Lock_data: the primary key value of the locked row; NULL for a table lock.
The table can be viewed through select * from information_schema.INNODB_LOCKS;
INNODB_LOCK_WAITS consists of four fields:
Requesting_trx_id: the transaction ID requesting the lock resource.
Requested_lock_id: the ID of the lock being requested.
Blocking_trx_id: the ID of the blocking transaction.
Blocking_lock_id: the ID of the blocking lock.
The table can be viewed through select * from information_schema.INNODB_LOCK_WAITS;
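With these three tables, a commonly used sketch joins them to show who blocks whom:

select r.trx_id              waiting_trx_id,
       r.trx_mysql_thread_id waiting_thread,
       r.trx_query           waiting_query,
       b.trx_id              blocking_trx_id,
       b.trx_mysql_thread_id blocking_thread,
       b.trx_query           blocking_query
from information_schema.innodb_lock_waits w
join information_schema.innodb_trx b on b.trx_id = w.blocking_trx_id
join information_schema.innodb_trx r on r.trx_id = w.requesting_trx_id;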
Consistent non-locking read: the InnoDB storage engine reads the data of a row at the current time through row multi-versioning. If the row being read is undergoing a DELETE or UPDATE operation, the read does not wait for the row lock to be released; instead, InnoDB reads a snapshot of the row. Snapshot data means a previous version of the row, implemented through the undo segment. Since undo is needed anyway to roll back data in a transaction, the snapshot itself carries no additional overhead. Moreover, reading snapshot data requires no locking, because there is no need to modify historical data. A row may have more than one snapshot version, so this technique is called row multi-versioning, and the concurrency control it enables is called multi-version concurrency control (MVCC).
Transaction isolation levels: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SERIALIZABLE. Under READ COMMITTED and REPEATABLE READ, the InnoDB storage engine uses non-locking consistent reads, but the definition of the snapshot differs. At the READ COMMITTED isolation level, a non-locking read always reads the latest committed snapshot of the locked row; at the REPEATABLE READ isolation level, it always reads the version of the row as of the start of the transaction.
Locking algorithms:
Record Lock: a lock on a single row record.
Gap Lock: a gap lock; locks a range but not the record itself.
Next-Key Lock: Gap Lock + Record Lock; locks a range and also the record itself. A more detailed introduction can be found at http://www.db110.com/?p=1848
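A two-session sketch of next-key locking under REPEATABLE READ (the table and values are hypothetical):

-- setup
create table t (a int primary key, b int, key idx_b (b)) engine = innodb;
insert into t values (1,1),(3,3),(5,5);
-- session A
begin;
select * from t where b = 3 for update;  -- next-key lock: record b=3 plus the gaps (1,3] and (3,5)
-- session B now blocks on an insert into the locked gap:
-- insert into t values (2, 2);
-- session A
commit;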
Lock problems:
Lost updates: a classic database problem. It occurs when two or more transactions select the same row and then update it based on the value originally selected. Each transaction is unaware of the others; the last update overwrites the updates made by the other transactions, which leads to lost data.
Example: transaction A and transaction B modify the value of the same row at the same time.
1. Transaction A changes the value to 1 and commits.
2. Transaction B changes the value to 2 and commits.
The value of the data is now 2, and the update made by transaction A has been lost.
Solution: make the parallel transactions run serially by performing the update under an exclusive lock.
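A sketch of serializing the update with an exclusive lock (the account table is hypothetical):

begin;
select balance from account where id = 1 for update;  -- X lock: concurrent writers now wait
update account set balance = balance - 100 where id = 1;
commit;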
Dirty read: a transaction reads data that another transaction has updated but not yet committed, that is, dirty data.
Example:
1. Mary's original salary is 1000; a finance clerk changes Mary's salary to 8000 (but does not commit the transaction).
2. Mary reads her salary and finds it has become 8000; she is overjoyed.
3. The finance clerk notices the mistake and rolls back the transaction, so Mary's salary is back to 1000. The 8000 that Mary read is dirty data.
Solution: dirty reads occur only at the READ UNCOMMITTED transaction isolation level. InnoDB's default isolation level is REPEATABLE READ, so dirty reads do not occur in production environments.
Non-repeatable read: the same data is read multiple times within the same transaction and different results are returned; in other words, later reads can see updates committed by another transaction. By contrast, "repeatable read" guarantees that the same transaction reads the same data every time, that is, later reads cannot see updates committed by another transaction in the meantime. The main difference between a dirty read and a non-repeatable read is that a dirty read reads uncommitted data, while a non-repeatable read reads committed data.
Example:
1. In transaction 1, Mary reads her salary as 1000; the transaction is not yet finished.
2. In transaction 2, the finance clerk changes Mary's salary to 2000 and commits the transaction.
3. In transaction 1, when Mary reads her salary again, it has become 2000.
Solution: reading committed data is acceptable for most databases, so the isolation level is often set to READ COMMITTED. MySQL InnoDB avoids non-repeatable reads through its Next-Key Lock algorithm, and its default isolation level is REPEATABLE READ.
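Checking and setting the level, for reference:

select @@tx_isolation;                                   -- REPEATABLE-READ by default
set session transaction isolation level read committed;
set global transaction isolation level repeatable read;  -- new connections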
Transactions in MySQL InnoDB
Four characteristics of transactions: atomicity, consistency, isolation, and durability.
Isolation is achieved through locks; atomicity, consistency, and durability are achieved through the database's redo and undo.
The redo log records the behavior of transactions and is used to guarantee their integrity, but sometimes a transaction needs to be undone, so undo must also be generated. Undo is the opposite of redo: when you modify the database, it generates not only redo but also a corresponding amount of undo, so that if the transaction or statement fails for some reason, or a rollback is requested with a ROLLBACK statement, the undo information can be used to roll the data back to what it was before the modification. Unlike redo, which is stored in the redo log files, undo is stored in a special segment inside the database called the undo segment, and the undo segment is located in the shared tablespace. Also important: undo records logical operations opposite to those of the transaction; for example, an INSERT is recorded in undo as a DELETE, so undo only restores the database logically to its state before the transaction started. For instance, inserting 100,000 rows of data may cause the tablespace to grow; after a rollback, the tablespace does not shrink back.
Characteristics and introduction of MyISAM
MyISAM was the default storage engine before MySQL 5.5.
Engine features of MyISAM:
1. Transactions are not supported (a transaction is a logical group of operations; the units making up the group either all succeed or all fail).
2. Table-level locking (the whole table is locked on update): the table-level locking mechanism makes locking very cheap, but it greatly reduces concurrent performance.
3. Reads and writes block each other: a write blocks reads, and a read blocks writes, but reads do not block other reads.
4. Only indexes are cached: MyISAM can cache indexes through key_buffer_size, greatly improving access performance and reducing disk I/O, but this cache holds only indexes, not data (see the sketch after this list).
5. Fast reads and relatively low resource consumption.
6. It was the default storage engine before MySQL 5.5.5.
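A sketch of point 4 (64M is an illustrative value):

show variables like 'key_buffer_size';
set global key_buffer_size = 67108864;  -- 64M; caches MyISAM index blocks only
show status like 'Key_read%';           -- Key_reads / Key_read_requests approximates the miss rate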
Applicable production scenarios for the MyISAM engine:
1. Business that does not need transaction support (so money transfers and payments are ruled out).
2. Generally, read-heavy applications; frequent mixed reads and writes are not suitable, as they will block each other; mostly-read or mostly-write workloads are fine.
3. Business with little concurrent read-write access (pure reads or pure writes can be highly concurrent, but not both at the same time; because the locking mechanism locks the entire table, mixed access causes congestion).
4. Business with relatively few data modifications.
5. Business where the data consistency requirements are not very high.
6. Machines with limited hardware resources can use MyISAM, typically small and medium-sized websites.
Summary: MyISAM suits single-purpose database operations, that is, as close as possible to pure reads, or pure writes (insert, update, delete).
Essentials of MyISAM engine tuning:
1. Set up appropriate indexes (mind the caching mechanism).
2. Adjust read and write priorities according to actual requirements, so that important operations execute first.
3. Enable delayed insertion (batch inserts as much as possible to reduce the write frequency); see the sketch after this list.
4. Operate sequentially as much as possible so that inserted data is written to the end of the file, reducing blocking.
5. Break up large, long operations to reduce the blocking time of any single operation.
6. Reduce concurrency (reduce the number of simultaneous MySQL accesses); some high-concurrency scenarios can queue requests in the application.
7. For static data (data that rarely changes), make full use of the Query Cache or a memcached caching service to improve access efficiency.
8. MyISAM's COUNT is maximally efficient only for full-table counts; a conditional COUNT still has to access the data.
9. You can also use InnoDB on the master and MyISAM on the slave for read/write splitting (not recommended; data migration and upgrades become troublesome).
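For point 3 above, a sketch contrasting row-at-a-time inserts with batching (the log table is hypothetical):

create table t_log (id int auto_increment primary key, msg varchar(100)) engine = myisam;
-- Instead of many single-row statements:
-- insert into t_log (msg) values ('a');
-- insert into t_log (msg) values ('b');
-- batch them to reduce write (and lock) frequency:
insert into t_log (msg) values ('a'), ('b'), ('c');
-- MyISAM also offers INSERT DELAYED (deprecated in later versions),
-- which queues rows in memory and writes them in batches:
insert delayed into t_log (msg) values ('d');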