2025-04-12 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
In this article, the editor walks through the steps of mysql database optimization. The material is rich and analyzed from a professional point of view; I hope you get something out of it.
Steps for mysql database optimization
Step 1: understand the hardware limits
1: disk seek time. Even a high-speed hard disk (7200 rpm) can only perform on the order of a hundred seeks per second, and there is no way to change that. The workaround is to use several disks, or to spread the data across them.
2: disk read/write speed. This is easier to solve: read and write from several disks in parallel.
3: cpu. The cpu processes data once it is in main memory; this is the most common constraint when tables are small relative to memory.
4: memory bandwidth. When the cpu needs more data than fits in its cache, the bandwidth between cache and main memory becomes the bottleneck, but with today's memory sizes this problem rarely occurs.
Step 2: (measured on the school web server's linux platform: Linux ADVX.Mandrakesoft.com 2.4.3-19mdk)
1: adjust server parameters
Use the command shell> mysqld --help to list all mysql options and configurable variables. It outputs information like the following:
Possible variables for option --set-variable (-o) are:
back_log current value: 5 // how many connection requests can be queued while mysql's main thread pauses to accept a connection
connect_timeout current value: 5 // seconds the mysql server waits for a connect packet before answering with "bad handshake"
delayed_insert_timeout current value: 200 // seconds an insert delayed handler waits for new insert statements before terminating
delayed_insert_limit current value: 50 // after inserting this many delayed rows, the handler checks whether any select statements are pending and, if so, lets them run before continuing
delayed_queue_size current value: 1000 // how large a queue (in rows) is allocated for insert delayed
flush_time current value: 0 // if set non-zero, all tables are closed every flush_time seconds
interactive_timeout current value: 28800 // seconds the server waits on an interactive connection before closing it
join_buffer_size current value: 131072 // size of the buffer used for full joins
key_buffer_size current value: 1048540 // size of the buffer for index blocks; increase it for better index handling
lower_case_table_names current value: 0 // if set, table names are stored in lowercase on disk
long_query_time current value: 10 // a query taking longer than this increments the slow_queries counter
max_allowed_packet current value: 1048576 // maximum size of one packet
max_connections current value: 300 // number of simultaneous connections allowed
max_connect_errors current value: 10 // more interrupted connections than this from one host blocks further connections; clear with flush hosts
max_delayed_threads current value: 15 // number of insert delayed handler threads that may be started
max_heap_table_size current value: 16777216 // maximum size of an in-memory (heap) table
max_join_size current value: 4294967295 // maximum number of rows a join is allowed to read
max_sort_length current value: 1024 // number of bytes used when sorting blob or text values
max_tmp_tables current value: 32 // number of temporary tables a connection can keep open at the same time
max_write_lock_count current value: 4294967295 // start mysqld with a (usually very small) value here to allow read locks after that many write locks
net_buffer_length current value: 16384 // size of the communication buffer, which is reset to this size between queries
query_buffer_size current value: 0 // buffer size when querying
record_buffer current value: 131072 // size of the buffer allocated per table for each connection doing a sequential scan
sort_buffer current value: 2097116 // size of the buffer allocated per connection that needs to sort
table_cache current value: 64 // number of open tables for all connections
thread_concurrency current value: 10 // hint for the desired number of threads to run at once
tmp_table_size current value: 1048576 // maximum size of a temporary table
thread_stack current value: 131072 // stack size of each thread
wait_timeout current value: 28800 // seconds the server waits on a connection before closing it
Configuring the variables above to match your needs will help.
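These variables can be set in the server's configuration file. The fragment below is a minimal sketch of a my.cnf; every value is illustrative only and must be sized to your own RAM and workload:

```ini
# /etc/my.cnf -- illustrative values, not recommendations
[mysqld]
key_buffer_size = 64M      # index-block cache; the most important MyISAM setting
table_cache = 256          # number of open tables held for all connections
sort_buffer = 4M           # per-connection sort buffer
record_buffer = 1M         # per-connection sequential-scan buffer
max_connections = 300      # simultaneous clients allowed
wait_timeout = 28800       # seconds before an idle connection is closed
```

In servers of this era the same variables could also be passed on the command line with --set-variable (-O).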
Step 3:
1: if you create a large number of tables in one database, opening, closing, and creating tables becomes slow.
2: how mysql uses memory
A: the key buffer (key_buffer_size) is shared by all threads
B: each connection uses some thread-specific space: a stack (default 64k, variable thread_stack), a connection buffer (variable net_buffer_length), and a result buffer (net_buffer_length). In certain cases the connection buffer and the result buffer are dynamically enlarged up to max_allowed_packet.
C: all threads share a base of common memory
D: no memory mapping is used
E: each request doing a sequential scan is allocated a read buffer (record_buffer)
F: all joins are done in one pass, and most joins can be done without even a temporary table; most temporary tables are memory-based (heap) tables
G: a sort request allocates a sort buffer and up to 2 temporary tables
H: all syntax parsing and calculation is done in local memory
I: each index file is opened only once, and the data file is opened once for each concurrently running thread
J: for each table with blob columns, a buffer is dynamically enlarged to read larger blob values
K: the handlers for all tables in use are kept in a cache and managed as a FIFO
L: a mysqladmin flush-tables command closes all tables not in use and marks all in-use tables to be closed when the currently executing thread finishes
3: how mysql locks tables
All locking in mysql is deadlock-free. Write locking works as follows: a: if the table is not locked, lock it; b: otherwise, put the request in the write lock queue.
Read locking works as follows: a: if the table is not locked, lock it; b: otherwise, put the request in the read lock queue.
When a table gets many select and insert operations, you can insert rows into a temporary table and occasionally update the real table from it. Other options:
A: use the low_priority attribute to give a specific insert, update or delete lower priority
B: start mysqld with max_write_lock_count set to a (usually very small) value, so that read locks are granted after a certain number of write locks
C: use set sql_low_priority_updates=1 to specify that all changes from a specific thread should be made at lower priority
D: mark a specific select with high_priority
E: if you have problems with insert ... select ..., use a myisam table, since it supports concurrent selects and inserts
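As a sketch of options A, C and D above in SQL (the table and column names here are hypothetical):

```sql
-- A: give one write statement lower priority than waiting reads
INSERT LOW_PRIORITY INTO hits (page) VALUES ('/index');

-- C: make all changes from this connection low priority
SET SQL_LOW_PRIORITY_UPDATES=1;

-- D: let one select jump ahead of queued write locks
SELECT HIGH_PRIORITY COUNT(*) FROM hits;
```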
The most basic optimization is to minimize the space the data occupies on disk; if a column is small, an index on it is small too. How to achieve this:
A: use the smallest data types possible
B: declare columns NOT NULL where possible
C: use variable-length column types such as varchar where possible (though this costs some speed)
D: give every table a primary index that is as short as possible
E: create only the indexes that are really needed
F: if an index has a unique prefix on the first few characters, index only that prefix; mysql supports indexes on part of a character column
G: if a table is scanned often, try to split it into several tables
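A compact schema following rules A, B, C, D and F might look like this (a hypothetical table, not one from the article):

```sql
CREATE TABLE visit (
  id      MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,  -- A: smallest type that fits
  user_id SMALLINT UNSIGNED NOT NULL,                  -- B: NOT NULL where possible
  note    VARCHAR(100) NOT NULL,                       -- C: variable-length column
  PRIMARY KEY (id),                                    -- D: short primary index
  KEY (note(10))                                       -- F: index only a 10-char prefix
);
```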
Step 4
1: using indexes. The importance of indexes needs no explanation; here is how mysql uses them. All mysql indexes (primary, unique, index) are stored in B-trees. Indexes are mainly used to:
A: quickly find rows matching a where clause
B: retrieve rows from other tables when performing joins
C: find the max() or min() value for a specific indexed column
D: sort or group a table, when the sorting or grouping is done on a leftmost prefix of a usable key
E: resolve a query without touching the data file at all: if the used columns of a table are numeric and form a leftmost prefix of a key, the values can be read directly from the index tree for greater speed
2: grant checking slightly reduces the speed of queries that store or update data.
mysql's functions should already be highly optimized, but you can use benchmark(loop_count, expression) to find out whether a query has a problem.
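For example, benchmark() can be called from the mysql client; the function itself returns 0, and the elapsed time the client reports shows how expensive the expression is:

```sql
-- Evaluate the expression 1,000,000 times and time it
SELECT BENCHMARK(1000000, 1+1);
```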
Speed of select queries: to make a select ... where ... faster, the first thing to try is an index. You can run myisamchk --analyze on a table to help the optimizer choose better. With myisamchk --sort-index --sort-records=1 you can sort an index, and sort the data file by that index.
3: how mysql optimizes where clauses
3.1: removal of unnecessary parentheses:
((a AND b) AND c OR ((a AND b) AND (a AND d))) -> (a AND b AND c) OR (a AND b AND c AND d)
3.2: constant folding:
(a<b AND b=c) AND a=5 -> b>5 AND b=c AND a=5
3.3: removal of constant conditions:
(B>=5 AND B=5) OR (B=6 AND 5=5) OR (B=7 AND 5=6) -> B=5 OR B=6
3.4: constant expressions used by indexes are evaluated only once
3.5: count(*) on a single table without a where clause is answered directly from the table information
3.6: all constant tables are read before any other table in the query
3.7: the best join combination is found by trying all possibilities
3.8: if there is an order by clause and a different group by clause, or if the order by or group by contains columns not from the first table of the join, a temporary table is created
3.9: if sql_small_result is used, mysql uses an in-memory table
3.10: each table's indexes are consulted, and the index that spans fewer than 30% of the rows is used
3.11: rows that do not match the having clause are skipped before each record is output
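To see which of these optimizations and which index apply to a concrete query, explain can be used (the tables here are hypothetical):

```sql
EXPLAIN SELECT customer.name
FROM customer, orders
WHERE customer.id = orders.customer_id AND orders.total > 100;
```

The output shows, per table, which key is chosen and roughly how many rows will be examined.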
4: optimizing left join
a left join b is implemented in mysql as follows:
A: table b is set to depend on table a
B: table a is set to depend on all tables used in the left join condition (except b)
C: all left join conditions are moved to the where clause
D: all join optimizations are done, except that a table is always read after all the tables it depends on; a circular dependency raises an error
E: all standard where optimizations are done
F: if there is a row in a that matches the where clause, but no row in b matches the left join condition, a row in b with all columns set to NULL is generated
G: if you use left join to find rows that do not exist in a table, and the where part has a column_name IS NULL test (where column_name is a NOT NULL column), then mysql stops searching for more rows as soon as it has found one row matching the left join condition
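Point G is the classic pattern for finding rows with no match in another table; a sketch with hypothetical tables:

```sql
-- Customers with no orders: mysql stops scanning orders for a customer
-- as soon as one row matching the left join condition is found.
SELECT customer.id
FROM customer LEFT JOIN orders ON customer.id = orders.customer_id
WHERE orders.customer_id IS NULL;
```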
5: optimizing limit
A: if you select only a few rows with limit, mysql uses an index in some cases where it would normally scan the whole table
B: if you use limit # with order by, mysql ends the sort as soon as it has found the first # rows, instead of sorting the whole table
C: when combining limit # with distinct, mysql stops as soon as it has found # unique rows
D: as soon as mysql has sent the first # rows to the client, it abandons the query
E: limit 0 always returns an empty set very quickly
F: the size of temporary tables is calculated using limit # to work out how much space is needed to resolve the query
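For instance (hypothetical table):

```sql
-- B: the sort can stop once the 10 smallest totals are known
SELECT * FROM orders ORDER BY total LIMIT 10;

-- E: returns an empty set immediately
SELECT * FROM orders LIMIT 0;
```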
6: optimizing insert
Inserting a record consists of the following steps (with approximate relative costs):
A: connecting (3)
B: sending the query to the server (2)
C: parsing the query (2)
D: inserting the record (1 x size of record)
E: inserting the indexes (1 x number of indexes)
F: closing (1)
The numbers above can be read as proportions of the total time.
Some ways to improve insert speed:
6.1: if you insert many rows from the same connection, use insert with multiple value lists; this is faster than separate statements
6.2: if you insert many rows from different connections, insert delayed statements are faster
6.3: with myisam, rows can be inserted while selects are running, provided there are no deleted rows in the table
6.4: when loading a table from a text file, use load data infile; this is usually 20 times faster than insert
6.5: you can lock the table around a batch of inserts. The main speed gain is that the index buffer is flushed to disk only once, after all insert statements have completed, instead of once per statement. Locking is unnecessary if you can insert all rows with a single statement. Locking also reduces the total connection time, though the maximum wait time for some threads rises. For example:
Thread 1 does 1000 inserts
Threads 2, 3 and 4 do 1 insert each
Thread 5 does 1000 inserts
Without locking, threads 2, 3 and 4 finish before 1 and 5; with locking, they will probably finish after 1 and 5, but the total time should be about 40% faster. Because insert, update and delete operations are fast in mysql, you get better overall performance by locking around batches of more than about 5 consecutive inserts or updates of a row. If you do very many inserts in a row, do a lock tables followed by an occasional unlock tables (about every 1000 rows) to let other threads access the table. This still gives good performance. load data infile is still faster for loading data.
To make load data infile and insert faster still, enlarge the key buffer (key_buffer_size).
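Points 6.1 and 6.5 can be sketched as follows (hypothetical table):

```sql
-- 6.1: one multi-row insert is faster than three single-row statements
INSERT INTO points (x, y) VALUES (1,2),(3,4),(5,6);

-- 6.5: batch single-row inserts under a lock so the index buffer
-- is flushed to disk once per batch instead of once per statement
LOCK TABLES points WRITE;
INSERT INTO points (x, y) VALUES (7,8);
INSERT INTO points (x, y) VALUES (9,10);
UNLOCK TABLES;
```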
7: optimizing update speed
Its speed depends on the size of the data being updated and the number of indexes that are updated.
Another way to make updates faster is to postpone the changes and then apply many of them row by row later; if you lock the table, making many changes in a row is faster than making them one at a time.
8: optimizing delete speed
The time to delete a record is proportional to the number of indexes. To delete records faster, you can increase the size of the index cache. Deleting all rows from a table is much faster than deleting most of them.
Step 5
1: choosing a table type
1.1: static (fixed-length) myisam
This format is the simplest and safest, and it is the fastest of the disk formats. Speed comes from how easily data can be located on disk: with an index and static format, the position is simply the row length multiplied by the row number, and when scanning a table, each disk read picks up several records. Safety comes from the fact that if the computer crashes while writing to a static myisam file, myisamchk can easily tell where each row begins and ends, so it can usually recover all records except any partially written one. All mysql indexes can always be rebuilt.
1.2: dynamic myisam
In this format each row has a header saying how long it is. When a record grows during an update, it may end up split over more than one location. You can use optimize table or myisamchk to defragment a table. If static data is accessed and changed in the same table as some varchar or blob columns, moving the dynamic columns into another table avoids fragmentation.
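Defragmenting such a table can be done in SQL (the table name is hypothetical):

```sql
-- Reclaim space and defragment a dynamic-format table
OPTIMIZE TABLE tablename;
```

The same can be done offline with the myisamchk utility.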
1.2.1: compressed myisam, generated with the optional myisampack tool
1.2.2: memory (heap) tables
This format is useful for small and medium tables. Copying a commonly used lookup table into a heap table can give a speedup of several times when joining several tables over the same data.
SELECT tablename.a, tablename2.a
FROM tablename, tablename2, tablename3
WHERE tablename.a=tablename2.a AND tablename2.a=tablename3.a AND tablename2.c!=0
To speed this up, you can create a temporary table from the join of tablename2 and tablename3, since they are looked up with the same column (tablename.a):
CREATE TEMPORARY TABLE test TYPE=HEAP
SELECT
tablename2.a AS a2, tablename3.a AS a3
FROM
tablename2, tablename3
WHERE
tablename2.a=tablename3.a AND c!=0
SELECT tablename.a, test.a3 FROM tablename, test WHERE tablename.a=test.a2
SELECT tablename.a, test.a3 FROM tablename, test WHERE tablename.a=test.a2 AND ....
1.3 Features of static tables
1.3.1 the default format; used when the table contains no varchar, blob or text columns
1.3.2 all char, numeric and decimal columns are padded to the column width
1.3.3 very fast
1.3.4 easy buffering
1.3.5 easy to reconstruct after a crash, because records are located at fixed positions
1.3.6 no reorganization (with myisamchk) is needed unless a huge number of records are deleted and you want to optimize storage size
1.3.7 usually requires more storage space than a dynamic table
1.4 characteristics of dynamic tables
1.4.1 this format is used if the table contains any varchar, blob or text columns
1.4.2 all string columns are dynamic
1.4.3 each record is preceded by a bitmap recording which columns are empty
1.4.4 usually requires more disk space than fixed-length tables
1.4.5 each record uses only the space it needs; if a record grows, it is split into as many segments as required, which leads to record fragmentation
1.4.6 if a row is updated with information that exceeds its current length, the row is split
1.4.7 it is harder to rebuild the table after a crash, because a record may be stored in multiple segments
1.4.8 the expected row length for a dynamic-size record is 3 + (number of columns + 7) / 8 + (number of char columns) + packed size of numeric columns + length of strings + (number of NULL columns + 7) / 8
There is a 6-byte penalty for each link. A dynamic record is linked whenever a change makes it larger. Each new link is at least 20 bytes, so the next enlargement may fit in the same link; if not, another link is made. You can check how many links there are with myisamchk -ed, and remove them all with myisamchk -r.
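As a quick sanity check of the row-length formula in 1.4.8, here is a small calculator; the sample column counts and byte sizes are made up for illustration:

```python
def expected_dynamic_row_length(num_columns, num_char_columns,
                                packed_numeric_size, string_length,
                                num_null_columns):
    """Expected row length for a dynamic-format record, per the formula
    in 1.4.8 (integer division assumed for the two bitmap terms)."""
    return (3
            + (num_columns + 7) // 8
            + num_char_columns
            + packed_numeric_size
            + string_length
            + (num_null_columns + 7) // 8)

# Hypothetical table: 10 columns, 2 char columns, 8 bytes of packed
# numerics, 40 bytes of strings, 3 nullable columns
print(expected_dynamic_row_length(10, 2, 8, 40, 3))  # 56
```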
1.5 characteristics of compressed tables
1.5.1 a read-only table made with the myisampack utility
1.5.2 the uncompress code is included in all mysql distributions, so tables compressed with myisampack can be read even where myisampack is not installed
1.5.3 takes up very little disk space
1.5.4 each record is compressed separately. The header of a record has a fixed length (1 to 3 bytes), depending on the largest record in the table. Each column is compressed in a different way. Some common compression types are:
A: there is usually a different Huffman table for each column
B: suffix blank compression
C: prefix blank compression
D: numbers with the value 0 are stored using 1 bit
E: if the values of an integer column have a small range, the column is stored using the smallest possible type; for example, if all values are between 0 and 255, a bigint can be stored as a tinyint
F: if a column has only a small set of possible values, its type is converted to enum
G: a column may use a combination of the compression methods above
1.5.5 fixed-length or dynamic-length records can be handled, but not blob or text columns
1.5.6 tables can be decompressed with myisamchk
2: index types. mysql supports different index types, but the usual type is isam, a B-tree index; the size of the index file can be roughly estimated as (key_length + 4) * 0.67, summed over all keys.
String indexes are space compressed. If the first index part is a string, its prefix is also compressed. Space compression makes the index file smaller when the string column has a lot of trailing whitespace or is a varchar column that is not always used to its full length; prefix compression helps when many strings share the same prefix.
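The (key_length + 4) * 0.67 estimate above can be turned into a small helper; the key lengths in the example are made up:

```python
def estimated_index_file_size(key_lengths):
    """Rough isam index file size in bytes: sum over all keys of
    (key_length + 4) * 0.67, as given in the text."""
    return sum((key_len + 4) * 0.67 for key_len in key_lengths)

# Hypothetical table with two keys of 10 and 20 bytes
print(estimated_index_file_size([10, 20]))
```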
1.6 characteristics of memory (heap) tables
heap tables in mysql use 100% dynamic hashing with no overflow areas, and have no problems related to deletes. Rows can only be accessed through an index using equality (usually the '=' operator).
The disadvantages of heap tables are:
1.6.1 you need enough spare memory for all the heap tables you want to use at the same time
1.6.2 you cannot search on a part of the index
1.6.3 you cannot search for the next entry in order (that is, you cannot use the index for an order by)
1.6.4 mysql cannot estimate the approximate number of rows between two values; the optimizer uses this to decide which index to use. On the other hand, heap tables do not even require disk seeks.
These are the steps of mysql database optimization shared by the editor. If you have had similar questions, the analysis above may help you resolve them; if you want to learn more, you are welcome to follow the industry information channel.