How to optimize MySQL Database

2025-04-04 Update, from SLTechnology News&Howtos (shulou)


Shulou(Shulou.com)05/31 Report--

This article explains how to optimize a MySQL database. The material is simple and clear and easy to learn; work through it step by step.


Step one:

1: Disk seek time. A high-speed hard disk (7,200 rpm) completes about 120 revolutions per second, so in practice it can do on the order of 100 seeks per second. There is no way to change this; the workaround is to use multiple disks, or to split the data across them.

2: Disk read/write speed. This is fast, and easier to scale: you can read and write from multiple disks in parallel.

3: CPU. The CPU processes data in memory, so it is the most common constraint when the tables are small relative to memory.

4: Memory bandwidth. When the CPU needs more data than fits in its cache, memory bandwidth becomes the bottleneck; with today's large memories this problem rarely occurs.

Step 2 (I use the school web server's Linux platform (LinuxADVX.Mandrakesoft.com 2.4.3-19mdk)):

1: adjust server parameters

Use the command shell> mysqld --help to list all mysql options and configurable variables. It outputs information like the following:

Possible variables for option --set-variable (-o) are:

back_log current value: 5 // the number of connection requests mysql can queue while it pauses to accept new connections

connect_timeout current value: 5 // seconds the mysql server waits for a connect packet before answering with Bad handshake

delayed_insert_timeout current value: 200 // seconds an INSERT DELAYED handler waits for new INSERT statements before terminating

delayed_insert_limit current value: 50 // after this many delayed rows, the INSERT DELAYED handler checks whether any SELECT statements are pending and, if so, runs them before continuing

delayed_queue_size current value: 1000 // how large a queue to allocate for INSERT DELAYED rows

flush_time current value: 0 // if set to non-zero, all tables are closed every flush_time seconds

interactive_timeout current value: 28800 // seconds the server waits on an interactive connection before closing it

join_buffer_size current value: 131072 // size of the buffer used for full joins

key_buffer_size current value: 1048540 // size of the buffer used for index blocks; increase it for better index handling

lower_case_table_names current value: 0 // whether table names are stored in lowercase

long_query_time current value: 10 // a query taking longer than this many seconds increments the slow_queries counter

max_allowed_packet current value: 1048576 // the maximum size of one packet

max_connections current value: 300 // number of simultaneous connections allowed

max_connect_errors current value: 10 // a host with more than this many interrupted connections is blocked from further connections; reset with FLUSH HOSTS

max_delayed_threads current value: 15 // the number of INSERT DELAYED handler threads that can be started

max_heap_table_size current value: 16777216 // maximum size of an in-memory (HEAP) table

max_join_size current value: 4294967295 // maximum number of rows a join is allowed to read

max_sort_length current value: 1024 // the number of bytes used when sorting BLOB or TEXT columns

max_tmp_tables current value: 32 // the number of temporary tables a connection can keep open at the same time

max_write_lock_count current value: 4294967295 // start mysqld with a (usually very small) value here so that read locks get a chance after this many write locks

net_buffer_length current value: 16384 // size of the communication buffer; reset to this size between queries

query_buffer_size current value: 0 // buffer size when querying

record_buffer current value: 131072 // size of the buffer allocated per table for each connection doing a sequential scan

sort_buffer current value: 2097116 // size of the buffer allocated for each connection that needs to sort

table_cache current value: 64 // the number of open tables for all connections

thread_concurrency current value: 10 // the desired number of threads to run at the same time

tmp_table_size current value: 1048576 // maximum size of an in-memory temporary table

thread_stack current value: 131072 // the stack size of each thread

wait_timeout current value: 28800 // seconds the server waits on a non-interactive connection before closing it

Configure these values according to your needs and workload.
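As a minimal sketch of working with these settings from the client (variable names and availability vary across MySQL versions, and the 16 MB value below is only an illustration):

```sql
-- Inspect the current buffer settings (names as in the list above).
SHOW VARIABLES LIKE 'key_buffer%';
SHOW VARIABLES LIKE 'table_cache';

-- On servers that support it, a variable can be raised at runtime;
-- older servers take these only at startup, e.g.
--   mysqld --set-variable=key_buffer=16M
SET GLOBAL key_buffer_size = 16777216;
```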

Step 3:

1: if you create a large number of tables in a database, opening, closing, and creating tables becomes slow.

2: how mysql uses memory

A: the key cache (key_buffer_size) is shared by all threads

B: each connection uses some thread-specific space: a stack (default 64K, variable thread_stack), a connection buffer (variable net_buffer_length), and a result buffer (net_buffer_length). In certain cases, the connection buffer and the result buffer are dynamically grown up to max_allowed_packet.

C: all threads share a base memory

D: no memory mapping

E: each request for sequential scan is allocated a read buffer (record_buffer)

F: all joins are done in one pass, and most joins can be done without even a temporary table. Most temporary tables are memory-based (HEAP) tables

G: a sort request allocates a sort buffer and up to two temporary files

H: all parsing and calculation are done in a local memory store

I: each index file is opened only once, and the data file is opened once for each thread running concurrently

J: for each table with BLOB columns, a buffer is dynamically enlarged to read larger BLOB values

K: the table handlers of all tables in use are stored in a cache and managed as a FIFO.

L: a mysqladmin flush-tables command closes all tables that are not in use and marks all tables in use to be closed when the currently executing thread finishes

3: mysql table locking

All locks in mysql are deadlock-free. Write locking works as follows: a: if the table is not locked, lock it; b: otherwise, put the lock request into the write lock queue.

Read locking works as follows: a: if the table is not locked, lock it; b: otherwise, put the lock request into the read lock queue.

When a table gets many SELECT and INSERT operations, you can insert rows into a temporary table and occasionally update the real table from the temporary table's records.

A: use the LOW_PRIORITY attribute to give a specific INSERT, UPDATE, or DELETE lower priority

B: start mysqld with a (usually very small) value for max_write_lock_count so that read locks get a chance after that many write locks

C: with SET SQL_LOW_PRIORITY_UPDATES=1, you can specify that all changes from a specific thread should be made at lower priority.

D: specify a SELECT with HIGH_PRIORITY

E: if you have problems with INSERT ... SELECT ..., use a MyISAM table, because it supports concurrent SELECT and INSERT
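The priority attributes above can be sketched on a hypothetical `logs` table (table and column names are illustrative only):

```sql
-- Give background writers low priority so readers are not starved.
INSERT LOW_PRIORITY INTO logs (msg) VALUES ('background event');
UPDATE LOW_PRIORITY logs SET msg = 'fixed' WHERE id = 42;

-- Let one urgent query jump ahead of queued writes.
SELECT HIGH_PRIORITY COUNT(*) FROM logs;

-- Or lower the priority of all changes made by this connection:
SET SQL_LOW_PRIORITY_UPDATES = 1;
```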

The most basic optimization is to minimize the space the data occupies on disk. If indexes are on the smallest possible columns, the indexes are also smallest. Ways to do this:

A: use the smallest possible data types

B: declare table columns NOT NULL where possible

C: use variable-length types such as VARCHAR where possible (though speed is affected to some extent)

D: give each table a primary index that is as short as possible

E: create only the indexes that are really needed

F: if an index is unique on the first few characters of a column, index only that prefix; mysql supports indexes on part of a character column

G: if a table is scanned frequently, try to split it into more tables
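These rules can be sketched in one table definition; the `users` table and its columns are hypothetical:

```sql
-- Smallest workable types, NOT NULL columns, a short primary key,
-- and a prefix index on the first 10 characters of a long column.
CREATE TABLE users (
  id    MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,  -- small, short primary key
  age   TINYINT UNSIGNED   NOT NULL,                 -- smallest type that fits
  email VARCHAR(100)       NOT NULL,                 -- variable-length where it pays off
  PRIMARY KEY (id),
  INDEX (email(10))                                  -- index only a unique-enough prefix
);
```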

Step 4

1: using indexes. The importance and purpose of indexes need no introduction; here is how they are used. First, note that all mysql indexes (PRIMARY, UNIQUE, INDEX) are stored in B-trees. Indexes are mainly used to:

A: quickly find records matching a WHERE condition; B: retrieve rows from other tables when performing joins; C: find the MAX() or MIN() value of a specific indexed column

D: sort or group a table when the sort or group uses a prefix of a usable key

E: resolve a query without touching the data file at all: if the selected columns are numeric and form a prefix of some index, the values can be taken directly from the index tree, which is faster

2: storing or updating data with GRANT checking enabled slightly reduces query speed.

The mysql functions should already be highly optimized. You can use BENCHMARK(loop_count, expression) to find out whether a query has a problem.

SELECT query speed: if you want a SELECT ... WHERE ... to be faster, the main tool is indexing. You can run myisamchk --analyze on a table to help the optimizer. You can use myisamchk --sort-index --sort-records=1 to sort an index and the data by that index.
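BENCHMARK() evaluates an expression repeatedly and always returns 0; the interesting part is the elapsed time the client reports, so it is a quick way to compare the cost of two expressions:

```sql
-- Time one million evaluations of an expression; the client prints
-- the elapsed wall-clock time, the result itself is always 0.
SELECT BENCHMARK(1000000, 1 + 1);
```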

3: how mysql optimizes the WHERE clause

3.1: remove unnecessary parentheses:

((a AND b) AND c OR ((a AND b) AND (a AND d))) -> (a AND b AND c) OR (a AND b AND d)

3.2: constant folding:

(a < b AND b = c) AND a = 5 -> b > 5 AND b = c AND a = 5

3.3: remove constant conditions:

(b >= 5 AND b = 5) OR (b = 6 AND 5 = 5) OR (b = 100 AND 2 = 3) -> b = 5 OR b = 6

3.4: constant expressions used by an index are evaluated only once

3.5: a COUNT(*) on a single table without a WHERE clause is retrieved directly from the table information.

3.6: all constant tables are read before any other table in the query

3.7: the best join combination is found by trying all possibilities.

3.8: if there is an ORDER BY clause and a different GROUP BY clause, or if the ORDER BY or GROUP BY contains columns that are not from the first table in the join, a temporary table is created

3.9: if SQL_SMALL_RESULT is used, mysql uses an in-memory table

3.10: each table's indexes are considered for the query, and an index is used only if it spans less than 30% of the rows.

3.11: rows that do not match the HAVING clause are skipped before each record is output
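Whether a WHERE clause can actually use an index (and whether the optimizer chose a scan instead) can be checked with EXPLAIN; the `orders` table and column here are hypothetical:

```sql
-- EXPLAIN shows the chosen index and the estimated number of rows,
-- so you can see whether the <30% rule lets the index be used.
EXPLAIN SELECT * FROM orders WHERE customer_id = 7;
```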

4: how mysql optimizes LEFT JOIN

A LEFT JOIN B is implemented in mysql as follows:

A: table B is set to depend on table A

B: table A is set to depend on all tables (except B) used in the LEFT JOIN condition

C: all LEFT JOIN conditions are moved to the WHERE clause

D: all join optimizations are done, except that a table is always read after all the tables it depends on; if there is a circular dependency, an error occurs

E: all standard WHERE optimizations are done

F: if there is a row in A that matches the WHERE clause but no row in B matching the LEFT JOIN condition, a row for B with all columns set to NULL is generated

G: if you use LEFT JOIN to find rows that do not exist in a table, with a column_name IS NULL test in the WHERE part (where column_name is a NOT NULL column), mysql stops looking for more rows once it has found one row matching the LEFT JOIN condition
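Rule G can be sketched on hypothetical `customers` and `orders` tables: find customers with no orders at all.

```sql
-- orders.customer_id is assumed NOT NULL, so IS NULL can only be
-- true for the NULL-padded row from rule F, and mysql stops
-- scanning B after the first match per customer.
SELECT customers.id
FROM customers
LEFT JOIN orders ON customers.id = orders.customer_id
WHERE orders.customer_id IS NULL;
```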

5: how mysql optimizes LIMIT

A: if you select only a few rows with LIMIT, mysql uses an index in cases where it would otherwise prefer a full table scan

B: if you use LIMIT # with ORDER BY, mysql ends the sort as soon as it has found the first # rows, instead of sorting the whole table

C: when combining LIMIT # with DISTINCT, mysql stops as soon as it has found # distinct rows

D: as soon as mysql has sent the first # rows to the client, it abandons the rest of the query

E: LIMIT 0 always returns an empty set, very quickly.

F: mysql uses the LIMIT # to calculate how much space a temporary table needs to resolve the query
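The LIMIT behaviors above, sketched on a hypothetical `logs` table:

```sql
SELECT * FROM logs ORDER BY created LIMIT 10;  -- sort ends once 10 rows are found
SELECT DISTINCT host FROM logs LIMIT 5;        -- stops at the 5th distinct value
SELECT * FROM logs LIMIT 0;                    -- returns an empty set immediately
```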

6: optimize insert


Inserting a record consists of the following steps:

A: connect (3)

B: send the query to the server (2)

C: parse the query (2)

D: insert the record (1 x record size)

E: insert the indexes (1 x number of indexes)

F: close (1)

The numbers in parentheses are roughly proportional to the time each step takes.

Some ways to improve insertion speed:

6.1: if you insert many rows from one connection at the same time, use a single INSERT with multiple value lists; this is faster than using multiple statements

6.2: if you insert many rows from different connections, INSERT DELAYED statements are faster

6.3: with MyISAM, if there are no deleted rows in the table, rows can be inserted while SELECT statements are running

6.4: when loading a table from a text file, use LOAD DATA INFILE; this is usually about 20 times faster than INSERT.

6.5: you can lock the table and then insert. The main speed gain is that the index buffer is flushed to disk only once, after all INSERT statements are done; normally there are as many index-buffer flushes as there are INSERT statements. Locking is not needed if you can insert all rows with a single statement. Locking also reduces the total connection time, but the maximum wait time for some threads rises. For example:

Thread 1 does 1000 inserts

Threads 2, 3, and 4 do 1 insert each

Thread 5 does 1000 inserts

Without locking, threads 2, 3, and 4 finish before 1 and 5. With locking, 2, 3, and 4 will probably not finish before 1 and 5, but the total time should be about 40% faster. Because INSERT, UPDATE, and DELETE operations are fast in mysql, you get better overall performance by locking around batches of more than about five inserts or updates in a row. If you insert very many rows, do a LOCK TABLES followed by an occasional UNLOCK TABLES (about every 1000 rows) to let other threads access the table; this still gives good performance. LOAD DATA INFILE remains the fastest way to load data.

To get more speed out of LOAD DATA INFILE and INSERT, enlarge the key buffer.
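The bulk-loading advice above, sketched on a hypothetical `logs` table (the file path is only an example):

```sql
-- Multi-row INSERTs inside one LOCK TABLES block: the index buffer
-- is flushed to disk once, after the whole batch.
LOCK TABLES logs WRITE;
INSERT INTO logs (msg) VALUES ('a'), ('b'), ('c');
INSERT INTO logs (msg) VALUES ('d'), ('e'), ('f');
UNLOCK TABLES;

-- Loading from a file is faster still:
LOAD DATA INFILE '/tmp/logs.txt' INTO TABLE logs;
```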

7: optimizing UPDATE speed

Update speed depends on the size of the data being updated and the number of indexes being updated.

Another way to make UPDATE faster is to postpone changes and then apply many changes row after row in one batch. If you lock the table, doing many updates in a row is much faster than doing them one at a time.

8: optimizing DELETE speed

The time to delete a record is proportional to the number of indexes. To delete records faster, you can increase the size of the key cache. Deleting all rows of a table is much faster than deleting a large part of it.
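The last point can be sketched on a hypothetical `logs` table: clearing the whole table skips the per-row, per-index work.

```sql
DELETE FROM logs;                               -- no WHERE: the whole table is cleared at once
-- versus deleting most rows one by one:
DELETE FROM logs WHERE created < '2024-01-01';
```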

Step five

1: choosing a table type

1.1: static MyISAM

This format is the simplest and safest, and it is the fastest on-disk format. The speed comes from how easily data can be located on disk: when looking something up with an index in a static-format table, the position is simply the row length multiplied by the row number. Also, when scanning a table, it is easy to read several records per disk read. The safety comes from the fact that if the computer crashes while writing a static MyISAM file, myisamchk can easily tell where each row begins and ends, so it can usually recover all records except the partially written one. All mysql indexes can always be rebuilt.

1.2 dynamic myisam

Each row in this format has a header saying how long it is. A record can end up in more than one location when it grows during an update. You can use OPTIMIZE TABLE tablename or myisamchk to defragment a table. If static data is accessed/changed in the same table as some VARCHAR or BLOB columns, move the dynamic columns into another table to avoid fragmentation.

1.2.1 compressed MyISAM, generated with the optional myisampack tool

1.2.2 memory (HEAP) tables

This format is useful for small and medium tables. Copying a commonly used lookup table into a HEAP table can speed up joins of multiple tables: with the same data, the lookups may be several times faster. Suppose you want to speed up this query:

SELECT tablename.a, tablename2.a FROM tablename, tablename2, tablename3 WHERE tablename.a = tablename2.a AND tablename2.a = tablename3.a AND tablename2.c != 0

To speed this up, you can create a temporary table with the join of tablename2 and tablename3, since they are looked up with the same column (tablename.a).

CREATE TEMPORARY TABLE test TYPE=HEAP
SELECT tablename2.a AS a2, tablename3.a AS a3
FROM tablename2, tablename3
WHERE tablename2.a = tablename3.a AND c = 0;

SELECT tablename.a, test.a3 FROM tablename, test WHERE tablename.a = test.a1;

SELECT tablename.a, test.a3 FROM tablename, test WHERE tablename.a = test.a1 AND ...;

1.3 features of static tables

1.3.1 the default format; used when the table contains no VARCHAR, BLOB, or TEXT columns

1.3.2 all CHAR, NUMERIC, and DECIMAL columns are padded to the column width

1.3.3 very fast

1.3.4 easy to cache

1.3.5 easy to rebuild after a crash, because records are at fixed positions

1.3.6 no reorganization (with myisamchk) needed, unless a large number of records were deleted and you want to reclaim the space

1.3.7 usually requires more disk space than dynamic tables

1.4 features of dynamic tables

1.4.1 used when the table contains any VARCHAR, BLOB, or TEXT columns

1.4.2 all string columns are dynamic

1.4.3 each record is preceded by a bitmap

1.4.4 usually requires much less disk space than fixed-length tables

1.4.5 each record uses only as much space as required; if a record grows, it is split into as many segments as needed, which leads to record fragmentation

1.4.6 if you update a row with information that exceeds the current row length, the row is split.

1.4.7 it is harder to rebuild the table after a crash, because a record can be in multiple segments

1.4.8 the expected row length for a dynamic-size record is 3 + (number of columns + 7) / 8 + (number of char columns) + packed size of numeric columns + length of strings + (number of NULL columns + 7) / 8

There is a penalty of 6 bytes for each link. A dynamic record is linked whenever a change makes it larger. Each new link is at least 20 bytes, so the next enlargement may fit in the same link; if not, another link is created. You can check how many links there are with myisamchk -ed. All links can be removed with myisamchk -r.

1.5 features of compressed tables

1.5.1 a read-only table made with the myisampack utility.

1.5.2 the uncompress code is included in all mysql distributions, so tables compressed with myisampack can be read even where myisampack is not installed

1.5.3 takes up very little disk space

1.5.4 each record is compressed separately. The header of a record is fixed-length (3 bytes), depending on the largest record in the table. Each column is compressed in a different way. Some common types of compression are:

A: there is usually a different Huffman table for each column; B: suffix-blank compression; C: prefix-blank compression; D: numbers with the value 0 are stored using 1 bit

E: if the values of an integer column have a small range, the column is stored using the smallest possible type; for example, if all values are between 0 and 255, a BIGINT can be stored as a TINYINT

G: if a column has only a small set of possible values, the column type is converted to ENUM; H: a column can use a combination of the compression methods above

1.5.5 can handle fixed-length and dynamic-length records, but cannot handle BLOB or TEXT columns

1.5.6 can be decompressed with myisamchk

mysql supports different index types, but the common type is ISAM: a B-tree index. The size of the index file can be estimated roughly as (key_length + 4) * 0.67, summed over all keys.

String indexes are blank-compressed. If the first part of an index is a string, the prefix is also compressed. Blank compression makes the index file smaller when string columns have a lot of trailing whitespace or are VARCHAR columns not always used to their full length. Prefix compression is used when many strings have the same prefix.

1.6 features of memory (HEAP) tables

HEAP tables inside mysql use 100% dynamic hashing without overflow areas, and have no problems related to deletes. You can only access rows by equality, using an index (usually with the '=' operator).

The disadvantages of HEAP tables are:

1.6.1 you need enough extra memory for all the HEAP tables you want to use at the same time

1.6.2 you cannot search on a partial index

1.6.3 you cannot search for the next entry in order (that is, you cannot use the index for an ORDER BY)

1.6.4 mysql cannot compute the approximate number of rows between two values. This is used by the optimizer to decide which index to use; on the other hand, no disk seeks are needed at all.
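A HEAP lookup table can be sketched as follows; the table and data are hypothetical, and TYPE=HEAP is the old syntax used in this article (newer servers spell it ENGINE=MEMORY):

```sql
-- In-memory lookup table; only '=' lookups can use the hash index.
CREATE TABLE country_lookup (
  code CHAR(2)  NOT NULL,
  name CHAR(52) NOT NULL,
  INDEX (code)
) TYPE=HEAP;

SELECT name FROM country_lookup WHERE code = 'DE';  -- uses the hash index
-- A range such as WHERE code > 'D', or an ORDER BY, cannot use it.
```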

Thank you for reading. That is the content of "how to optimize the MySQL database". After studying this article, you should have a deeper understanding of how to optimize a MySQL database; the specifics still need to be verified in practice.
