Table design and SQL irregularities leave the server almost unresponsive
Problem description:
1) The developer reported that the test database's connections were exhausted and asked to raise the maximum number of connections.
A large number of sessions (1,944) were found in the database, almost all running the same kind of statement, e.g. SELECT * FROM ali_phone_info where phone=13004669173, stuck in the Sending data state.
The server was barely reachable over ssh.
mysql> show processlist;
+--------+--------+-----------------+--------+---------+------+--------------+-------------------------------------------------------+
| Id     | User   | Host            | db     | Command | Time | State        | Info                                                  |
+--------+--------+-----------------+--------+---------+------+--------------+-------------------------------------------------------+
| 413853 | sx_pac | 180.169.233.185 | sx_pac | Query   | ...  | Sending data | SELECT * FROM ali_phone_info where phone=13004669173  |
| 413588 | sx_pac | 180.169.233.185 | sx_pac | Query   | ...  | Sending data | SELECT * FROM ali_phone_info where phone=15577861724  |
| 413589 | sx_pac | 180.169.233.185 | sx_pac | Query   | 438  | Sending data | SELECT * FROM ali_phone_info where phone=13175885461  |
| 413590 | sx_pac | 180.169.233.185 | sx_pac | Query   | ...  | Sending data | SELECT * FROM ali_phone_info where phone=15578128274  |
| ...                                                                                                                                     |
|  74405 | sx_pac | 180.169.233.185 | sx_pac | Query   | 0    | query end    | create table IF NOT EXISTS o2o_huangye_info (
                                                                                 Primary_key VARCHAR(255) not null pri                  |
+--------+--------+-----------------+--------+---------+------+--------------+-------------------------------------------------------+
1944 rows in set (0.01 sec)
The developer explained that these queries are an existence check performed before each insert: if the row already exists, the insert is skipped.
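Since the table has no index on phone, every one of these existence checks scans the whole table, which is why so many sessions sit in the Sending data state. A minimal way to confirm this, using one of the queries from the processlist (this is the expected plan, not output captured during the incident):

-- With no index on `phone`, EXPLAIN shows type = ALL, i.e. a full
-- table scan over roughly 750k rows for every single existence check.
EXPLAIN SELECT * FROM ali_phone_info WHERE phone = '13004669173';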
The error log contains a large number of entries like the following:
2018-09-04T13:17:43.990352+08:00 21201 [Note] Aborted connection 21201 to db: 'sx_pac' user: 'sx_pac' host: '180.169.150.211' (Got an error reading communication packets)
2018-09-04T13:18:26.263859+08:00 21220 [Note] Aborted connection 21220 to db: 'sx_pac' user: 'sx_pac' host: '180.169.150.211' (Got an error reading communication packets)
2018-09-04T13:32:24.931968+08:00 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 4900ms. The settings might not be optimal. Flushed=5 and evicted=0, during the time.
2) The server has 2 cores and 4 GB of RAM, running CentOS 7.x.
Parameters related to MySQL:
innodb_buffer_pool_size      | 5G          // Aliyun RDS: 25G / 12G
innodb_buffer_pool_instances | 2           // Aliyun RDS: 8 / 4
innodb_io_capacity           | 200         // Aliyun RDS: 2000; all instances are on SSDs (see the sketch after this list)
innodb_io_capacity_max       | 2000        // Aliyun RDS: 4000
innodb_read_io_threads       | 4
innodb_write_io_threads      | 4
max_connections              | 2000        // Aliyun RDS: 8512 / 4512
log_warnings                 | 2
interactive_timeout          | (number of seconds the server waits for activity before closing an interactive connection; default 28800, i.e. 8 hours)
wait_timeout                 | 28800
innodb_page_cleaners         | 2           // generally recommended to match innodb_buffer_pool_instances; 5.6 has a single page_cleaner thread, 5.7 can have several
innodb_log_file_size         | 536870912   // 512M; Aliyun RDS: 1.5G
innodb_log_files_in_group    | 2
innodb_lru_scan_depth        | 1024
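Given that the host is on SSD storage, the low innodb_io_capacity stands out. A small sketch for inspecting and raising it at runtime; the target values below simply mirror the Aliyun RDS settings mentioned above and are an assumption, not a change made during this incident:

-- Current flush-throughput settings.
SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity%';

-- Raise them for SSD storage (dynamic variables, no restart needed);
-- persist the same values in my.cnf so they survive a restart.
SET GLOBAL innodb_io_capacity     = 2000;
SET GLOBAL innodb_io_capacity_max = 4000;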
3) Other information:
mysql> analyze table ali_phone_info;
+-----------------------+---------+----------+----------+
| Table                 | Op      | Msg_type | Msg_text |
+-----------------------+---------+----------+----------+
| sx_pac.ali_phone_info | analyze | status   | OK       |
+-----------------------+---------+----------+----------+
1 row in set (0.03 sec)
mysql> show table status like 'ali_phone_info'\G
*************************** 1. row ***************************
           Name: ali_phone_info
         Engine: InnoDB
        Version: 10
     Row_format: Dynamic
           Rows: 752640
 Avg_row_length: 56
    Data_length: 42532864
Max_data_length: 0
   Index_length: 20512768
      Data_free: 4194304
 Auto_increment: ...
    Create_time: 2018-09-21 19:30:27
    Update_time: 2018-09-21 18:22:05
     Check_time: NULL
      Collation: utf8_general_ci
       Checksum: NULL
1 row in set (0.00 sec)
mysql> show create table ali_phone_info;
+----------------+------------------------------------------------------------+
| Table          | Create Table                                               |
+----------------+------------------------------------------------------------+
| ali_phone_info | CREATE TABLE `ali_phone_info` (
  `primary_key` int(11) NOT NULL AUTO_INCREMENT,
  `phone` varchar(255) NOT NULL,
  `plat_code` varchar(255) NOT NULL,
  `crawl_time` timestamp NULL DEFAULT NULL,
  `jrjt_del_dt` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`primary_key`)
) ENGINE=InnoDB AUTO_INCREMENT=787894 DEFAULT CHARSET=utf8 |
+----------------+------------------------------------------------------------+
1 row in set (0.00 sec)
Problem location:
1. The table design
2. The SQL statements
3. The missing index
Handling:
1. The phone field is defined as varchar(255), which is far wider than necessary.
2. The phone field is a string type, but the SQL compares it against a numeric literal, "SELECT * FROM ali_phone_info where phone=15578128274"; the implicit type conversion prevents an index from being used.
3. Using "select *" for the existence check does not conform to the specification; it should be rewritten as SELECT phone from ali_phone_info where phone='15578128274'.
4. Building the index with a direct DDL statement locks the table for a long time: on about 570,000 rows the index build took nearly two hours, which is not acceptable for an online environment (a less disruptive approach is sketched after the SHOW CREATE TABLE output below).
mysql> alter table ali_phone_info add key idx_ali_phone_info01 (phone(11));
Query OK, 0 rows affected (1 hour 54 min 30.51 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> show create table ali_phone_info;
+----------------+------------------------------------------------------------+
| Table          | Create Table                                               |
+----------------+------------------------------------------------------------+
| ali_phone_info | CREATE TABLE `ali_phone_info` (
  `primary_key` int(11) NOT NULL AUTO_INCREMENT,
  `phone` varchar(255) NOT NULL,
  `plat_code` varchar(255) NOT NULL,
  `crawl_time` timestamp NULL DEFAULT NULL,
  `jrjt_del_dt` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`primary_key`),
  KEY `idx_ali_phone_info01` (`phone`(11))
) ENGINE=InnoDB AUTO_INCREMENT=787894 DEFAULT CHARSET=utf8 |
+----------------+------------------------------------------------------------+
1 row in set (0.00 sec)
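As noted in point 4, running the ALTER directly on a busy table is disruptive. A gentler approach, sketched under the assumption that this is MySQL 5.7 (where adding a secondary index supports in-place online DDL), together with the rewritten existence check from point 3:

-- Build the prefix index online: INPLACE + LOCK=NONE keeps the table
-- readable and writable while the index is created.
ALTER TABLE ali_phone_info
  ADD KEY idx_ali_phone_info01 (phone(11)),
  ALGORITHM=INPLACE, LOCK=NONE;

-- Existence check rewritten: a quoted string literal lets the new
-- index be used, one column is enough, and LIMIT 1 stops at the first match.
SELECT phone FROM ali_phone_info WHERE phone = '15578128274' LIMIT 1;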
5. The large number of sessions shows that the program does not use a connection pool. Even if it is a crawler and the front end opens many sessions, the back-end database operations can still go through a connection pool.
6. The program does not use any caching and does not batch its inserts (a batched-insert sketch follows point 7).
7. The program creates tables at runtime, as seen in the processlist (create table IF NOT EXISTS o2o_huangye_info ...), which is another irregularity.
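To illustrate point 6, a single multi-row INSERT replaces many individual round trips. The phone values below come from the queries above, while the plat_code values are made up for the example:

-- One batched statement instead of one SELECT check plus one INSERT
-- per phone number.
INSERT INTO ali_phone_info (phone, plat_code, crawl_time)
VALUES
  ('13004669173', 'ali', NOW()),
  ('15577861724', 'ali', NOW()),
  ('15578128274', 'ali', NOW());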
8. Error log message explanation:
[Note] Aborted connection 21201 to db:... (Got an error reading communication packets)
This can be caused by the network among other things; here, the excessive number of sessions exhausted the system's resources.
If the message is "Got timeout reading communication packets", the session's idle time reached the timeout configured in the database (wait_timeout / interactive_timeout).
[Note] InnoDB: page_cleaner: 1000ms intended loop took 4900ms. The settings might not be optimal. Flushed=5 and evicted=0, during the time.
This message concerns the InnoDB page cleaner thread's flushing policy.
Several parameters affect the page cleaner's behavior (a quick way to inspect them follows the list):
innodb_lru_scan_depth
innodb_adaptive_flushing_lwm
innodb_max_dirty_pages_pct_lwm
innodb_io_capacity_max
innodb_flushing_avg_loops
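A quick sketch for checking these settings together with the buffer pool's current dirty-page counters (standard MySQL 5.7 variable and status names):

-- Current values of the flush-related parameters listed above.
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('innodb_lru_scan_depth',
   'innodb_adaptive_flushing_lwm',
   'innodb_max_dirty_pages_pct_lwm',
   'innodb_io_capacity_max',
   'innodb_flushing_avg_loops');

-- How much of the buffer pool is currently dirty.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';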
Reference:
https://blog.csdn.net/jc_benben/article/details/82251891
https://www.cnblogs.com/yuyue2014/p/5553820.html