MySQL shuts down frequently. What's going on?


This article walks through troubleshooting a MySQL instance that kept shutting down. The investigation is detailed and should be a useful reference for anyone hitting a similar problem.

The detailed log is as follows:

2017-04-13 16:25:29 40180 [Note] Server socket created on IP: '::'

2017-04-13 16:25:29 40180 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.

2017-04-13 16:25:29 40180 [Note] Slave I/O thread: connected to master 'xx@xxxx:6606', replication started in log 'mysql-bin.000105' at position 732153962

2017-04-13 16:25:29 40180 [Warning] Slave SQL: If a crash happens this configuration does not guarantee that the relay log info will be consistent, Error_code: 0

2017-04-13 16:25:29 40180 [Note] Event Scheduler: Loaded 0 events

2017-04-13 16:25:29 40180 [Note] /mysql_base/bin/mysqld: ready for connections.

Version: '5.6.20'  socket: '/tmp/mysql.sock'  port: 6607  Source distribution

2017-04-13 16:25:29 40180 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000105' at position 634901970, relay log '/mysql_log/relay-log.000339' position: 25153965

2017-04-13 16:26:01 40180 [Note] /mysql_base/bin/mysqld: Normal shutdown

2017-04-13 16:26:01 40180 [Note] Giving 2 client threads a chance to die gracefully

2017-04-13 16:26:01 40180 [Note] Event Scheduler: Purging the queue. 0 events

2017-04-13 16:26:01 40180 [Note] Shutting down slave threads

2017-04-13 16:26:01 40180 [Note] Slave SQL thread exiting, replication stopped in log 'mysql-bin.000105' at position 637977115

2017-04-13 16:26:01 40180 [Note] Slave I/O thread killed while reading event

2017-04-13 16:26:01 40180 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.000105', position 732432767

2017-04-13 16:26:01 40180 [Note] Forcefully disconnecting 0 remaining clients

2017-04-13 16:26:01 40180 [Note] Binlog end

2017-04-13 16:26:01 40180 [Note] Shutting down plugin 'partition'

2017-04-13 16:26:01 40180 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'

2017-04-13 16:26:01 40180 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'

2017-04-13 16:26:01 40180 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'

So mysqld stopped by itself shortly after the service process was started. Looking more closely at this log, there is no Error in it at all, only a few Warning messages, and those do not look like the root cause of the problem.
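As a quick sanity check of that observation, the error log can be scanned for ERROR and Warning entries (a minimal sketch; the /mysql_log/error.log path is an assumption, the real location is whatever log_error points to):

# Count ERROR vs Warning lines in the MySQL error log
# NOTE: /mysql_log/error.log is a hypothetical path; check log_error in my.cnf
grep -c '\[ERROR\]' /mysql_log/error.log
grep -c '\[Warning\]' /mysql_log/error.log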

From the log above we can extract some basic information:

This is a slave (replica) instance, as can be seen from the relay log information.

The shutdown looks like an orderly, step-by-step process, not the pattern left behind by a power outage or an abnormal crash.

The line marked in red:

Giving 2 client threads a chance to die gracefully

I think this line is the key to the question: why do two client threads need to be given a chance to "die gracefully"?

So I looked at the problem from several angles:

Is it a system-level exception?

Is it a kernel parameter setting?

Is it a database parameter setting?

Is it a bug?

For the first angle, I checked the system: the file system is ext4, the server has 64 GB of memory with plenty still free, and neither the configuration nor the load of the system is particularly high.
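Roughly, these system-level checks can be done with standard Linux tools (a sketch; the /mysql_base path is taken from the log, and using it as the data location is an assumption):

# File system type of the volume holding MySQL (assuming it lives under /mysql_base)
df -T /mysql_base
# Total and free memory in gigabytes
free -g
# Load average and uptime
uptime
# Any OOM-killer activity that could explain a silently dying mysqld?
dmesg | grep -iE 'out of memory|killed process'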

For the second angle, I checked the kernel parameter settings; the main one, shmmax, was fine. I also went through the network-related settings in detail, and wondered whether swap could have an impact. Although current swap usage was close to zero, I tried setting swappiness=1 just in case, and the problem remained.
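For reference, checking and lowering swappiness can be done like this (a minimal sketch; persisting the value through /etc/sysctl.conf is the usual convention):

# Current value
cat /proc/sys/vm/swappiness
# Lower it for the running kernel
sysctl -w vm.swappiness=1
# Make it persistent across reboots
echo 'vm.swappiness = 1' >> /etc/sysctl.conf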

For the third angle, the database parameters: innodb_buffer_pool_size was 40G, the other settings looked quite reasonable, and there was nothing unfamiliar, so there was little to go on here. Still, I tried lowering the buffer pool from 40G to 4G, and the result was the same.
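The running value can be checked with a one-liner (the -uxx -pxx placeholders match the ones used elsewhere in this article):

# Show the configured InnoDB buffer pool size (in bytes)
/mysql_base/bin/mysql -uxx -pxx -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

Note that in MySQL 5.6 innodb_buffer_pool_size is not a dynamic variable, so going from 40G to 4G means editing my.cnf and restarting the instance; online buffer pool resizing only arrived in 5.7.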

For the fourth angle, I searched for bugs and did find one, https://bugs.mysql.com/bug.php?id=71104, but it is hard to make it fit: according to the user's feedback, the server was fine in the morning and only started behaving like this in the afternoon, so a bug also seems far-fetched.

Still doubtful, I also tried starting the instance with skip-slave-start, which did not help either.
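For completeness, skip-slave-start tells mysqld not to launch the replication threads automatically at startup; with the paths from the log, the start command would look roughly like this:

# Start mysqld without automatically starting the slave threads
/mysql_base/bin/mysqld_safe --user=mysql --skip-slave-start > /dev/null 2>&1 &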

At this point I decided to change the way I was thinking about the problem: what blind spots had I not yet considered?

Then I noticed a file in the log directory that, at first glance, was clearly not generated by MySQL itself; it looked more like something created by hand. Looking inside, it turned out to be the output of a check on MySQL's running status. That made me wonder whether some task had been set up at the system level.

Running crontab -l confirmed it: there were indeed two entries. The second was a task checking the status of another service, and the first pointed to a script named check_mysql.sh.
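The original write-up does not quote the crontab entries themselves; a hypothetical entry that would produce this behaviour (the schedule and script path are assumptions) might look like:

# crontab -l (hypothetical; the real schedule and path were not given)
* * * * * /bin/bash /mysql_log/check_mysql.sh >/dev/null 2>&1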

The contents of check_mysql.sh are as follows:

#!/bin/bash
# Timestamp for the log entries
datetime=`date +"%F %H:%M:%S"`
# Try to connect and run a trivial query; discard all output
/mysql_base/bin/mysql -uxx -pxx -e "select version();" &> /dev/null
if [ $? -eq 0 ]
then
    # date +"%F %H:%M:%S"
    echo "$datetime mysql is running" >> /mysql_log/check_mysql.log
else
    pkill mysql
    sleep 5
    /mysql_base/bin/mysqld_safe --user=mysql > /dev/null 2>&1 &
    echo "$datetime ERROR:*mysql restarted*" >> /mysql_log/check_mysql.log
fi

Take a closer look at this script to see whether anything is wrong with it. The basic idea is to connect to MySQL and run a version query: if the exit status is 0, the script just logs that MySQL is running; otherwise it kills MySQL, waits 5 seconds, and restarts the service.

The key is the first step: if the connection fails for any reason, the else branch is taken, which means the script will directly kill MySQL and restart it.

I confirmed with the user that he had made a change that morning, and the password of the account used by the script had evidently been changed, so the connection in the script began to fail, which led to this unexpected restart behaviour.

The quickest fix is to comment out the cron entry and then correct the password; more importantly, the check logic in the script needs to be improved.
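One way to harden the logic (a sketch, not the author's final script) is to use mysqladmin ping instead of a login-and-query check: its exit status is documented to be 0 whenever the server is up, even when the connection is refused with an access-denied error, so a changed password no longer triggers a kill-and-restart. Paths and credentials below reuse the placeholders from the original script.

#!/bin/bash
datetime=`date +"%F %H:%M:%S"`
# mysqladmin ping exits 0 if the server is reachable, even on "Access denied",
# because a refused connection still proves mysqld is running.
/mysql_base/bin/mysqladmin -uxx -pxx ping > /dev/null 2>&1
if [ $? -eq 0 ]
then
    echo "$datetime mysql is running" >> /mysql_log/check_mysql.log
else
    echo "$datetime ERROR: mysql not reachable, restarting" >> /mysql_log/check_mysql.log
    pkill mysql
    sleep 5
    /mysql_base/bin/mysqld_safe --user=mysql > /dev/null 2>&1 &
fi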

That is the whole story behind the frequent MySQL shutdowns. Hopefully the troubleshooting process is useful as a reference.
