This article describes, step by step, how an HTTP 500: Internal Server Error was tracked down and fixed in a real php+redis project.
Problem description
The user base grew rapidly and traffic doubled in a short period. Thanks to good early capacity planning the hardware could cope, but the software had serious problems:
40% of requests returned HTTP 500: Internal Server Error
The logs showed that the errors originated in PHP's Redis connection handling
Debugging
Attempt 1
At first the root cause could not be found, so they tried everything plausibly related to the error, for example:
Increasing the number of PHP connections, and raising the timeout from 500 ms to 2.5 s (a hedged sketch of this tuning follows the list)
Disabling default_socket_timeout in the PHP settings
Disabling SYN cookies on the host systems
Checking the file descriptor counts on the Redis and web servers
Adding mbuffer to the host systems
Adjusting the TCP backlog
…
They tried many things, but nothing worked.
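For illustration only, here is a minimal sketch of the timeout tuning from the first item above, assuming phpredis; the host, port, and exact values are placeholders, not the project's real configuration:

<?php
// Hedged sketch: raise the phpredis connect timeout from 0.5 s to 2.5 s.
// Host and port are placeholders.
$redis = new Redis();
$redis->connect('10.0.0.5', 6379, 2.5);           // third arg: connect timeout in seconds
$redis->setOption(Redis::OPT_READ_TIMEOUT, 2.5);  // per-command read timeout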
Attempt 2
They tried to reproduce the problem in the staging environment, but failed: the traffic there was simply not large enough to trigger it.
Attempt 3
Could it be that the code never closes its Redis connections?
Normally PHP closes resource connections automatically when a script finishes, but older versions had memory leak problems in this area. To be safe, the code was changed to close connections explicitly, as sketched below.
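A minimal sketch of that explicit close, assuming phpredis; the key and business logic are placeholders:

<?php
// Hedged sketch: close the connection explicitly rather than relying
// on PHP's end-of-request cleanup. Host, port, and key are placeholders.
$redis = new Redis();
$redis->connect('10.0.0.5', 6379, 2.5);
try {
    $value = $redis->get('some:key');
    // ... business logic ...
} finally {
    $redis->close();  // release the connection deterministically
}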
It still didn't help.
Attempt 4
New suspect: the phpredis client library.
They ran an A/B test: phpredis was replaced with the predis library and the change was deployed to 20% of the users in one data center.
Thanks to a clean code structure, the swap was completed quickly.
The errors persisted, but there was an upside: it proved that phpredis was not the problem.
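For reference, a minimal sketch of what the predis side of such an A/B swap could look like; the connection parameters are assumptions:

<?php
// Hedged sketch of the predis variant used in the A/B test.
// Parameters are placeholders, not the team's actual deployment.
require 'vendor/autoload.php';

$client = new Predis\Client([
    'scheme'  => 'tcp',
    'host'    => '10.0.0.5',
    'port'    => 6379,
    'timeout' => 2.5,
]);
$value = $client->get('some:key');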
Attempt 5
They checked the Redis version: it was v2.6, while the latest release at the time was v2.8.9.
They upgraded Redis. The errors remained.
Still, staying positive: at least Redis was now up to date.
Attempt 6
After digging through a lot of documentation, they found a good debugging technique in the official docs, the Redis Software Watchdog. They enabled it and ran:
$ redis-cli --latency -p 6380 -h 1.2.3.4
min: 0, max: 463, avg: 2.03 (19443 samples)
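For completeness, the Software Watchdog itself is switched on at runtime via CONFIG SET; a hedged phpredis equivalent, where the 500 ms period is an assumption:

<?php
// Hedged sketch: enable the Redis Software Watchdog so operations that
// stall longer than 500 ms get logged. The period is an assumption.
$redis = new Redis();
$redis->connect('1.2.3.4', 6380);
$redis->config('SET', 'watchdog-period', '500');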
Then they checked the Redis logs:
...
[20398] 22 May 09:20:55.351 * 10000 changes in 60 seconds. Saving...
[20398] 22 May 09:20:55.759 * Background saving started by pid 41941
[41941] 22 May 09:22:48.197 * DB saved on disk
[20398] 22 May 09:22:49.321 * Background saving terminated with success
[20398] 22 May 09:25:23.299 * 10000 changes in 60 seconds. Saving...
[20398] 22 May 09:25:23.644 * Background saving started by pid 42027
...
This revealed the problem:
Redis was saving to disk every few minutes, and each fork of the background save process took roughly 400 ms (compare the timestamps of the first two log entries above).
So the root cause was finally found: the Redis instance held a large amount of data, which made the fork of the background process for each persistence operation very expensive; and because the business modified keys very frequently, persistence was triggered constantly, regularly blocking Redis.
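One way to confirm this kind of diagnosis from the application side is to ask Redis how long its last fork took; a hedged phpredis sketch, where host and port are placeholders:

<?php
// Hedged sketch: inspect fork and background-save cost via INFO.
// latest_fork_usec is the duration of the last fork(), in microseconds.
$redis = new Redis();
$redis->connect('1.2.3.4', 6380);

$stats = $redis->info('stats');
printf("last fork took %.1f ms\n", $stats['latest_fork_usec'] / 1000);

$persistence = $redis->info('persistence');
printf("bgsave in progress: %d\n", $persistence['rdb_bgsave_in_progress']);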
Workaround: use a separate slave for persistence.
This slave handles no live traffic; its only job is persistence, moving the save workload off the Redis instances that serve requests.
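A hedged sketch of what that split can look like, assuming phpredis and runtime CONFIG SET; the hosts, ports, and snapshot rule are placeholders:

<?php
// Hedged sketch: disable automatic RDB snapshots on the serving master
// and let only a dedicated slave perform background saves.
$master = new Redis();
$master->connect('10.0.0.5', 6379);
$master->config('SET', 'save', '');       // no automatic snapshots here

$slave = new Redis();
$slave->connect('10.0.0.6', 6379);
$slave->slaveof('10.0.0.5', 6379);        // replicate from the master
$slave->config('SET', 'save', '900 1');   // snapshot only on the slave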
The effect was immediately visible and the problem was mostly solved, but errors were still reported occasionally.
Attempt 7
While hunting for slow queries that could block Redis, they found that KEYS * was being used somewhere in the code.
With the data in Redis growing steadily, this command naturally caused severe blocking, because KEYS walks the entire keyspace in a single blocking call.
It can be replaced with SCAN, which iterates incrementally, as sketched below.
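A minimal sketch of the SCAN replacement, assuming phpredis; the match pattern and batch size are placeholders:

<?php
// Hedged sketch: replace a blocking KEYS call with an incremental SCAN
// loop. The 'user:*' pattern and the batch size of 100 are placeholders.
$redis = new Redis();
$redis->connect('10.0.0.5', 6379);
$redis->setOption(Redis::OPT_SCAN, Redis::SCAN_RETRY);  // retry empty batches

$iterator = null;
while ($keys = $redis->scan($iterator, 'user:*', 100)) {
    foreach ($keys as $key) {
        // process keys in small batches instead of all at once
    }
}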
Attempt 8
After the adjustments above the problem was solved, and over the following months the system held up even as traffic kept growing.
But they noticed a new problem:
Each request opened a Redis connection, executed a few commands, and disconnected. Under heavy traffic this wastes a lot of work: more than half of the commands processed were connection handling rather than business logic, and it slowed Redis down.
Solution: introduce a proxy. They chose Twitter's twemproxy: it only needs to be installed on each web server, and it maintains persistent connections to the Redis instances, which greatly reduces connection churn.
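With the proxy in place, the application just connects to the local twemproxy instead of Redis directly; a hedged sketch, where port 22121 is a commonly used twemproxy listen port and an assumption here:

<?php
// Hedged sketch: talk to the local twemproxy, which multiplexes onto
// persistent connections to the real Redis instances. Port 22121 is a
// placeholder for the proxy's configured listen port.
$redis = new Redis();
$redis->connect('127.0.0.1', 22121);
$value = $redis->get('some:key');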
twemproxy has two other convenient features:
It also supports memcached
It can block very slow or dangerous commands such as KEYS and FLUSHALL
The effect was excellent: the earlier connection errors were gone for good.
Attempt 9
Further optimization through data sharding:
Separating data from different contexts onto different instances
Sharding data within the same context by consistent hashing (a sketch follows the results below)
Effect:
Reduced requests and load per machine
Improved cache reliability, with no more worrying about single node failures
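To make the consistent-hashing idea concrete, here is a minimal, self-contained hash ring in PHP; it is an illustration only, not the team's actual sharding code, and the node names and replica count are placeholders:

<?php
// Hedged sketch: a minimal consistent-hash ring that maps a key to one
// of several Redis shards.
class HashRing
{
    private $ring = [];       // ring position => node name
    private $positions = [];

    public function __construct(array $nodes, $replicas = 64)
    {
        foreach ($nodes as $node) {
            // Place several virtual points per node for smoother balance
            for ($i = 0; $i < $replicas; $i++) {
                $this->ring[crc32("$node#$i")] = $node;
            }
        }
        $this->positions = array_keys($this->ring);
        sort($this->positions);
    }

    public function getNode($key)
    {
        $hash = crc32($key);
        // Walk clockwise to the first position at or past the key's hash
        foreach ($this->positions as $pos) {
            if ($hash <= $pos) {
                return $this->ring[$pos];
            }
        }
        return $this->ring[$this->positions[0]];  // wrap around the ring
    }
}

$ring = new HashRing(['redis-a:6379', 'redis-b:6379', 'redis-c:6379']);
echo $ring->getNode('user:42:profile'), "\n";  // e.g. "redis-b:6379"

Adding or removing a node only remaps the keys that fall in that node's segments of the ring, which is why node failures no longer invalidate the whole cache.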
"php+redis in the actual project how to troubleshoot HTTP 500: Internal Server Error" content is introduced here, thank you for reading. If you want to know more about industry-related knowledge, you can pay attention to the website. Xiaobian will output more high-quality practical articles for everyone!