2025-03-31 Update From: SLTechnology News&Howtos
Most people are not familiar with the problems an oversized Redis instance can cause, so this article summarizes them. The content is detailed, the steps are clear, and it has real reference value. I hope you get something out of "what to do when Redis memory is too large".
1. The master goes down
First, let's walk through the disaster-recovery process when the master goes down.
When the master goes down, the most common disaster-recovery strategy is to "switch the master": pick one slave from the remaining slaves in the cluster, promote it to master, re-attach the other slaves as its slaves, and the master-slave cluster structure is restored.
That is the complete disaster-recovery process, and the expensive part of it is not switching the master but re-attaching the slaves.
This is because, unlike mysql or mongodb, a redis slave cannot simply continue replicating from the new master after a master change. Once a slave's master is changed in a redis cluster, redis's approach is to flush the slave completely, pull a full copy of the data from the new master, and only then resume incremental replication.
The whole slave-redo process goes like this:
1. The master bgsaves its dataset to an RDB file on disk.
2. The master sends the RDB file to the slave.
3. The slave loads the RDB file.
4. After loading, incremental replication resumes and the slave starts serving again.
Obviously, the bigger the redis instance's memory, the longer each of these steps takes. Our own measurements (on machines we consider fairly fast; the original table is omitted here) showed that once the dataset reached 20 GB, recovering a single slave already took nearly 20 minutes. With 10 slaves recovered one after another, that is 200 minutes in total. And if those slaves were carrying a heavy read load, can you tolerate a recovery that long?
At this point you must be asking: why not redo all the slaves at the same time? Because if every slave requests an RDB file from the master simultaneously, the master's network card saturates immediately and the master can no longer serve traffic normally. Now the master is effectively down too, which is even worse.
Of course, we can recover the slaves in batches, say two at a time, which cuts the total recovery time from 200 minutes to 100. But that is just fifty paces laughing at a hundred; it is still far too slow.
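The batching arithmetic above can be sketched as follows (the 20-minutes-per-slave figure is the assumption taken from the article's measurements):

```python
# Rough back-of-envelope numbers from the article: ~20 minutes to fully
# redo one 20 GB slave (bgsave + RDB transfer + load).
MINUTES_PER_SLAVE = 20   # assumption from the article's test data
NUM_SLAVES = 10

# Sequential recovery: one slave at a time, so the master's NIC is never saturated.
sequential = MINUTES_PER_SLAVE * NUM_SLAVES

# Batched recovery: redo slaves two at a time; each batch still takes ~20 minutes.
BATCH_SIZE = 2
batches = -(-NUM_SLAVES // BATCH_SIZE)   # ceiling division
batched = MINUTES_PER_SLAVE * batches

print(f"sequential: {sequential} min, in pairs: {batched} min")
```

Either way the total only shrinks linearly with the batch size, while the risk to the master's network card grows with it.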
Another important issue lies in step 4. The resumed incremental replication can be understood as a simplified mongodb oplog: a fixed-size region of memory, which we will call the "synchronization buffer" (the replication backlog).
Every write on the redis master is also stored in this buffer and forwarded to the slaves. If the steps above take too long, the buffer is very likely to wrap around and be overwritten. What does a slave do when it can no longer find its replication offset there? It redoes steps 1-3!
And since we cannot shorten steps 1-3, the slave can fall into a vicious circle forever: repeatedly requesting a full copy of the data from the master, which hammers the master's network card badly.
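One concrete mitigation for this overwrite loop is to enlarge the replication backlog so it can absorb longer interruptions. A sketch of the relevant redis.conf directives (the 512mb figure is illustrative only; size the backlog to your write rate multiplied by the longest outage you want to survive):

```
# redis.conf -- replication backlog sizing (illustrative values)
repl-backlog-size 512mb   # default is only 1mb, far too small for large instances
repl-backlog-ttl 3600     # keep the backlog for an hour after the last slave disconnects
```

A bigger backlog costs a fixed amount of memory but lets a reconnecting slave resume incrementally instead of triggering a full resync.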
2. Capacity expansion
In many cases traffic surges suddenly, and the usual emergency response is to scale out first and investigate the cause later.
As the numbers in scenario 1 show, adding one slave to a 20 GB redis takes nearly 20 minutes. Can the business tolerate a 20-minute wait in an emergency like this? It may have fallen over before the new slave is ready.
3. A poor network forces slaves to redo, which can end in an avalanche
The problem in this scenario is that replication between master and slave is interrupted while the master most likely keeps accepting writes. If the interruption lasts too long, the synchronization buffer is overwritten and the slave's last replication offset is lost. Even though the master itself has not changed, once the network recovers the slave must be redone because its offset is gone, i.e. steps 1-4 from problem 1. If the master's memory footprint is large, the slave's redo is slow and any read requests routed to that slave suffer badly; meanwhile, because the transferred RDB file is so large, the master's network card is hammered for a long time as well.
4. The larger the memory, the longer persistence-triggering operations block the main thread
Redis is a single-threaded in-memory database. When it needs to perform a time-consuming operation such as bgsave or bgrewriteaof, it forks a child process to do the work. Although fork does not copy the shareable data pages themselves, it does copy the parent process's page tables, and that copy is performed by the main thread, blocking all reads and writes for a time that grows with memory use. For example, on a redis instance using 20 GB of memory, a bgsave takes roughly 750 ms just to copy the page tables, and the main thread is blocked for those 750 ms.
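To get a feel for why the fork is slow, here is a rough back-of-envelope estimate of how much page-table data must be copied (assuming typical x86-64 4 KiB pages and 8-byte last-level page-table entries; the upper levels of the table hierarchy add a little more):

```python
GiB = 1 << 30
PAGE_SIZE = 4096   # typical x86-64 page size in bytes
PTE_SIZE = 8       # bytes per page-table entry on x86-64

redis_memory = 20 * GiB
pages = redis_memory // PAGE_SIZE
page_table_bytes = pages * PTE_SIZE   # last-level tables only

print(f"{pages} pages -> ~{page_table_bytes / (1 << 20):.0f} MiB "
      "of page tables to copy on fork()")
```

Tens of megabytes copied synchronously by the main thread, on top of the kernel bookkeeping around it, is where the hundreds of milliseconds of blocking come from.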
Solutions
The cure, of course, is to keep memory usage down. Here is what we usually do:
1. Set expiration times
Set expiration times on keys wherever appropriate, and let redis's own expired-key cleanup policies reclaim the memory of stale keys. This also saves the business the trouble of cleaning them up by hand.
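As a toy illustration of the semantics (this is not Redis's real implementation; Redis combines lazy deletion on access with periodic random sampling of keys that carry a TTL):

```python
import time

class TinyExpiringStore:
    """Toy key-value store with per-key TTLs, for illustration only."""
    def __init__(self):
        self._data = {}   # key -> (value, expire_at or None)

    def set(self, key, value, ex=None):
        expire_at = time.monotonic() + ex if ex is not None else None
        self._data[key] = (value, expire_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expire_at = item
        if expire_at is not None and time.monotonic() >= expire_at:
            del self._data[key]   # lazy deletion on access frees the memory
            return None
        return value

store = TinyExpiringStore()
store.set("session:42", "alice", ex=0.05)   # expires after 50 ms
print(store.get("session:42"))              # still live here
time.sleep(0.1)
print(store.get("session:42"))              # gone after expiry
```

The point is simply that expired keys reclaim themselves; nobody has to run a cleanup job.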
2. Don't store garbage in redis
This sounds like stating the obvious, but doesn't it hit home for quite a few of us?
3. Clean up useless data promptly
For example, if one redis instance carries data for three businesses and two of them are later taken offline, the data belonging to those two businesses can be cleaned out.
4. Compress data where possible
For long-text data in particular, compression can dramatically reduce the memory footprint.
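A minimal sketch of the idea using Python's standard zlib module, compressing a value client-side before it would be stored (the payload here is made up for illustration):

```python
import json
import zlib

# A hypothetical long-text payload of the kind the article mentions.
doc = json.dumps({"id": 1, "body": "lorem ipsum dolor sit amet " * 200})
raw = doc.encode("utf-8")

compressed = zlib.compress(raw, level=6)
ratio = len(compressed) / len(raw)
print(f"raw: {len(raw)} B, compressed: {len(compressed)} B ({ratio:.0%})")

# Reading it back is symmetric:
assert zlib.decompress(compressed).decode("utf-8") == doc
```

The trade-off is CPU on the client for memory on the server, which is usually a good deal for rarely-read long values.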
5. Watch memory growth and locate oversized keys
Whether you are a DBA or a developer, if you use redis you must keep an eye on its memory; otherwise you are simply not doing your job. Analyze which keys in an instance are unusually large to help the business quickly locate abnormal keys: an unexpectedly growing key is very often the source of the problem.
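Against a live instance you would use `redis-cli --bigkeys` or the `MEMORY USAGE` command for this. The ranking logic itself is simple; here is a sketch simulated on a plain dict (the keyspace and the `approx_size` helper are made up, since no server is involved):

```python
import sys

# Simulated keyspace; with a real instance you would iterate with SCAN instead.
keyspace = {
    "user:1:profile": "x" * 120,
    "feed:hot": ["item"] * 5000,   # an unexpectedly large list
    "counter:views": 42,
}

def approx_size(value):
    # Crude stand-in for what MEMORY USAGE reports per key.
    if isinstance(value, (list, set, tuple)):
        return sum(sys.getsizeof(v) for v in value) + sys.getsizeof(value)
    return sys.getsizeof(value)

biggest = sorted(keyspace, key=lambda k: approx_size(keyspace[k]), reverse=True)
print("largest keys first:", biggest)
```

Run periodically and graphed over time, this kind of ranking is what turns "memory keeps growing" into "key feed:hot keeps growing".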
6. pika
If you really don't want to fight this battle, migrate the business to the open-source pika, a disk-backed store that speaks the redis protocol. Then you no longer need to watch memory so closely, and the problems caused by oversized redis memory stop being problems.
That is all for "what to do when Redis memory is too large". I believe you now have a reasonable picture of it, and I hope the content shared here is helpful to you.