This article explains the different levels of MySQL random recovery. The approach described is simple, fast, and practical, so interested readers may want to follow along and learn what the levels of MySQL random recovery look like.
In practice, MySQL data recovery is often neglected. In most cases the backup and recovery system is built in one go: once the initial setup looks complete, it is left as-is and rarely exercised or tested. Only when you actually need to restore something do you discover that it does not work as well as expected. At that point recovery becomes very expensive, and in the worst case it can be career-ending.
So, for data recovery, we built a dedicated capability called random recovery. It implements two functions: recovery from a backup set and point-in-time recovery. Backup-set recovery is relatively simple: whatever was backed up must be restorable as-is. Point-in-time recovery is more involved; for example, to restore the database to 10:00:00 we need recovery that is accurate to the second. We go one step further and generate a random target time, then ask the service to restore to that exact point. About 10 such tasks run every day, with instances selected at random from the service group.
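To illustrate the "random" part, here is a minimal sketch of how a daily task generator might pick a random instance and a second-accurate random recovery target. The instance names and the `pick_random_task` helper are illustrative assumptions, not the author's actual code.

```python
import random
from datetime import datetime, timedelta

# Hypothetical inventory of instances in the service group.
INSTANCES = ["order_db_01", "order_db_02", "user_db_01", "pay_db_01"]

def pick_random_task(now: datetime) -> dict:
    """Pick a random instance and a random, second-accurate recovery target
    within the last 24 hours (the window assumed to be covered by backups
    and binlogs)."""
    instance = random.choice(INSTANCES)
    offset = timedelta(seconds=random.randint(0, 24 * 3600 - 1))
    target_time = (now - offset).replace(microsecond=0)
    return {"instance": instance, "target_time": target_time}

if __name__ == "__main__":
    for _ in range(10):          # roughly 10 drill tasks per day
        print(pick_random_task(datetime.now()))
```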
After a period of tuning and acceptance testing, the success rate rose from roughly 50% to about 93%. My original target was two nines (99%). That target has been on the table for a while now, and practice shows that reaching it takes real cost and painstaking effort; it is far harder than it looks.
To that end, I have defined four levels for random recovery, which can serve as a reference.
Level 1: random sampling + single-machine recovery
The idea at this level is very simple: randomly select an instance from the service group and restore it onto a designated recovery machine. If the restored database starts normally, the task is marked as a success; if it fails to start because of parameter incompatibilities, version differences, space bottlenecks, plug-in problems, and so on, it is marked as a failure.
Of course, the drawbacks of this model are also obvious. With purely random selection, the most awkward outcomes are that the same instance gets picked repeatedly, or that several large instances land together and put so much pressure on the recovery process that it fails. In addition, the recovery machine itself becomes a bottleneck: cross-datacenter traffic and disk-space limits make it hard for a single recovery machine to support more demanding targets, which is the main reason it was hard to get past even one nine in the early stage.
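The success/failure decision at this level can be as simple as "does mysqld come up after the restore". Below is a minimal sketch of such a check, assuming some restore step has already laid down the data directory and started the server; the host name is a placeholder, and a real drill would of course also verify the data itself.

```python
import socket
import time

def wait_for_mysqld(host: str, port: int, timeout: int = 600) -> bool:
    """Return True if a MySQL server starts accepting TCP connections on
    host:port within `timeout` seconds, False otherwise. This is only a
    liveness probe; it does not validate the restored data."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            time.sleep(10)   # not up yet; retry
    return False

# Marking the drill result (hypothetical recovery host and default port):
status = "success" if wait_for_mysqld("recovery-host-01", 3306) else "failure"
print(f"random recovery drill: {status}")
```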
Level 2: random sampling + multi-IDC load balancing
This idea is easy to put into practice and has clear advantages. Recovery tasks that used to land on a single machine can now be assigned at random across different IDCs, which is a big improvement in terms of cross-datacenter traffic consumption. It also raises the throughput of random recovery considerably: where we could previously run about 10 random recovery tasks, adding another 15 becomes easy.
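One possible way to implement the load-balancing part is sketched below: prefer a recovery node in the same IDC as the source instance (to avoid cross-datacenter traffic) and, among the candidates, pick the least busy one. The node names and the `choose_recovery_node` helper are illustrative assumptions, not the author's implementation.

```python
from collections import Counter

# Hypothetical pool of recovery nodes, keyed by IDC.
RECOVERY_NODES = {
    "idc-a": ["rec-a-01", "rec-a-02"],
    "idc-b": ["rec-b-01"],
    "idc-c": ["rec-c-01", "rec-c-02"],
}

def choose_recovery_node(instance_idc: str, running: Counter) -> str:
    """Prefer a recovery node in the same IDC as the instance; within the
    candidate set, pick the node with the fewest running recovery tasks."""
    candidates = RECOVERY_NODES.get(instance_idc) or [
        node for nodes in RECOVERY_NODES.values() for node in nodes
    ]
    return min(candidates, key=lambda node: running[node])

# Example: a task for an instance living in idc-b.
running_tasks = Counter({"rec-b-01": 2, "rec-a-01": 0})
print(choose_recovery_node("idc-b", running_tasks))   # -> rec-b-01
```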
Level 3: random policy-based scheduling + multi-IDC load balancing
This is the key stage: I believe there is still plenty of room for improvement here, and it is the stage that can be iterated toward two nines. It can be approached from the following aspects:
1) The recovery server uses a multi-version, plug-in style deployment. No default database version is assumed on the recovery server; every version variant lives in its own plug-in directory, so a recovery server can be built quickly and recovery becomes more scalable.
2) Customize delayed startup according to the storage and hardware configuration of the recovery server. Some servers have better CPUs and the database starts faster, while on others startup is noticeably slower; making the startup delay configurable avoids some awkward problems during database startup.
3) Schedule the recovery of large instances onto designated servers to save resource costs. For example, if an instance is about 800 GB, the recovery machine needs roughly 900 GB of space, but not every recovery server has to provide 900 GB; such large instances are rare, and a general configuration of around 500 GB is usually sufficient.
4) Reduce the scheduling frequency of large instances as much as possible. If an instance is large and its recovery is expensive, we can lower its recovery priority, as long as it still gets recovered effectively.
5) Instances that have never been restored should be scheduled first. If there are 1000 instances and, after a long period, the drills still have not covered most of them, then something is wrong with the design of random recovery; the instances that never get scheduled need to be taken care of.
6) Make the scheduling flexible. Small instances recover much faster, so we naturally recover them more often; when a large instance is selected, we can throttle it in terms of duration and count (see the scheduling sketch after this list).
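To make points 3) through 6) concrete, here is a minimal scheduling sketch under two assumptions added purely for illustration: never-restored instances are drained first, and otherwise the pick is weighted inversely to instance size so large, expensive instances are scheduled less often. The `Instance` structure and the inventory values are hypothetical.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Instance:
    name: str
    size_gb: int
    last_restored: Optional[str]  # None if never covered by a drill

def pick_instance(instances: list[Instance]) -> Instance:
    """Weighted random pick: instances never restored come first; otherwise
    smaller instances get a higher weight so large instances are picked
    less often."""
    never_restored = [i for i in instances if i.last_restored is None]
    pool = never_restored or instances
    weights = [1.0 / max(i.size_gb, 1) for i in pool]
    return random.choices(pool, weights=weights, k=1)[0]

# Hypothetical inventory.
inventory = [
    Instance("order_db_01", 120, "2024-05-01"),
    Instance("pay_db_01", 800, None),        # never restored: goes first
    Instance("user_db_01", 60, "2024-05-20"),
]
print(pick_instance(inventory).name)          # -> pay_db_01
```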
Level 4: hypothesis testing based on statistical models
Building on Level 3, and assuming two nines have been reached, the fourth level turns recovery into a more general problem. Since it is impossible to restore the full set of instances, hypothesis testing based on statistical models can be applied: the goal is to evaluate and draw conclusions about overall recovery quality from a valid sample. The theory involved is not particularly deep; it is simply a different way of thinking about how to assess recovery quality.
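As one illustration of this idea (not the author's actual model), an exact one-sided binomial test can be run on a sample of drill results to ask whether the data are consistent with a success rate of at least 99%. The sample numbers in the example below are made up.

```python
from math import comb

def binom_tail_le(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def test_recovery_rate(successes: int, trials: int,
                       target: float = 0.99, alpha: float = 0.05) -> str:
    """One-sided exact binomial test.
    H0: the true recovery success rate is at least `target`.
    If observing this few successes is very unlikely under H0, reject H0
    and conclude the rate is probably below the target."""
    p_value = binom_tail_le(successes, trials, target)
    if p_value < alpha:
        return f"p={p_value:.4f}: reject H0, rate likely below {target:.0%}"
    return f"p={p_value:.4f}: cannot reject H0 at alpha={alpha}"

# Hypothetical sample: 291 successful drills out of 300 (~97%).
print(test_recovery_rate(291, 300))
```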
At this point, you should have a deeper understanding of the levels of MySQL random recovery; the best way to consolidate it is to try it in practice. For more related content, check the relevant channels on this site and keep learning!