2025-02-25 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report --
This article walks through common Redis interview questions. The explanations are short and practical, so if you are interested, read on.
1. What is a cache avalanche? How to solve it?
Usually we put a cache in front of the database to absorb load. If the cache goes down, every request falls through directly to the DB, which can bring the DB down and, with it, the entire system.
How to solve the problem?
Two strategies (used together):
Make the cache highly available, so the cache itself does not go down.
Use a circuit breaker: if the cache is down, rate-limit the traffic allowed through to the DB so that part of it is still served, and have the remaining requests immediately return the circuit breaker's default (fallback) value.
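The breaker idea above can be sketched as follows. This is a minimal illustration, not a production circuit breaker: a dict stands in for the database, and a bounded semaphore caps how many requests may reach the DB at once while the cache is down; everyone else gets the fallback value. All names here are made up for the example.

```python
import threading

# Stand-in for the real database.
DB = {"user:1": "alice"}

# At most 2 concurrent requests may hit the DB while the cache is down.
db_permits = threading.BoundedSemaphore(2)

def get_with_breaker(key, fallback=None):
    # The cache is assumed to be down, so we go straight to the breaker.
    if db_permits.acquire(blocking=False):
        try:
            return DB.get(key)   # limited traffic still reaches the DB
        finally:
            db_permits.release()
    return fallback              # shed load: return the default value

print(get_with_breaker("user:1", fallback="guest"))  # alice
```

In a real system the permit count would be tuned to what the DB can tolerate, and the fallback would be whatever degraded response the business can accept.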
2. What is cache penetration? How to solve it?
Explanation 1 (cache penetration): a query asks for a key that exists neither in the cache nor in the database. If an attacker floods the system with such queries, every one of them falls through to the DB and can bring it down.
Solution: cache a placeholder. When a non-existent key is requested, we query the database; if it is not there either, we store a placeholder in the cache. The next request for the same key finds the placeholder and never reaches the database, protecting the DB from going down.
Explanation 2 (hotspot key expiry): a burst of requests all query the same just-expired key at once. Every request runs the same query against the DB, increasing its load and possibly bringing it down, even though they are all after the same data.
Solution: wrap the rebuild in double-checked locking so only one request recomputes the value while the others wait and then read it from the cache. Requests during that window are slower, but slower is better than a DB outage.
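A sketch of double-checked locking for a hot key, assuming a dict as the cache and a hypothetical `load_from_db` loader; only one of the concurrent callers actually rebuilds the value.

```python
import threading

cache = {}
lock = threading.Lock()
rebuilds = 0   # how many times the expensive DB load actually ran

def load_from_db(key):
    global rebuilds
    rebuilds += 1
    return "value-for-" + key

def get(key):
    val = cache.get(key)
    if val is not None:              # first check, without the lock
        return val
    with lock:
        val = cache.get(key)         # second check, under the lock
        if val is None:
            val = load_from_db(key)  # only one caller pays this cost
            cache[key] = val
        return val

threads = [threading.Thread(target=get, args=("hot",)) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(rebuilds)  # 1
```

Across multiple application servers a process-local lock is not enough; the same pattern is then built with a distributed lock (for example a Redis SETNX-based lock).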
3. What is cache concurrency competition? How to solve it?
Explanation: multiple clients write the same key concurrently. If the writes land in the wrong order, the final value is wrong, and the clients cannot control that order themselves.
Solution: use a distributed lock (ZooKeeper, for example) and attach a timestamp to each piece of data. Only the client holding the lock may write, and before writing it compares its data's timestamp with the timestamp of the value already in the cache, writing only if its data is newer.
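The timestamp guard can be sketched like this. A plain `threading.Lock` stands in for the distributed lock and a dict for Redis; in reality the lock would come from ZooKeeper or a Redis-based lock, and the timestamp would travel with the data.

```python
import threading

cache = {}                 # key -> (timestamp, value)
lock = threading.Lock()    # stand-in for a distributed lock

def write(key, value, ts):
    with lock:                        # only the lock holder may write
        current = cache.get(key)
        if current is None or ts > current[0]:
            cache[key] = (ts, value)  # newer data wins
        # otherwise the incoming write is stale and is dropped

write("k", "old", ts=100)
write("k", "new", ts=200)
write("k", "stale", ts=150)   # arrives last, but carries an older timestamp
print(cache["k"][1])          # new
```

The key point is the comparison, not the lock: even if a delayed write arrives after a newer one, its older timestamp prevents it from overwriting fresher data.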
4. What is the double write inconsistency between cache and database? How to solve it?
Explanation: a write must touch both the database and the cache; if two such operations interleave, the cache and the database can end up holding different values.
Typically, the cache and database are updated in one of the following orders:
Update the database before updating the cache.
Delete the cache before updating the database.
Update the database before deleting the cache.
Let's take a look at the advantages and disadvantages of the three ways:
Update the database before updating the cache.
The problem: when two requests update the same data concurrently, without a distributed lock you cannot control which value lands in the cache last, so concurrent writes can leave the wrong value cached.
Delete the cache before updating the database.
The problem: after the cache is deleted but before the database is updated, a reader can fetch the old value from the database and set it back into the cache, so the cache keeps serving stale data.
There are two solutions:
Use "double delete", that is, delete, delete
Use a queue: when the key is missing, requests for it are queued and executed serially, so no one can read the data until the database update has completed.
Both approaches are fairly cumbersome in practice.
Update the database before deleting the cache
This is the commonly used scheme, known as the Cache Aside Pattern, though many people use it without knowing the name. Its weakness: between the database update and the cache deletion there is a brief window during which readers still see the stale cached value.
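The Cache Aside Pattern itself is small enough to sketch in full, again with dicts standing in for Redis and the database: reads populate the cache on a miss, and writes update the database first and then delete (not update) the cache entry.

```python
cache, db = {}, {"user:1": "alice"}

def read(key):
    if key in cache:
        return cache[key]     # cache hit
    val = db.get(key)         # miss: load from the database...
    if val is not None:
        cache[key] = val      # ...and populate the cache
    return val

def write(key, value):
    db[key] = value           # update the database first...
    cache.pop(key, None)      # ...then delete the cache entry

read("user:1")                # populates the cache
write("user:1", "bob")        # evicts the now-stale entry
print(read("user:1"))         # bob
```

Deleting rather than updating the cache on write is deliberate: the next reader reloads the fresh value from the database, which avoids the concurrent-update ordering problem of the update-cache variants.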
There is also a rarer race: if the cached value expires just before the update, a reader can fetch the old value from the database, and then set that old value back into the cache after the writer's delete.
Two conditions must coincide for this: the cache entry expires right before the write, and the reader's set lands after the writer's delete, meaning the read took longer than the entire write. Since writes usually also lock the row or table, this is unlikely.
So it is hard to trigger, but what if it does happen? Use delayed double delete: record whether any client read from the database during the update and, if so, delete the cache again after a short delay once the database update completes.
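A minimal sketch of delayed double delete, with dicts standing in for Redis and the database; the delay value is an illustrative placeholder, not a recommendation (in practice it is chosen to exceed a read's worst-case latency).

```python
import threading, time

cache, db = {"k": "old"}, {"k": "old"}

def write(key, value, delay=0.05):
    cache.pop(key, None)        # first delete
    db[key] = value             # update the database
    def second_delete():
        time.sleep(delay)       # wait out any in-flight stale readers
        cache.pop(key, None)    # second delete removes a stale re-fill
    threading.Thread(target=second_delete).start()

write("k", "new")
cache["k"] = "old"   # simulate a slow reader re-filling stale data
time.sleep(0.1)      # after the delay, the stale entry has been removed
print(cache.get("k"))  # None
```

The second delete is what repairs the race: even if a slow reader put the old value back, it is evicted shortly afterwards and the next read reloads the new value from the database.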
At this point, you should have a deeper understanding of these common Redis interview questions. The best next step is to try them out in practice.