2025-01-19 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 11/24 Report -- This article comes from the WeChat official account Programming Technology Universe (ID: xuanyuancoding), author: Xuanyuan Wind O.
I'm Redis
Hello, I'm Redis. A man named antirez brought me into this world.
Speaking of my birth, it has a lot to do with the relational database MySQL.
Before I came along, MySQL had a hard time. The Internet grew faster and faster, it held more and more data, user requests skyrocketed, and nearly every request meant read after read and write after write against it. MySQL was miserable, especially on national shopping carnivals like "Double 11" and "618" -- those were its days of suffering.
As MySQL told me later, more than half of all user requests are reads, and users often query the same things over and over, wasting a lot of time on disk I/O.
Later, someone wondered whether databases could borrow an idea from the CPU and put a cache in front. And so I was born!
Soon after I was born, MySQL and I became good friends, and we often appeared on back-end servers hand in hand.
Applications can register the data they query from MySQL with me. Later, when they need it again, they ask me first, and only go back to MySQL if I don't have it.
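The read path just described is the classic cache-aside pattern. Here is a minimal Python sketch of it: a plain dict stands in for me, and `query_mysql()` is a hypothetical stand-in for a real database call.

```python
# Cache-aside sketch: the dict plays the role of Redis, and
# query_mysql() is a made-up stand-in for an expensive SQL query.
cache = {}

def query_mysql(key):
    # Pretend this hits disk and takes a long time.
    return f"row-for-{key}"

def get(key):
    # Ask the cache first; fall back to MySQL only on a miss,
    # then register the result with the cache for next time.
    if key in cache:
        return cache[key]
    value = query_mysql(key)
    cache[key] = value
    return value
```

After the first `get("user:1")` populates the cache, every later lookup for the same key is served from memory without touching `query_mysql()` at all.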
For ease of use, I support storage of several data structures:
String
Hash
List
Set
SortedSet
Bitmap
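As a quick illustration, here is one redis-cli command per structure. This is an annotated transcript assuming a running Redis server, not a paste-runnable script; the trailing `#` comments are for the reader, not valid redis-cli syntax.

```
SET  page:title "Hello Redis"       # String
HSET user:1 name "Tom" age "18"     # Hash
LPUSH tasks job1 job2               # List
SADD tags redis cache db            # Set
ZADD rank 100 alice 95 bob          # Sorted Set
SETBIT online:today 7 1             # Bitmap
```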
Because I keep all registered data in memory and never have to perform snail-paced disk I/O, looking something up in me is far faster than asking MySQL.
Don't underestimate this simple change -- I can really lighten MySQL's burden! As the program runs, I cache more and more data and intercept a considerable share of the user requests.
With my participation, web service performance improved a great deal, largely because I took the bullets for the database.
Cache Expiration & Cache Eviction
But soon I found that something was wrong. The data I cached lived in memory, and memory, even on a server, is a limited resource. I couldn't keep hoarding like this; at that rate, sooner or later I'd be done for.
Soon I came up with an idea: give cached entries a timeout, and leave it to the applications to decide how long. All I have to do is delete expired entries in time to free up room.
With timeouts in place, when should I do this cleanup work?
The simplest approach is to delete on a schedule. I decided on once every 100 ms -- ten times a second!
But when cleaning, I can't delete every expired key in one breath. I hold a lot of data, and a full scan could take who knows how long, seriously delaying my handling of new client requests!
Time is tight and the task is heavy, so each round I randomly sample some of the keys and clean those up, as long as that relieves the memory pressure.
After a while of this, I noticed that some keys were lucky: my random sampling never picked them, and they survived round after round. That's no good -- these long-expired entries kept occupying precious memory!
I couldn't let that slide! So on top of the periodic deletion, I added another move:
For any key that escaped my random sampling, the moment a query request touches it and I find it has expired, I show no mercy and delete it on the spot.
Because this is passively triggered -- it never happens without a query -- it's called lazy deletion!
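The two cleanup strategies above -- periodic random sampling plus lazy deletion on read -- can be sketched together in a few lines of Python. This is a toy model under my own made-up names, not Redis's actual C implementation.

```python
import random
import time

store = {}    # key -> value
expires = {}  # key -> absolute expiry timestamp (monotonic clock)

def set_with_ttl(key, value, ttl):
    store[key] = value
    expires[key] = time.monotonic() + ttl

def expired(key):
    return key in expires and time.monotonic() >= expires[key]

def periodic_sweep(sample_size=20):
    # Runs on a timer (Redis does this roughly 10x/second):
    # check a random sample of keys that carry a TTL and drop
    # the expired ones, rather than scanning everything.
    keys = random.sample(list(expires), min(sample_size, len(expires)))
    for key in keys:
        if expired(key):
            store.pop(key, None)
            expires.pop(key, None)

def get(key):
    # Lazy deletion: if a query touches an expired key,
    # delete it on the spot and report a miss.
    if expired(key):
        store.pop(key, None)
        expires.pop(key, None)
        return None
    return store.get(key)
```

Sampling keeps each sweep cheap at the cost of letting some expired keys linger, which is exactly why the lazy check in `get()` is needed as a backstop.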
Still, some keys escaped both my random sampling and any queries, remaining at large indefinitely while my available memory kept shrinking.
And even in the best case, where I managed to delete every expired entry in time, memory could still fill up before anything expired if the expiration times were set long -- so I had to think of something else.
I thought hard for a long time and finally came up with a big move: memory eviction policies. This time I wanted to solve the problem for good!
I provide eight policies to choose from, so a decision can be made when memory runs out:
noeviction: return an error and delete nothing
allkeys-lru: use the LRU algorithm to evict the least recently used key, considering all keys
volatile-lru: use the LRU algorithm to evict the least recently used key, considering only keys with an expiration set
allkeys-random: evict a random key, considering all keys
volatile-random: evict a random key, considering only keys with an expiration set
volatile-ttl: evict the key with the shortest remaining time to live, among keys with an expiration set
volatile-lfu: evict the least frequently used key, among keys with an expiration set
allkeys-lfu: evict the least frequently used key, considering all keys
With the strategies above to choose from, I no longer have to worry about stale data filling up all my space.
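The policy is chosen in the server configuration. A minimal redis.conf fragment might look like this (the 100 MB cap is just an example value):

```
# redis.conf: cap memory use at 100 MB and, when the cap is hit,
# evict the least recently used key from the whole keyspace.
maxmemory 100mb
maxmemory-policy allkeys-lru
```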
Cache Penetration & the Bloom Filter
My life was quite comfortable, but Brother MySQL was not as lucky. Sometimes annoying requests came in for data that simply didn't exist, and MySQL did its work in vain! Worse, since the data didn't exist, I had nothing to cache, so every time the same request arrived, MySQL had to repeat the futile work, and my value as a cache went unrealized. This is what people call cache penetration.
After this happened a few times, Brother MySQL couldn't help saying: "Oh, brother, can you find a way to block the queries that I already know will return nothing?"
Then I thought of another good friend of mine: the Bloom filter.
This friend has no other talent, but is remarkably good at quickly telling you whether an item exists in a very large data set. (A quiet word of warning: this friend is a little unreliable. When it says something exists, you can't fully believe it -- it might not exist after all; but when it says something doesn't exist, it definitely doesn't.)
If you're curious about this friend, you can read my earlier plain-language introduction to the Bloom filter.
I introduced this friend to the application. Queries for data that definitely doesn't exist no longer had to bother MySQL, which neatly solved the cache penetration problem.
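To show the idea, here is a tiny, self-contained Bloom filter sketch in Python. Salted SHA-1 digests stand in for proper independent hash functions, and the parameters are illustrative, not tuned.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: m bits, k salted SHA-1 hash positions.
    "Maybe present" can be a false positive; "absent" is certain."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0  # a big int used as a bit array

    def _positions(self, item):
        # Derive k bit positions by hashing the item with k salts.
        for salt in range(self.k):
            digest = hashlib.sha1(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # All k bits set -> maybe present; any bit clear -> absent.
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

An application would call `might_contain()` before querying the cache or MySQL: a "definitely absent" answer lets it reject the request immediately, which is exactly how penetration gets blocked.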
Cache Breakdown & Cache Avalanche
We had a period of peace... until that day.
Once, Brother MySQL was idling away leisurely when a flood of requests suddenly hit him, catching him completely off guard.
After a flurry of work, MySQL came to me angrily: "Brother, what's going on? Why did things suddenly get so fierce?"
I checked the logs and hurried to explain: "Brother, I'm really sorry. A piece of hot data expired just now, so I deleted it. Unfortunately, a flood of queries for exactly that data arrived right afterwards, and since I no longer had it, they all went straight to you."
"What were you thinking? Be more careful next time," Brother MySQL said, and left looking unhappy.
I didn't think much of this little incident and soon put it behind me -- never expecting to make an even bigger mess a few days later.
On that day, a flood of requests hit MySQL, far larger than the last time, and Brother MySQL was knocked down several times in short order.
It took a long while for that wave of traffic to pass before MySQL recovered.
"Brother, what was the reason this time?" Brother MySQL asked, exhausted.
"This time was even worse luck than before: a large batch of data expired at almost the same moment, and then came a huge number of requests for exactly that data, so the flood was bigger than last time."
Brother MySQL frowned. "Then figure something out! Getting tortured like this every couple of days, who can stand it?"
"Honestly, I'm helpless here too -- I don't set these expiration times. How about we talk to the application and ask it to spread the cache expiration times out evenly? At the very least, don't let a big batch of data expire all at once."
"come on, let's go together."
Later, we went to the application to discuss it. We not only randomized the keys' expiration times but also set hot data to never expire, which alleviated the problem a lot. Oh, by the way, we also gave these two problems names: cache breakdown and cache avalanche.
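The mitigation we agreed on can be sketched like this in Python. `BASE_TTL`, `JITTER`, and the hot-key set are made-up values for illustration.

```python
import random

BASE_TTL = 3600   # nominal one-hour cache lifetime (seconds)
JITTER = 600      # up to ten extra minutes, chosen per key

def ttl_with_jitter():
    # Spread expirations out so keys cached in the same batch do
    # not all expire in the same instant (the avalanche trigger).
    return BASE_TTL + random.randint(0, JITTER)

HOT_KEYS = {"home:banner"}  # hypothetical always-hot data

def ttl_for(key):
    # Hot keys never expire (None); everything else gets a
    # jittered TTL.
    return None if key in HOT_KEYS else ttl_with_jitter()
```

Jitter spreads a batch of simultaneous cache fills across a ten-minute window, while exempting hot keys from expiry prevents the single-key breakdown case.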
We finally lead a comfortable life again.
Easter Egg
One day, while I was working hard, I slipped up and my whole process crashed.
When I restarted, all the previously cached data was gone, and the storm of requests once again landed squarely on Brother MySQL.
Ah, if only I had a way to remember what I had cached before the crash...
To find out what happens next, stay tuned for the sequel.