
The difference between Redis and Memcached


If you simply compare Redis and Memcached, most people will arrive at the following points:

1. Redis not only supports simple key-value data, but also provides storage for data structures such as lists, sets and hashes (see the sketch after this list).

2. Redis supports data backup, i.e. data replication in master-slave mode.

3. Redis supports data persistence: data in memory can be kept on disk and loaded again after a restart.
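
To illustrate point 1, here is a minimal sketch using the redis-py client (the connection parameters and key names are made up for the example); Memcached, by contrast, only offers flat key-value get/set:

import redis

# Assumes a local Redis instance; host, port and key names are illustrative only.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Plain key-value, the only model Memcached offers
r.set("page:home", "<html>...</html>")

# List: push recent visitor IDs and read back the latest five
r.lpush("recent:visitors", "u1", "u2", "u3")
print(r.lrange("recent:visitors", 0, 4))

# Set: track unique tags
r.sadd("article:42:tags", "redis", "memcached", "cache")
print(r.smembers("article:42:tags"))

# Hash: store an object's fields without serializing the whole value
r.hset("user:1000", mapping={"name": "alice", "visits": "1"})
print(r.hgetall("user:1000"))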

In Redis, not all data is kept in memory all the time; this is (in my opinion) the biggest difference from Memcached. Redis only guarantees that all key information is cached in memory. When it finds that memory usage exceeds a certain threshold, it triggers a swap operation, using the formula swappability = age * log(size_in_memory) to decide which keys' values should be swapped to disk. The values of those keys are then persisted to the swap file and freed from memory. This lets Redis hold more data than the machine's physical memory, although memory must still be large enough to hold all the keys, since keys are never swapped out. When Redis swaps in-memory data to disk, the main thread serving requests and the child thread performing the swap share that piece of memory, so if a value that is being swapped needs to be updated, Redis blocks the operation until the child thread finishes the swap.
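
As a rough illustration of that heuristic (this is not Redis's actual implementation; the keys, ages and sizes below are invented), the following sketch ranks keys by the swappability score described above:

import math

def swappability(age_seconds, size_in_memory):
    # Score from the text: swappability = age * log(size_in_memory).
    # Larger and longer-idle values score higher and are swapped to disk first.
    return age_seconds * math.log(size_in_memory)

# Hypothetical bookkeeping: (key, seconds since last access, value size in bytes)
keys = [
    ("session:1", 5, 128),              # small and recently used -> keep in memory
    ("report:2023", 3600, 2_000_000),   # large and idle -> good swap candidate
    ("counter:hits", 1, 8),
]

# Swap candidates in descending score order until memory drops below the threshold.
for key, age, size in sorted(keys, key=lambda k: swappability(k[1], k[2]), reverse=True):
    print(key, round(swappability(age, size), 2))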

For reference, here is a comparison of memory usage before and after enabling the Redis VM:

VM off: 300k keys, 4096 bytes values: 1.3G used

VM on: 300k keys, 4096 bytes values: 73M used

VM off: 1 million keys, 256 bytes values: 430.12M used

VM on: 1 million keys, 256 bytes values: 160.09M used

VM on: 1 million keys, values as large as you want, still: 160.09M used

When reading data from Redis, if the value for the requested key is not in memory, Redis has to load it from the swap file before returning it to the requester, which raises the issue of the I/O thread pool. By default Redis blocks: it does not respond until the value has been loaded from the swap file. This strategy suits batch operations with a small number of clients, but if Redis is used in a large website application it clearly cannot cope with high concurrency. So when running Redis we set the size of the I/O thread pool, so that read requests that need to load data from the swap file are handled concurrently and blocking time is reduced.
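
For reference, these are the VM-related directives from the redis.conf of that era (the VM feature was later removed from Redis, so this is of historical interest; the values shown are just the old defaults with VM switched on). vm-max-threads is the I/O thread pool size discussed above:

# Historical redis.conf directives for the virtual memory feature
vm-enabled yes
vm-swap-file /tmp/redis.swap    # file that receives swapped-out values
vm-max-memory 0                 # RAM budget before values are swapped (0 = swap as much as possible)
vm-page-size 32                 # bytes per swap-file page
vm-pages 134217728              # number of pages in the swap file
vm-max-threads 4                # size of the I/O thread pool; 0 makes swap I/O fully blocking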

Comparison of Redis, Memcache and MongoDB

Redis, Memcache and MongoDB are compared along the following dimensions; criticism and corrections are welcome.

1. Performance

All three perform well, so performance should not be the bottleneck for us.

Generally speaking, in terms of TPS, Redis and Memcache are about the same, and both are higher than MongoDB.

2. Convenience of operation

Memcache has only a single, flat key-value data structure.

Redis is richer in data types; for data manipulation Redis has the edge and needs fewer network round trips (see the sketch below).

MongoDB supports rich document structures and indexes, is the closest of the three to a relational database, and offers a very expressive query language.
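
As a small illustration of the "fewer network round trips" point (the clients and key names below are assumed for the example): with Memcached, updating one field of a cached object means fetching, modifying and rewriting the whole value, while Redis can mutate a single hash field server-side in one command:

import json
import redis
from pymemcache.client.base import Client as McClient

r = redis.Redis(decode_responses=True)
mc = McClient(("localhost", 11211))

# Memcached: read-modify-write of the whole serialized object (two round trips)
raw = mc.get("user:1000") or b"{}"
user = json.loads(raw)
user["visits"] = user.get("visits", 0) + 1
mc.set("user:1000", json.dumps(user))

# Redis: one atomic server-side operation on a single hash field
r.hincrby("user:1000", "visits", 1)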

3. The size of memory space and the amount of data

Redis added its own VM feature after version 2.0, which breaks through the physical memory limit; you can also set an expiration time on a key (similar to Memcache), as shown in the sketch after this list.

Memcache lets you configure the maximum available memory and evicts entries with an LRU algorithm.

MongoDB is suited to storing large volumes of data; it relies on the operating system's virtual memory for memory management and is quite memory-hungry, so it should not be deployed on the same machine as other services.
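
A minimal sketch of setting a per-key expiration in both systems (redis-py and pymemcache; the key name and the 60-second TTL are just examples):

import redis
from pymemcache.client.base import Client as McClient

r = redis.Redis(decode_responses=True)
mc = McClient(("localhost", 11211))

# Redis: the key disappears 60 seconds after being set
r.set("session:abc", "payload", ex=60)
print(r.ttl("session:abc"))   # remaining lifetime in seconds

# Memcached: equivalent per-item expiration
mc.set("session:abc", "payload", expire=60)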

4. Availability (single point of problem)

Regarding the single point of failure problem:

Redis relies on the client to distribute reads and writes. With master-slave replication, every time a slave reconnects to the master it pulls a full snapshot rather than an incremental update, which is a performance and efficiency problem.

The single-point problem is therefore more complex to handle; Redis does not support automatic sharding, so the application has to implement a consistent hashing mechanism itself.

One alternative is to do your own active replication (storing multiple copies) instead of using Redis's built-in replication, or to switch to incremental replication (which you have to implement yourself); either way it is a trade-off between consistency and performance.

Memcache itself has no data redundancy mechanism, nor does it need one; to guard against failures, a mature consistent hashing (hash ring) algorithm is used to limit the churn caused by a single node going down (see the sketch after this list).

MongoDB supports master-slave replication, replica sets (with an internal Paxos-style election algorithm and automatic fault recovery) and auto-sharding, shielding the client from failover and partitioning.
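
A rough sketch of the client-side consistent hashing referred to above (node names and the 100 virtual nodes per server are arbitrary choices for the example):

import bisect
import hashlib

class HashRing:
    # Minimal consistent-hash ring: a key maps to the nearest node clockwise.
    def __init__(self, nodes, vnodes=100):
        self.ring = {}          # ring position -> node name
        self.sorted_keys = []   # sorted ring positions
        for node in nodes:
            for i in range(vnodes):
                pos = self._hash(f"{node}#{i}")
                self.ring[pos] = node
                bisect.insort(self.sorted_keys, pos)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def get_node(self, key):
        pos = self._hash(key)
        idx = bisect.bisect(self.sorted_keys, pos) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]

ring = HashRing(["cache-a:6379", "cache-b:6379", "cache-c:6379"])
print(ring.get_node("user:1000"))   # only ~1/N of keys move when a node is added or removed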

5. Reliability (persistence)

Regarding data persistence and data recovery:

Redis supports persistence (RDB snapshots and AOF): it relies on snapshots for persistence, while AOF improves reliability at some cost to performance (see the sketch after this list).

Memcache does not support persistence; it is usually used purely as a cache to improve performance.

MongoDB has supported a binlog-style journal since version 1.8 to make persistence reliable.
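
A minimal sketch of adjusting Redis persistence at runtime with redis-py (whether CONFIG SET is allowed depends on your server settings, and the snapshot rule shown is only an example):

import redis

r = redis.Redis()

# Enable the append-only file for better durability
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")   # fsync once per second: durability vs. speed trade-off

# RDB snapshot rule: save if at least 1 key changed within 900 seconds
r.config_set("save", "900 1")

# Trigger an immediate snapshot in the background
r.bgsave()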

6. Data consistency (transaction support)

Memcache uses CAS (check-and-set) to ensure consistency in concurrent scenarios.

Redis's transaction support is relatively weak: it can only guarantee that each command in a transaction is executed consecutively (see the sketch after this list).

MongoDB does not support transactions
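
A small sketch of both mechanisms (redis-py and pymemcache; key names are illustrative). Redis queues commands with MULTI/EXEC via a pipeline, while Memcached's gets/cas rejects the write if another client changed the value in between:

import redis
from pymemcache.client.base import Client as McClient

# Redis: MULTI/EXEC via a transactional pipeline - commands run back to back
r = redis.Redis(decode_responses=True)
pipe = r.pipeline(transaction=True)
pipe.decr("stock:item42", 1)
pipe.rpush("orders", "order:123")
pipe.execute()   # both commands execute consecutively; there is no rollback on logical errors

# Memcached: optimistic concurrency with gets/cas
mc = McClient(("localhost", 11211))
mc.set("counter", "0")
value, cas_token = mc.gets("counter")
ok = mc.cas("counter", str(int(value) + 1), cas_token)   # False if another client wrote first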

7. Data analysis

MongoDB has a built-in data analysis facility (MapReduce), which the other two do not offer (see the sketch below).
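
A sketch of invoking the mapReduce command through pymongo (the database, collection and field names are invented for the example; on recent MongoDB versions the aggregation pipeline is the preferred replacement):

from pymongo import MongoClient
from bson.code import Code

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]   # database, collection and field names are illustrative

# Sum order amounts per user with the classic mapReduce command
map_fn = Code("function () { emit(this.user_id, this.amount); }")
reduce_fn = Code("function (key, values) { return Array.sum(values); }")

result = db.command(
    "mapReduce",
    "orders",
    map=map_fn,
    reduce=reduce_fn,
    out={"inline": 1},
)
for row in result["results"]:
    print(row["_id"], row["value"])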

8. Application scenarios

Redis: performance-sensitive operations and computation on relatively small data sets.

Memcache: reducing database load and improving performance in dynamic systems; a cache for read-heavy, write-light workloads (for large data volumes, sharding can be used).

MongoDB: mainly solving the problem of efficient access to massive amounts of data.
