The difference between distributed cache Redis and Memcached

This article explains the differences between the distributed caches Redis and Memcached in a simple and clear way.

The Redis author compares the two systems along 8 aspects:

1. Performance
2. Memory usage efficiency
3. Data type support
4. Data backup and recovery
5. Data storage scheme
6. Underlying memory management (at the C level)
7. Clustering and distribution
8. Thread model

Salvatore Sanfilippo, the author of Redis, has himself compared these two memory-based data storage systems, and his comparison is on the whole fairly objective.

I. Performance

Since Redis uses only a single core while Memcached can use multiple cores, on average Redis performs better than Memcached per core when storing small data. For data larger than about 100 KB, Memcached outperforms Redis; although Redis has recently been optimized for storing large values, it still falls slightly short of Memcached in this case.

II. Memory efficiency

With simple key-value storage, Memcached has higher memory utilization. However, if Redis stores key-value data in hash structures, its memory utilization can be higher than Memcached's, thanks to the compact encoding Redis applies to small hashes.
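
To make the point concrete, here is a minimal sketch (using the redis-py client against an assumed local instance, with hypothetical key names): packing an object's fields into one small hash lets Redis use its compact small-hash encoding, which usually costs less memory than storing each field as a separate top-level key.

```python
# Minimal sketch, assuming a local Redis and the redis-py client.
import redis

r = redis.Redis()

# One top-level key per field: each key/value pair carries its own overhead.
r.set("user:1000:name", "alice")
r.set("user:1000:age", "30")
r.set("user:1000:city", "berlin")

# One small hash per object: stored with a compact encoding.
r.hset("user:1000", mapping={"name": "alice", "age": "30", "city": "berlin"})
print(r.object("encoding", "user:1000"))  # typically b'listpack' or b'ziplist'
```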

III. Data type support and server-side data operations

Redis has more data structures and supports richer data operations than Memcached. With Memcached, you usually need to fetch the data to the client, make the modification there, and then set it back, serializing and deserializing along the way, which greatly increases the number of network IOs and the volume of data transferred. In Redis, these complex operations are usually as efficient as an ordinary GET/SET. So, if you need a cache that can support more complex structures and operations, Redis is a good choice.
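
The contrast can be sketched as follows (a minimal example assuming local Memcached and Redis servers, the pymemcache and redis-py clients, and hypothetical key names): with Memcached the client must read, deserialize, modify, reserialize and write the whole value back, whereas Redis applies the change on the server.

```python
# Minimal sketch of client-side read-modify-write (Memcached) vs a
# server-side operation (Redis); servers and key names are assumptions.
import json
import redis
from pymemcache.client.base import Client

mc = Client(("localhost", 11211))
mc.set("recent_items", json.dumps(["a", "b"]))
items = json.loads(mc.get("recent_items"))   # network round trip + deserialize
items.append("c")                            # modify on the client
mc.set("recent_items", json.dumps(items))    # serialize + round trip back

r = redis.Redis()
r.rpush("recent_items", "a", "b")
r.rpush("recent_items", "c")                 # applied server-side, no read-modify-write
```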

Unlike Memcached, which only supports simple key-value data records, Redis supports a much wider variety of data types. There are five main types of data: String, Hash, List, Set, and Sorted Set. Redis internally uses a redisObject object to represent all keys and values.
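
As a quick illustration, here is a minimal redis-py sketch (local instance and key names are assumptions) that touches each of the five main data types:

```python
# Minimal sketch of the five main Redis data types via redis-py.
import redis

r = redis.Redis()

r.set("greeting", "hello")                               # String
r.hset("user:1", mapping={"name": "bob", "age": "42"})   # Hash
r.rpush("queue", "job1", "job2")                         # List
r.sadd("tags", "cache", "nosql")                         # Set
r.zadd("leaderboard", {"alice": 120, "bob": 95})         # Sorted Set

print(r.zrange("leaderboard", 0, -1, withscores=True))
```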

IV. Data backup and recovery

After Memcached crashes, its data cannot be recovered; Redis data can be recovered from the AOF log after a failure. Redis also supports data backup in the form of master-slave replication.
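
A minimal sketch of the knobs involved (assuming a local instance and the redis-py client; the equivalent redis.conf directives are noted in the comments):

```python
# Minimal sketch: enabling AOF persistence at runtime and checking its state.
import redis

r = redis.Redis(host="localhost", port=6379)

# Turn on append-only-file persistence (redis.conf: appendonly yes)
r.config_set("appendonly", "yes")
# Fsync the AOF roughly once per second (redis.conf: appendfsync everysec)
r.config_set("appendfsync", "everysec")

# Master-slave backup is configured on the replica side, e.g. with the
# REPLICAOF command or the redis.conf directive: replicaof <master-ip> 6379
print(r.info("persistence")["aof_enabled"])  # 1 once AOF is active
```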

V. Data storage

Both Redis and Memcached keep their data in memory and are in-memory stores. Memcached can also be used to cache other things, such as images and videos. Memcached stores all of its data in memory, so the data is lost after a power failure and cannot exceed the size of physical memory. Redis, in contrast, can keep part of its data on the hard disk, which guarantees durability: it supports persistence (RDB and AOF), while Memcached does not support persistence at all. Redis also does not necessarily keep all data in memory; when physical memory runs out, Redis can swap some long-unused values out to disk, whereas Memcached, once its memory limit is exceeded, simply evicts older data.
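
The persistence and eviction behaviour described above can be sketched like this (assumed local instance, redis-py client; the values are illustrative):

```python
# Minimal sketch: RDB snapshotting plus an explicit memory cap and eviction
# policy, in contrast to Memcached's evict-on-full LRU behaviour.
import redis

r = redis.Redis()

# RDB snapshot if at least 1000 keys changed within 60 seconds
# (redis.conf: save 60 1000)
r.config_set("save", "60 1000")

# Cap memory at 256 MB and evict least-recently-used keys when full
# (redis.conf: maxmemory 256mb / maxmemory-policy allkeys-lru)
r.config_set("maxmemory", "256mb")
r.config_set("maxmemory-policy", "allkeys-lru")
```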

VI. Memory management mechanism

For memory-based database systems such as Redis and Memcached, the efficiency of memory management is a key factor affecting system performance. The malloc/free functions of traditional C are the most common way to allocate and release memory, but this approach has significant drawbacks: first, mismatched malloc and free calls easily lead to memory leaks; second, frequent calls produce a large amount of memory fragmentation that cannot be reclaimed and reused, reducing memory utilization; finally, because malloc and free may have to request memory from the operating system, their overhead is much higher than that of ordinary function calls. Therefore, to improve the efficiency of memory management, efficient memory management schemes do not call malloc/free directly.

Memcached uses the Slab Allocation mechanism to manage memory by default. Its main idea is to divide the allocated memory, according to predetermined sizes, into blocks of specific lengths that store key-value records of the corresponding length, which avoids the memory fragmentation problem entirely. The Slab Allocation mechanism is designed only for storing the key-value data itself; Memcached's other memory requests still go through ordinary malloc/free, because their number and frequency are low enough not to affect overall system performance.

Memcached first requests a large block of memory from the operating system, splits it into chunks of various sizes, and groups chunks of the same size into Slab Classes. A Chunk is the smallest unit used to store key-value data. The chunk size of each Slab Class can be controlled by specifying a Growth Factor when Memcached starts. Assuming the Growth Factor is 1.25, if the size of the first chunk is 88 bytes, the size of the second chunk is 112 bytes, and so on.
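
The effect of the Growth Factor can be reproduced with a small sketch (pure Python; the 88-byte base size and 8-byte alignment are assumptions taken from the example above):

```python
# Minimal sketch of how a Growth Factor produces slab class chunk sizes.
def slab_chunk_sizes(base=88, growth_factor=1.25, align=8, classes=6):
    sizes = []
    size = base
    for _ in range(classes):
        # round up to the alignment boundary
        aligned = int((size + align - 1) // align) * align
        sizes.append(aligned)
        size = aligned * growth_factor
    return sizes

print(slab_chunk_sizes())  # [88, 112, 144, 184, 232, 296], matching the 88 -> 112 step above
```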

When Memcached receives data sent by a client, it first selects the most suitable Slab Class according to the size of the data, and then finds a Chunk that can hold it by consulting the list of free chunks that Memcached keeps for that Slab Class. When a record expires or is discarded, the Chunk it occupied can be recycled and returned to the free list.

From the process above we can see that Memcached's memory management is efficient and does not suffer from memory fragmentation, but its biggest drawback is wasted space. Because each Chunk has a fixed length, variable-length data cannot fully use that space: for example, caching a 100-byte value in a 128-byte Chunk wastes the remaining 28 bytes. Memcached's main eviction mechanism is the LRU (Least Recently Used) algorithm combined with timeout expiration.

Redis implements its memory management mainly in two source files, zmalloc.h and zmalloc.c. To make memory management easier, after Redis allocates a block of memory it stores the size of that block in the block's header.

Here real_ptr is the pointer returned when Redis calls malloc. Redis stores the size of the memory block in the header, where size occupies a known number of bytes (it is of type size_t), and then returns ret_ptr. When the memory needs to be freed, ret_ptr is passed to the memory manager. From ret_ptr, the program can easily compute real_ptr and then pass real_ptr to free to release the memory.

Redis records all memory allocations by defining an array of length ZMALLOC_MAX_ALLOC_STAT. Each element of the array holds the number of memory blocks of a given size allocated by the current program, with the block size serving as the element's index. In the source code this array is called zmalloc_allocations; zmalloc_allocations[16] is the number of 16-byte memory blocks that have been allocated. A static variable used_memory in zmalloc.c records the total amount of memory currently allocated. Overall, then, Redis uses a thin wrapper around malloc/free, which is much simpler than Memcached's memory management scheme.

VII. Clustering and distributed storage

Memcached is a full-memory data caching system. Although Redis supports data persistence, keeping data entirely in memory is the essence of its high performance. For a memory-based storage system, the physical memory of one machine is the maximum amount of data the system can hold. If the amount of data to be handled exceeds the physical memory of a single machine, a distributed cluster must be built to scale storage capacity.

Memcached itself does not support distributed storage, so distribution can only be implemented on the client side through algorithms such as consistent hashing. (For distributed consistent hashing, see the summary: Distributed consistent hash algorithm.) Before a client sends data to a Memcached cluster, it first computes the target node for that piece of data using the built-in distribution algorithm and then sends the data directly to that node for storage. Likewise, when the client queries data, it computes the node on which the data lives and sends the query request directly to that node to obtain the data.
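
A minimal sketch of such client-side routing (pure Python, hypothetical node names): keys are placed on a hash ring of virtual nodes, and each key is sent to the first node found clockwise from its hash position.

```python
# Minimal consistent-hash ring sketch; node addresses are hypothetical.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = {}   # hash position -> physical node
        self.keys = []   # sorted hash positions
        for node in nodes:
            for i in range(vnodes):
                h = self._hash(f"{node}#{i}")
                self.ring[h] = node
                bisect.insort(self.keys, h)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def get_node(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self.keys, h) % len(self.keys)
        return self.ring[self.keys[idx]]

ring = HashRing(["memcached-a:11211", "memcached-b:11211", "memcached-c:11211"])
print(ring.get_node("user:1000"))  # the node this key would be stored on
```

Adding or removing a node only remaps the keys that fall on the affected arc of the ring, which is why consistent hashing is preferred over a plain hash-modulo scheme for Memcached clusters.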

Compared with Memcached, where distribution can only be implemented on the client side, Redis prefers to build distributed storage on the server side, although it does not adopt consistent hashing. (For an analysis of Redis clusters, see the summary: Cluster of Distributed Cache Redis.) Recent versions of Redis already support distributed storage. Redis Cluster is the distributed version of Redis: it tolerates single points of failure, has no central node, and scales linearly. To keep data available when a single node fails, Redis Cluster introduces Master and Slave nodes: each Master node has two Slave nodes for redundancy, so the failure of any two nodes in the cluster does not make data unavailable. When a Master node goes down, the cluster automatically promotes one of its Slave nodes to be the new Master.
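
From the client's point of view, using Redis Cluster can be as simple as the sketch below (assuming a recent redis-py with cluster support and a hypothetical node address); key-to-node routing is driven by the server-side slot map rather than by a client-side hashing scheme.

```python
# Minimal sketch; the cluster node address is an assumption.
from redis.cluster import RedisCluster

rc = RedisCluster(host="redis-node-1", port=6379)
rc.set("user:1000", "alice")   # routed to the master that owns the key's hash slot
print(rc.get("user:1000"))
```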

VIII. Thread model

Memcached supports multi-core, multi-threaded operation, while Redis runs single-threaded.

Thank you for reading. The above covers the differences between the distributed caches Redis and Memcached; how they behave in a specific deployment still needs to be verified in practice.
