What's the difference between Redis and Memcached?

What is the difference between Redis and Memcached? This article works through the question in detail, point by point, in the hope of giving readers a clear and practical comparison.

Readers who have used both usually come away with this general impression:

1. Compared with Memcached, which only supports simple key-value data, Redis also provides storage for data structures such as list, set, zset and hash.

2. Redis supports data backup, i.e. master-slave replication.

3. Redis supports data persistence: in-memory data can be kept on disk and reloaded after a restart.

So Redis looks somewhat more capable than Memcached. Is that really the case? Each design has its reasons, so let's compare the two point by point.

Network IO model

Memcached uses a multi-threaded, non-blocking IO multiplexing network model, split into a listening main thread and worker sub-threads. The main thread listens for network connections; after accepting one, it passes the connection descriptor to a worker thread over a pipe, and that worker handles the read/write IO. The network layer uses the event library wrapped around libevent. The multi-threaded model can exploit multiple cores, but it introduces cache-coherency and locking problems. For example, stats, one of Memcached's most commonly used commands, forces every Memcached operation to lock a global variable in order to update the counters, which costs performance. A rough sketch of this threading model follows.
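To make the model concrete, here is a minimal sketch in Python (not Memcached's actual C code): one listener thread accepts connections and hands the sockets to worker threads, and a lock-protected global stats counter stands in for the kind of shared state the text mentions. The port number and the dummy reply are purely illustrative.

```python
import queue
import socket
import threading

# Shared state of the kind the stats command touches: every worker must take
# the same lock to update it, which serializes otherwise parallel threads.
stats_lock = threading.Lock()
stats = {"cmd_total": 0}

def worker(q: queue.Queue) -> None:
    while True:
        conn = q.get()                      # connection handed over by the listener
        with conn:
            conn.recv(4096)                 # read the request (contents ignored here)
            with stats_lock:
                stats["cmd_total"] += 1
            conn.sendall(b"END\r\n")        # dummy reply

def serve(host: str = "127.0.0.1", port: int = 11222, n_workers: int = 4) -> None:
    queues = [queue.Queue() for _ in range(n_workers)]
    for q in queues:
        threading.Thread(target=worker, args=(q,), daemon=True).start()
    srv = socket.create_server((host, port))
    i = 0
    while True:
        conn, _ = srv.accept()              # only the listening thread calls accept()
        queues[i % n_workers].put(conn)     # dispatch the descriptor to a worker
        i += 1
```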

Redis uses a single-threaded IO multiplexing model, wrapped in a simple ae event-handling framework with implementations for epoll, kqueue and select. For workloads that are purely IO-bound, a single thread can exploit the speed advantage to the fullest. However, Redis also provides some simple compute functions such as sorting and aggregation, and for those operations the single-threaded model seriously hurts overall throughput: while the CPU computation runs, the entire IO schedule is blocked. A minimal sketch of such an event loop follows.
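For contrast, below is a minimal single-threaded IO-multiplexing loop in Python using the standard selectors module (which wraps epoll/kqueue/select), sketching the idea behind Redis's ae framework; the port and the canned reply are illustrative, and any CPU-heavy work inside a handler would stall every connected client, which is exactly the throughput problem described above.

```python
import selectors
import socket

sel = selectors.DefaultSelector()           # epoll/kqueue/select, whichever is available

def accept(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn: socket.socket) -> None:
    data = conn.recv(4096)
    if data:
        conn.sendall(b"+OK\r\n")            # any heavy computation here blocks all clients
    else:
        sel.unregister(conn)
        conn.close()

srv = socket.create_server(("127.0.0.1", 6390))
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ, accept)

while True:                                  # one thread multiplexes every socket
    for key, _ in sel.select():
        key.data(key.fileobj)
```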

Supported data types

Memcached stores and accesses data as key-value pairs, maintaining a huge hash table in memory, which keeps the time complexity of data lookups at O(1) and guarantees high-performance access.

As mentioned at the beginning: compared with Memcached, which only supports simple key-value data, Redis also provides storage for data structures such as list, set, zset and hash. For more details, see "Redis memory usage Optimization and Storage". A short illustration follows.
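As a quick illustration, the snippet below uses the redis-py client (an assumption; any Redis client would do) against a local server to show the structures that go beyond plain key-value storage.

```python
import redis  # assumes the redis-py package and a Redis server on localhost:6379

r = redis.Redis()

# Plain key-value, the only shape Memcached offers
r.set("page:home", "<html>...</html>")

# Server-side data structures provided by Redis
r.rpush("queue:jobs", "job1", "job2")                       # list
r.sadd("tags:article:1", "redis", "cache")                  # set
r.zadd("leaderboard", {"alice": 1200, "bob": 950})          # sorted set (zset)
r.hset("user:1", mapping={"name": "alice", "age": "30"})    # hash
```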

Memory management mechanism

For memory-based database systems such as Redis and Memcached, the efficiency of memory management is a key factor in system performance. The malloc/free pair in traditional C is the most common way to allocate and release memory, but it has serious drawbacks: first, mismatched malloc and free calls easily lead to memory leaks; second, frequent calls leave large amounts of memory fragmentation that cannot be reclaimed, lowering memory utilization; finally, malloc/free ultimately rely on system calls, whose overhead is much higher than that of ordinary function calls. To improve memory-management efficiency, efficient memory managers therefore avoid calling malloc/free directly. Both Redis and Memcached use memory management mechanisms of their own design, but the implementations differ greatly; both are described below.

By default, Memcached manages memory with the Slab Allocation mechanism. The main idea is to carve the allocated memory into blocks of predetermined lengths and store key-value records in blocks of the matching length, which completely avoids memory fragmentation. Slab Allocation is only used for the externally stored data: all key-value data lives in the Slab Allocation system, while Memcached's other memory requests go through ordinary malloc/free, since their number and frequency are too low to affect overall performance. The principle of Slab Allocation is simple: it first requests a large block of memory from the operating system, splits it into Chunks of various sizes, and groups Chunks of the same size into Slab Classes, where a Chunk is the smallest unit used to store key-value data. The size of each Slab Class is controlled by a Growth Factor set when Memcached starts. Assuming a Growth Factor of 1.25, if the Chunk size of the first group is 88 bytes, the Chunk size of the second group is 112 bytes, and so on. A small sketch of this sizing scheme follows.
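Here is a small sketch of how chunk sizes might be derived from the Growth Factor described above; the 8-byte rounding is an assumption added so the series matches the 88 to 112 example in the text.

```python
def slab_chunk_sizes(base=88, growth_factor=1.25, classes=6, align=8):
    """Chunk size per slab class: previous size * growth factor,
    rounded up to a multiple of `align` (an assumed alignment)."""
    sizes, size = [], float(base)
    for _ in range(classes):
        aligned = int(-(-size // align) * align)   # round up to the alignment boundary
        sizes.append(aligned)
        size = aligned * growth_factor
    return sizes

print(slab_chunk_sizes())   # [88, 112, 144, 184, 232, 296]
```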

When Memcached receives data from a client, it first chooses the most suitable Slab Class according to the size of the data, then searches the list of free Chunks that Memcached keeps for that Slab Class to find one able to hold the record. When a record expires or is evicted, the Chunk it occupied can be recycled and put back on the free list. From this process we can see that Memcached's memory management is efficient and does not produce memory fragmentation, but its drawback is wasted space: because each Chunk is a fixed-length piece of memory, variable-length data cannot use it fully. For example, caching 100 bytes of data in a 128-byte Chunk wastes the remaining 28 bytes, as the sketch below shows.
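Continuing the sketch, picking the smallest chunk that fits an item makes the space overhead easy to see; the chunk sizes passed in here are illustrative.

```python
def pick_chunk(item_size, chunk_sizes):
    """Return the chunk used for an item and the bytes wasted inside it."""
    for chunk in sorted(chunk_sizes):
        if chunk >= item_size:
            return chunk, chunk - item_size
    raise ValueError("item larger than the largest chunk")

# 100 bytes stored in a 128-byte chunk wastes 28 bytes, as in the text.
print(pick_chunk(100, [64, 128, 256]))   # (128, 28)
```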

Redis's memory management is implemented mainly in two source files, zmalloc.h and zmalloc.c. To make management easier, after allocating a block of memory Redis stores the block's size at the head of the block. real_ptr is the pointer returned by malloc; Redis writes the block size size into the header (size is of type size_t, so the space it occupies is known) and then returns ret_ptr. When the memory needs to be freed, ret_ptr is passed to the memory manager; from ret_ptr the program can easily compute real_ptr and pass it to free. A conceptual sketch follows.
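The following Python sketch imitates the idea (it is not the zmalloc.c code): the payload size is written into a fixed-size header in front of the block, so freeing the block only needs the block itself to recover how much memory to account for.

```python
import struct

SIZE_T = struct.calcsize("N")        # bytes used by the size header (native size_t)
used_memory = 0                      # mirrors the static used_memory counter in zmalloc.c

def zmalloc_sim(size: int) -> bytearray:
    """Allocate a block whose header records the payload size."""
    global used_memory
    block = bytearray(SIZE_T + size)             # corresponds to real_ptr
    block[:SIZE_T] = struct.pack("N", size)      # store the size in the header
    used_memory += SIZE_T + size
    return block                                 # payload lives at block[SIZE_T:] (ret_ptr)

def zfree_sim(block: bytearray) -> None:
    """Read the size back from the header and update the accounting."""
    global used_memory
    size, = struct.unpack("N", bytes(block[:SIZE_T]))
    used_memory -= SIZE_T + size
```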

Redis records all memory allocations in an array of length ZMALLOC_MAX_ALLOC_STAT. Each element holds the number of memory blocks the program has allocated of that size, the size being the element's index; in the source this array is zmalloc_allocations, so zmalloc_allocations[16] is the number of 16-byte blocks allocated so far. A static variable used_memory in zmalloc.c records the total amount of memory currently allocated. Overall, then, Redis uses a thin wrapper around malloc/free, which is much simpler than Memcached's memory management approach.

In Redis, not all data is kept in memory at all times, and this is a difference from Memcached. When physical memory runs out, Redis can swap values that have not been used for a long time out to disk (its virtual-memory mechanism). Redis always caches all key information in memory; if it finds that memory usage exceeds a certain threshold, it triggers a swap operation and decides which keys' values should be swapped to disk according to "swappability = age * log(size_in_memory)". The values of those keys are then persisted to disk and cleared from memory. This feature lets Redis hold more data than the machine's memory alone could, though the machine's memory must still be able to hold all the keys, since keys are never swapped. When Redis swaps in-memory data out to disk, the main thread serving requests and the sub-thread performing the swap share that memory, so if data that is being swapped needs to be updated, Redis blocks the operation until the sub-thread finishes the swap. When data is read from Redis and the value for the requested key is not in memory, Redis must load it from the swap file before returning it to the requester, and here the I/O thread pool matters: by default Redis blocks, i.e. it does not respond until all the needed swap-file data has been loaded. This strategy suits batch operations with a small number of clients, but if Redis is used in a large web application it clearly cannot cope with heavy concurrency. So when running Redis we can set the size of the I/O thread pool so that read requests that must load data from the swap file are handled concurrently, reducing the blocking time.
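The swap-priority formula quoted above is simple enough to show directly; the sample ages and sizes below are made up.

```python
import math

def swappability(age_seconds: float, size_in_memory_bytes: float) -> float:
    """swappability = age * log(size_in_memory); higher scores are swapped out first."""
    return age_seconds * math.log(size_in_memory_bytes)

print(swappability(3600, 4096))   # old, large value: high score, swapped early
print(swappability(60, 64))       # recently used, small value: low score, stays in memory
```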

Memcached uses a pre-allocated memory pool, managing memory with slabs and chunks of different sizes; an item is stored in a chunk of the appropriate size. The memory-pool approach saves the cost of allocating and releasing memory and reduces fragmentation, but it also wastes some space, and new data may be evicted even while plenty of memory remains free. The reasons can be found in Timyang's article: http://timyang.net/data/Memcached-lru-evictions/

Redis allocates memory on demand to store data and rarely uses a free list to optimize allocation, so a certain amount of memory fragmentation does occur. Depending on the storage command's parameters, Redis stores data that carries an expiry time separately and calls it temporary data; non-temporary data is never removed, even when physical memory runs short. Consequently, swap never evicts non-temporary data either (though it will try to evict some temporary data), which makes Redis better suited to storage than to pure caching.

Data storage and persistence

Memcached does not support persistence of in-memory data; all data exists only in memory.

Redis supports persistence. It provides two different ways of writing data to disk: one is snapshotting, which writes all the data that exists at a point in time to disk; the other is the append-only file (AOF), which copies each write command to disk as it is executed. A brief sketch of toggling these from a client follows.
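As a small illustration, the snippet below uses redis-py (an assumption) to turn on AOF and trigger a background snapshot; in practice both are usually configured in redis.conf rather than from a client.

```python
import redis  # assumes redis-py and a local server

r = redis.Redis()
r.config_set("appendonly", "yes")   # enable the append-only file (AOF)
r.bgsave()                          # ask the server for a background snapshot
```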

Data consistency problem

Memcached provides the cas command to guarantee consistency when the same piece of data is accessed concurrently. Redis does not provide a cas command and offers no such guarantee directly, but it does provide transactions, which ensure that a sequence of commands is executed atomically without being interrupted by other operations. A sketch of the Redis transaction pattern follows.
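Below is a sketch of the Redis side using redis-py's WATCH/MULTI/EXEC pattern, which plays a role similar to Memcached's gets/cas pair; the key name is illustrative.

```python
import redis  # assumes redis-py and a local server

r = redis.Redis()
r.set("counter", 0)

with r.pipeline() as pipe:
    while True:
        try:
            pipe.watch("counter")                 # optimistic lock on the key
            current = int(pipe.get("counter") or 0)
            pipe.multi()                          # start the transaction
            pipe.set("counter", current + 1)
            pipe.execute()                        # aborts if "counter" changed meanwhile
            break
        except redis.WatchError:
            continue                              # another client won the race; retry
```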

Cluster management

Memcached is a fully in-memory data caching system, and although Redis supports persistence, keeping data in memory is still the essence of its high performance. For a memory-based storage system, the machine's physical memory caps how much data the system can hold. If the amount of data to be handled exceeds the physical memory of a single machine, a distributed cluster is needed to expand storage capacity.

Memcached itself does not support distributed operation, so distributed storage with Memcached can only be implemented on the client side, using a distributed algorithm such as consistent hashing. Before a client sends data to the Memcached cluster, it first computes the target node for the data with its built-in distributed algorithm and then sends the data directly to that node for storage. Likewise, when the client queries data, it computes the node where the data lives and sends the query request directly to that node. A minimal consistent-hashing sketch follows.
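A minimal client-side consistent-hashing ring of the kind Memcached clients implement is sketched below; the node addresses, virtual-node count and MD5 hash are illustrative choices, not a specific client's implementation.

```python
import bisect
import hashlib

class HashRing:
    """Map keys to nodes via a ring of hashed virtual nodes."""

    def __init__(self, nodes, replicas=100):
        self.ring = {}
        self.keys = []
        for node in nodes:
            for i in range(replicas):
                h = self._hash(f"{node}#{i}")   # one hash per virtual node
                self.ring[h] = node
                self.keys.append(h)
        self.keys.sort()

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def get_node(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect(self.keys, h) % len(self.keys)  # next point clockwise
        return self.ring[self.keys[idx]]

ring = HashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
print(ring.get_node("user:42"))   # the client sends this key to the returned node
```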

Compared with Memcached, which can only do distribution on the client side, Redis prefers to build distributed storage on the server side, and recent versions already support it. Redis Cluster is the distributed version of Redis: it tolerates single points of failure, has no central node, and scales linearly. In the Redis Cluster architecture, nodes communicate with each other over a binary protocol, while nodes and clients communicate over the ASCII protocol. For data placement, Redis Cluster divides the whole key space into 16384 hash slots, and each node can hold one or more hash slots, so the maximum number of nodes Redis Cluster supports is 16384. The distribution algorithm it uses is also simple: crc16(key) % HASH_SLOTS_NUMBER. A sketch of the slot calculation follows.
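The slot calculation itself is easy to sketch; the CRC16 variant below is the XModem/CCITT form Redis Cluster is documented to use, the key is illustrative, and hash tags are not handled here.

```python
HASH_SLOTS_NUMBER = 16384

def crc16_xmodem(data: bytes) -> int:
    """CRC16 (XModem/CCITT): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    return crc16_xmodem(key.encode()) % HASH_SLOTS_NUMBER

print(key_slot("user:42"))   # the cluster node owning this slot stores the key
```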

To keep data available after a single point of failure, Redis Cluster introduces Master and Slave nodes. In Redis Cluster, each Master node has two corresponding Slave nodes for redundancy, so the failure of any two nodes in the cluster will not make data unavailable. When a Master node drops out, the cluster automatically elects one of its Slave nodes as the new Master.

That covers the differences between Redis and Memcached. I hope the content above has been of some help.
