2025-01-20 Update From: SLTechnology News&Howtos > Database
First of all, Redis is not free of performance problems: if it is used carelessly, all kinds of issues appear, such as overly frequent RDB saves, excessive memory fragmentation, and so on.
Performance analysis
INFO information:
After logging in with redis-cli, type info all, or run redis-cli -h ${ip} -p ${port} -a "${pass}" -c info all directly. Plain info gives the brief mode; info all is the detailed mode.
This returns all the real-time performance information for the Redis service, similar to the Linux command top.
The data output by the info command can be divided into 10 categories, which are:
Server
Clients
Memory
Persistence
Stats
Replication
Cpu
Commandstats
Cluster
Keyspace
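The ten sections come back as one text blob. As a minimal sketch (my own helper, not part of any client library), the raw INFO output can be split into per-section dictionaries like this:

```python
# Split raw INFO output into {section: {field: value}} dictionaries.
# Section headers look like "# Memory"; fields look like "key:value".
def parse_info(raw):
    sections = {}
    current = None
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):            # section header, e.g. "# Memory"
            current = line[1:].strip()
            sections[current] = {}
        elif ":" in line and current is not None:
            key, _, value = line.partition(":")
            sections[current][key] = value
    return sections

# Abridged sample of real INFO output (values are illustrative).
sample = """# Server
redis_version:6.2.6
uptime_in_seconds:3600

# Memory
used_memory:1048576
used_memory_rss:1310720
"""
info = parse_info(sample)
print(info["Server"]["redis_version"])   # 6.2.6
print(info["Memory"]["used_memory"])     # 1048576
```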
The following analyzes some of the key fields.
Server section:
redis_version: Redis server version. Some features and commands differ between versions.
arch_bits: architecture (32-bit or 64-bit); in some cases an easily overlooked pitfall.
tcp_port: the TCP/IP port being listened on; confirm you are operating on the right instance.
uptime_in_seconds: the number of seconds elapsed since the Redis server started; useful to confirm whether it has been restarted.
uptime_in_days: the number of days since the Redis server started, for the same purpose.
Clients section:
connected_clients: the number of connected clients (excluding connections from replicas).
client_longest_output_list: the longest output list among currently connected clients.
client_biggest_input_buf: the largest input buffer among currently connected clients.
blocked_clients: the number of clients waiting on blocking commands (BLPOP, BRPOP, BRPOPLPUSH).
Memory section:
maxmemory/maxmemory_human: the maximum amount of memory that may be allocated, as set in redis.conf. When it is exceeded, eviction (for example LRU) is triggered to delete old data.
used_memory/used_memory_human: the total amount of memory Redis currently uses. If used_memory grows beyond the available physical memory, the operating system starts swapping memory pages to swap space in order to free physical memory for new or active pages.
used_memory_rss/used_memory_rss_human: the total amount of memory allocated by the operating system, that is, the physical memory actually occupied by this redis-server. If it is larger than used_memory, the difference may be fragmentation.
mem_fragmentation_ratio: a ratio slightly greater than 1 is healthy and means no memory swapping has occurred in Redis. If it exceeds 1.5, Redis is consuming 150% of the memory it logically needs, the extra 50% being fragmentation. If it is less than 1, Redis has allocated more than the available physical memory and the operating system is swapping. Memory swapping causes very significant response delays.
The calculation formula is: mem_fragmentation_ratio = used_memory_rss / used_memory.
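A quick illustration of the ratio, computed as used_memory_rss / used_memory; the byte values below are made up for the example, and the thresholds follow the text above:

```python
# Illustrative mem_fragmentation_ratio calculation with made-up values.
used_memory = 1_000_000_000       # bytes Redis logically uses
used_memory_rss = 1_250_000_000   # bytes of physical memory (RSS)

ratio = used_memory_rss / used_memory
print(round(ratio, 2))  # 1.25

if ratio > 1.5:
    print("high fragmentation: consider a restart or defragmentation")
elif ratio < 1:
    print("RSS below used_memory: the OS is likely swapping")
else:
    print("healthy")  # this branch is taken for 1.25
```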
When the fragmentation ratio is problematic, there are three ways to improve memory management and Redis performance:
1. Restart the Redis server: if the fragmentation ratio exceeds 1.5, restarting lets the fragmented memory be released and reused as fresh memory, allowing the operating system to restore efficient memory management.
2. Limit memory swapping: if the fragmentation ratio is less than 1, the Redis instance may be swapping some of its data to disk. Swapping seriously hurts Redis performance, so increase the available physical memory or reduce the real Redis memory footprint.
3. Change the memory allocator:
Redis supports several memory allocators, such as glibc's malloc, jemalloc, and tcmalloc, and each behaves differently with respect to allocation and fragmentation. Ordinary administrators are not advised to change the default allocator, since doing so requires a full understanding of the differences between these allocators and a recompilation of Redis.
used_memory_lua: the amount of memory used by the Lua scripting engine. Redis allows Lua scripts by default, but heavy use eats into available memory.
mem_allocator: the memory allocator chosen when Redis was compiled; can be libc, jemalloc, or tcmalloc.
Persistence section:
RDB information. RDB persistence runs via the bgsave command, which is resource-intensive, and it is not real time, so data written since the last snapshot is lost on a crash. If memory is so full that bgsave cannot run, the hidden danger is great.
rdb_changes_since_last_save: the number of write operations since the last successful save. Persistence consumes resources and should be kept away from high-load periods; the following fields are useful references.
rdb_bgsave_in_progress: whether a bgsave operation is currently running; 1 if so, 0 otherwise.
rdb_last_save_time: the UNIX timestamp of the last successful RDB save.
rdb_last_bgsave_time_sec: the number of seconds the most recent RDB save took.
rdb_last_bgsave_status: the status of the last save.
rdb_current_bgsave_time_sec: if the server is currently creating an RDB file, the number of seconds the operation has taken so far.
AOF information. AOF continuously appends commands to a persistence file, which is cheap per write, but the AOF file grows without bound: over a long time, or with very frequent writes, it can become too large and fill the disk. Redis therefore periodically rewrites (compacts) the AOF in the background.
aof_enabled: whether AOF is enabled.
aof_rewrite_in_progress: whether an AOF rewrite is currently in progress.
aof_last_rewrite_time_sec: how long the last AOF rewrite took.
aof_current_rewrite_time_sec: if the server is currently rewriting the AOF, the number of seconds the operation has taken so far.
aof_last_bgrewrite_status: the status of the last background rewrite.
aof_last_write_status: the status of the last write.
aof_base_size: the size of the AOF file at server start or after the last AOF rewrite.
aof_pending_bio_fsync: the number of fsync calls waiting in the background I/O queue.
aof_delayed_fsync: the number of fsync calls that were delayed.
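To make these persistence fields actionable, here is a hedged sketch of a health check over already-parsed INFO values; the function name and the thresholds are my own illustrative choices, not Redis defaults:

```python
import time

# Flag persistence risks from a few INFO fields (thresholds illustrative).
def persistence_warnings(info, now=None):
    now = now or time.time()
    warnings = []
    if info.get("rdb_last_bgsave_status", "ok") != "ok":
        warnings.append("last bgsave failed")
    if int(info.get("rdb_changes_since_last_save", 0)) > 100000:
        warnings.append("many unsaved changes since last RDB save")
    if now - int(info.get("rdb_last_save_time", now)) > 3600:
        warnings.append("last RDB snapshot is over an hour old")
    if int(info.get("aof_delayed_fsync", 0)) > 0:
        warnings.append("fsync calls are being delayed (slow disk?)")
    return warnings

sample = {
    "rdb_last_bgsave_status": "ok",
    "rdb_changes_since_last_save": "250000",
    "rdb_last_save_time": "1489099695",
    "aof_delayed_fsync": "0",
}
# 10 minutes after the last save, only the unsaved-changes check fires.
print(persistence_warnings(sample, now=1489099695 + 600))
# ['many unsaved changes since last RDB save']
```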
Stats section:
total_commands_processed: the total number of commands processed by the Redis service; the value only ever increases. Because Redis uses a single-threaded model, client commands are executed sequentially. If a large number of commands sit waiting in the queue, command response times slow down and later commands may be blocked outright, degrading Redis performance. So watch whether this counter grows abnormally fast.
instantaneous_ops_per_sec: the number of commands the server executes per second; as above, growing too fast signals a problem.
expired_keys: the number of keys automatically deleted because they expired; for reference.
evicted_keys: the number of keys reclaimed and deleted because of the maxmemory limit. Whether Redis uses an LRU policy or another eviction policy is determined by the maxmemory-policy value in the configuration file. Keys removed by expiration are not counted here. If this value is not 0, consider raising the memory limit; otherwise eviction leads toward memory pressure and swapping, performance worsens, and data is lost.
latest_fork_usec: the number of microseconds the last fork() operation took. fork() is resource-intensive, so keep an eye on it.
Commandstats section:
cmdstat_XXX: execution statistics per command type, reads and writes alike, where calls is the number of executions, usec is the total CPU time consumed, and usec_per_call is the average CPU time per call in microseconds. Useful for troubleshooting.
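A sketch of mining these statistics: parse cmdstat lines and rank commands by average CPU cost. The helper name is my own; the line format matches the INFO commandstats output described above.

```python
# Parse "cmdstat_<name>:calls=...,usec=...,usec_per_call=..." lines.
def parse_cmdstats(lines):
    stats = {}
    for line in lines:
        name, _, fields = line.partition(":")
        cmd = name.replace("cmdstat_", "")
        stats[cmd] = dict(
            (k, float(v))
            for k, v in (pair.split("=") for pair in fields.split(","))
        )
    return stats

sample = [
    "cmdstat_get:calls=10000,usec=30000,usec_per_call=3.00",
    "cmdstat_sadd:calls=500,usec=90000,usec_per_call=180.00",
]
stats = parse_cmdstats(sample)
# Rank by average CPU time per call to spot expensive command types.
slowest = max(stats, key=lambda c: stats[c]["usec_per_call"])
print(slowest)  # sadd
```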
-
Methods for analyzing other problems:
Check Redis network latency:
Latency data cannot be obtained from the INFO output. To see it, run redis-cli with the --latency option:
redis-cli --latency -h 10.1.2.11 -p 6379
It keeps sampling until you exit with Ctrl+C, and reports the response delay of Redis in milliseconds. Because of other activity on the server, the numbers can vary; the typical latency over a 1 Gb NIC is about 0.2 ms. If the measured delay is much higher than this reference value, there is clearly a performance problem and the state of the network should be checked.
Check Redis slow queries:
The slow log is Redis's logging facility for recording query execution times. The slowlog command lets us quickly locate commands that exceeded the configured execution time. By default, commands taking longer than 10 ms are logged, controlled by the parameter slowlog-log-slower-than; at most 128 entries are kept, controlled by slowlog-max-len, and the oldest are deleted automatically beyond that limit.
The defaults are usually sufficient, but both can be changed online with CONFIG SET slowlog-log-slower-than and CONFIG SET slowlog-max-len to adjust the threshold and the entry limit.
With about 0.2 ms of network latency on 1 Gb bandwidth, a single command whose execution takes more than 10 ms is nearly 50 times slower than the network round trip. View the log by entering slowlog get in redis-cli; the third field of each returned entry shows the command's execution time in microseconds. To see only the last ten slow commands, type slowlog get 10.
127.0.0.1:6379> slowlog get 10
. . .
4) 1) (integer) 215
   2) (integer) 1489099695
   3) (integer) 11983
   4) 1) "SADD"
      2) "USER_TOKEN_MAP51193"
      3) "qIzwZKBmBJozKprQgoTEI3Qo8QO2F..."
5) 1) (integer) 214
   2) (integer) 1489087112
   3) (integer) 18002
   4) 1) "SADD"
      2) "USER_TOKEN_MAP51192"
      3) "Z3HsquariTUNfweqvLfweqvLfroomptdchSV2JAOrrH"
6) 1) (integer) 213
   2) (integer) 1489069123
   3) (integer) 15407
   4) 1) "SADD"
      2) "USER_TOKEN_MAP51191"
      3) "S3rNzOBwUlaI3QfOK9dIITB6Bk7LIGYe"
1 = the unique identifier of the log entry
2 = the UNIX timestamp at which the recorded command was executed
3 = the query execution time, in microseconds; the command in the example took about 12 milliseconds
4 = the command that was executed, laid out as an array of its arguments, which together form the complete command
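The four fields above can be formatted into a single readable line; a small sketch using entry 213 from the example output (the helper name is mine, and the value string is truncated sample data):

```python
from datetime import datetime, timezone

# Format one SLOWLOG GET entry: [id, unix_ts, microseconds, args].
def format_slowlog_entry(entry):
    log_id, ts, usec, args = entry
    when = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
    return f"#{log_id} {when} {usec / 1000:.1f}ms {' '.join(args)}"

entry = [213, 1489069123, 15407, ["SADD", "USER_TOKEN_MAP51191", "S3rNz..."]]
print(format_slowlog_entry(entry))
```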
Monitor client connections:
Because Redis handles all client requests with a single-threaded model (it can use only one core), each additional client connection reduces the processing time available to any single connection, and every client spends more time waiting for a response from the shared Redis service.
# View the connection status of the client
127.0.0.1:6379> info clients
# Clients
connected_clients:11
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
The first field (connected_clients) shows the total number of client connections to the current instance; by default Redis allows up to 10000. More than about 5000 connections may begin to affect Redis performance, and if some or most clients send large numbers of commands, the sustainable number is much lower.
Check the current client status
# Check the status of all connected clients
127.0.0.1:6379> client list
id=821882 addr=10.25.138.2:60990 fd=8 name= age=53838 idle=24 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=ping
This looks a little convoluted, because current and historical connections are mixed together:
addr: the address and port of the client, covering current and historical connections
age: the lifetime of the client connection, that is, how long it has been connected, in seconds
idle: the idle time of the client, that is, how long it has gone without any operation, in seconds
db: the database being operated on; Redis provides db0~db15 by default
cmd: the last command the client ran
In other words, a small idle means the client has just done something, while a large one marks a historical record; likewise a small age means a newly established connection and a large one an old connection. Occasionally a user runs the scan or keys command, which puts heavy load on a Redis instance holding a lot of data, so this deserves special attention.
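A hedged sketch of turning one client list line into a dictionary for exactly this kind of idle/age analysis (the helper name is my own; the line is the example above):

```python
# Parse one "client list" line of space-separated key=value pairs.
def parse_client_line(line):
    return dict(kv.split("=", 1) for kv in line.split())

line = ("id=821882 addr=10.25.138.2:60990 fd=8 name= age=53838 idle=24 "
        "flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 "
        "oll=0 omem=0 events=r cmd=ping")
client = parse_client_line(line)
print(client["addr"], client["cmd"])   # 10.25.138.2:60990 ping
print(int(client["idle"]) > 3600)      # False: recently active
```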
Finding oversized keys:
Under Redis's single-threaded processing model, operations on keys holding large amounts of data visibly hurt performance, so when necessary we should identify them and hand them to the developers for optimization.
# Find big keys
redis-cli -h * -a * -p * --bigkeys
# View the idle time of a key
127.0.0.1:6379> OBJECT IDLETIME keyname
--bigkeys output explained:
1. The command uses scan to iterate over keys, so you need not worry about it blocking Redis while it runs.
2. The output falls roughly into two parts; the part above the summary merely shows scanning progress. The summary section gives the largest key of each data structure, so that is the part that matters.
3. Only string keys are measured by byte length; list, set, zset and the like are measured by element count, which does not necessarily mean they occupy much memory. Their memory use must be computed separately.
Once you have the name of the largest key, look at its rough size:
# View the serialized length of a key
127.0.0.1:6379> debug object keyname
Output fields:
at: the memory address of the key's value
refcount: the number of references
encoding: the encoding type
serializedlength: the serialized length after compression, in bytes (B). Because the compression effect depends on the encoding type, it does not necessarily reflect the size in memory, but it is a useful reference.
lru_seconds_idle: the idle time
In the end, the big-key figure to watch is serializedlength.
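A small sketch of extracting serializedlength from a debug object reply; the reply string below is illustrative (the address and values are made up), and the helper is my own:

```python
# Pull "key:value" tokens out of a DEBUG OBJECT reply string.
def parse_debug_object(reply):
    fields = {}
    for token in reply.split():
        if ":" in token:
            k, _, v = token.partition(":")
            fields[k] = v
    return fields

reply = ("Value at:0x7f36f2980900 refcount:1 encoding:embstr "
         "serializedlength:17 lru:2421685 lru_seconds_idle:10")
info = parse_debug_object(reply)
print(int(info["serializedlength"]))  # 17
```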
There is also a tool, rdbtools, that can comprehensively analyze the key information in Redis. It is not a built-in feature, so installation takes a little effort for intranet users; a separate article will cover it in detail.
Latency caused by data persistence
Redis's persistence work itself introduces latency, and a reasonable persistence policy must weigh the data's safety requirements against performance:
1. AOF + fsync always guarantees data safety absolutely, but every operation triggers an fsync, which has an obvious impact on Redis performance.
2. AOF + fsync every second is a good compromise, performing fsync once per second.
3. AOF + fsync never offers the best performance among the AOF options. RDB persistence usually performs better than AOF, but pay attention to the RDB policy configuration.
4. Every RDB snapshot and AOF rewrite requires the Redis main process to fork. The fork itself can be time-consuming, depending on the CPU and the amount of memory Redis occupies. Schedule RDB snapshots and AOF rewrites according to the actual workload to avoid the latency caused by forking too often.
For example, when Redis forks a child process, it must copy the memory page table to the child. For a Redis instance occupying 24 GB of memory, that means copying 24 GB / 4 kB * 8 = 48 MB of page-table data. On a physical machine with a single Xeon at 2.27 GHz, this fork takes about 216 ms.
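That arithmetic can be sketched as a function, assuming 4 kB pages and 8-byte page-table entries as in the example:

```python
# Estimate the page-table data a fork must copy for a given instance size.
def fork_copy_bytes(instance_bytes, page_size=4096, pte_size=8):
    # one page-table entry per page of the instance's memory
    return instance_bytes // page_size * pte_size

GB = 1024 ** 3
MB = 1024 ** 2
print(fork_copy_bytes(24 * GB) // MB, "MB")  # 48 MB
```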
You can view the time (in microseconds) of the last fork operation through the latest_fork_usec field returned by the INFO command.
Latency caused by swap
When Linux moves memory pages used by Redis to swap space, the Redis process blocks, causing abnormal latency in Redis. Swapping usually happens when physical memory is insufficient or when some process performs heavy I/O; both situations should be avoided as far as possible.
The kernel records a process's swap usage in /proc/<redis pid>/smaps. By inspecting this file you can determine whether a Redis delay is caused by swap: if large Swap values appear there, swap is the most likely culprit.
As an example, the output below shows the current swap usage is 0 kB, i.e. swap is not being used:
# /proc/<pid>/smaps shows the memory mappings of the running process,
# including the runtime libraries (.so), heap and stack.
cat /proc/`ps aux | grep redis | grep -v grep | awk '{print $2}'`/smaps
00400000-00531000 r-xp 00000000 fc:02 805438521   /usr/local/bin/redis-server
Size:               1220 kB
Rss:                 924 kB
Pss:                 924 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:       924 kB
Private_Dirty:         0 kB
Referenced:          924 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
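To check swap usage programmatically, a sketch that sums the Swap: lines of an smaps dump; the sample content is abridged and the second mapping is made up to show a nonzero total:

```python
# Sum the "Swap:" lines (in kB) of a /proc/<pid>/smaps dump.
def total_swap_kb(smaps_text):
    total = 0
    for line in smaps_text.splitlines():
        if line.startswith("Swap:"):      # skips "SwapPss:" lines
            total += int(line.split()[1])
    return total

sample = """00400000-00531000 r-xp 00000000 fc:02 805438521 /usr/local/bin/redis-server
Size: 1220 kB
Rss: 924 kB
Swap: 0 kB
SwapPss: 0 kB
7f0000000000-7f0000010000 rw-p 00000000 00:00 0
Swap: 128 kB
"""
print(total_swap_kb(sample))  # 128
```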
What if memory is full?
A full Redis memory is genuinely troublesome, but however troublesome, it has to be dealt with; start from the principles of the Redis architecture.
First, understand that "Redis memory is full" does not mean Redis is using 100% of system memory. Why? We know Redis persists via save or bgsave, and the commonly used (and default) bgsave forks a child process that writes a copy of the in-memory data to disk as an .rdb file. Here lies the problem: in the worst case, bgsave needs about as much extra memory as the data it is saving. Strictly speaking, once Redis uses more than 50% of system memory, its memory can already be called full.
The persistence discussion above only said that persistence blocks operations and causes delays; with memory full, the growing data volume makes persistence-induced delays worse, and by default Redis retries every minute after a persistence failure.
So the problem arises: memory is full, persistence fails, a minute later Redis persists again, a vicious circle forms, and Redis performance plummets. What then?
Changing the persistence policy is a stopgap.
You can first try turning off the behavior that rejects all client write requests after a persistence failure:
config set stop-writes-on-bgsave-error no
But this treats the symptom rather than the cause; we are simply ignoring the problem.
Another option is to turn off RDB persistence entirely:
config set save ""
Why this helps is obvious: with persistence off, nothing blocks, so Redis performance is assured again. But it introduces a new problem: without persistence, the in-memory data is lost if the redis-server process restarts or stops, which is rather dangerous. And the full-memory problem itself remains; if memory usage reaches 100% of system memory, or even triggers the system OOM killer, it becomes a big pit, because memory is wiped clean and the data is simply gone. That is why this is called a stopgap.
The right approach is: once operations are no longer blocked, delete whatever data can be deleted, bring persistence back up, and then prepare to scale out.
Where to look at occupied memory was covered above, but note that "memory full" is not defined only by actual usage; fragments count too. For example:
# In this case, memory is obviously full
used_memory_human:4.2G
maxmemory_human:4.00G
# But in this case, memory is also effectively full
used_memory_human:792.30M
used_memory_rss_human:3.97G
used_memory_peak_human:4.10G
maxmemory_human:4.00G
Because unreleased memory fragments still occupy memory space: to the system, a fragment is memory held by redis-server, not free memory, so the remainder may still be too small for bgsave. So how do we deal with the fragments?
Before Redis 4.0 there was no better way than restarting redis-server; later versions added an active defragmentation parameter that eliminates this problem.
The fragmentation problem actually has a large impact: under normal circumstances this occupied-but-unusable memory not only wastes space in Redis but also raises the risk of memory filling up. As mentioned above, once the fragmentation ratio exceeds 1.5, it is time to think about reclaiming it.
What if the in-memory data really must be kept? Then the only option is to give something up: delete unnecessary data so that memory suffices for a bgsave, then restart to reclaim the fragments. Better yet, upgrade to 4.0+ and avoid this class of problem afterwards.
Optimization suggestion
System optimization
1. Disable transparent huge pages
Transparent HugePages lets the kernel's khugepaged thread dynamically allocate huge pages at run time. It is enabled by default in most Linux distributions, but its drawback is that it can delay memory allocation, which is unfriendly to memory-hungry applications such as Oracle and Redis, so it is recommended to turn it off.
# Disable Transparent HugePages; the default state is [always]
echo never > /sys/kernel/mm/transparent_hugepage/enabled
2. Deploy Redis on physical machines. Needless to say, virtual machines and Docker add some latency; there is no need to sacrifice that performance for the sake of easier management.
3. Use connection pooling instead of frequently disconnecting and reconnecting; the benefit is self-evident.
4. Batch data operations from the client should be done in a single interaction using the pipeline feature.
Behavior optimization
1. If the cached data set is under 4 GB, consider a 32-bit Redis instance. Pointers on a 32-bit build are half the size of 64-bit ones, so the instance uses less memory. Redis dump files are compatible between 32-bit and 64-bit, so if reducing memory footprint matters, try 32-bit first and switch to 64-bit later if needed.
2. Use the Hash data structure whenever possible. Redis stores hashes with fewer than about 100 fields very efficiently, so when you do not need set operations or list push/pop operations, prefer the Hash structure. Its commands are HSET (key, field, value) and HGET (key, field), which store and retrieve individual fields.
3. Set expiration times on keys whenever possible. An easy way to reduce memory usage is to ensure an expiration time is set whenever an object is stored. If a key is only needed for a certain period, or old keys are unlikely to be reused, use the expiration commands (expire, expireat, pexpire, pexpireat) so that Redis deletes the key automatically when it expires. The ttl command queries the remaining lifetime in seconds: -2 means the key does not exist, -1 means no expiration is set (the key is permanent).
4. Use multi-argument commands: if the client sends a large number of commands in a very short time, response times slow down markedly, because later commands wait behind the queued ones. For example, looping a single-element push 1000 times to fill a list performs poorly; it is better to build the list of 1000 elements on the client side and send them to the Redis service in a single multi-argument LPUSH or RPUSH call.
5. Pipeline commands: another way to reduce round trips is to execute several commands together through a pipeline, cutting the latency caused by network overhead. Sending 10 commands individually incurs 10 network round trips, while a pipeline returns all results at once for a single round trip. Redis itself supports pipelining, as do most clients; if the instance's latency is noticeable, pipelining is very effective at reducing it.
6. Avoid slow commands on large collections: if command throughput drops and latency rises, the cause may be commands with high time complexity, each of which spends more time retrieving data from the collection. Reducing the use of high-complexity commands can significantly improve Redis performance.
7. Limit the number of client connections: since Redis 2.6, the maximum number of client connections can be set with the maxclients property in the configuration file (redis.conf), or with config set maxclients in redis-cli. Depending on the connection load, set it to between 110% and 150% of the expected peak connection count; Redis rejects and immediately closes connections beyond the limit. Capping connections is important to contain unexpected connection growth. In addition, a failed connection attempt returns an error message, letting the client know that Redis has too many connections so it can take appropriate action. Both practices are important for controlling connection counts and keeping Redis at its best performance.
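As a small illustration of the ttl return codes described in item 3 above, a sketch that maps them to descriptions (the function name is my own):

```python
# Interpret the integer returned by the TTL command.
def describe_ttl(ttl):
    if ttl == -2:
        return "key does not exist"
    if ttl == -1:
        return "no expiration set (persistent)"
    return f"expires in {ttl} seconds"

print(describe_ttl(-2))   # key does not exist
print(describe_ttl(-1))   # no expiration set (persistent)
print(describe_ttl(300))  # expires in 300 seconds
```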