Redis is an in-memory, key-value NoSQL database. Because its structure is simple and it lives in memory, it is very fast. How fast? Mechanical hard disks were never slow for their day, but memory reads and writes are said to be on the order of 60,000 times faster than a mechanical disk, so you can imagine how much faster redis is than disk-based storage. Solid state drives have narrowed the gap considerably, but there is no need for a more detailed comparison here; it is enough to know the general picture.
Installation
Well, first of all, installation. Installing redis is very simple, and the examples here assume the servers run Linux.
To be clear first: an in-memory database does not mean no disk space is needed at all. For data safety, the configuration usually backs some data up to disk, so a certain amount of disk space still has to be set aside.
Download address:
Wget "http://download.redis.io/releases/redis-3.2.0.tar.gz"
This is the source package to compile. 3.2 is the latest release at the time of writing, but the differences between versions are small; older versions simply do not support cluster or master-slave features, while the core functionality is the same.
Remember to install the dependency packages with yum when prompted; they are mostly C-language libraries, and most of them are already installed when the system is initialized.
Let's take a look at the installation method, which is simple:
# decompress the source package
tar xzf redis-3.2.0.tar.gz
# enter the extracted folder
cd redis-3.2.0
# compile and install
make
make install
# by the way, to install into a custom directory you can also run
make PREFIX=/your/directory install
When the make install command is executed, several executables are generated in the /usr/local/bin directory, which serve the following purposes:
redis-server: the Redis server daemon
redis-cli: the Redis command-line client; you can also operate the server over telnet using its plain-text protocol
redis-benchmark: the Redis performance testing tool, used to measure read and write performance on the current system
redis-check-aof: AOF file check and repair tool
redis-check-dump: RDB dump file check tool
After installation, it depends on the configuration. The name of the configuration file can be changed freely, and the location is not fixed, because you can specify the configuration file to start when you start.
Do you remember the build directory? It contains configuration file templates that can be copied and used; of course, they need to be adapted to your own needs.
cd redis-3.2.0
ll *.conf
-rw-rw-r-- 1 root root 45390 May  6 15:11 redis.conf
-rw-rw-r-- 1 root root  7109 May  6 15:11 sentinel.conf
One is the configuration file of redis itself; the other, sentinel.conf, is the configuration file for Sentinel, which is used for redis cluster/high-availability deployments.
There are too many configurations, so let's first look at some key points:
cat redis.conf
# allow redis to run in the background
daemonize yes
# set the port; a non-default port is preferable
port 6666
# for security, it is best to bind to the private network address
bind 10.10.2.2
# specify the PID file path of this redis instance, to distinguish multiple instances
pidfile /data/redis/data/config/redis_6666.pid
# specify the log file path of this redis instance
logfile "/data/redis/data/logs/redis_6666.log"
# specify the RDB file name, used to back data up to disk and to distinguish different instances
dbfilename dump_6666.rdb
# specify the working directory of this redis instance, where the RDB/AOF files are stored
dir /data/redis/data
# authentication password for this redis; redis is very fast, so the password must be strong enough
requirepass gggggggGGGGGGGGG999999999
# limit the maximum memory of this redis instance; around 45% of available memory is recommended,
# and at most 95% of the system's available memory. config set maxmemory can change it online,
# but the change is lost on restart unless persisted with the config rewrite command
maxmemory 1024000000
# LRU eviction: there are several strategies; allkeys-lru is chosen here
maxmemory-policy allkeys-lru
# turn off automatic RDB persistence if appropriate. It is on by default and periodically compresses
# and saves the full redis data set; because redis is single-threaded this is a resource-consuming,
# blocking operation, and in cache-only environments the data may not be that important
# save ""
# AOF persistence is off by default and must be enabled manually; it blocks less than RDB
# appendonly yes
# after enabling AOF, set the following two parameters to keep the AOF file from growing
# without bound and affecting later operations
# auto-aof-rewrite-percentage 100
# auto-aof-rewrite-min-size 64mb
The detailed analysis is as follows:
1 daemonize no
By default, redis does not run in the background. If you need to run in the background, change the value of this item to yes.
2 pidfile /var/run/redis.pid
When Redis is running in the background, Redis defaults to putting the pid file in / var/run/redis.pid, which you can configure to another address. When running multiple redis services, you need to specify different pid files and ports
3 port
Listening port. Default is 6379.
4 # bind 127.0.0.1
Specifies that Redis only accepts requests from the given IP address; if it is not set, requests from all addresses are processed. For security it is best to set this in a production environment. It is commented out (not enabled) by default.
5 timeout 0
Sets the client connection timeout, in seconds. If a client issues no command within this period, the connection is closed; 0 disables the timeout.
6 tcp-keepalive 0
Controls TCP keepalive for client connections: the keepalive probes are sent by the server side. The default is 0, which means disabled.
7 loglevel notice
The log level is divided into four levels, debug,verbose, notice, and warning. Notice is generally enabled in production environment.
8 logfile stdout
Configure the log file address. Standard output is used by default, that is, it is printed on the window of the command line terminal and modified to the log file directory.
9 databases 16
To set the number of databases, you can use the SELECT command to switch databases. The default database is library 0. Default 16 libraries
10
save 900 1
save 300 10
save 60 10000
RDB automatic persistence parameters: how often data snapshots are saved, that is, how often data is persisted to the dump.rdb file. Each rule reads as "if at least this many keys changed within this many seconds, trigger a snapshot save."
The default settings mean:
if (10000 keys changed within 60 seconds) {
    make a snapshot backup
} else if (10 keys changed within 300 seconds) {
    make a snapshot backup
} else if (1 key changed within 900 seconds) {
    make a snapshot backup
}
If set to empty, for example:
Save ""
That is, turn off rdb automatic persistence.
RDB automatic persistence is enabled by default; it periodically compresses and saves the full redis data set. But because redis is single-threaded, this is a relatively resource-consuming, blocking operation, and in cache-only environments the data may not be that important, so it can be turned off.
Note: the manual bgsave command can still be used even then. Also pay attention to whether an RDB file already exists under the dir path: if it does, its data will be loaded when redis restarts.
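If you are unsure whether a manual snapshot worked, a quick sketch like the following can confirm it (the port and password are only examples):
# trigger a manual background snapshot
redis-cli -p 6666 -a "yourpassword" bgsave
# lastsave returns the UNIX timestamp of the last successful save
redis-cli -p 6666 -a "yourpassword" lastsave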
11 stop-writes-on-bgsave-error yes
Whether to stop accepting writes when background saving fails. The default "yes" means all client write requests are rejected: after a snapshot failure the server effectively becomes a read-only service. With "no", the failed snapshot is simply skipped and the next one is unaffected, but if a real failure occurs the data can only be restored to the last successful snapshot.
12 rdbcompression yes
Whether to enable rdb file compression when making a data mirror backup is yes by default. Compression may require additional cpu overhead, but it can effectively reduce the size of rdb files and facilitate storage / backup / transfer / data recovery
13 rdbchecksum yes
Whether to checksum the rdb file with CRC64. The default is "yes": a CRC checksum is appended to the end of each rdb file, which makes it easy for third-party tools to verify file integrity, at the cost of roughly a 10% performance loss when saving and loading.
14 dbfilename dump.rdb
The file name of the mirror snapshot backup file, which defaults to dump.rdb. (As noted in the configuration above, the author recommends keeping redis within roughly 45% of available memory so that snapshotting can run safely.)
15 dir ./
The path where the rdb/AOF files for database backups are placed. The path is configured separately from the file name because when Redis makes a backup it first writes the current database state to a temporary file, and only when the backup completes does it replace the file specified above with that temporary file; both the temporary file and the configured backup file are placed under this path.
16 # slaveof <masterip> <masterport>
Set this instance as a slave of another redis instance by specifying its master here.
17 masterauth <master-password>
If the master requires password authentication, specify the password here.
18 slave-serve-stale-data yes
Whether clients are still allowed to read possibly stale data when the master is down or master-slave replication is in progress. With "yes", the slave keeps serving read-only requests even though the data may be out of date; with "no", any data request sent to this server (from clients or from this server's own slaves) is answered with an error.
19 slave-read-only yes
Whether slave is "read-only". "yes" is strongly recommended.
20 # repl-ping-slave-period 10
Interval (in seconds) for slave to send ping messages to the specified master, default is 10
21 # repl-timeout 60
In communication between slave and master, the maximum idle time is 60 seconds by default. Timeout will cause the connection to close
22 repl-disable-tcp-nodelay no
Whether to disable the TCP_NODELAY option on the connection between slave and master. "yes" means disabled, so replication data is sent in coalesced packets (packet size limited by the socket buffer), which reduces the number of TCP exchanges and improves socket efficiency, but small writes are buffered rather than sent immediately, so the receiver may see some delay. "no" means TCP_NODELAY is enabled: any data is sent immediately, with better timeliness but lower efficiency. Setting it to no is recommended.
23 slave-priority 100
Suitable for Sentinel module (unstable,M-S cluster management and monitoring), additional configuration file support is required. The weight value of slave. Default is 100. When the master fails, Sentinel will find the slave with the lowest weight (> 0) from the slave list and promote it to master. If the weight value is 0, the slave is an "observer" and does not participate in the master election.
24 # requirepass foobared
Set the password that you need to use before making any other assignments after the client connects. Warning: because redis is quite fast, under a better server, an external user can try 150K passwords per second, which means you need to specify a very strong password to prevent cracking.
25 # rename-command CONFIG 3ed984507a5dcd722aeade310065ce5d (method: MD5 ('config ^!'))
Rename directives. For some instructions related to "server" control, you may not want remote client (non-administrator users) links to use them casually, so you can rename these instructions to other strings that are "difficult to read".
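A hedged illustration of that renaming (the hash-like strings below are arbitrary placeholders, not required values; renaming a command to an empty string disables it entirely):
rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
rename-command FLUSHALL ""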
26 # maxclients 10000
Limit the number of customers connected at the same time. When the number of connections exceeds this value, redis will no longer receive other connection requests, and clients will receive error information when they try to connect. The default is 10000. You should consider the system file descriptor limit, which should not be too large and waste file descriptors, depending on the specific situation.
27 # maxmemory
The maximum memory redis may use as a cache, in bytes. The default 0 means "unlimited", bounded only by the size of the OS's physical memory (swap may be used if physical memory runs out). This value should not approach the machine's physical memory size; from a performance and implementation standpoint, roughly three quarters of physical memory is a reasonable upper bound. The setting works together with "maxmemory-policy": when the data in redis reaches maxmemory, the eviction ("purge") policy is triggered, and while memory is full any write command (set, lpush, and so on) triggers that policy. In practice it is recommended that all redis physical machines have identical hardware (identical memory) and that "maxmemory" and the policy are configured identically on master and slave. The value can be changed online with the client command config set maxmemory; the change takes effect immediately but is lost on restart unless written back with the config rewrite command.
When memory is full and a set command arrives, redis first tries to evict keys that carry expire information, regardless of how soon they expire; among them, the keys closest to expiry are removed first. If all keys with expire information have been removed and there is still not enough memory, an error is returned: redis then accepts no more write requests, only get requests. This behaviour makes maxmemory best suited to using redis as a memcached-like cache.
28 # maxmemory-policy volatile-lru
When out of memory, the data cleanup policy defaults to "volatile-lru".
volatile-lru -> apply the LRU (least recently used) algorithm to the keys in the "expired set" (keys given an expiration time with the expire command are added to this set). Expired / least-recently-used data is removed first; if removing everything in the expired set still does not free enough memory, an OOM error results.
allkeys-lru -> apply the LRU algorithm to all keys
volatile-random -> pick keys at random from the expired set and remove the selected key-value pairs until there is enough memory; if removing everything in the expired set is still not enough, OOM
allkeys-random -> pick keys at random from all keys and remove the selected key-value pairs until there is enough memory
volatile-ttl -> remove data from the expired set by TTL (shortest remaining time to live first)
noeviction -> do not evict anything; simply return an OOM error to write operations
In addition, if the expiration of the data will not bring an exception to the "application system", and the write operation in the system is relatively intensive, it is recommended to adopt "allkeys-lru".
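If you need to adjust these limits on a running instance, the config commands mentioned earlier apply here as well (the values and password are examples):
# set the memory ceiling and eviction policy online
redis-cli -a "yourpassword" config set maxmemory 1024000000
redis-cli -a "yourpassword" config set maxmemory-policy allkeys-lru
# persist the running configuration back to redis.conf
redis-cli -a "yourpassword" config rewrite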
29 # maxmemory-samples 3
The default value is 3, and the above LRU and minimum TTL policies are not rigorous policies, but are approximately estimated, so you can select the sampling value to check.
29 appendonly no
AOF persistence switch. By default redis backs up the database image to disk asynchronously in the background (RDB), but that backup is time-consuming and cannot run very frequently. So redis offers another, more granular way of backup and disaster recovery: with appendonly mode turned on, redis appends every write request it receives to the appendonly.aof file, and on restart it replays that file to rebuild the previous state. This makes appendonly.aof grow large, so redis also supports the BGREWRITEAOF command to compact it. If data migrations are not frequent, a common production practice is to disable the RDB image, enable appendonly.aof, and rewrite appendonly.aof once a day during low-traffic hours.
In addition, for master machines, mainly responsible for writing, it is recommended to use AOF, for slave, mainly responsible for reading, pick 1-2 to turn on AOF, and the rest are recommended to close
30 # appendfilename appendonly.aof
Name of aof file, default is appendonly.aof
31 appendfsync everysec
Sets the frequency at which appendonly.aof files are synchronized. Always means that every write is synchronized, and everysec (default) means that writes are accumulated and synchronized once a second. No does not take the initiative to fsync, it is done by OS itself. This needs to be configured according to the actual business scenario.
32 no-appendfsync-on-rewrite no
During aof rewrite, whether or not to suspend the use of file synchronization policy for newly recorded append in aof mainly considers disk IO expenditure and request blocking time. The default is no, which means "no delay". New aof records will still be synchronized immediately.
33 auto-aof-rewrite-percentage 100
When the AOF log grows beyond the specified percentage, rewrite the log file; setting it to 0 disables automatic AOF rewriting. The purpose of rewriting is to keep the AOF as small as possible while still preserving the most complete data. The percentage is measured against the size after the last rewrite: each time a rewrite finishes, redis records the size of the new AOF file (say A), and the next rewrite is triggered when the file grows to A * (1 + percentage/100); the current AOF size is checked each time a record is appended.
34 auto-aof-rewrite-min-size 64mb
The minimum file size that triggers aof rewrite, and the minimum file size triggered by aof file rewrite (mb,gb). Rewrite will be triggered only if the aof file is greater than this size. Default is "64mb".
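A quick worked example: assuming the last rewrite left an AOF file of 80 MB, with auto-aof-rewrite-percentage 100 the next automatic rewrite fires when the file reaches roughly 160 MB; and because of auto-aof-rewrite-min-size, files smaller than 64 MB never trigger an automatic rewrite no matter how fast they grow.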
35 lua-time-limit 5000
Maximum time a Lua script is allowed to run, in milliseconds.
36 slowlog-log-slower-than 10000
The slow operation log threshold, in microseconds (1/1,000,000 of a second). If an operation takes longer than this value, the command information is recorded (in memory, not in a file). The "operation time" does not include network I/O, only the time of the in-memory execution after the request reaches the server. A value of 0 records every operation.
37 slowlog-max-len 128
The maximum number of entries kept in the slow operation log. Records are queued, and once this length is exceeded the oldest records are dropped. Slow-log records can be inspected with the SLOWLOG subcommands (e.g. SLOWLOG GET 10, SLOWLOG RESET).
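A few quick examples of those SLOWLOG subcommands, run from the command line:
redis-cli slowlog get 10     # show the 10 most recent slow entries
redis-cli slowlog len        # number of entries currently stored
redis-cli slowlog reset      # clear the slow log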
38
hash-max-ziplist-entries 512
Data structures of type hash can be encoded using ziplist and hashtable. The characteristic of ziplist is that file storage (and memory storage) requires less space, and the performance is almost the same as hashtable when the content is small. Therefore, redis defaults to ziplist for the hash type. If the number of entries in hash or the length of value reaches the threshold, it will be refactored to hashtable.
This parameter refers to the maximum number of entries allowed to be stored in ziplist. The default is 512, and the recommended value is 128.
hash-max-ziplist-value 64
Maximum number of bytes of value allowed in ziplist. Default is 64, and recommended is 1024.
39
list-max-ziplist-entries 512
list-max-ziplist-value 64
For the list type, two encodings are used, ziplist and linkedlist; the explanation is the same as for hash above.
40 set-max-intset-entries 512
The maximum number of entries allowed to be saved in intset. If the threshold is reached, intset will be reconstructed to hashtable.
41
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
Zset is an ordered set, and there are 2 coding types: ziplist,skiplist. Because "sorting" will consume extra performance, when there is more data in the zset, it will be refactored to skiplist.
42 activerehashing yes
Whether to enable incremental rehashing of the top-level hash tables; if memory allows, enable it. Rehashing greatly improves key-value access efficiency.
43
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
Client buffer control. In the interaction between the client and the server, each connection is associated with a buffer that is used to queue the response information waiting to be accepted by the client. If client can not consume response information in time, then buffer will be constantly overstocked and put memory pressure on server. If the backlog of data in the buffer reaches the threshold, the connection will be closed and the buffer will be removed.
Buffer control classes are: normal -> ordinary client connections; slave -> connections to slaves; pubsub -> pub/sub connections, which most often cause this problem, because the publishing side may publish messages intensively while the subscriber side cannot consume them fast enough.
The directive format is: client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>, where hard is the maximum buffer size; as soon as that threshold is reached, the connection is closed.
Soft is a "tolerance value" that works together with <soft seconds>: if the buffer stays above the soft limit continuously for that many seconds, the connection is closed; if it drops back below soft before the time is up, the connection is kept.
If both hard and soft are set to 0, buffer control is disabled. Usually the value of hard is greater than soft.
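Restating the three configuration lines above with that format spelled out (the values are the document's own examples):
client-output-buffer-limit normal 0 0 0          # 0 0 0 disables buffer control for ordinary clients
client-output-buffer-limit slave 256mb 64mb 60   # close a slave link at 256mb, or if it stays above 64mb for 60 seconds
client-output-buffer-limit pubsub 32mb 8mb 60    # the same logic for pub/sub connections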
44 hz 10
The frequency at which Redis server performs background tasks; the default is 10. A higher value means redis runs these periodic tasks more often (times per second); the tasks include checking for expired keys, closing idle timed-out connections, and so on. The value must be greater than 0 and no more than 500. A higher value consumes more CPU cycles because the background tasks are polled more often, while a lower value makes things like key expiration less timely. The default is recommended.
45
# include /path/to/local.conf
# include /path/to/other.conf
Load additional configuration files.
# parameters can be modified online with the following command, but the change is lost after a restart
config set maxmemory 644245094
# write the running configuration back to the configuration file with
config rewrite
Then take a look at the startup.
# start redis using the configuration file
redis-server /(custom path)/redis.conf
# test whether it works; the port may be omitted to use the default, and a trailing command makes the call non-interactive
redis-cli -p 6379 -a "password" [command]
set mykey "hi"
OK
get mykey
"hi"
# shut down redis
redis-cli shutdown
# or kill <pid>
Note: before restarting the server, you need to enter the shutdown save command on the Redis-cli tool, which means forcing the Redis database to perform a save operation and shutting down the Redis service, which ensures that no data is lost during the Redis shutdown.
Operation
First of all, let's take a look at the command line terminal output mode operation introduction:
# start the Redis client tool from the shell command line
redis-cli -h 127.0.0.1 -p 6379 -a '*'
# clear the currently selected database to make the following examples easier to follow
redis 127.0.0.1:6379> flushdb
OK
# add test data of type String
redis 127.0.0.1:6379> set mykey 2
OK
redis 127.0.0.1:6379> set mykey2 "hello"
OK
# add test data of type Set
redis 127.0.0.1:6379> sadd mysetkey 1 2 3
(integer) 3
# add test data of type Hash
redis 127.0.0.1:6379> hset mmtest username "stephen"
(integer) 1
# get all keys in the current database that match the pattern given as a parameter; note that
# the command does not distinguish the type of value associated with each key
redis 127.0.0.1:6379> keys my*
1) "mysetkey"
2) "mykey"
3) "mykey2"
# delete two keys
redis 127.0.0.1:6379> del mykey mykey2
(integer) 2
# check whether the deleted key still exists; the result shows mykey is indeed gone
redis 127.0.0.1:6379> exists mykey
(integer) 0
# check a key that was not deleted, for comparison with the previous result
redis 127.0.0.1:6379> exists mysetkey
(integer) 1
# move mysetkey from the current database into the database with ID 1; the result shows success
redis 127.0.0.1:6379> move mysetkey 1
(integer) 1
# open the database with ID 1
redis 127.0.0.1:6379> select 1
OK
# check whether the moved key exists there; the result shows it does
redis 127.0.0.1:6379[1]> exists mysetkey
(integer) 1
# reopen the default database with ID 0
redis 127.0.0.1:6379[1]> select 0
OK
# check that the moved key no longer exists here; the result shows it has been moved away
redis 127.0.0.1:6379> exists mysetkey
(integer) 0
# prepare new test data
redis 127.0.0.1:6379> set mykey "hello"
OK
# rename mykey to mykey1
redis 127.0.0.1:6379> rename mykey mykey1
OK
# since mykey has been renamed, getting it again returns nil
redis 127.0.0.1:6379> get mykey
(nil)
# fetch it via the new key name
redis 127.0.0.1:6379> get mykey1
"hello"
# because mykey no longer exists, an error message is returned
redis 127.0.0.1:6379> rename mykey mykey1
(error) ERR no such key
# prepare test data for renamenx
redis 127.0.0.1:6379> set oldkey "hello"
OK
redis 127.0.0.1:6379> set newkey "world"
OK
# the command does not take effect because newkey already exists
redis 127.0.0.1:6379> renamenx oldkey newkey
(integer) 0
# check the value of newkey: it was not overwritten by renamenx either
redis 127.0.0.1:6379> get newkey
"world"
2. PERSIST/EXPIRE/EXPIREAT/TTL:
# test data prepared for the following examples
redis 127.0.0.1:6379> set mykey "hello"
OK
# set the key to time out after 100 seconds
redis 127.0.0.1:6379> expire mykey 100
(integer) 1
# check how many seconds are left with the ttl command
redis 127.0.0.1:6379> ttl mykey
(integer) 97
# run persist right away: the expiring key becomes a persistent key again, removing its timeout
redis 127.0.0.1:6379> persist mykey
(integer) 1
# ttl now tells us that the key no longer times out
redis 127.0.0.1:6379> ttl mykey
(integer) -1
# prepare the data for the following expire example
redis 127.0.0.1:6379> del mykey
(integer) 1
redis 127.0.0.1:6379> set mykey "hello"
OK
# set the key to time out after 100 seconds
redis 127.0.0.1:6379> expire mykey 100
(integer) 1
# use ttl to see how many seconds are left; the result shows 96 seconds remain
redis 127.0.0.1:6379> ttl mykey
(integer) 96
# update the key's timeout to 20 seconds; the return value shows the command succeeded
redis 127.0.0.1:6379> expire mykey 20
(integer) 1
# verify with ttl again; the result shows the timeout has been updated
redis 127.0.0.1:6379> ttl mykey
(integer) 20
# now update the value of the key, which invalidates its timeout
redis 127.0.0.1:6379> set mykey "world"
OK
# ttl shows that after the last command modified the key, its timeout was indeed invalidated
redis 127.0.0.1:6379> ttl mykey
(integer) -1
3. TYPE/RANDOMKEY/SORT:
# this command returns none because the key mm does not exist in the database
redis 127.0.0.1:6379> type mm
none
# the value of mykey is of type string, so string is returned
redis 127.0.0.1:6379> type mykey
string
# prepare a key whose value is of type set
redis 127.0.0.1:6379> sadd mysetkey 1 2
(integer) 2
# the key mysetkey is a set, so the string "set" is returned
redis 127.0.0.1:6379> type mysetkey
set
# return a random key from the database
redis 127.0.0.1:6379> randomkey
"oldkey"
# clear the currently open database
redis 127.0.0.1:6379> flushdb
OK
# nil is returned because there is no data left
redis 127.0.0.1:6379> randomkey
(nil)
# turn this instance into a slave of another redis online; the existing data set is discarded
# and the new master is synchronized instead. Many redis servers require a password, so it may
# be necessary to configure it first:
redis 127.0.0.1:6379> config set masterauth 12312
redis 127.0.0.1:6379> slaveof <master ip> <master port>
# turn the slave configuration off online; the data set that has already been synchronized is not discarded
redis 127.0.0.1:6379> SLAVEOF NO ONE
# parameters can be modified online with the following command, but the change is lost after a restart
redis 127.0.0.1:6379> config set maxmemory 644245094
# write the running configuration back to the configuration file with
redis 127.0.0.1:6379> config rewrite
These are only some of the usages above, but it is enough for general tests. There are others such as list usage and hash usage. If you are interested, you can study them more deeply.
Similar to the concept of tail and tailf, the terminal output mode is always connected, while the following command output mode is to produce a result when the command is executed, and will not continue to connect.
Let's take a look at the command result output mode:
redis-cli parameters:
-h set the host IP address to connect to; the default is 127.0.0.1
-p set the host port; the default is 6379
-s connect via a server socket (overrides host and port)
-a password to use when connecting to the server
-r repeat the specified command N times
-i wait N seconds between commands, e.g. -i 0.1 info (run every 0.1 seconds)
-n select database N on connect, e.g. -n 3 (connect to database 3)
-x read the last argument from standard input
-d define the delimiter between multiple output fields (default: \n)
--raw return output in raw format
--latency enter a special mode that continuously samples latency
--slave simulate a slave connection to the master server and display the feedback it receives
--pipe use pipeline protocol mode
--bigkeys sample and show the keys holding the largest values, e.g. --bigkeys -i 0.1
--help display command-line help
--version display the version number
Example:
$ redis-cli                         # enter interactive command-line mode
$ redis-cli -n 3 set mykey "hi"     # insert mykey into database 3
$ redis-cli -r 3 info               # repeat the info command three times

There are also some special usages:
GETSET: ./redis-cli getset nid 987654321      # return the original value of the specified key and assign it a new value
MGET:   ./redis-cli mget nid uid ...          # get the values of multiple keys
SETNX:  ./redis-cli setnx nnid 888888         # set the value only when the specified key does not exist; if it exists, nothing is set
SETEX:  ./redis-cli setex nid 5 666666        # set a value that expires after 5 seconds, i.e. give the key/value a validity period
MSET:   ./redis-cli mset nid0001 "0001" nid0002 "0002" nid0003 "0003"   # save several key-value pairs at once
INCR:   ./redis-cli incr count                # increment (+1) the value of the given key; the value must be an integer
INCRBY: ./redis-cli incrby count 5            # increment the value of the given key by the specified step
DECR:   ./redis-cli decr count                # decrement (-1) the value of the given key
DECRBY: ./redis-cli decrby count 7            # decrement the value of the given key by the specified step
APPEND: ./redis-cli append content "bad"      # append a value to the specified key; if the key does not exist it is created
        ./redis-cli append content "good"
SUBSTR: ./redis-cli substr content 0 4        # return part of the string value of the specified key

# list operations, the essentials:
RPUSH key string        # append a value to the end of the list
LPUSH key string        # prepend a value to the head of the list
LLEN key                # list length
LRANGE key start end    # return a range of values from the list, comparable to a paged query in mysql
LTRIM key start end     # keep only a range of values in the list
LINDEX key index        # get the value at a specific index in the list; note the O(n) complexity
LSET key index value    # set the value at a given position in the list
RPOP key                # pop a value from the end of the list

# set operations:
SADD key member         # add an element
SREM key member         # remove an element
SCARD key               # return the size of the set
SISMEMBER key member    # check whether a value is in the set
SINTER key1 key2 ... keyN   # get the intersection of several sets
SMEMBERS key            # list all elements of the set

# AOF log check; add the --fix parameter to repair the log file
redis-check-aof appendonly.aof
# check the local database file
redis-check-dump dump.rdb
Pressure testing
Redis testing is divided into two ways, the first is concurrent pressure, and the other is capacity pressure.
Concurrent stress can be tested with redis-benchmark, a software that comes with redis, as follows:
redis-benchmark parameters:
-h set the host IP address to test; the default is 127.0.0.1
-p set the host port; the default is 6379
-s connect via a server socket (overrides host and port)
-c number of concurrent connections
-n total number of requests
-d size in bytes of the data payload used by the test (default 3 bytes)
-k 1: keep connections alive (default); 0: reconnect for each request
-r use random keys for SET/GET/INCR; if set to 10, keys range from rand:000000000000 to rand:000000000009
-P pipeline depth; defaults to 1 (no pipelining). Useful when network latency is high (requests and responses are sent and received in batches)
-q quiet mode; show only the basic query-per-second figures
--csv output the results in CSV format
-l loop forever, inserting test data until stopped with ctrl+c
-t run only the comma-separated list of tests, e.g. -t ping,set,get
-I idle mode: just open the idle connections (as many as set by -c, default 50) and wait
Example:
# SET/GET with a 100-byte payload, testing the redis server at host 127.0.0.1, port 6379
redis-benchmark -h 127.0.0.1 -p 6379 -q -d 100
# 5000 concurrent connections, 100000 requests, testing the redis server at host 127.0.0.1, port 6379
redis-benchmark -h 127.0.0.1 -p 6379 -c 5000 -n 100000
# send 100000 requests to the redis server, with 60 concurrent clients
redis-benchmark -n 100000 -c 60
Results (part):
====== SET ======
  (write test for the SET command)
100000 requests completed in 2.38 seconds
  (100000 requests finished within 2.38 seconds)
60 parallel clients
  (60 concurrent clients)
3 bytes payload
  (3 bytes of data written per request)
keep alive: 1
  (connections are kept alive; a single server handles these requests)
100.00% <= ... milliseconds
  (the latency distribution; the last line shows within how many milliseconds 100.00% of requests completed)
Capacity pressure, the other kind of test, can be checked with a simple shell loop that keeps inserting keys until memory runs out; only fragments of the original script survive here, so the following is a reconstructed sketch (the key names and connection options are illustrative):
a=0
while true
do
    redis-cli set key_$a value_$a > /dev/null
    #echo $a
    let a++
done
The result is simple to read: when the loop stops, look at the last info output, which shows how many keys were stored.
Performance analysis.
Info Information:
After entering the interactive prompt with redis-cli, type info all, or run redis-cli -h ${ip} -p ${port} -a "${pass}" info all directly. Usually we just type info, which is the brief mode; info all is the detailed mode.
After that, all the real-time performance information related to the Redis service is obtained, similar to the linux command top.
The data output by the info command can be divided into 10 categories, which are:
Server
Clients
Memory
Persistence
Stats
Replication
Cpu
Commandstats
Cluster
Keyspace
The following is to analyze some key information.
Server section:
redis_version: the Redis server version; some features and commands differ between versions
arch_bits: the architecture (32-bit or 64-bit), an easily overlooked pitfall in some cases
tcp_port: the TCP/IP listening port, to make sure you are operating on the right instance
uptime_in_seconds: the number of seconds that have elapsed since the Redis server started, useful to confirm whether it has been restarted
uptime_in_days: the number of days since the Redis server started, likewise useful to confirm whether it has been restarted
Clients section:
connected_clients: the number of connected clients (excluding connections from slave servers)
client_longest_output_list: the longest output list among the currently connected clients
client_biggest_input_buf: the largest input buffer among the currently connected clients
blocked_clients: the number of clients waiting on blocking commands (BLPOP, BRPOP, BRPOPLPUSH)
Memory section:
maxmemory/maxmemory_human: the maximum memory allowed by the redis.conf configuration; once exceeded, the eviction policy (e.g. LRU) starts deleting old data
used_memory/used_memory_human: the total amount of memory currently actually used by redis-server; if used memory grows beyond what the system can provide, the operating system begins moving memory to swap space in order to free physical pages for new or active pages
used_memory_rss/used_memory_rss_human: the total amount of memory allocated from the operating system, i.e. the physical memory this redis-server actually occupies as seen by the system; if it is larger than used_memory, the difference is likely fragmentation
Mem_fragmentation_ratio: it is reasonable that the memory fragmentation rate is slightly greater than 1, which means that no memory swap has occurred in redis. If the memory fragmentation rate exceeds 1.5, it means that Redis consumes 150% of the actual physical memory, of which 50% is the memory fragmentation rate. If the memory fragmentation rate is less than 1, the Redis memory allocation exceeds the physical memory, and the operating system is swapping memory. Memory swapping can cause a very significant response delay.
The calculation formula is: mem_fragmentation_ratio = used_memory_rss / used_memory.
When there is a problem with the fragmentation rate, there are three ways to solve the problem of poor memory management and improve redis performance:
1. Restart the Redis server: if the memory fragmentation rate exceeds 1.5, restarting the Redis server can invalidate the additional memory fragments and reuse them as new memory, allowing the operating system to restore efficient memory management.
2. Limit memory swapping: if the memory fragmentation rate is below 1, the Redis instance may have had part of its data swapped out to disk. Memory swapping seriously hurts Redis performance, so you should add physical memory or reduce the real memory footprint of Redis.
3. Modify the memory allocator:
Redis supports several different memory allocators such as glibc's malloc, jemalloc11, and tcmalloc, and each allocator has a different implementation on memory allocation and fragmentation. It is not recommended for ordinary administrators to modify the Redis default memory allocator, as this requires a full understanding of the differences between these memory allocators and a recompilation of Redis.
used_memory_lua: the amount of memory used by the Lua scripting engine. Redis allows Lua scripts by default, but heavy use eats into available memory
mem_allocator: the memory allocator chosen when Redis was compiled; it can be libc, jemalloc, or tcmalloc
Persistence section:
RDB information, the bgsave command is used in the operation of RDB, which is a resource-consuming persistent operation, and it is not real-time, so it is easy to cause the downtime data to disappear. If the memory capacity is full and cannot do the bgsave operation, the hidden danger will be great.
rdb_changes_since_last_save: the number of changes since the last successful RDB save. Persistence consumes resources, and its impact should be avoided under high load; the parameters below are useful references.
rdb_bgsave_in_progress: whether a bgsave operation is currently in progress; 1 means yes.
rdb_last_save_time: the UNIX timestamp of the last successful RDB file creation.
rdb_last_bgsave_time_sec: the number of seconds the most recent RDB file creation took.
rdb_last_bgsave_status: the status of the last save.
rdb_current_bgsave_time_sec: if the server is currently creating an RDB file, this field records how many seconds the operation has been running.
AOF information. AOF continuously appends commands to a persistence file, which is cheaper per operation, but the AOF file grows without bound: over a long time, or with very frequent writes, the file can become too large and fill the disk. In addition, the file is periodically rewritten (bgrewriteaof) in the background.
aof_enabled: whether AOF is enabled
aof_rewrite_in_progress: whether an AOF rewrite is currently in progress
aof_last_rewrite_time_sec: how long the last AOF rewrite took, in seconds
aof_current_rewrite_time_sec: if the server is currently rewriting the AOF file, this field records how many seconds the operation has been running
aof_last_bgrewrite_status: the status of the last background rewrite
aof_last_write_status: the status of the last write
aof_base_size: the size of the AOF file at server startup or after the last AOF rewrite
aof_pending_bio_fsync: the number of fsync calls waiting in the background I/O queue
aof_delayed_fsync: the number of fsync calls that were delayed
Stats section:
total_commands_processed: the total number of commands processed by the Redis server; the value only grows. Because Redis is single-threaded, client commands are executed in order, so if a large number of commands pile up in the queue, later commands respond more and more slowly or even block completely, and Redis performance degrades. Watch whether this value grows abnormally fast.
instantaneous_ops_per_sec: the number of commands executed per second; as above, growing too fast is a warning sign.
expired_keys: the number of database keys automatically deleted because they expired, for reference.
evicted_keys: the number of keys evicted and deleted because of the maxmemory limit. Whether Redis uses an LRU policy or an expiration policy depends on the maxmemory-policy value in the configuration file; keys removed purely because they expired are not counted here. If this value is not 0, consider raising the memory limit, otherwise memory swapping will occur, performance will degrade, and data will be lost.
latest_fork_usec: the number of microseconds spent on the most recent fork() operation; fork() is expensive, so keep an eye on it.
Commandstats section:
cmdstat_XXX: execution statistics for each command type, covering both reads and writes, where calls is how many times the command was executed, usec is the total CPU time the command consumed, and usec_per_call is the average CPU time per call, in microseconds. Useful for troubleshooting.
-
Methods of analysis of other problems:
View the network latency of redis:
Latency data for Redis cannot be obtained from info information. If you want to see the delay time, you can run it with the Redis-cli tool plus the-- latency parameter.
redis-cli --latency -h 10.1.2.11 -p 6379
It keeps sampling until you exit with ctrl+C, and reports the Redis response latency in milliseconds. Depending on what else the server is doing the measurement may fluctuate; typically the latency over a 1G NIC is about 0.2 ms. If the measured latency is much higher than this reference value, there is clearly a performance problem and the state of the network should be checked.
Check redis's slow query:
Slow log is a logging system used by Redis to record the execution time of queries. The slowlog command in Redis allows us to quickly locate those slow commands that exceed the specified execution time. By default, the slow commands whose execution time exceeds 10ms are recorded to the log, which is controlled by the parameter slowlog-log-slower-than. Record a maximum of 128 entries, controlled by the parameter slowlog-max-len, and delete automatically if you exceed it.
Usually this default parameter is sufficient, you can also modify the online CONFIG SET parameters slowlog-log-slower-than and slowlog-max-len to modify the time and limit the number of entries.
Usually the network latency on 1 Gb bandwidth is around 0.2 ms; if a single command takes more than 10 ms to execute, that is nearly 50 times slower than the network latency. You can inspect the log by entering the slowlog get command in the redis-cli tool; the third field of each returned entry is the command's execution time in microseconds. To view only the most recent 10 slow commands, type slowlog get 10.
127.0.0.1:6379> slowlog get 10
  .
  .
  .
4) 1) (integer) 215
   2) (integer) 1489099695
   3) (integer) 11983
   4) 1) "SADD"
      2) "USER_TOKEN_MAP51193"
      3) "qIzwZKBmBJozKprQgoTEI3Qo8QO2F..."
5) 1) (integer) 214
   2) (integer) 1489087112
   3) (integer) 18002
   4) 1) "SADD"
      2) "USER_TOKEN_MAP51192"
      3) "Z3HsquariTUNfweqvLfweqvLfroomptdchSV2JAOrrH"
6) 1) (integer) 213
   2) (integer) 1489069123
   3) (integer) 15407
   4) 1) "SADD"
      2) "USER_TOKEN_MAP51191"
      3) "S3rNzOBwUlaI3QfOK9dIITB6Bk7LIGYe"
1 = unique identifier of the log
2 = the point of execution of the recorded command, expressed in UNIX timestamp format
3 = query execution time in microseconds. The command in the example uses 11 milliseconds.
4 = commands executed, arranged in an array. The complete command is put together.
Monitor client connections:
Because Redis uses a single-threaded model (it can only use one core) to handle all client requests, as the number of client connections grows, the time the thread can allocate to each individual connection shrinks, and every client spends more time waiting for responses from the shared Redis service.
# view the connection status of clients
127.0.0.1:6379> info clients
# Clients
connected_clients:11
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
The first field (connected_clients) shows the total number of client connections in the current instance, and the maximum number of client connections allowed by Redis by default is 10000. If you see more than 5000 connections, it may affect the performance of Redis. If some or most clients send a large number of commands, this number will be much lower.
Check the current client status
# check the status of all connected clients
127.0.0.1:6379> client list
id=821882 addr=10.25.138.2:60990 fd=8 name= age=53838 idle=24 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=ping
This looks a little roundabout, because it is mixed with historical data:
Addr: the address and port of the client, including current and historical connections
Age: the life cycle of the client connection, that is, the duration after the connection, in seconds
Idle: the idle time of this client, that is, during this time, the client has no operation (in seconds)
Db: the database for operation. Db0~db15 is available by default for redis.
Cmd: the last command used by the client
In other words, the less idle, it means that the client has just operated, on the contrary, it is a historical record. The less the age, the newly established connection, and the larger the historical connection. Sometimes individual users use the scan or keys command, which will cause great load pressure on the redis with large amount of data, so it needs special attention.
Statistics of extra-large key:
Under the single-threaded processing mode of redis, the operation of some key with a large amount of data will obviously affect the performance, so when necessary, we should count it out and give it to the developer to optimize.
# count the larger keys
redis-cli -h * -a * -p * --bigkeys
# check how long a key has been idle
127.0.0.1:6379> OBJECT IDLETIME keyname
-- bigkeys information resolution:
1. This command uses scan to count key, so you don't have to worry about blocking redis when you use it.
2. The output has roughly two parts; the part above the summary just shows the scanning progress. The summary section gives the largest key of each data structure, so the lower part is the important one.
3. In the statistics, only string keys are measured by byte length; list, set, zset and so on are measured by the number of elements, which does not necessarily mean they occupy a lot of memory; that has to be estimated separately.
When you have the largest key name, go to see the rough size
# view the serialized length of a key
debug object key
Description of the output item:
at: the memory address of the key's value
Refcount: number of references
Encoding: encoding type
Serializedlength: the serialized length after compression, in B, that is, Byte (bytes). Because the compression effect depends on the type of encoding, it does not necessarily reflect the size in memory, but is of reference value.
Lru_seconds_idle: idle time
In the end, the big key information we should pay attention to is the length of the serializedlength.
There is also a tool, rdbtools, that can analyze the key information in redis comprehensively, but it is not so convenient for intranet-only users to install, since it does not come with the system. A detailed introduction will have to wait for another article.
Latency caused by data persistence
Redis's data persistence work itself brings latency, and a reasonable persistence policy needs to be made according to the security level and performance requirements of the data:
1. AOF + fsync always can absolutely guarantee data safety, but every operation triggers an fsync, which has an obvious impact on Redis performance.
2. AOF + fsync every second is a good compromise: fsync once per second.
3. AOF + fsync never gives the best performance among the AOF schemes. Using RDB persistence usually gives higher performance than AOF, but pay attention to the RDB policy configuration.
4. Every RDB snapshot and AOF rewrite requires the Redis main process to fork. The fork itself can be time-consuming, depending on the CPU and the amount of memory Redis occupies. Schedule RDB snapshots and AOF rewrites according to your situation to avoid the latency caused by forking too frequently.
For example, when Redis forks a child process, it needs to copy the memory paging table to the child process. Take the Redis instance that takes up 24GB memory as an example, it needs to copy the data of 24GB / 4kB * 8 = 48MB. On physical machines that use a single Xeon 2.27Ghz, this fork operation takes 216ms.
You can view the time (in microseconds) of the last fork operation through the latest_fork_usec field returned by the INFO command.
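A quick way to check that figure from the shell, using the info output described above:
redis-cli info stats | grep latest_fork_usec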
Delay caused by Swap
When Linux moves memory pages used by Redis out to swap space, the Redis process is blocked and Redis shows abnormal latency. Swapping usually happens when physical memory is insufficient or when some process is doing a large amount of I/O; both situations should be avoided as much as possible.
The swap records of a process are kept in /proc/<redis pid>/smaps; by inspecting this file you can determine whether a Redis delay is caused by swap. If the file records large Swap values, the delay is most likely caused by swapping.
In the example below the current swap usage is 0 kB, i.e. swap is not being used:
# /proc/<pid>/smaps shows the memory usage of a running process, including the runtime libraries (.so), heap, and stack
cat /proc/`ps aux | grep redis | grep -v grep | awk '{print $2}'`/smaps
00400000-00531000 r-xp 00000000 fc:02 805438521    /usr/local/bin/redis-server
Size:               1220 kB
Rss:                 924 kB
Pss:                 924 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:       924 kB
Private_Dirty:         0 kB
Referenced:          924 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
What if the memory is full:
Redis memory is full, which is really troublesome, but no matter how troublesome it is, we have to deal with it, starting with the principle of redis architecture.
First, understand that "redis memory is full" does not mean it is using 100% of the system's memory. Why? Redis persistence is done with save and bgsave, and the commonly used (and default) bgsave forks a child process that writes a copy of the in-memory data set to disk as an *.rdb file. The catch is that you need roughly as much spare memory as the data set for bgsave to run, so strictly speaking, once redis uses more than about 50% of system memory it can already be considered full.
The persistence strategy of redis only says that persistence will block the operation and cause delay, but if the memory is full, the increase in the amount of data will make the delay caused by persistence more serious, and the default is to retry every minute after persistence failure.
Then the problem arises, because the memory is full, persistence fails, and then persists a minute later, which creates a vicious circle and the performance of redis plummets. What should I do then?
Changing the persistence policy is a temporary solution to turn off rdb persistence directly:
config set save ""
Why can be solved, the answer is also obvious, turn off persistence, then will not block the operation, then the performance of your redis is guaranteed. But it will introduce new problems, without persistence, memory data will be lost if the redis-server program is restarted or closed, which is still more dangerous. And the problem of full memory still exists, if the memory uses 100% of the system memory, or even triggers the system's OOM, it will be a big hole, because the memory is completely emptied and the data is gone. This is the so-called temporary solution.
So the right thing to do is, after the non-blocking operation, delete the data that can be deleted, then pull up the persistence again, and then prepare for expansion.
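Once memory is back under control, persistence can be pulled up again online; for example (the save rules shown are simply the defaults):
redis-cli config set save "900 1 300 10 60 10000"
redis-cli config rewrite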
Where to look at memory usage was covered above. Note, though, that "memory full" is not only about the actual data: fragmentation counts as well. For example:
# in this case the memory is clearly full
used_memory_human:4.2G
maxmemory_human:4.00G
# but in this case the memory is also effectively full
used_memory_human:792.30M
used_memory_rss_human:3.97G
used_memory_peak_human:4.10G
maxmemory_human:4.00G
Because the memory fragment has not been released, it will still take up memory space. For the system, the fragment is also the memory occupied by redis-server, not free memory, so the remaining memory is still not enough for bgsave. So how do we deal with the fragments?
Before redis4.0, there was no better way but to restart redis-server, and later versions added a new defragmentation parameter to eliminate this problem.
The fragmentation problem actually has a great impact, because under normal circumstances, these unused data that do take up memory will not only waste space on our redis, but also cause the risk of full memory. So as mentioned above, if the fragmentation rate exceeds 1.5, it is time to think about recycling.
What if you do need to save in-memory data? Can only give up, delete unnecessary data, so that the memory can do bgsave and then restart to recover fragments. Otherwise, avoid similar problems after upgrading to 4.0.
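On Redis 4.0 or later (and only when built with jemalloc), the defragmentation support mentioned above can be enabled online; a hedged sketch:
redis-cli config set activedefrag yes
# ask the allocator to hand dirty pages back to the operating system
redis-cli memory purge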
Optimization suggestion
System optimization
1. Close Transparent huge pages
Transparent HugePages allows kernel khugepaged threads to dynamically allocate memory at run time. It is enabled by default in most linux distributions, but the disadvantage is that it may cause delayed allocation of memory, which is not friendly to large memory applications, such as oracle,redis, which will take up a lot of memory, so it is recommended to turn it off.
# disable Transparent HugePages; the default state is [always]
echo never > /sys/kernel/mm/transparent_hugepage/enabled
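That echo only lasts until the next reboot; one common way to make it persistent (assuming your distribution still executes /etc/rc.local at boot) is:
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local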
2. Deploy redis on physical machines where possible. Needless to say, virtual machines and docker introduce some latency, and there is no need to give that performance away just for the sake of easier management.
3. Use more connection pooling instead of frequently disconnecting and reconnecting. I think the effect is self-evident.
4. The batch data operations performed by the client should be done in one interaction using the Pipeline feature.
Behavior optimization
1. If the cached data is less than 4GB, you can choose to use a 32-bit Redis instance. Because the pointer on a 32-bit instance is only half the size of 64-bit, it takes up less memory space. Redis's dump files are compatible between 32-bit and 64-bit, so if there is a need to reduce memory footprint, try using 32-bit first and then switching to 64-bit.
2. Use Hash data structures whenever possible. Redis stores Hash structures with fewer than 100 fields very efficiently, so when you do not need set operations or list push/pop operations, use the Hash structure as much as possible. The Hash commands are HSET (key, field, value) and HGET (key, field), which store and retrieve the specified field in the Hash.
3. Try to set an expiration time on keys. A simple way to reduce memory usage is to make sure every stored object gets an expiration time. If a key is only needed for a limited period, or an old key is unlikely to be used again, use the Redis expiration commands (expire, expireat, pexpire, pexpireat) to set a timeout so Redis deletes the key automatically when it expires. The ttl command queries the remaining time in seconds: a return of -2 means the key does not exist, and -1 means no timeout is set (i.e. the key is permanent).
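A small illustration of those commands (the key name and times are arbitrary examples):
redis-cli set session:1001 "some-data"
redis-cli expire session:1001 3600     # expire in one hour
redis-cli ttl session:1001             # remaining seconds; -1 = no expiry, -2 = key does not exist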
4. Use multi-parameter commands: if the client sends a large number of commands in a very short time, the response time will be significantly slower, because the subsequent commands have been waiting for a large number of commands in the queue to finish. For example, cycling the LSET command to add 1000 elements to the list structure is a way of poor performance, and it is better to create a list of 1000 elements on the client side, using a single command LPUSH or RPUSH, to send 1000 elements at once to the Redis service in the form of multi-parameter construction.
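For instance, rather than looping single-element commands, a single multi-argument command pushes everything in one round trip (the key and values are examples):
redis-cli rpush mylist v1 v2 v3 v4 v5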
5. Pipe commands: another way to reduce multiple commands is to use pipes (pipeline) to execute several commands together, thereby reducing latency problems caused by network overhead. Because 10 commands sent to the server alone will cause 10 network delay overhead, the use of pipes will return the execution results at one time, requiring only one network delay overhead. Redis itself supports pipeline commands, as do most clients, and if the latency of the current instance is obvious, it is very effective to use pipes to reduce latency.
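A minimal pipelining sketch from the shell, using the --pipe mode listed in the redis-cli parameters above (the commands themselves are just examples):
printf 'SET k1 v1\r\nSET k2 v2\r\nINCR counter\r\n' | redis-cli --pipe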
6. Avoid slow commands that manipulate large sets: if the low frequency of command processing leads to an increase in delay time, this may be due to the use of command operations with high time complexity, which means that each command takes more time to retrieve data from the collection. Therefore, reducing the use of high-time complex commands can significantly improve the performance of Redis.
7. Limit the number of client connections: since Redis2.6, users are allowed to modify the maximum number of client connections on the maxclients property of the configuration file (Redis.conf), or you can set the maximum number of client connections by typing config set maxclients on the Redis-cli tool. Depending on the load on the number of connections, this number should be set to between 110% and 150% of the expected peak number of connections. If the number of connections exceeds this number, Redis will reject and immediately close the new connections. It is important to limit the growth of unexpected connections by setting the maximum number of connections. In addition, a failed new connection attempt returns an error message, which lets the client know that Redis has an unexpected number of connections at this time in order to perform the corresponding processing action. The above two practices are very important for controlling the number of connections and maintaining the best performance of Redis.