
Installation and simple use of redis


Redis is an in-memory database: a simple key-value store and one of the NoSQL databases. Because the structure is simple and the data lives in memory, it is very fast. How fast? Mechanical hard disks are not slow, but memory reads and writes are roughly 60,000 times faster, so you can imagine how much faster redis is than a disk-backed store. With today's solid state drives the gap is much smaller, but a more detailed comparison is not needed here; it is enough to know the general picture.

Installation


Installation comes first, of course. Installing redis is very simple; the examples below assume the server runs Linux.

To be clear, "in-memory database" does not mean no disk space is needed at all: for data safety, redis is usually configured to back some data up to disk, so a certain amount of disk space should still be set aside.

Download address:

wget "http://download.redis.io/releases/redis-3.2.0.tar.gz"

This is the source package and needs to be compiled; 3.2 is the latest release at the time of writing. The differences between versions are small: older versions simply lack features such as cluster and master-slave/sentinel support, while the basic functionality is the same.

If the build prompts for missing dependencies, install them with yum; they are generally C-language libraries, most of which are already installed when the system is initialized.

Let's take a look at the installation method, which is simple:

# decompress the source package
tar xzf redis-3.2.0.tar.gz
# enter the extracted folder
cd redis-3.2.0
# compile and install
make
make install
# to install into a custom directory instead:
# make PREFIX=/your/directory install

When the make install command is executed, several executables are generated in the / usr/local/bin directory, which serve the following purposes:

redis-server: the Redis server daemon launcher

redis-cli: the Redis command-line tool; you can also speak its plain-text protocol directly with telnet

redis-benchmark: Redis performance testing tool, used to measure read/write performance on the current system

redis-check-aof: checks and repairs AOF files

redis-check-dump: checks RDB dump files
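As a quick sanity check (this step is not in the original walkthrough, and it assumes make install put the binaries in /usr/local/bin), you can confirm the install and the version from the shell:

redis-server --version
redis-cli --version
ls -l /usr/local/bin/redis-*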

After installation comes configuration. The name of the configuration file can be anything and its location is not fixed, because you point redis at a configuration file when you start it.

Remember the build directory? It contains a configuration file template that can be copied and used; of course it needs to be adjusted to your own needs.

cd redis-3.2.0
ll *.conf
-rw-rw-r-- 1 root root 45390 May  6 15:11 redis.conf
-rw-rw-r-- 1 root root  7109 May  6 15:11 sentinel.conf

One is the main redis configuration file; the other, sentinel.conf, is the Sentinel configuration file used for redis cluster/high-availability setups.

There are a lot of options, so let's first look at the key ones:

cat redis.conf
# run in the background
daemonize yes
# set the port; a non-default port is preferable
port 6666
# for security, bind to a private-network address
bind 10.10.2.2
# PID file path for this instance, to tell multiple redis instances apart
pidfile /data/redis/data/config/redis_6666.pid
# log file path for this instance
logfile "/data/redis/data/logs/redis_6666.log"
# RDB file name, used to back data up to disk and to distinguish instances
dbfilename dump_6666.rdb
# root directory of this instance, where the RDB/AOF files are stored
dir /data/redis/data
# authentication password; redis is very fast, so this password must be strong enough
requirepass gggggggGGGGGGGGG999999999
# cap the memory of this instance: about 45% of available memory is recommended, 95% at most.
# "config set maxmemory" changes it online but the change is lost on restart;
# use "config rewrite" to flush it into the configuration file
maxmemory 1024000000
# LRU eviction: there are several strategies, allkeys-lru is chosen here
maxmemory-policy allkeys-lru
# optionally turn automatic RDB persistence off. It is on by default and periodically
# compresses and saves a full snapshot of the data; since redis is single-threaded this
# is a relatively resource-consuming, blocking operation, and in cache-only environments
# the data may not matter
# save ""
# AOF persistence is off by default and must be enabled by hand; it blocks less than RDB
# appendonly yes
# after enabling AOF, set the following two parameters so the AOF file does not grow
# without bound and affect later operations
# auto-aof-rewrite-percentage 100
# auto-aof-rewrite-min-size 64mb

The detailed analysis is as follows:

1 daemonize no

By default, redis does not run in the background. If you need to run in the background, change the value of this item to yes.

2 pidfile /var/run/redis.pid

When Redis runs in the background, it writes its pid to /var/run/redis.pid by default; you can point this at another path. When running multiple redis services, each needs its own pid file and port.

3 port

Listening port. Default is 6379.

4 # bind 127.0.0.1

Specifies the IP address(es) Redis accepts requests on; if unset, requests on all interfaces are processed. In a production environment it is best to set this for security. It is commented out (not enabled) by default.

5 timeout 0

Sets the client connection timeout in seconds; if a client issues no commands within this period, the connection is closed. 0 disables the timeout.

6 tcp-keepalive 0

Specifies TCP keepalive for client connections: the server sends keepalive probes at this interval. The default 0 disables it.

7 loglevel notice

There are four log levels: debug, verbose, notice, and warning. notice is generally used in production.

8 logfile stdout

Configures the log file path. Standard output is used by default, i.e. logs are printed to the terminal window; change this to a log file path as needed.

9 databases 16

Sets the number of databases (16 by default). Use the SELECT command to switch between them; database 0 is selected by default.

10

save 900 1

save 300 10

save 60 10000

RDB automatic persistence parameters: how often data snapshots are saved, i.e. how often data is persisted to the dump.rdb file. Each rule reads "if at least this many changes happened within this many seconds, trigger a snapshot save."

The default settings mean:

if (10000 keys changed within 60 seconds) {
    take a snapshot
} else if (10 keys changed within 300 seconds) {
    take a snapshot
} else if (1 key changed within 900 seconds) {
    take a snapshot
}

If set to empty, for example:

save ""

That is, turn off rdb automatic persistence.

RDB automatic persistence is enabled by default and periodically compresses and saves a full snapshot of the redis data. Because redis is single-threaded, this is a relatively resource-consuming, blocking operation, and in cache-only environments the data may not matter, so it can be turned off.

Note: the manual bgsave command still works even with automatic saves disabled. Also check whether an RDB file already exists under the dir path; if it does, its data will be loaded on restart.
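To illustrate the save rules in practice, here is a small sketch (not from the original text; the port and password are placeholders) showing how to inspect the rules, disable automatic RDB snapshots at runtime, and still take a manual snapshot:

# inspect the current RDB save rules
redis-cli -p 6666 -a "password" config get save
# disable automatic snapshots at runtime (equivalent to save "")
redis-cli -p 6666 -a "password" config set save ""
# a manual background snapshot still works
redis-cli -p 6666 -a "password" bgsave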

11 stop-writes-on-bgsave-error yes

Whether to stop accepting client write requests when RDB persistence fails. The default "yes" means writes are refused: if saving snapshot data fails, the server becomes read-only. With "no", the failed snapshot is skipped and the next one is attempted as usual, but if failures persist, data can only be recovered up to the last successful snapshot.

12 rdbcompression yes

Whether to enable rdb file compression when making a data mirror backup is yes by default. Compression may require additional cpu overhead, but it can effectively reduce the size of rdb files and facilitate storage / backup / transfer / data recovery

13 rdbchecksum yes

Whether to checksum rdb files with CRC64. The default is "yes": a CRC checksum is appended to the end of each rdb file, which makes it easy for third-party tools to verify file integrity, at a cost of roughly 10% performance when saving and loading.

14 dbfilename dump.rdb

The file name of the mirror snapshot backup file, dump.rdb by default.

15 dir. /

The path where the rdb/AOF backup files for the database are placed. The path is configured separately from the file name because, when Redis makes a backup, it first writes the current database state to a temporary file and, once the backup completes, replaces the file specified above with that temporary file; both the temporary file and the configured backup file live in this path.

16 # slaveof

Set the database as a slave to other databases and specify master information for it.

17 masterauth

When the primary database connection requires password authentication, specify here

18 slave-serve-stale-data yes

Whether clients may still read possibly stale data while the master is down or master-slave replication is in progress. With "yes", the slave keeps serving read-only requests even though the data may be out of date; with "no", any data request sent to this server (from clients or from this server's own slaves) gets an error.

19 slave-read-only yes

Whether slave is "read-only". "yes" is strongly recommended.

20 # repl-ping-slave-period 10

Interval (in seconds) for slave to send ping messages to the specified master, default is 10

21 # repl-timeout 60

The maximum idle time in communication between slave and master, 60 seconds by default. Exceeding it closes the connection.

22 repl-disable-tcp-nodelay no

Whether to disable the TCP_NODELAY option on the connection between slave and master. "yes" disables it, so data in socket communication is sent in larger packets (packet size limited by the socket buffer).

This improves the efficiency of socket communication (fewer TCP round trips), but small writes are buffered rather than sent immediately, so the receiver may see some delay. "no" enables TCP_NODELAY: any data is sent immediately, with better timeliness but lower efficiency. Setting it to no is recommended.

23 slave-priority 100

Used by the Sentinel module (master-slave cluster management and monitoring; requires additional configuration file support). This is the slave's weight, 100 by default. When the master fails, Sentinel picks the slave with the lowest weight (> 0) from the slave list and promotes it to master. A weight of 0 makes the slave an "observer" that does not take part in master election.
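A minimal sketch of wiring the replication options up at runtime (not from the original article; the IP, port, and passwords are placeholders):

# on the slave: provide the master's password, then point the slave at the master
redis-cli -p 6666 -a "password" config set masterauth "master-password"
redis-cli -p 6666 -a "password" slaveof 10.10.2.2 6666
# check replication state on either side
redis-cli -p 6666 -a "password" info replication
# detach the slave again, keeping the dataset it has already synchronized
redis-cli -p 6666 -a "password" slaveof no one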

24 # requirepass foobared

Set the password clients must provide (via AUTH) before issuing any other commands after connecting. Warning: because redis is so fast, on a decent server an outside user can try 150k passwords per second, so the password needs to be very strong to resist cracking.

25 # rename-command CONFIG 3ed984507a5dcd722aeade310065ce5d (method: MD5 ('config ^!'))

Rename commands. For some "server-control" commands you may not want remote (non-administrator) clients to be able to use them casually, so they can be renamed to hard-to-guess strings.
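For example, in redis.conf the directive looks like the following (a sketch, not the original author's values); an empty string disables a command outright:

# rename CONFIG to an obscure string that only administrators know
rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# or disable dangerous commands completely
rename-command FLUSHALL ""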

26 # maxclients 10000

Limits the number of clients connected at the same time (10000 by default). Once the limit is exceeded, redis stops accepting new connections and clients attempting to connect receive an error. Take the system's file descriptor limit into account; it should not be set so large that file descriptors are wasted.

27 # maxmemory

The maximum memory redis may use, in bytes. The default 0 means "unlimited", bounded only by the OS's physical memory (swap may be used when physical memory runs short). The value should not exceed the machine's physical memory; from a performance standpoint roughly 3/4 of physical memory is reasonable. It must be used together with "maxmemory-policy": when memory used by redis reaches maxmemory, the "purge policy" is triggered, and while "out of memory" any write operation (set, lpush, and so on) triggers the cleanup policy. In practice it is recommended that all redis machines have identical hardware (identical memory) and that master and slave use the same "maxmemory" and policy settings. The value can be changed online with "config set maxmemory"; the change takes effect immediately but is lost on restart unless "config rewrite" is used to write it back to the configuration file.

When memory is full and another set command arrives, redis first tries to remove keys that have expire information set, regardless of their actual expiration time; among those, the keys closest to expiry are removed first. If removing all keys with expire information still does not free enough memory, an error is returned: redis then refuses write requests and only serves reads. The maxmemory setting is best suited to using redis as a memcached-like cache.

28 # maxmemory-policy volatile-lru

When out of memory, the data cleanup policy defaults to "volatile-lru".

volatile-lru -> apply the LRU (least recently used) algorithm to keys in the expired set. A key enters the expired set when an expiration time is set on it with the "expire" command. Expired / least-recently-used data is removed first; if removing everything in the expired set still does not satisfy the memory requirement, OOM.

allkeys-lru -> apply the LRU algorithm to all keys

volatile-random -> pick keys at random from the expired set and remove the selected key-value pairs until there is enough memory; if removing everything in the expired set is still not enough, OOM

allkeys-random -> pick keys at random from all data and remove the selected key-value pairs until there is enough memory

volatile-ttl -> remove keys from the expired set by TTL (shortest remaining time to live first)

noeviction -> do nothing and directly return an OOM error

In addition, if expired data does not cause problems for the application and writes are relatively intensive, "allkeys-lru" is recommended.
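A short sketch (not from the original text; the password and values are placeholders) of setting the memory ceiling and eviction policy online and then watching the effect:

redis-cli -a "password" config set maxmemory 1024000000
redis-cli -a "password" config set maxmemory-policy allkeys-lru
redis-cli -a "password" config rewrite
# watch memory use and how many keys have been evicted so far
redis-cli -a "password" info memory | grep used_memory_human
redis-cli -a "password" info stats | grep evicted_keys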

29 # maxmemory-samples 3

The default is 3. The LRU and minimum-TTL policies above are not exact; they are approximations based on sampling, and this parameter sets the number of keys sampled per check.

29 appendonly no

AOF persistence switch. By default redis backs the database image up to disk asynchronously in the background (RDB), but that backup is time-consuming and cannot run very frequently. So redis offers another, more robust backup and disaster-recovery mechanism: with appendonly mode on, redis appends every write request it receives to the appendonly.aof file, and on restart it replays the file to restore the previous state. This makes appendonly.aof grow large, so redis also supports the BGREWRITEAOF command to compact it. If frequent data migration is not needed, the recommendation for production is to disable the RDB image, enable appendonly.aof, and rewrite appendonly.aof once a day during low-traffic hours.

In addition, for masters (mainly handling writes) AOF is recommended; for slaves (mainly handling reads), enable AOF on one or two of them and disable it on the rest.

30 # appendfilename appendonly.aof

Name of aof file, default is appendonly.aof

31 appendfsync everysec

Sets the frequency at which appendonly.aof files are synchronized. Always means that every write is synchronized, and everysec (default) means that writes are accumulated and synchronized once a second. No does not take the initiative to fsync, it is done by OS itself. This needs to be configured according to the actual business scenario.

32 no-appendfsync-on-rewrite no

During aof rewrite, whether or not to suspend the use of file synchronization policy for newly recorded append in aof mainly considers disk IO expenditure and request blocking time. The default is no, which means "no delay". New aof records will still be synchronized immediately.

33 auto-aof-rewrite-percentage 100

When the AOF log grows beyond the specified percentage, the log file is rewritten; 0 disables automatic rewriting. The purpose of rewriting is to keep the AOF as small as possible while preserving the most complete data. The percentage is measured against the size after the "last" rewrite: after each rewrite, redis records the size of the new AOF file (say A), and the next rewrite is triggered when the file grows to A * (1 + percentage/100); the current AOF size is checked every time a record is appended.

34 auto-aof-rewrite-min-size 64mb

The minimum file size that triggers aof rewrite, and the minimum file size triggered by aof file rewrite (mb,gb). Rewrite will be triggered only if the aof file is greater than this size. Default is "64mb".
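As a sketch (not in the original article; the password is a placeholder), AOF can also be switched on for a running instance and compacted on demand:

# enable AOF at runtime and persist the change to redis.conf
redis-cli -a "password" config set appendonly yes
redis-cli -a "password" config rewrite
# force an AOF rewrite to compact the file
redis-cli -a "password" bgrewriteaof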

35 lua-time-limit 5000

Maximum time a lua script may run, in milliseconds.

36 slowlog-log-slower-than 10000

Slow-operation log threshold, in microseconds (1/1,000,000 of a second). Commands taking longer than this are recorded (in memory, not in a file). The "operation time" excludes network IO and covers only the time spent executing the request in memory once it reaches the server. 0 records every operation.

37 slowlog-max-len 128

The maximum number of entries kept in the slow-operation log; entries are queued and old ones are dropped once this length is exceeded. Slow-log records can be inspected with the SLOWLOG subcommands (SLOWLOG GET 10, SLOWLOG RESET).
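For instance (a sketch with placeholder values, not from the original text), the slow log can be tuned and read back like this:

# record every command slower than 10 milliseconds (the value is in microseconds)
redis-cli -a "password" config set slowlog-log-slower-than 10000
# show the 10 most recent slow entries, count them, then clear the log
redis-cli -a "password" slowlog get 10
redis-cli -a "password" slowlog len
redis-cli -a "password" slowlog reset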

38

hash-max-ziplist-entries 512

Data structures of type hash can be encoded using ziplist and hashtable. The characteristic of ziplist is that file storage (and memory storage) requires less space, and the performance is almost the same as hashtable when the content is small. Therefore, redis defaults to ziplist for the hash type. If the number of entries in hash or the length of value reaches the threshold, it will be refactored to hashtable.

This parameter refers to the maximum number of entries allowed to be stored in ziplist. The default is 512, and the recommended value is 128.

hash-max-ziplist-value 64

Maximum number of bytes of value allowed in ziplist. Default is 64, and recommended is 1024.
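To see the conversion happen, OBJECT ENCODING can be used; the session below is an illustrative sketch (the key name and output are assumptions, not from the original article):

redis 127.0.0.1:6379> hset smallhash f1 "short"
(integer) 1
redis 127.0.0.1:6379> object encoding smallhash
"ziplist"
redis 127.0.0.1:6379> hset smallhash f1 "a value longer than 64 bytes .........................................................................."
(integer) 0
redis 127.0.0.1:6379> object encoding smallhash
"hashtable"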

39

list-max-ziplist-entries 512

list-max-ziplist-value 64

For the list type, two encodings are used: ziplist and linkedlist. The explanation is the same as above.

40 set-max-intset-entries 512

The maximum number of entries allowed to be saved in intset. If the threshold is reached, intset will be reconstructed to hashtable.

41

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

Zset is a sorted set with two encodings: ziplist and skiplist. Because sorting costs extra performance, a zset holding more data is refactored to skiplist.

42 activerehashing yes

Whether to enable rehashing of the top-level data structures. Enable it if memory allows; rehashing greatly improves the efficiency of key-value access.

43

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 256mb 64mb 60

client-output-buffer-limit pubsub 32mb 8mb 60

Client buffer control. In the interaction between the client and the server, each connection is associated with a buffer that is used to queue the response information waiting to be accepted by the client. If client can not consume response information in time, then buffer will be constantly overstocked and put memory pressure on server. If the backlog of data in the buffer reaches the threshold, the connection will be closed and the buffer will be removed.

Buffer control classes: normal -> ordinary connections; slave -> connections from slaves; pubsub -> pub/sub connections, which most often hit this problem because the publishing side can publish messages intensively while the subscribing side may not consume them fast enough.

Directive format: client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>, where hard is the maximum buffer size; the connection is closed as soon as that threshold is reached.

Soft stands for "tolerance value", which works with seconds. If the buffer value exceeds soft and the duration reaches seconds, the connection will be closed immediately. If it exceeds soft but after seconds, the buffer data is less than soft, the connection will be retained.

If both hard and soft are set to 0, buffer control is disabled. Usually the value of hard is greater than soft.
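The same limits can be adjusted at runtime with config set; the class name and the three values go into a single quoted string (a sketch with placeholder password, sizes given in bytes):

redis-cli -a "password" config set client-output-buffer-limit "slave 268435456 67108864 60"
redis-cli -a "password" config get client-output-buffer-limit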

44 hz 10

The frequency at which the Redis server runs its background tasks, 10 by default. A higher value means redis runs its periodic tasks more often (times per second); periodic tasks include checking expired keys, closing idle/timed-out connections, and so on. The value must be greater than 0 and less than 500. A higher value costs more CPU cycles because background tasks are polled more often; too low a value makes redis slower to notice expirations and idle connections. The default is recommended.

45

# include /path/to/local.conf

# include /path/to/other.conf

Load additional configuration files.

# parameters can be changed online with the following command, but the change is lost on restart
config set maxmemory 644245094
# write the running configuration back to the configuration file with
config rewrite

Then take a look at the startup.

# start redis with a configuration file
redis-server /(custom path)/redis.conf
# test it with redis-cli; the port may be omitted to use the default,
# and a command can be appended for one-shot, non-interactive use
redis-cli -p 6379 -a "password" [command]
set mykey "hi"
OK
get mykey
"hi"
# shut redis down
redis-cli shutdown
# or kill <pid>

Note: before restarting the server, you need to enter the shutdown save command on the Redis-cli tool, which means forcing the Redis database to perform a save operation and shutting down the Redis service, which ensures that no data is lost during the Redis shutdown.
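A quick liveness check after startup might look like this (a sketch; the port and password are whatever you configured):

redis-cli -p 6666 -a "password" ping
PONG
redis-cli -p 6666 -a "password" info server | head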

Operation

First of all, let's take a look at the command line terminal output mode operation introduction:

# start the Redis client tool from the shell
redis-cli -h 127.0.0.1 -p 6379 -a '*'
# clear the currently selected database so the examples below are easier to follow
redis 127.0.0.1:6379> flushdb
OK
# add test data of type String
redis 127.0.0.1:6379> set mykey 2
OK
redis 127.0.0.1:6379> set mykey2 "hello"
OK
# add test data of type Set
redis 127.0.0.1:6379> sadd mysetkey 1 2 3
(integer) 3
# add test data of type Hash
redis 127.0.0.1:6379> hset mmtest username "stephen"
(integer) 1
# get all keys in the current database matching the pattern; note that the command
# does not distinguish the type of value associated with each key
redis 127.0.0.1:6379> keys my*
1) "mysetkey"
2) "mykey"
3) "mykey2"
# delete two keys
redis 127.0.0.1:6379> del mykey mykey2
(integer) 2
# check whether the deleted key still exists; the result shows mykey is indeed gone
redis 127.0.0.1:6379> exists mykey
(integer) 0
# check a key that was not deleted, for comparison with the previous result
redis 127.0.0.1:6379> exists mysetkey
(integer) 1
# move mysetkey from the current database into the database with ID 1; the result shows the move succeeded
redis 127.0.0.1:6379> move mysetkey 1
(integer) 1
# open the database with ID 1
redis 127.0.0.1:6379> select 1
OK
# the key that was just moved exists there
redis 127.0.0.1:6379[1]> exists mysetkey
(integer) 1
# reopen the default database, ID 0
redis 127.0.0.1:6379[1]> select 0
OK
# the moved key no longer exists here
redis 127.0.0.1:6379> exists mysetkey
(integer) 0
# prepare new test data
redis 127.0.0.1:6379> set mykey "hello"
OK
# rename mykey to mykey1
redis 127.0.0.1:6379> rename mykey mykey1
OK
# since mykey has been renamed, getting it again returns nil
redis 127.0.0.1:6379> get mykey
(nil)
# get it by its new name
redis 127.0.0.1:6379> get mykey1
"hello"
# because mykey no longer exists, rename returns an error
redis 127.0.0.1:6379> rename mykey mykey1
(error) ERR no such key
# prepare test data for renamenx
redis 127.0.0.1:6379> set oldkey "hello"
OK
redis 127.0.0.1:6379> set newkey "world"
OK
# the command does not succeed because newkey already exists
redis 127.0.0.1:6379> renamenx oldkey newkey
(integer) 0
# check the value of newkey: it was not overwritten by renamenx either
redis 127.0.0.1:6379> get newkey
"world"
# 2. PERSIST/EXPIRE/EXPIREAT/TTL: prepare test data for the following examples
redis 127.0.0.1:6379> set mykey "hello"
OK
# set the key to expire in 100 seconds
redis 127.0.0.1:6379> expire mykey 100
(integer) 1
# check how many seconds are left with the ttl command
redis 127.0.0.1:6379> ttl mykey
(integer) 97
# run persist right away: the expiring key becomes a persistent key, i.e. its timeout is removed
redis 127.0.0.1:6379> persist mykey
(integer) 1
# ttl now tells us the key no longer times out
redis 127.0.0.1:6379> ttl mykey
(integer) -1
# prepare data for the expire examples below
redis 127.0.0.1:6379> del mykey
(integer) 1
redis 127.0.0.1:6379> set mykey "hello"
OK
# set the key to expire in 100 seconds
redis 127.0.0.1:6379> expire mykey 100
(integer) 1
# ttl shows how many seconds are left; here 96
redis 127.0.0.1:6379> ttl mykey
(integer) 96
# update the timeout to 20 seconds; the return value shows the command succeeded
redis 127.0.0.1:6379> expire mykey 20
(integer) 1
# verify again with ttl; the timeout has indeed been updated
redis 127.0.0.1:6379> ttl mykey
(integer) 17
# updating the value of the key invalidates its timeout
redis 127.0.0.1:6379> set mykey "world"
OK
# ttl confirms that after the set command the key no longer has a timeout
redis 127.0.0.1:6379> ttl mykey
(integer) -1
# 3. TYPE/RANDOMKEY/SORT: type returns none because the key mm does not exist
redis 127.0.0.1:6379> type mm
none
# mykey's value is of type string, so string is returned
redis 127.0.0.1:6379> type mykey
string
# prepare a key whose value is of type set
redis 127.0.0.1:6379> sadd mysetkey 1 2
(integer) 2
# mysetkey is a set, so the string "set" is returned
redis 127.0.0.1:6379> type mysetkey
set
# return a random key from the database
redis 127.0.0.1:6379> randomkey
"oldkey"
# clear the currently open database
redis 127.0.0.1:6379> flushdb
OK
# nil is returned because no data is left
redis 127.0.0.1:6379> randomkey
(nil)
# turn a running redis into a slave online with SLAVEOF <master-ip> <master-port>;
# the old dataset is discarded and the new master is synchronized instead.
# Many redis servers require a password, so configure it first:
redis 127.0.0.1:6379> CONFIG SET MASTERAUTH 12312
OK
# turn the slave configuration off online; the dataset already synchronized is not discarded
redis 127.0.0.1:6379> SLAVEOF NO ONE
OK
# parameters can be modified online with the following command, but the change is lost on restart
redis 127.0.0.1:6379> config set maxmemory 644245094
OK
# write the running configuration to the configuration file with
redis 127.0.0.1:6379> config rewrite
OK

These are only some of the usages above, but it is enough for general tests. There are others such as list usage and hash usage. If you are interested, you can study them more deeply.

Much like the difference between tailf and tail, the interactive terminal mode above stays connected, while the command output mode below runs a command, prints its result, and exits without keeping the connection.

Let's take a look at the command result output mode:

redis-cli parameters

-h the host IP address to connect to; default 127.0.0.1

-p the port number to connect to; default 6379

-s connect via a server socket (overrides host and port)

-a password to use when connecting to the server

-r repeat the command the specified number of times

-i wait N seconds between commands (used with -r), e.g. -i 0.1 info

-n select the database with the given index, e.g. -n 3 (use database 3)

-x read the last argument from standard input

-d the delimiter between multi-bulk responses in raw output (default: \n)

--raw return output in the raw data format

--latency enter a special mode that continuously samples latency

--slave simulate a slave: connect to the master and print the command stream it receives

--pipe use the pipelining protocol mode

--bigkeys sample keys looking for keys holding large amounts of data, e.g. --bigkeys -i 0.1

--help display command-line help

--version display the version number

Example:

$ redis-cli                        # enter interactive command-line mode
$ redis-cli -n 3 set mykey "hi"    # insert mykey into database 3
$ redis-cli -r 3 info              # repeat the info command three times

# some special string commands:
GETSET: ./redis-cli getset nid 987654321    # return the key's old value and assign it a new one
MGET:   ./redis-cli mget nid uid ...        # get the values of several keys at once
SETNX:  ./redis-cli setnx nnid 888888       # set the key only if it does not already exist
SETEX:  ./redis-cli setex nid 5 666666      # set a value that expires after 5 seconds (key/value with a validity period)
MSET:   ./redis-cli mset nid0001 "0001" nid0002 "0002" nid0003 "0003"    # save several key-value pairs at once
INCR:   ./redis-cli incr count              # increment the key's value by 1 (the value must be an integer)
INCRBY: ./redis-cli incrby count 5          # increment the key's value by a given step
DECR:   ./redis-cli decr count              # decrement the key's value by 1
DECRBY: ./redis-cli decrby count 7          # decrement the key's value by a given step
APPEND: ./redis-cli append content "bad"    # append to the key's value; if the key does not exist, it is created
        ./redis-cli append content "good"
SUBSTR: ./redis-cli substr content 0 4      # return part of the key's string value

# list operations (the essentials)
RPUSH key string      - append a value to the end of the list
LPUSH key string      - prepend a value to the head of the list
LLEN key              - length of the list
LRANGE key start end  - return a range of values from the list, like a paging query in mysql
LTRIM key start end   - keep only a range of values in the list
LINDEX key index      - get the value at a specific index (note the O(n) complexity)
LSET key index value  - set the value at a position in the list
RPOP key              - pop a value off the end of the list

# set operations
SADD key member           - add an element
SREM key member           - remove an element
SCARD key                 - return the size of the set
SISMEMBER key member      - test whether a value is in the set
SINTER key1 key2 ... keyN - intersection of several sets
SMEMBERS key              - list all elements of the set

# check the update (AOF) log; add the --fix parameter to repair it
redis-check-aof appendonly.aof
# check the local database file
redis-check-dump dump.rdb


Pressure testing

Redis testing falls into two categories: concurrency pressure and capacity pressure.

Concurrency pressure can be tested with redis-benchmark, a tool that ships with redis; its parameters are as follows:

redis-benchmark parameters

-h the IP address of the host under test; default 127.0.0.1

-p the port number of the host under test; default 6379

-s connect via a server socket (overrides host and port)

-c number of concurrent connections

-n total number of requests

-d the size in bytes of the SET/GET value used by the test (default 3 bytes)

-k 1: keep connections alive (default); 0: reconnect for each request

-r use random keys for SET/GET/INCR; with -r 10, keys range from rand:000000000000 to rand:000000000009

-P pipeline requests; default 1 (no pipelining). Useful when network latency is high, since requests and responses are sent and received in batches

-q quiet mode: only show the query and queries-per-second values

--csv output results in CSV format

-l loop forever, inserting test data; stop with Ctrl+C

-t only run the comma-separated list of test commands, e.g. -t ping,set,get

-I idle mode: just open the connections (50 by default) and stay idle

Example:

# SET/GET with 100-byte values: test the redis server at host 127.0.0.1, port 6379
redis-benchmark -h 127.0.0.1 -p 6379 -q -d 100
# 5000 concurrent connections, 100000 requests, against host 127.0.0.1, port 6379
redis-benchmark -h 127.0.0.1 -p 6379 -c 5000 -n 100000
# send 100000 requests to the redis server with 60 concurrent clients
redis-benchmark -n 100000 -c 60

Results (part):

====== SET ======
(a write test using the SET command)

100000 requests completed in 2.38 seconds
(100000 requests finished within 2.38 seconds)

60 parallel clients
(60 concurrent clients)

3 bytes payload
(3 bytes written per request)

keep alive: 1
(connections are kept alive; a single server handles the requests)

100.00% of the requests were served within the reported latency (the rest of the latency distribution output is omitted here). Capacity pressure, by contrast, is tested by simply looping writes into redis until memory fills up; the surviving script fragment ("> /dev/null", "#echo $a", "let a++", "done") was such a loop.
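A minimal reconstruction of such a loop, following the fragment above (the key names, value names, and password are illustrative assumptions):

#!/bin/bash
# keep writing numbered keys until memory fills up or you stop it with Ctrl+C
a=1
while true
do
    redis-cli -a "password" set "key_$a" "value_$a" > /dev/null
    # echo $a
    let a++
done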

Reading the result is simple: when the loop stops, run info and look at the keyspace section at the end, which shows how many keys were stored.
