This article mainly introduces the entry-level knowledge points of Redis. Many people have doubts about these basics in daily operation, so the editor has consulted various materials and sorted them into simple, easy-to-use methods of operation, in the hope of answering the question "what are the entry knowledge points of Redis?" Please follow along and study.
1. Introduction to Redis
REmote DIctionary Server (Redis) is a key-value storage system written by Salvatore Sanfilippo. Redis is an open-source key-value database written in ANSI C and released under the BSD license; it supports networking, can run purely in memory or with persistence, and provides APIs in many languages. It is often called a data structure server because values can be strings, hashes (maps), lists, sets, and sorted sets.
Redis is a NoSQL database based on key-value pairs, so let's first learn a few things about keys.
1. Any binary sequence can be used as a key.
2. Redis does not impose a key naming scheme, so keys should be designed with a unified rule (see the sketch after this list).
3. The maximum allowed length of a key is 512 MB.
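As an illustration of such a rule, a widely used convention (an assumption here, not something Redis enforces) is to build keys from colon-separated segments such as object-type:id:field; the key names below are hypothetical.
127.0.0.1:6379> set user:1000:name "zhangsan"
OK
127.0.0.1:6379> get user:1000:name
"zhangsan"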
2. Supported languages
ActionScript Bash C C# C++ Clojure Common Lisp Crystal D Dart Elixir emacs lisp Erlang Fancy gawk GNU Prolog Go Haskell Haxe Io Java Javascript Julia Lua Matlab mruby Nim Node.js Objective-C OCaml Pascal Perl PHP Pure Data Python R Racket Rebol Ruby Rust Scala Scheme Smalltalk Swift Tcl VB VCL
3. What are the application scenarios of Redis?
1. Session caching, the most common use (see the sketch after this list)
2. Message queues, for example in payment flows
3. Activity rankings or counters
4. Publish/subscribe messaging (message notification)
5. Product lists, comment lists, and so on
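As a minimal sketch of scenarios 1 and 3 (the key names and the timeout are assumptions, not from the original), a session token can be cached with an expiry via SETEX, and a counter maintained with INCR:
127.0.0.1:6379> SETEX session:abc123 1800 "user:1000"
OK
127.0.0.1:6379> TTL session:abc123
(integer) 1800
127.0.0.1:6379> INCR activity:page_views
(integer) 1
127.0.0.1:6379> INCR activity:page_views
(integer) 2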
4. Redis installation
For a more detailed introduction to installing Redis and related knowledge points, please refer to the earlier article on the Redis NoSQL database service.
The general steps for installation are as follows:
Redis is developed in C, so installing it requires a C compilation environment.
If gcc is not present, install it first: yum install gcc-c++
Step 1: obtain the source package: wget http://download.redis.io/rele...
Step 2: unpack the source: tar zxvf redis-3.0.0.tar.gz
Step 3: compile. Enter the source directory (cd redis-3.0.0) and run make
Step 4: install: make install PREFIX=/usr/local/redis
The PREFIX parameter specifies the Redis installation directory.
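After installation the service can be verified as sketched below; the paths follow the PREFIX used above and are assumptions, so adjust them to your environment:
[root@test ~]# cd /usr/local/redis
[root@test redis]# cp /root/redis-3.0.0/redis.conf ./
[root@test redis]# ./bin/redis-server ./redis.conf &
[root@test redis]# ./bin/redis-cli ping
PONG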
5. Redis data type
Redis supports a total of five data types
1. String (string)
2. Hash (hash)
3. List (list)
4. Set (collection)
5. Zset (sorted set)
String (string)
String is the most basic Redis data type: one key corresponds to one value. Note that a single string value can store at most 512 MB.
127.0.0.1:6379> set key "hello world"
OK
127.0.0.1:6379> get key
"hello world"
127.0.0.1:6379> getset key "nihao"
"hello world"
127.0.0.1:6379> get key
"nihao"
127.0.0.1:6379> mset key1 "hi" key2 "nihao" key3 "hello"
OK
127.0.0.1:6379> get key1
"hi"
127.0.0.1:6379> get key2
"nihao"
127.0.0.1:6379> get key3
"hello"
Introduction to related commands
Set sets the value of a key
Get gets the value of a key
Getset sets a new value for a key and returns the old value
Mset sets the values of multiple keys at once
Hash (hash)
A Redis hash is a collection of field-value pairs, a mapping table of string fields to string values, and is well suited to storing objects.
127.0.0.1:6379> hset redishash 1 "001"
(integer) 1
127.0.0.1:6379> hget redishash 1
"001"
127.0.0.1:6379> hmset redishash 1 "001" 2 "002"
OK
127.0.0.1:6379> hget redishash 1
"001"
127.0.0.1:6379> hget redishash 2
"002"
127.0.0.1:6379> hmget redishash 1 2
1) "001"
2) "002"
Introduction to related commands
Hset sets the given field of the hash stored at key to value; the hash is created automatically if it does not exist.
Hget gets the value of a field in a hash
Hmset sets multiple fields of the same hash in one command
Hmget gets the values of multiple fields of the same hash in one command
List (list)
A list is a simple list of strings, ordered by insertion order.
127.0.0.1:6379> lpush word hi
(integer) 1
127.0.0.1:6379> lpush word hello
(integer) 2
127.0.0.1:6379> rpush word world
(integer) 3
127.0.0.1:6379> lrange word 0 2
1) "hello"
2) "hi"
3) "world"
127.0.0.1:6379> llen word
(integer) 3
Introduction to related commands
Lpush inserts an element to the left of the specified list, returning the length of the list after insertion
Rpush inserts an element to the right of the specified list, returning the length of the list after insertion
Llen returns the length of the specified list
Lrange returns the element values of the specified range in the specified list
Set (collection)
A set is an unordered collection of strings; members cannot be repeated.
127.0.0.1:6379> sadd redis redisset
(integer) 1
127.0.0.1:6379> sadd redis redisset1
(integer) 1
127.0.0.1:6379> sadd redis redisset2
(integer) 1
127.0.0.1:6379> smembers redis
1) "redisset1"
2) "redisset"
3) "redisset2"
127.0.0.1:6379> sadd redis redisset2
(integer) 0
127.0.0.1:6379> smembers redis
1) "redisset1"
2) "redisset"
3) "redisset2"
127.0.0.1:6379> sadd redis redisset3
(integer) 1
127.0.0.1:6379> smembers redis
1) "redisset1"
2) "redisset3"
3) "redisset"
4) "redisset2"
127.0.0.1:6379> srem redis redisset
(integer) 1
127.0.0.1:6379> smembers redis
1) "redisset1"
2) "redisset3"
3) "redisset2"
Introduction to related commands
Sadd adds a string member to the set stored at key; it returns 1 on success and 0 if the member already exists.
Smembers returns all the elements in the specified collection
Srem deletes an element of the specified collection
Zset (sorted set)
A zset is an ordered collection of strings; members cannot be repeated.
Each element in a sorted set has an associated score, and elements are sorted in ascending order by score. Elements with the same score are sorted in ascending lexicographical order, which makes sorted sets well suited to rankings.
127.0.0.1:6379> zadd nosql 0 001
(integer) 1
127.0.0.1:6379> zadd nosql 0 002
(integer) 1
127.0.0.1:6379> zadd nosql 0 003
(integer) 1
127.0.0.1:6379> zcount nosql 0 0
(integer) 3
127.0.0.1:6379> zcount nosql 0 3
(integer) 3
127.0.0.1:6379> zrem nosql 002
(integer) 1
127.0.0.1:6379> zcount nosql 0 3
(integer) 2
127.0.0.1:6379> zscore nosql 003
"0"
127.0.0.1:6379> zrangebyscore nosql 0 10
1) "001"
2) "003"
127.0.0.1:6379> zadd nosql 1 003
(integer) 0
127.0.0.1:6379> zadd nosql 1 004
(integer) 1
127.0.0.1:6379> zrangebyscore nosql 0 10
1) "001"
2) "003"
3) "004"
127.0.0.1:6379> zadd nosql 3 005
(integer) 1
127.0.0.1:6379> zadd nosql 2 006
(integer) 1
127.0.0.1:6379> zrangebyscore nosql 0 10
1) "001"
2) "003"
3) "004"
4) "006"
5) "005"
Introduction to related commands
Zadd adds one or more elements to the specified sorted set
Zrem removes one or more elements from the specified sorted set
Zcount counts the elements of the specified sorted set whose scores fall within the given range
Zscore returns the score of an element in the specified sorted set
Zrangebyscore returns all elements of the specified sorted set within the given score range
6. Commands related to key values
127.0.0.1:6379> exists key
(integer) 1
127.0.0.1:6379> exists key1
(integer) 1
127.0.0.1:6379> exists key100
(integer) 0
127.0.0.1:6379> get key
"nihao,hello"
127.0.0.1:6379> get key1
"hi"
127.0.0.1:6379> del key1
(integer) 1
127.0.0.1:6379> get key1
(nil)
127.0.0.1:6379> rename key key0
OK
127.0.0.1:6379> get key
(nil)
127.0.0.1:6379> get key0
"nihao,hello"
127.0.0.1:6379> type key0
string
Exists # check whether a key exists
Del # delete a key
Expire # set a key's expiration time in seconds (see the sketch after this list)
Persist # remove a key's expiration time
Rename # rename a key
Type # return the type of the value stored at a key
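A minimal sketch of Expire and Persist; the key name and timeout are illustrative assumptions:
127.0.0.1:6379> set tempkey "hello"
OK
127.0.0.1:6379> expire tempkey 60
(integer) 1
127.0.0.1:6379> ttl tempkey
(integer) 60
127.0.0.1:6379> persist tempkey
(integer) 1
127.0.0.1:6379> ttl tempkey
(integer) -1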
7. Commands related to Redis service
127.0.0.1:6379> select 0
OK
127.0.0.1:6379> info
# Server
redis_version:3.0.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:347e3eeef5029f3
redis_mode:standalone
os:Linux 3.10.0-693.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.5
process_id:31197
run_id:8b6ec6ad5035f5df0b94454e199511084ac6fb12
tcp_port:6379
uptime_in_seconds:8514
uptime_in_days:0
hz:10
lru_clock:14015928
config_file:/usr/local/redis/redis.conf
... (remaining lines omitted)
127.0.0.1:6379> CONFIG GET 0
(empty list or set)
127.0.0.1:6379> CONFIG GET 15
(empty list or set)
Select # select a database (database numbers 0-15)
Quit # close the connection
Info # get server information and statistics
Monitor # watch the commands received by the server in real time
Config get # get a server configuration parameter
Flushdb # delete all keys from the currently selected database (see the sketch after this list)
Flushall # delete all keys from all databases
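As a small illustration of Select and Flushdb (be careful: Flushdb deletes data; the keys shown are assumptions):
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> set tmp 1
OK
127.0.0.1:6379[1]> dbsize
(integer) 1
127.0.0.1:6379[1]> flushdb
OK
127.0.0.1:6379[1]> dbsize
(integer) 0
127.0.0.1:6379[1]> select 0
OK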
8. Publish and subscribe to Redis
Redis publish/subscribe (pub/sub) is a messaging pattern in which one party publishes messages and the other party receives them.
For example, three clients can subscribe to the same channel at the same time; when a new message is published to channel1, it is delivered to all three subscribing clients.
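A minimal sketch of this flow using two redis-cli sessions (the channel name is an assumption):
# client A (subscriber)
127.0.0.1:6379> SUBSCRIBE channel1
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "channel1"
3) (integer) 1
# client B (publisher)
127.0.0.1:6379> PUBLISH channel1 "hello subscribers"
(integer) 1
# client A then receives
1) "message"
2) "channel1"
3) "hello subscribers"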
9. Redis transaction
A Redis transaction can execute multiple commands at once:
1. Commands are queued (cached) until the EXEC command that ends the transaction is sent.
2. When EXEC is received, the queued commands are executed; if one command fails, the remaining commands still continue to execute.
3. While a transaction is executing, requests submitted by other clients are not inserted into the transaction's command sequence.
A transaction goes through three stages
Start the transaction (command: MULTI)
Commands are queued
Execute the transaction (command: EXEC)
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> set key key1
QUEUED
127.0.0.1:6379> get key
QUEUED
127.0.0.1:6379> rename key key001
QUEUED
127.0.0.1:6379> exec
1) OK
2) "key1"
3) OK
10. Redis security configuration
Security can be improved by setting a password parameter in the configuration file:
# requirepass foobared
Remove the leading # to enable and configure the password.
If no password is configured, the query is as follows
127.0.0.1:6379> CONFIG GET requirepass
1) "requirepass"
2) ""
After configuring the password, authentication is required
127.0.0.1:6379> CONFIG GET requirepass
(error) NOAUTH Authentication required.
127.0.0.1:6379> AUTH foobared
OK
127.0.0.1:6379> CONFIG GET requirepass
1) "requirepass"
2) "foobared"
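The password can also be changed at runtime with CONFIG SET, as sketched below; note that a password set this way is not persisted to redis.conf unless you also edit the file:
127.0.0.1:6379> CONFIG SET requirepass "foobared"
OK
127.0.0.1:6379> AUTH foobared
OK
127.0.0.1:6379> CONFIG GET requirepass
1) "requirepass"
2) "foobared"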
11. Redis persistence
Redis persistence works in two ways: Snapshotting (snapshots) and Append-only file (AOF).
Snapshotting (snapshot)
1. The data held in memory is written as a snapshot to a binary file, by default dump.rdb.
2. save 900 1
# start a snapshot save if at least 1 key is modified within 900 seconds
3. save 300 10
# start a snapshot save if at least 10 keys are modified within 300 seconds
4. save 60 10000
# start a snapshot save if at least 10000 keys are modified within 60 seconds
Append-only file (AOF)
1. When AOF persistence is used, the server appends every write command it receives to the file (appendonly.aof) via the write function.
2. Description of parameters of AOF persistent storage method
appendonly yes
# enable AOF persistent storage
appendfsync always
# write to disk immediately after every write command; the least efficient, but the safest
appendfsync everysec
# write to disk once per second; a balance between efficiency and safety
appendfsync no
# leave flushing entirely to the OS; the most efficient, but durability cannot be guaranteed
12. Redis performance test
Redis ships with its own benchmarking tool, redis-benchmark.
[root@test ~]# redis-benchmark --help
Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]
 -h <hostname>      Server hostname (default 127.0.0.1)
 -p <port>          Server port (default 6379)
 -s <socket>        Server socket (overrides host and port)
 -a <password>      Password for Redis Auth
 -c <clients>       Number of parallel connections (default 50)
 -n <requests>      Total number of requests (default 100000)
 -d <size>          Data size of SET/GET value in bytes (default 2)
 --dbnum <db>       SELECT the specified db number (default 0)
 -k <boolean>       1=keep alive 0=reconnect (default 1)
 -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
                    Using this option the benchmark will expand the string __rand_int__
                    inside an argument with a 12 digits number in the specified range
                    from 0 to keyspacelen-1. The substitution changes every time a command
                    is executed. Default tests use this to hit random keys in the specified range.
 -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).
 -q                 Quiet. Just show query/sec values
 --csv              Output in CSV format
 -l                 Loop. Run the tests forever
 -t <tests>         Only run the comma separated list of tests. The test names are the same
                    as the ones produced as output.
 -I                 Idle mode. Just open N idle connections and wait.

Examples:

 Run the benchmark with the default configuration against 127.0.0.1:6379:
   $ redis-benchmark

 Use 20 parallel clients, for a total of 100k requests, against 192.168.1.1:
   $ redis-benchmark -h 192.168.1.1 -p 6379 -n 100000 -c 20

 Fill 127.0.0.1:6379 with about 1 million keys only using the SET test:
   $ redis-benchmark -t set -n 1000000 -r 100000000

 Benchmark 127.0.0.1:6379 for a few commands producing CSV output:
   $ redis-benchmark -t ping,set,get -n 100000 --csv

 Benchmark a specific command line:
   $ redis-benchmark -r 10000 -n 10000 eval 'return redis.call("ping")' 0

 Fill a list with 10000 random elements:
   $ redis-benchmark -r 10000 -n 10000 lpush mylist __rand_int__

 On user specified command lines __rand_int__ is replaced with a random integer
 with a range of values selected by the -r option.
An actual test that executes 1 million requests:
[root@test ~]# redis-benchmark -n 1000000 -q
PING_INLINE: 152578.58 requests per second
PING_BULK: 150308.14 requests per second
SET: 143266.47 requests per second
GET: 148632.58 requests per second
INCR: 145857.64 requests per second
LPUSH: 143781.45 requests per second
LPOP: 147819.66 requests per second
SADD: 138350.86 requests per second
SPOP: 134282.27 requests per second
LPUSH (needed to benchmark LRANGE): 141302.81 requests per second
LRANGE_100 (first 100 elements): 146756.67 requests per second
LRANGE_300 (first 300 elements): 148104.27 requests per second
LRANGE_500 (first 450 elements): 152671.75 requests per second
LRANGE_600 (first 600 elements): 148104.27 requests per second
MSET (10 keys): 132731.62 requests per second
13. Backup and recovery of Redis
Redis supports two automatic backup methods:
The first backs up through the dump.rdb file.
The second uses the AOF file to implement automatic backup.
Dump.rdb backup
This is the Redis service's default automatic backup method (when AOF is not enabled); data is loaded automatically from the dump.rdb file when the service starts.
The configuration is in redis.conf:
save 900 1
save 300 10
save 60 10000
You can also manually execute the save command to achieve manual backup
127.0.0.1:6379> set name key
OK
127.0.0.1:6379> set name1 key1
OK
127.0.0.1:6379> SAVE
OK
127.0.0.1:6379> BGSAVE
Background saving started
When Redis snapshots to the dump file, dump.rdb is generated automatically:
# The filename where to dump the DB
dbfilename dump.rdb
-rw-r--r-- 1 root root 253 Apr 17 20:17 dump.rdb
The SAVE command snapshots the current database to the dump file in the main process.
The BGSAVE command forks a child process from the main process to perform the snapshot backup.
The difference between the two is that the former blocks the main process while the latter does not.
Recovery example
# Note that you must specify a directory here, not a file name.
dir /usr/local/redisdata/    # backup file storage path
127.0.0.1:6379> CONFIG GET dir
1) "dir"
2) "/usr/local/redisdata"
127.0.0.1:6379> set key 001
OK
127.0.0.1:6379> set key1 002
OK
127.0.0.1:6379> set key2 003
OK
127.0.0.1:6379> save
OK
Copy the backup file to another directory:
[root@test ~]# ll /usr/local/redisdata/
total 4
-rw-r--r-- 1 root root 49 Apr 17 21:24 dump.rdb
[root@test redisdata]# date
Tue Apr 17 21:25:38 CST 2018
[root@test redisdata]# cp ./dump.rdb /tmp/
Delete data
127.0.0.1:6379> del key1
(integer) 1
127.0.0.1:6379> get key1
(nil)
Stop the service and copy the original backup file back to the backup directory:
[root@test ~]# redis-cli -a foobared shutdown
[root@test ~]# lsof -i :6379
[root@test ~]# cp /tmp/dump.rdb /usr/local/redisdata/
cp: overwrite '/usr/local/redisdata/dump.rdb'? y
[root@test ~]# redis-server /usr/local/redis/redis.conf &
[1] 31487
Log in to see if the data is restored
[root@test ~]# redis-cli -a foobared
127.0.0.1:6379> mget key key1 key2
1) "001"
2) "002"
3) "003"
AOF automatic backup
The Redis service has this configuration turned off by default:
############### APPEND ONLY MODE ###############
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# appendfsync always
appendfsync everysec
# appendfsync no
The relevant parameters of the configuration file have been described in detail earlier.
AOF backup records the full history of executed write commands, much like the MySQL binlog; recovery works by replaying those commands. Note that, before recovery, just as when recovering a database from a binlog, you need to manually remove DEL or other mistaken commands from the AOF file.
AOF differs from dump backup in the following ways:
1. The AOF file format is different from the dump (RDB) file format.
2. The files are read with different priorities; on startup the service loads data in the following order of priority:
If only AOF is configured, the AOF file is loaded on restart to recover the data.
If both RDB and AOF are configured, only the AOF file is loaded to recover the data.
If only RDB is configured, the dump file is loaded at startup to recover the data.
Note: if AOF is enabled but no AOF file exists yet, the database will be empty after startup.
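One way to avoid that empty-database pitfall, sketched below under the assumption of the Redis 3.0 setup used in this article, is to enable AOF on a running instance that already holds data and then rewrite the AOF so the current data set is written into appendonly.aof:
127.0.0.1:6379> CONFIG SET appendonly yes
OK
127.0.0.1:6379> BGREWRITEAOF
Background append only file rewriting started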
14. Introduction to Redis production optimization
1. Memory management optimization
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
# When the members and values of a hash or list are not too large, they are stored in a compact encoding with relatively low memory overhead.
When running Redis on Linux, if the system has relatively little memory, automatic backups may fail. In that case you need to modify the system's vm.overcommit_memory parameter, which controls the Linux memory allocation (overcommit) policy:
0 means the kernel checks whether enough memory is available for the application process; if so, the allocation is allowed, otherwise it fails and an error is returned to the process.
1 means the kernel allows all physical memory to be allocated, regardless of the current memory state.
2 means the kernel refuses allocations that would exceed swap space plus a configurable share (vm.overcommit_ratio) of physical memory, i.e. it never overcommits.
The official statement of Redis is that it is recommended to change the value of vm.overcommit_memory to 1, which can be modified in the following ways:
(1) Edit /etc/sysctl.conf, add vm.overcommit_memory=1, then run sysctl -p to make the configuration take effect
(2) sysctl vm.overcommit_memory=1
(3) echo 1 > /proc/sys/vm/overcommit_memory
2. Memory pre-allocation
3. Persistence mechanism
Scheduled snapshots: relatively inefficient, and data written since the last snapshot can be lost
AOF: better preserves data integrity (keep a single instance small, at most about 2 GB)
Optimization summary
1) Choose the appropriate data types according to business needs
2) Turn off all persistence when the business scenario does not need it (and use SSD disks to improve efficiency)
3) Do not rely on virtual memory (swap); if AOF is needed, write it to disk once per second (everysec)
4) Do not let Redis use more than about 3/5 of the total memory of the server it runs on
5) Set maxmemory to cap memory usage (see the sketch after this list)
6) With large data volumes, split the data across multiple Redis instances by business
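As a sketch of point 5, maxmemory and an eviction policy can be set in redis.conf; the values below are assumptions for illustration, not recommendations from the original:
maxmemory 2gb
maxmemory-policy allkeys-lru
maxmemory-samples 5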
15. Redis cluster application
Clustering means grouping multiple Redis instances to serve the same business requirement, or to achieve high availability and load balancing.
What are the cluster schemes?
1. Haproxy + keepalived + Redis cluster
1) Master-slave replication and read/write splitting are configured through the Redis configuration files.
2) Load balancing is configured through haproxy, and failed nodes are removed from the cluster in time when a failure occurs.
3) keepalived provides high availability for the load balancer.
2. Sentinel, the official cluster management tool
A practical high-availability scheme for a Redis cluster in a production environment:
1) Sentinel is responsible for monitoring the master and slave services in the cluster, sending notifications, and performing automatic failover.
2) The Redis cluster is responsible for serving external requests.
For more information about Redis Sentinel cluster configuration, please see the related article.
3. Redis Cluster
Redis Cluster is Redis's own distributed solution, officially introduced in Redis 3.0; it effectively addresses the need to distribute Redis. When a single machine hits memory, concurrency, or traffic bottlenecks, the Cluster architecture can be used to achieve load balancing.
1) Officially recommended, without question.
2) Decentralized; the cluster can grow to as many as 1000 nodes, and performance scales linearly as nodes are added.
3) Easy to manage: you can add or remove nodes, move slots, and so on.
4) Simple and easy to use.
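A minimal sketch of creating a cluster with the tool that ships with Redis 3.0 (redis-trib.rb); the IPs and ports are placeholders, and newer Redis versions use redis-cli --cluster create instead:
# six instances: three masters, each with one replica
[root@test ~]# ./redis-trib.rb create --replicas 1 \
    192.168.1.11:7000 192.168.1.12:7000 192.168.1.13:7000 \
    192.168.1.11:7001 192.168.1.12:7001 192.168.1.13:7001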
At this point, the study of "what are the entry knowledge points of Redis" is over. I hope it has resolved your doubts; combining theory with practice is the best way to learn, so go and try it. If you want to keep learning more related knowledge, please continue to follow the site for more practical articles.