
Redis walkthrough (5) redis persistence


The word "persistence" invites jokes about lasting longer, but here it means something plain: the process of saving data held in memory to disk (a database is just a specialized use of the disk), so that access can continue after a crash or power outage. Common persistence frameworks in Java, such as Hibernate, iBATIS, and JDBC, are all persistence implementations; an ordinary save-to-file feature counts as well.

Take memcached as a contrast: the data memcached holds is never persisted, so it only survives the lifetime of one process; after the next restart, all of it is gone. This is generally seen as a weakness of memcached. Redis, on the other hand, supports persistence and provides two persistence schemes.

"redis drill series", based on the case presentation, through the image, comparison, graphic way, to achieve the knowledge carding process.

The main outline of this chapter:

Comparison of two persistence schemes for redis

Description of parameters of two persistence schemes for redis

Rehearse the trigger timing of RDB

Rehearse AOF

RDB VS AOF

1. Comparison of the two persistence schemes of redis

Redis provides two persistence mechanisms: RDB (Redis DataBase) and AOF (Append Only File). RDB and AOF are like two brothers in the redis persistence world, cooperating and reinforcing each other, though of course they also compete for some resources.

RDB, in short, takes snapshots of the data held in redis at different points in time and stores them on media such as disk.

AOF approaches persistence from a different angle: it records every write command redis executes, and the next time redis restarts, replaying those write commands in order from first to last reconstructs the data.

RDB saves all the database data held by the server to the hard disk as a binary file.

For a figure of the concrete RDB save steps and more information, see http://www.cnblogs.com/luogankun/p/3986403.html

AOF stands for Append Only File: a file that may only be appended to, never overwritten. The AOF approach records the write commands that have been executed and, during recovery, executes them again in order from first to last. It is that simple, and it resembles the online redo log of database software.

Comparison between the two

Description of the parameters of the two persistence schemes of redis

When it comes to saving files, the following questions generally apply, and not only to redis:

Save timing: when to trigger a save. A balance between data consistency and efficiency.

Save target format: log (text) or binary.

Save directory: a local directory or a shared directory.

Whether to compress: saves space.

Whether to checksum: a safety consideration.

Handling of save failures: report the exception.

Number of threads used: determines speed, but affects other features.

Limits on the saved file: reject files over a certain size, disallow uploading *.exe files, and so on.

With these questions in mind, deducing the corresponding RDB and AOF parameters becomes much easier; a rough mapping is sketched below.
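As an orientation aid (my own reading, not part of the original post), the questions above map roughly onto the parameters discussed next:

# save timing        -> save (RDB), appendfsync (AOF)
# save target format -> dump.rdb (binary snapshot), appendonly.aof (command log)
# compression        -> rdbcompression
# checksum           -> rdbchecksum
# failure handling   -> stop-writes-on-bgsave-error
# file growth limits -> auto-aof-rewrite-percentage, auto-aof-rewrite-min-size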

RDB

# Save the DB to disk:
#
# format: save <seconds> <changes>
#
# Save the dataset to disk if the given number of seconds elapsed and at
# least the given number of write operations occurred.
#
# The lines below mean:
# after 900 seconds, if the value of at least 1 key changed, save
# after 300 seconds, if the values of at least 10 keys changed, save
# after 60 seconds, if the values of at least 10000 keys changed, save
#
# Note: you can disable saving entirely by commenting out all "save" lines.
# You can also disable it with an empty string:
# save ""

save 900 1
save 300 10
save 60 10000

# By default, if the last background save of redis failed, redis stops
# accepting write operations.
# This is a hard way to make users aware that data is not being persisted
# to disk correctly; otherwise nobody would notice the disaster.
#
# Once the background save process works again, redis automatically allows
# writes again.
#
# However, if you have set up reliable monitoring, you may not want redis to
# behave this way; in that case change it to no.
stop-writes-on-bgsave-error yes

# Whether to compress string objects with LZF when dumping the .rdb database.
# Set to yes by default.
# If you want the saving child process to save some CPU, set it to no,
# but the dataset file may then be larger.
rdbcompression yes

# Whether to checksum the rdb file
rdbchecksum yes

# The file name of the dump
dbfilename dump.rdb

# The working directory.
# The dbfilename above only specifies the file name; the file is written
# inside this directory. This option must be a directory, not a file name.
dir ./
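As an aside, the save points can also be inspected and adjusted at runtime without a restart, via CONFIG GET / CONFIG SET. A minimal sketch (host and values assumed to match this walkthrough; runtime changes are lost on restart unless written back to redis.conf):

[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156
192.168.163.156:6379> config get save
1) "save"
2) "900 1 300 10 60 10000"
192.168.163.156:6379> config set save "900 1 300 10"
OK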

AOF

# By default redis dumps data to disk asynchronously. This mode is good
# enough for many applications, but a power outage or a crash of the redis
# process will cause the last updates to be lost (depending on the
# configured save points).
#
# The append only file is an alternative persistence strategy that provides
# much better durability. With the default fsync policy, redis loses at most
# one second of updates in an event like a server power outage, or a single
# update if the redis process itself crashes while the operating system
# keeps running correctly.
#
# AOF and RDB persistence can be enabled at the same time without conflict.
# If AOF is enabled, redis loads the aof file at startup, since it gives
# the better durability guarantee.
# See http://redis.io/topics/persistence for more on data persistence.
appendonly no

# The file name of the append only file (default: appendonly.aof)
# appendfilename appendonly.aof

# Calling fsync() tells the operating system to actually write the data to
# disk instead of waiting for more data in the output buffer.
# Some operating systems really flush the data to disk; others just try to
# do it ASAP.
#
# redis supports three different modes:
#
# no: don't call fsync; let the operating system flush the buffer when it
#     wants. Fast.
# always: fsync after every update to the append only log. Slow, but safest.
# everysec: fsync once per second. A compromise.
#
# The default is everysec, since it is usually the right compromise between
# speed and data safety.
# If you can accept letting the operating system flush the cache on its own,
# you can relax this setting to 'no' (and if you can accept losing a window
# of data, the default rdb snapshots may already be enough); or, on the
# contrary, use 'always', which is slow but safer than 'everysec'.
#
# See the following article for more details:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If you are unsure about the difference between the three, or which suits
# your machine, use the default.

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF policy is set to 'always' or 'everysec' and a background save
# process is performing a lot of disk I/O, on some Linux configurations
# redis may block too long on the fsync() call. Remember, there is no fix
# for this at present: even an fsync in a different process will block.
#
# The following option may alleviate the problem, so that fsync() is not
# called in the main process while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, redis behaves as if it were
# configured with 'appendfsync no'. In practical terms, up to 30 seconds of
# log may be lost in the worst case (with the default Linux settings).
#
# If you have latency problems, set this to 'yes'; otherwise leave it as
# 'no', which is the safest choice.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# redis can automatically call 'BGREWRITEAOF' to rewrite the log file when
# its size grows by the specified percentage.
#
# How it works: redis remembers the size of the log file after each rewrite
# (if no rewrite has happened since restart, the size of the log file at
# startup is used).
#
# This base size is compared with the current size. If the current size is
# greater by the specified percentage, the rewrite is triggered. You should
# also specify a minimum size, to avoid rewriting when the percentage
# increase is reached but the log file is still very small.
#
# Specify a percentage of 0 to disable automatic log file rewriting.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# redis can load a truncated AOF file at startup. Enabled by default.
# (supported since 3.0)
aof-load-truncated yes
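Similarly, appendonly can be flipped at runtime for experiments; a hedged sketch. When switched on, redis seeds the AOF from the current in-memory dataset, and the redis.conf edit is still needed for the setting to survive a restart:

192.168.163.156:6379> config set appendonly yes
OK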

2. Rehearse the trigger timing of RDB

# after 900 seconds, if the value of at least 1 key changed, save
# after 300 seconds, if the values of at least 10 keys changed, save
# after 60 seconds, if the values of at least 10000 keys changed, save
# Note: you can disable saving by commenting out all "save" lines.
# You can also disable it with an empty string:
# save ""

save 900 1
save 300 10
save 60 10000

Use the default RDB configuration

redis.conf

save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
dbfilename dump.rdb
dir ./

2.1 Walkthrough: restart redis, key value lost

[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156
192.168.163.156:6379> flushdb
OK
# assign a key
192.168.163.156:6379> set blog "blog.51cto.com"
OK
192.168.163.156:6379> get blog
"blog.51cto.com"
192.168.163.156:6379> exit
[root@hadoop2 redis]# ps -ef | grep redis
root 2546    1 0 07:42 ?     00:00:05 /usr/local/redis/bin/redis-server 192.168.163.156:6379
root 3235 2517 0 08:54 pts/0 00:00:00 grep redis
# kill the process
[root@hadoop2 redis]# kill -9 2546
# restart redis
[root@hadoop2 redis]# bin/redis-server redis.conf
[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156
# the blog key is lost
192.168.163.156:6379> get blog
(nil)

This result is surprising at first sight. It looks like there is a problem with the claim, rumored far and wide, that "redis key values are never lost".

Actually there is no problem: the flaw in the exercise above is that the amount of data did not meet the trigger conditions. The save state can be checked directly, as sketched below. Read on.
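A quick way to see whether a snapshot ever happened (the timestamp shown is illustrative; field names as in redis 3.x):

192.168.163.156:6379> lastsave
(integer) 1472870000
192.168.163.156:6379> info persistence
... rdb_changes_since_last_save:1
... rdb_last_save_time:1472870000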

2.2 Drill: restart redis without losing key values

[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156
192.168.163.156:6379> set blog "blog.51cto.com"
OK
# send 1w requests with the bundled benchmark tool, to trigger an RDB save
[root@hadoop2 redis]# bin/redis-benchmark -r 10000 -h 192.168.163.156
====== PING_INLINE ======
100000 requests completed in 0.83 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.77% <= 1 milliseconds
...
# kill the redis service
[root@hadoop2 redis]# ps -ef | grep redis
root 3241    1 3 08:54 ?     00:00:19 bin/redis-server 192.168.163.156:6379
root 3351 2517 0 09:05 pts/0 00:00:00 grep redis
[root@hadoop2 redis]# kill -9 3241
# restart the redis service
[root@hadoop2 redis]# bin/redis-server redis.conf
# after the restart, the blog key still exists
[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156
192.168.163.156:6379> get blog
"blog.51cto.com"

So the saying that "redis key values are never lost" holds only under preconditions.
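Note also that the loss in 2.1 was aggravated by kill -9. A clean shutdown lets redis write a final snapshot first; a sketch using the standard SHUTDOWN command (SAVE forces the snapshot, NOSAVE would skip it):

[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156 shutdown save
# redis writes dump.rdb and exits; after a restart the keys are still there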

3. Rehearse AOF

Modify redis.conf to enable AOF and disable RDB

save 900 1
save 300 10
save 60 10000
save ""
stop-writes-on-bgsave-error yes
rdbcompression yes
dbfilename dump.rdb
dir ./
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 16mb
aof-load-truncated yes

3.1 Walkthrough: restart redis, is the key value lost?

192.168.163.156:6379> set blog "blog.51cto.com"
OK
192.168.163.156:6379> exit
[root@hadoop2 redis]# pkill redis
# restart
[root@hadoop2 redis]# bin/redis-server redis.conf
[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156
# the key is not lost
192.168.163.156:6379> get blog
"blog.51cto.com"

Conclusion: thanks to AOF, the key value is not lost, unlike with RDB. The reason is that the two have different trigger timings.

At this point, confirm the generated AOF file

# only one record is saved
[root@hadoop2 redis]# ll
-rw-r--r--. 1 root root 67 Sep 3 10:31 appendonly.aof
# view the log contents (a text file)
[root@hadoop2 redis]# cat appendonly.aof
*2 $6 SELECT $1 0
*3 $3 set $4 blog $14 blog.51cto.com
(each token is on its own line in the actual file)

3.2 Rehearse the effect of AOF rewriting

Looking at the log contents confirms that it stores the command log of operations. That is bound to make the file grow very fast, unlike an RDB file: RDB stores a binary snapshot at a point in time, i.e. the memory-oriented end result. An example of RDB content is given later.

Prepare test data

[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156
# operate on the key repeatedly
192.168.163.156:6379> incr year
(integer) 2002
192.168.163.156:6379> incr year
(integer) 2003
192.168.163.156:6379> incr year
(integer) 2004
192.168.163.156:6379> incr year
(integer) 2005
192.168.163.156:6379> incr year
(integer) 2006
192.168.163.156:6379> incr year
(integer) 2007
192.168.163.156:6379> incr year
(integer) 2008
192.168.163.156:6379> incr year
(integer) 2009
192.168.163.156:6379> incr year
(integer) 2010
192.168.163.156:6379> get year
"2010"
# get is not written to the AOF log
[root@hadoop2 redis]# cat appendonly.aof
*2 $6 SELECT $1 0
*3 $3 set $4 blog $14 blog.51cto.com
*2 $6 SELECT $1 0
*2 $4 incr $4 year
*2 $4 incr $4 year
... (nine incr entries in total; one token per line in the actual file)
# with the benchmark tool, batch-insert data to trigger an AOF file rewrite
[root@hadoop2 redis]# bin/redis-benchmark -r 20000 -h 192.168.163.156
# the appendonly.aof size over time: 11M -> 27M -> 32M -> 33M -> 36M -> 42M -> 44M -> 8.5M -> 11M -> 15M ...
[root@hadoop2 redis]# ll appendonly.aof -h
-rw-r--r--. 1 root root 42M Sep 3 11:04 appendonly.aof
[root@hadoop2 redis]# ll appendonly.aof -h
-rw-r--r--. 1 root root 8.5M Sep 3 11:04 appendonly.aof

The file suddenly dropped from 42M to 8.5M: an AOF rewrite clearly took place.

Using year as the target, confirm what the AOF rewrite did.

Comparing before and after the rewrite shows that the results of multiple operations are collapsed into one equivalent command, which greatly reduces storage space; a sketch follows.
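For illustration, a hedged sketch of what the rewritten AOF is expected to contain for the year key: the nine incr commands collapse into a single equivalent SET (exact byte counts depend on the value; one token per line in the actual file):

[root@hadoop2 redis]# cat appendonly.aof
*3 $3 SET $4 year $4 2010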

1. We can also trigger an AOF rewrite manually with bgrewriteaof.

2. Similarly, calling BGSAVE manually triggers a snapshot save.

In a production environment, watch out for the blocking these commands can cause. A sketch of both follows.
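A hedged sketch of both manual triggers, including how to watch them finish via INFO (field names as in redis 3.x):

192.168.163.156:6379> bgrewriteaof
Background append only file rewriting started
192.168.163.156:6379> bgsave
Background saving started
192.168.163.156:6379> info persistence
... aof_rewrite_in_progress:0
... rdb_bgsave_in_progress:0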

4. AOF VS RDB

The two persistence schemes are not mutually exclusive; they support and complement each other.

The official documentation recommends enabling both at the same time.

Enable both kinds of persistence at the same time (redis.conf)

save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
dbfilename dump.rdb
dir ./
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 16mb
aof-load-truncated yes
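To confirm after a restart that both schemes are active, INFO persistence is again handy; a sketch showing only the relevant fields (note that when both files exist, redis restores from the AOF, not the RDB):

192.168.163.156:6379> info persistence
... aof_enabled:1
... rdb_last_bgsave_status:ok
... aof_last_bgrewrite_status:ok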

4.1 Compare the file sizes of the two schemes

# 1. delete the rdb and aof files
# 2. restart redis-server
# 3. send a batch of 1w requests
[root@hadoop2 redis]# bin/redis-benchmark -r 10000 -h 192.168.163.156 -q
# 4. compare the sizes of the 2 files
[root@hadoop2 redis]# ll -h
total 29M
-rw-r--r--. 1 root root 28M Sep 3 11:26 appendonly.aof
-rw-r--r--. 1 root root 457K Sep 3 11:26 dump.rdb

4.2 Walkthrough: aof file corruption

[root@hadoop2 redis]# bin/redis-server redis.conf
[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156
192.168.163.156:6379> set blog "blog.51cto.com"
OK
192.168.163.156:6379> set subject "redis"
OK
192.168.163.156:6379> set year 2016
OK
192.168.163.156:6379> keys *
1) "year"
2) "subject"
3) "blog"

Manually modify the appendonly.aof file

Verifying with redis-check-aof reports, as expected, that "there is something wrong with the file".

[root@hadoop2 redis]# bin/redis-check-aof
Usage: bin/redis-check-aof [--fix] <file.aof>
[root@hadoop2 redis]# bin/redis-check-aof appendonly.aof
0x 0: Expected \r\n, got: 0a00
AOF analyzed: size=112, ok_up_to=0, diff=112
AOF is not valid

Check the startup log (the WARNING lines are set aside for now).

4690:M 03 Sep 11:42:12.213 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
4690:M 03 Sep 11:42:12.213 # Server started, Redis version 3.2.3
4690:M 03 Sep 11:42:12.213 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
4690:M 03 Sep 11:42:12.214 # Bad file format reading the append only file: make a backup of your AOF file, then use ./redis-check-aof --fix <filename>

Client connection failed

[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156

Could not connect to Redis at 192.168.163.156:6379: Connection refused

Repair the aof file

[root@hadoop2 redis]# bin/redis-check-aof
Usage: bin/redis-check-aof [--fix] <file.aof>
[root@hadoop2 redis]# bin/redis-check-aof --fix appendonly.aof
0x 0: Expected \r\n, got: 0a00
AOF analyzed: size=112, ok_up_to=0, diff=112
This will shrink the AOF from 112 bytes, with 112 bytes, to 0 bytes
Continue? [y/N]: y
Successfully truncated AOF

# the result of the fix is that the AOF file is emptied

The restart now succeeds, but unfortunately all the data is lost.

[root@hadoop2 redis]# bin/redis-server redis.conf
[root@hadoop2 redis]# cat appendonly.aof
[root@hadoop2 redis]# bin/redis-cli -h 192.168.163.156
192.168.163.156:6379> keys *
(empty list or set)

Of course, a corrupted AOF file as in this demo is a serious problem. You can see how important backups are.
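Following the advice in the startup log, copy the file before running the destructive --fix; a minimal sketch:

[root@hadoop2 redis]# cp appendonly.aof appendonly.aof.bak
[root@hadoop2 redis]# bin/redis-check-aof --fix appendonly.aof
# if the fix truncates too much, the .bak copy can still be inspected by hand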

Another benefit of the AOF approach is illustrated by a "scene reproduction". While operating redis, a student accidentally executed FLUSHALL, wiping all the data in redis memory, a very tragic event. It is not the end of the world, though: as long as redis has AOF persistence configured and the AOF file has not yet been rewritten, we can stop redis as quickly as possible, edit the AOF file and delete the FLUSHALL command on the last line, then restart redis, restoring it to the state just before FLUSHALL. Isn't that amazing? This is one of the benefits of AOF persistence. But if the AOF file has already been rewritten, the data cannot be recovered this way.

A predecessor quietly asked whether this really works. So I tried it again.

This time I moved faster, shutting down the redis service immediately and editing the aof file directly with vi. The result: the data can indeed be recovered. The procedure is sketched below.
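A sketch of that recovery, assuming AOF is enabled and no rewrite has run since the accident:

# stop redis immediately, before any AOF rewrite can run
[root@hadoop2 redis]# pkill redis
# the offending command sits at the tail of the log
[root@hadoop2 redis]# tail -3 appendonly.aof
*1
$8
FLUSHALL
# delete those last three lines with vi, then restart
[root@hadoop2 redis]# vi appendonly.aof
[root@hadoop2 redis]# bin/redis-server redis.conf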

The full demonstration is not repeated here.

It is persistence that moves redis a big step closer to being "storage" and guarantees the reliability of the data.

A friendly tip: drill articles like this one are best read alongside the official reference documentation; the effect is remarkable.
