
[redis Learning] An Introduction to the Redis Database



[tutorial contents]

1. What is redis?
2. Who is the author of redis?
3. Who is using redis?
4. Learn to install redis
5. Learn to start redis
6. Use the redis client
7. Redis data structures: introduction
8. Redis data structures: strings
9. Redis data structures: lists
10. Redis data structures: sets
11. Redis data structures: sorted sets
12. Redis data structures: hashes
13. Talk about redis persistence: two ways
14. Talk about redis persistence: RDB
15. Talk about redis persistence: AOF
16. Talk about redis persistence: AOF rewriting
17. Talk about redis persistence: how to choose between RDB and AOF
18. Talk about master-slave replication: usage
19. Talk about master-slave replication: how synchronization works
20. Talk about redis transaction processing
21. Understand the redis configuration: introduction
22. Understand the redis configuration: general
23. Understand the redis configuration: snapshotting
24. Understand the redis configuration: replication
25. Understand the redis configuration: security
26. Understand the redis configuration: limits
27. Understand the redis configuration: append-only mode
28. Understand the redis configuration: Lua scripting
29. Understand the redis configuration: slow log
30. Understand the redis configuration: event notification
31. Understand the redis configuration: advanced configuration

[what is redis]

Redis is an open-source, in-memory key-value database written in C. It is accessed over the network and supports persistence to disk.

The official website of redis is easy to remember: redis.io. (For the curious, the .io suffix is the country-code domain of the British Indian Ocean Territory.)

Currently, VMware funds the development and maintenance of the redis project.

[who is the author of redis]

The author of redis is Salvatore Sanfilippo. He comes from Sicily, Italy, now lives in Catania, and currently works for Pivotal.

He goes by the handle antirez. If you are interested, you can visit his blog at antirez.com, or follow him on GitHub at http://github.com/antirez.

[who is using redis]

Blizzard, digg, stackoverflow, github, flickr...

[learn to install redis]

Download the latest redis-X.Y.Z.tar.gz from redis.io, decompress it, enter the redis-X.Y.Z directory and run make. The installation really is that simple.
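As a minimal sketch (substitute the actual version number of the tarball you downloaded), the whole installation boils down to:

$ tar xzf redis-X.Y.Z.tar.gz
$ cd redis-X.Y.Z
$ make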

When make succeeds, some binary executables are generated under the src folder, including redis-server, redis-cli, and so on:

The copy code is as follows:

$ find . -type f -executable
./redis-benchmark    // tool for benchmarking redis performance
./redis-check-dump   // tool for repairing a broken dump.rdb file
./redis-cli          // the redis command-line client
./redis-server       // the redis server
./redis-check-aof    // tool for repairing a broken AOF file
./redis-sentinel     // used for cluster management (sentinel)

[learn to start redis]

Starting redis is very simple: you can start the server directly with ./redis-server, or specify a configuration file to load like this:

The copy code is as follows:

./redis-server ../redis.conf

By default, redis-server runs in a non-daemon manner, and the default service port is 6379.

There is also an interesting story about why the author chose 6379 as the default port: on a phone keypad, 6379 spells "MERZ". The author explains the story in a post on his blog.

[use redis client]

Let's look directly at an example:

The copy code is as follows:

// start the redis client like this
$ ./redis-cli

// use the set command to set a key and its value
127.0.0.1:6379> set name "roc"
OK

// get the value of name
127.0.0.1:6379> get name
"roc"

// shut down the redis server from the client
127.0.0.1:6379> shutdown
127.0.0.1:6379>

[redis data structure-introduction]

Redis is an advanced key:value storage system in which value supports five data types:

1. String (strings)

two。 List of strings (lists)

3. Collection of strings (sets)

4. Ordered set of strings (sorted sets)

5. Hash (hashes)

With regard to key, there are a few points to remind you:

1. Keys should not be too long; try to keep them under 1024 bytes. Very long keys waste memory and slow down lookups.

2. Keys should not be too short either; overly short keys hurt readability.

3. Within a project it is best to use a uniform key naming scheme, such as user:10000:passwd (see the sketch below).
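For instance, with such a naming scheme a session might look like this (the user id 10000 and the value "s3cret" are purely illustrative):

127.0.0.1:6379> set user:10000:passwd "s3cret"
OK
127.0.0.1:6379> get user:10000:passwd
"s3cret"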

[redis data structure-strings]

Some people say that if you only use string types in redis and do not use the persistence feature of redis, then redis is very much like memcache. This shows that the strings type is a very basic data type and a necessary data type for any storage system.

Let's look at the simplest example:

The copy code is as follows:

set mystr "hello world!"   // set a string value
get mystr                  // read the string value back

The use of the string type is that simple, because it is binary safe, so you can store the contents of an image file as a string.

In addition, we can also perform numeric operations through string types:

The copy code is as follows:

127.0.0.1:6379> set mynum "2"
OK
127.0.0.1:6379> get mynum
"2"
127.0.0.1:6379> incr mynum
(integer) 3
127.0.0.1:6379> get mynum
"3"

Look, redis converts the string type to a numeric value when it comes to numeric operations.

Because INCR and related commands are atomic operations, we can use redis's INCR, INCRBY, DECR and DECRBY commands to implement atomic counters. For example, if three clients concurrently issue INCR against mynum (whose value is 2), the final value of mynum is guaranteed to be 5. Many websites use this property of redis for business statistics and counting.
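As a small sketch, a page-view counter needs nothing more than these commands (the key name pageview:home is just an illustrative choice):

127.0.0.1:6379> set pageview:home 0
OK
127.0.0.1:6379> incr pageview:home
(integer) 1
127.0.0.1:6379> incrby pageview:home 10
(integer) 11
127.0.0.1:6379> decr pageview:home
(integer) 10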

[redis data structure-lists]

Another important data structure of redis is called lists, which is translated into Chinese as "list".

First of all, to be clear: the underlying implementation of redis lists is not an array but a linked list. This means that inserting a new element at the head or tail of a list takes constant time even for a list with millions of elements; an LPUSH into the head of a 10-element list is just as fast as an LPUSH into the head of a list with tens of millions of elements.

That advantage comes with a drawback: locating an element by index in a linked list is slower than indexing into an array.

The common list operations include LPUSH, RPUSH, LRANGE and so on. LPUSH inserts a new element on the left side of the list, RPUSH inserts a new element on the right side, and LRANGE extracts a range of elements. Let's look at a few examples:

The copy code is as follows:

// create a new list called mylist and insert the element "1" at its head
127.0.0.1:6379> lpush mylist "1"
(integer) 1          // the reply is the number of elements now in mylist

// insert the element "2" at the right (tail) of mylist
127.0.0.1:6379> rpush mylist "2"
(integer) 2

// insert the element "0" at the left (head) of mylist
127.0.0.1:6379> lpush mylist "0"
(integer) 3

// list the elements of mylist from index 0 to index 1
127.0.0.1:6379> lrange mylist 0 1
1) "0"
2) "1"

// list the elements of mylist from index 0 to the last one (-1)
127.0.0.1:6379> lrange mylist 0 -1
1) "0"
2) "1"
3) "2"

Lists are widely used; just to name a few scenarios:

1. We can use a list to implement a message queue: insertion order is preserved, so unlike MySQL we do not need an ORDER BY to keep messages in order (see the sketch after this list).

2. Pagination is easy to implement with LRANGE.

3. In a blog system, the comments for each post can be stored in their own list.
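A rough sketch of the message-queue idea (myqueue, job1 and job2 are illustrative names): a producer pushes on one end, a consumer pops from the other, and FIFO order falls out naturally.

127.0.0.1:6379> lpush myqueue "job1"
(integer) 1
127.0.0.1:6379> lpush myqueue "job2"
(integer) 2
127.0.0.1:6379> rpop myqueue
"job1"
127.0.0.1:6379> rpop myqueue
"job2"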

[redis data structure-Collection]

Redis sets are unordered collections: the elements have no defined order.

Collection-related operations are also rich, such as adding new elements, deleting existing elements, taking intersection, taking union, taking difference sets, and so on. Let's look at examples:

The copy code is as follows:

// add a new element "one" to the set myset
127.0.0.1:6379> sadd myset "one"
(integer) 1
127.0.0.1:6379> sadd myset "two"
(integer) 1

// list all elements of the set myset
127.0.0.1:6379> smembers myset
1) "one"
2) "two"

// check whether the element "one" is in myset; a reply of 1 means it exists
127.0.0.1:6379> sismember myset "one"
(integer) 1

// check whether the element "three" is in myset; a reply of 0 means it does not exist
127.0.0.1:6379> sismember myset "three"
(integer) 0

// create a new set yourset
127.0.0.1:6379> sadd yourset "1"
(integer) 1
127.0.0.1:6379> sadd yourset "2"
(integer) 1
127.0.0.1:6379> smembers yourset
1) "1"
2) "2"

// take the union of the two sets
127.0.0.1:6379> sunion myset yourset
1) "1"
2) "one"
3) "2"
4) "two"

Sets have some other common uses. For example, QQ has a social feature called "friend tags": you can tag your friends with labels such as "beauty", "tuhao" or "brother". With redis you can store each user's tags in a set of their own.
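A minimal sketch of that idea (the user ids and tag values are made up for illustration): one set per user, and set operations such as SINTER then answer questions like "which tags do these two users share?".

127.0.0.1:6379> sadd user:1000:tags "beauty" "tuhao"
(integer) 2
127.0.0.1:6379> sadd user:1001:tags "tuhao" "brother"
(integer) 2
127.0.0.1:6379> sinter user:1000:tags user:1001:tags
1) "tuhao"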

[redis data structure-ordered set]

Redis not only provides sets but also, thoughtfully, sorted sets. Each element in a sorted set is associated with a score, and that score is what the elements are sorted by.

Most of the time, we call the ordered set in redis zsets, because in redis, the operation instructions related to the ordered set begin with z, such as zrange, zadd, zrevrange, zrangebyscore and so on.

As usual, let's look at a few examples:

The copy code is as follows:

// create a sorted set myzset and add the element baidu.com with score 1
127.0.0.1:6379> zadd myzset 1 baidu.com
(integer) 1

// add the element 360.com to myzset with score 3
127.0.0.1:6379> zadd myzset 3 360.com
(integer) 1

// add the element google.com to myzset with score 2
127.0.0.1:6379> zadd myzset 2 google.com
(integer) 1

// list all elements of myzset together with their scores; as you can see, myzset is ordered by score
127.0.0.1:6379> zrange myzset 0 -1 withscores
1) "baidu.com"
2) "1"
3) "google.com"
4) "2"
5) "360.com"
6) "3"

// list only the elements of myzset
127.0.0.1:6379> zrange myzset 0 -1
1) "baidu.com"
2) "google.com"
3) "360.com"

[redis data structure-hash]

Finally, let me introduce hashes. Hashes have been available since redis 2.0.0.

A hash stores a mapping from string fields to string values. For example, a hash is a good fit for storing a user object with fields such as username, password and age.

Let's look at an example:

The copy code is as follows:

// create a hash and assign its fields
127.0.0.1:6379> HMSET user:001 username antirez password P1pp0 age 34
OK

// list the contents of the hash
127.0.0.1:6379> HGETALL user:001
1) "username"
2) "antirez"
3) "password"
4) "P1pp0"
5) "age"
6) "34"

// change one value in the hash
127.0.0.1:6379> HSET user:001 password 12345
(integer) 0

// list the contents of the hash again
127.0.0.1:6379> HGETALL user:001
1) "username"
2) "antirez"
3) "password"
4) "12345"
5) "age"
6) "34"

The hash commands are also rich: HGET, HMGET, HDEL, HINCRBY and others are worth looking up in the command reference on redis.io if you need them.
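For instance, individual fields can be read or updated without touching the rest of the hash (continuing the user:001 example above):

127.0.0.1:6379> HGET user:001 username
"antirez"
127.0.0.1:6379> HINCRBY user:001 age 1
(integer) 35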

[talk about redis persistence-two ways]

Redis provides two ways of persistence, namely RDB (Redis DataBase) and AOF (Append Only File).

RDB, in short, generates point-in-time snapshots of the data held in redis and saves them to media such as disk.

AOF approaches persistence from a different angle: it records every write command executed by redis, and the next time redis restarts, the data is recovered by replaying those write commands from beginning to end.

In fact, both RDB and AOF can be used at the same time. In this case, if redis is restarted, AOF will be preferred for data recovery. This is because the data recovery in AOF is more complete.

If you don't have the need for data persistence, you can also turn off RDB and AOF, so that redis will become a pure memory database, just like memcache.

[talk about redis persistence-RDB]

The RDB method persists the redis data set as it exists at a certain point in time to disk; it is a snapshot style of persistence.

During persistence, redis first writes the data to a temporary file, and only when the persistence process has finished does that temporary file replace the previously persisted file. This is what makes it safe to back up the snapshot at any time: the snapshot file is always complete.

For the RDB mode, redis creates (fork) a child process separately for persistence, while the main process does not perform any IO operations, thus ensuring the extremely high performance of redis.

If large-scale data recovery is needed and is not very sensitive to the integrity of data recovery, then RDB is more efficient than AOF.

Although RDB has many advantages, its disadvantages can not be ignored. If you are very sensitive to the integrity of your data, then RDB is not for you, because even if you persist every 5 minutes, there will still be nearly 5 minutes of data loss when redis fails. Therefore, redis also provides another way of persistence, and that is AOF.

[talk about redis persistence-AOF]

AOF stands for Append Only File: a file that is only ever appended to, never overwritten.

As mentioned earlier, the AOF method records the write instructions that have been executed and executes the instructions again in the order from front to back when the data is recovered.

We can turn on AOF by setting appendonly yes in redis.conf. Whenever a write command (such as SET) is executed, redis appends it to the end of the AOF file.

The default AOF persistence policy is fsync once per second (fsync refers to recording write instructions in the cache to disk), because in this case, redis can still maintain good processing performance, and even if redis fails, only the last second of data will be lost.
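In redis.conf this corresponds to roughly the following two lines (everysec being the default policy just described):

appendonly yes
appendfsync everysec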

It doesn't matter if you happen to encounter incomplete log writes due to full disk space, full inode, or power outage when you append logs. Redis provides a redis-check-aof tool that can be used to repair logs.

Because of this append-only approach, the AOF file keeps growing if nothing is done about it. For this reason redis provides an AOF rewrite mechanism: when the AOF file grows beyond a configured threshold, redis compacts its contents, keeping only the minimal set of commands needed to recover the data. A vivid example: if we call INCR on a key 100 times, the AOF file has to store 100 commands, which is clearly wasteful; the rewrite mechanism can collapse those 100 commands into a single SET.

AOF rewriting also uses the write-to-a-temporary-file-then-replace approach, so power outages, full disks and similar problems will not affect the availability of the existing AOF file; we can rest easy on that point.

Another benefit of the AOF approach is best illustrated with a "scene reconstruction". A student operating redis accidentally executed FLUSHALL, wiping all the data in redis memory, which is a tragedy. But it is not the end of the world: as long as AOF persistence is enabled and the AOF file has not yet been rewritten, we can stop redis as quickly as possible, edit the AOF file, delete the FLUSHALL command on its last line, and restart redis, restoring everything to the state before the FLUSHALL. Isn't that amazing? This is one of the benefits of AOF persistence. But if the AOF file has already been rewritten, the data cannot be recovered this way.

Although there are many advantages, the AOF approach also has shortcomings, for example, in the case of the same data size, AOF files are larger than RDB files. Moreover, the recovery speed of AOF mode is slower than that of RDB mode.

If you execute the BGREWRITEAOF command directly, redis generates a completely new AOF file that includes the minimum set of commands that can recover existing data.
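Triggering a rewrite by hand looks like this (the reply text below is what a 2.8-era redis typically prints; it may differ slightly between versions):

127.0.0.1:6379> bgrewriteaof
Background append only file rewriting started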

If you are unlucky and the AOF file gets corrupted, don't worry too much: redis will not blindly load a problematic AOF file, but will report an error and exit. At that point you can repair the file with the following steps:

1. Back up the corrupted AOF file.

2. Run redis-check-aof --fix on it (see the sketch below).

3. Use diff -u to compare the two files and see what was removed.

4. Restart redis and load the repaired AOF file.
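A minimal sketch of those steps (appendonly.aof is the default file name; adjust the paths to your setup):

$ cp appendonly.aof appendonly.aof.bak
$ ./redis-check-aof --fix appendonly.aof
$ diff -u appendonly.aof.bak appendonly.aof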

[talk about redis persistence-AOF rewriting]

It is necessary to understand the inner workings of AOF rewriting.

When a rewrite starts, redis creates (forks) a "rewrite child process". This child process writes into a temporary file a compact stream of commands, the minimum needed to rebuild the current data set.

At the same time, the main worker process will accumulate the newly received write instructions into the memory buffer while continuing to write to the original AOF file, which ensures the availability of the original AOF file and avoids accidents in the rewriting process.

When the rewrite child process finishes rewriting, it sends a signal to the parent process, which appends the write instructions cached in memory to the new AOF file when the parent process receives the signal.

When the append is complete, redis will replace the old AOF file with the new AOF file, and then a new write instruction will be appended to the new AOF file.

[talk about redis persistence-how to choose RDB and AOF]

As to whether we should choose RDB or AOF, the official suggestion is to use both at the same time. This provides a more reliable persistence solution.

[talk about master-slave-usage]

Like MySQL, redis supports master-slave synchronization, as well as one-master-multi-slave and multi-slave structures.

A master-slave setup serves two purposes: pure redundancy/backup, and better read performance; for example, expensive operations such as SORT can be offloaded to a slave server.

The master-slave synchronization of redis is asynchronous, which means that master-slave synchronization does not affect the master logic and does not degrade the processing performance of redis.

In the master-slave architecture, you can consider turning off the data persistence function of the master server and only allowing the slave server to persist, which can improve the processing performance of the master server.

In a master-slave architecture, the slave server is usually set to read-only mode, which prevents data on the slave from being modified by mistake. However, a read-only slave still accepts commands such as CONFIG, so you should not expose a slave directly to an untrusted network. If necessary, consider renaming important commands so that outsiders cannot execute them by accident.
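As a hedged aside: besides the configuration file, a running instance can be turned into a slave on the fly with the SLAVEOF command, and back into a master with SLAVEOF NO ONE (the address and the 6380 port below are purely illustrative, standing in for a second instance):

127.0.0.1:6380> slaveof 192.168.1.2 6379
OK
127.0.0.1:6380> slaveof no one
OK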

[talk about master-slave-synchronization principle]

The slave server issues a SYNC instruction to the master server, and when the master server receives this command, it invokes the BGSAVE instruction to create a child process dedicated to data persistence, that is, to write the data of the master server to the RDB file. During data persistence, the master server caches all write instructions executed in memory.

After the execution of the BGSAVE instruction, the master server sends the persisted RDB file to the slave server, receives the file from the server, stores it on disk, and then reads it into memory. After this action is completed, the master server will send the write instructions cached during this period to the slave server in the format of redis protocol.

In addition, and importantly, even if several slaves send SYNC at the same time, the master only runs BGSAVE once and then sends the resulting RDB file to all of them. Before redis 2.8, if a slave was disconnected from the master for any reason, reconnecting meant a full resynchronization between master and slave; since 2.8, redis supports a more efficient incremental (partial) resynchronization strategy, which greatly reduces the cost of recovering from a disconnection.

The master server maintains a buffer in memory that stores the content to be sent to the slave server. After a network disconnection occurs between the slave server and the master server, the slave server will try to connect with the master server again. Once the connection is successful, the slave server will send out the "ID of the master server that you want to synchronize" and "the offset location (replication offset) of the data you want to request". After receiving such a synchronization request, the master server first verifies that the master server ID matches its own ID, and then checks whether the "requested offset" exists in its own buffer, and if both are satisfied, the master server sends incremental content to the slave server.

Incremental synchronization relies on the new PSYNC command, which is only available from redis 2.8 onward.

[talk about the transaction processing of redis]

As we all know, a transaction is "a complete unit of work: either all of it is performed, or none of it is".

Before we talk about redis transaction processing, I'd like to introduce four redis instructions, namely, MULTI, EXEC, DISCARD, and WATCH. These four instructions form the basis of redis transaction processing.

1.MULTI is used to assemble a transaction

2.EXEC is used to execute a transaction

3.DISCARD is used to cancel a transaction

4.WATCH is used to monitor some key, and once the key is changed before the transaction executes, it cancels the transaction execution.

Theory only goes so far, so let's look at an example of MULTI and EXEC:

The copy code is as follows:

redis> MULTI          // mark the start of the transaction
OK
redis> INCR user_id   // the commands are queued up one after another
QUEUED
redis> INCR user_id
QUEUED
redis> INCR user_id
QUEUED
redis> PING
QUEUED
redis> EXEC           // execute the transaction
1) (integer) 1
2) (integer) 2
3) (integer) 3
4) PONG

In the above example, we see the word QUEUED, which means that when we assemble transactions with MULTI, every command will be cached in the memory queue. If QUEUED appears, it means that our command has been successfully inserted into the cache queue. When we execute EXEC in the future, these commands that have been QUEUED will be assembled into a transaction to execute.

For transaction execution, if redis has AOF persistence enabled, then when the transaction executes successfully, the commands in the transaction are written to disk in a single write operation. If a power outage or hardware failure strikes midway through that write, it may be that only some of the commands were appended and the AOF file is left incomplete. We can fix this with the redis-check-aof tool, which removes the incomplete trailing data so that the AOF file becomes fully usable again.

With transactions, we often run into two kinds of errors:

1. Errors before EXEC is called

2. Errors after EXEC is called

"error before calling EXEC" may be caused by syntax errors or insufficient memory. Whenever a command fails to write to the buffer queue, redis records it, and when the client invokes EXEC, redis refuses to execute the transaction. (this is the strategy after version 2.6.5. In versions prior to 2.6.5, redis ignored those commands that failed to join the queue and only executed those that joined the queue successfully). Let's look at an example like this:

The copy code is as follows:

127.0.0.1:6379> multi
OK
127.0.0.1:6379>            // an obviously invalid command was typed here
(error) ERR unknown command ''
127.0.0.1:6379> ping
QUEUED
127.0.0.1:6379> exec
// redis flatly refuses to execute the transaction because "an error occurred earlier"
(error) EXECABORT Transaction discarded because of previous errors.

For "errors after invoking EXEC," redis takes a completely different strategy, that is, redis ignores these errors and continues to execute other commands in the transaction. This is because, for application-level errors, it is not a problem that redis itself needs to consider and deal with, so if a command fails in a transaction, it will not affect the execution of other commands. Let's also look at an example:

The copy code is as follows:

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set age 23
QUEUED

// age is not a set, so the following command is obviously wrong
127.0.0.1:6379> sadd age 15
QUEUED

127.0.0.1:6379> set age 29
QUEUED

// when the transaction executes, redis ignores the failure of the second command
127.0.0.1:6379> exec
1) OK
2) (error) WRONGTYPE Operation against a key holding the wrong kind of value
3) OK

127.0.0.1:6379> get age
"29"   // as you can see, the third command was executed successfully

OK, let's talk about the last instruction "WATCH", which is a good instruction that can help us achieve an effect similar to "optimistic locking", that is, CAS (check and set).

WATCH itself is used to "monitor whether key has been changed", and supports monitoring multiple key at the same time. As long as the transaction is not actually triggered, WATCH will dutifully monitor. Once a key is found to have been modified, nil will be returned when the EXEC is executed, indicating that the transaction cannot be triggered.

The copy code is as follows:

127.0.0.1:6379> set age 23
OK

127.0.0.1:6379> watch age   // start watching age
OK

127.0.0.1:6379> set age 24  // before EXEC, the value of age is modified
OK

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set age 25
QUEUED
127.0.0.1:6379> get age
QUEUED
127.0.0.1:6379> exec        // trigger EXEC
(nil)                       // the transaction is not executed
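Conversely, if nobody touches the watched key before EXEC, the transaction goes through. An application typically wraps this in a retry loop: WATCH, read, MULTI ... EXEC, and if EXEC returns nil, start over. A minimal sketch of the successful case:

127.0.0.1:6379> watch age
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set age 25
QUEUED
127.0.0.1:6379> exec
1) OK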

[teach you to understand redis configuration-introduction]

We can specify the configuration file that should be loaded when we start redis-server as follows:

The copy code is as follows:

$ ./redis-server /path/to/redis.conf

Next, let's explain the meaning of each configuration item in the redis configuration file. Note that this article is based on the redis-2.8.4 version.

The official redis.conf file shipped with redis is 700+ lines long; only 100 or so of those lines are actual configuration, and the remaining 600 or so are comments.

At the beginning of the configuration file, some units of measurement are first identified:

The copy code is as follows:

# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes

As you can see, units in the redis configuration are not case-sensitive: 1GB, 1Gb and 1gB all mean the same thing. This also shows that redis only supports byte units, not bits.

Redis supports including external configuration files from the main configuration file, much like the include directive in C/C++, for example:

The copy code is as follows:

Include / path/to/other.conf

If you look at the redis configuration file, you will find that it is still very organized. The redis configuration file is divided into several large areas, which are:

1. General (general)

2. Snapshotting (snapshotting)

3. Replication (replication)

4. Security (security)

5. Limits (limits)

6. Append-only mode (append only mode)

7. Lua scripting (lua scripting)

8. Slow log (slow log)

9. Event notification (event notification)

Let's explain them one by one.

[teach you to understand redis configuration-generic]

By default, redis does not run as daemon. You can control the running form of redis through the daemonize configuration item. If changed to yes, then redis will run as daemon:

The copy code is as follows:

Daemonize no

When running as a daemon, redis writes a pid file, by default to /var/run/redis.pid. You can change where the pid file is written with pidfile, for example:

The copy code is as follows:

pidfile /path/to/redis.pid

By default, redis responds to connection requests from all available network cards on the machine. Of course, redis allows you to specify the IP to bind through bind configuration items, such as:

The copy code is as follows:

Bind 192.168.1.2 10.8.4.2

The default service port for redis is 6379, which you can modify through the port configuration item. If the port is set to 0, redis will not listen on the port.

The copy code is as follows:

Port 6379

Some students will ask, "if redis does not listen on the port, how can it communicate with the outside world?" in fact, redis also supports receiving requests through unix socket. You can specify the path to the unixsocket file through the unixsocket configuration item and the permissions for the file through unixsocketperm.

The copy code is as follows:

unixsocket /tmp/redis.sock

unixsocketperm 755

When a redis client has been idle for a while without sending any request, the server is allowed to close the connection. You can set this idle timeout with the timeout option; 0 means connections are never closed for being idle.

The copy code is as follows:

Timeout 0

The TCP connection survival policy can be set through the tcp-keepalive configuration item (in seconds). If it is set to 60 seconds, the server will issue an ACK request to the client whose connection is idle every 60 seconds to check whether the client has hung up, and the client that does not respond will close its connection. So it takes up to 120 seconds to close a connection. If set to 0, no survival test will be performed.

The copy code is as follows:

Tcp-keepalive 0

Redis supports setting the log level through the loglevel configuration item, which is divided into four levels, namely debug, verbose, notice and warning.

The copy code is as follows:

Loglevel notice

Redis also supports setting the location where log files are generated through logfile configuration items. If set to an empty string, redis outputs the log to standard output. If you set the log to output to standard output in the daemon case, the log will be written to / dev/null.

The copy code is as follows:

Logfile ""

If you want the log to be printed to syslog, it's also easy to control it through syslog-enabled. In addition, syslog-ident allows you to specify log flags in syslog, such as:

The copy code is as follows:

Syslog-ident redis

It also supports specifying syslog devices, which can be USER or LOCAL0-LOCAL7. You can refer to the usage of the syslog service itself for details.

The copy code is as follows:

Syslog-facility local0

For redis, you can set the total number of databases. If you want a redis to contain 16 databases, the settings are as follows:

The copy code is as follows:

Databases 16

The numbers of these 16 databases will be 0 to 15. The default database is the database numbered 0. Users can use select to select the appropriate database.
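For example, keys written in database 1 are invisible from database 0; a quick illustration (the key foo is made up):

127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> set foo "bar"
OK
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> get foo
(nil)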

[teach you to read redis configuration-Snapshot]

The snapshotting section mainly covers the configuration related to redis's RDB persistence; let's take a look.

We can use the following instructions to save the data to disk, that is, to control the RDB snapshot function:

The copy code is as follows:

save <seconds> <changes>

For example:

The copy code is as follows:

save 900 1      // take a snapshot if at least 1 key changed within 900 seconds (15 minutes)

save 300 10     // take a snapshot if at least 10 keys changed within 300 seconds (5 minutes)

save 60 10000   // take a snapshot if at least 10000 keys changed within 60 seconds

If you want to disable the RDB persistence policy, as long as you don't set any save instructions, or pass an empty string parameter to save, you can achieve the same effect, like this:

The copy code is as follows:

Save ""

If the RDB snapshot feature is enabled and redis fails to persist data to disk, then by default redis stops accepting write requests. The advantage is that it makes the user aware that the data in memory and the data on disk are now inconsistent; if redis kept accepting writes despite this, it could lead to disastrous consequences.

If the next RDB persistence is successful, redis will automatically resume accepting write requests.

Of course, if you don't care about such data inconsistencies or other means of discovering and controlling such inconsistencies, you can turn off this feature to ensure that redis continues to accept new write requests if snapshot writes fail. The configuration items are as follows:

The copy code is as follows:

Stop-writes-on-bgsave-error yes

For snapshots stored on disk, you can set whether to compress storage. If so, redis uses the LZF algorithm for compression. If you don't want to consume CPU for compression, you can set this feature to off, but the snapshots stored on disk will be larger.

The copy code is as follows:

Rdbcompression yes

After storing the snapshot, we can also have redis use the CRC64 algorithm for data verification, but doing so will increase the performance consumption by about 10%. If you want to get the maximum performance improvement, you can turn this feature off.

The copy code is as follows:

Rdbchecksum yes

We can also set the name of the snapshot file, which is configured by default:

The copy code is as follows:

Dbfilename dump.rdb

Finally, you can also set the path where the snapshot file is stored. For example, the default setting is the current folder:

The copy code is as follows:

dir ./

[teach you to read redis configuration-copy]

Redis provides master-slave synchronization function.

Through the slaveof configuration item, you can control a redis as the slave server of another redis, and navigate to the location of the master redis by specifying the IP and port. In general, we recommend that users set a different frequency of snapshot persistence cycles for slave redis, or configure a different service port for slave redis, and so on.

The copy code is as follows:

slaveof <masterip> <masterport>

If the master redis sets the authentication password (using requirepass to set it), masterauth should be used to set the verification password in the configuration of the slave redis, otherwise, the master redis will deny the access request from the slave redis.

The copy code is as follows:

masterauth <master-password>

How does a slave redis handle client requests when it has lost its connection to the master, or while master-slave synchronization is still in progress? Redis offers two options:

The first option: if slave-serve-stale-data is set to yes (the default), the slave redis will continue to respond to read and write requests from the client.

The second option: if slave-serve-stale-data is set to no, the slave redis will return "SYNC with master in progress" to the client's request. Of course, there are exceptions. When the client sends an INFO request and a SLAVEOF request, the slave redis will still process it.

You can control whether a slave redis can accept write requests. Writing data directly to the slave redis is generally only suitable for those data with a very short life cycle, because the temporary data will be cleaned up when the master-slave synchronization occurs. Since the redis2.6 version, the default is read-only from redis.

The copy code is as follows:

Slave-read-only yes

Read-only slave redis is not suitable for direct exposure to untrusted clients. To minimize risk, you can use the rename-command directive to rename some potentially destructive commands to avoid direct external calls. For example:

The copy code is as follows:

Rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52

A slave redis periodically sends PING packets to its master. You can control the interval with the repl-ping-slave-period option; the default is 10 seconds.

The copy code is as follows:

Repl-ping-slave-period 10

During master-slave synchronization, a timeout can be detected in any of these cases:

1. From the slave's point of view, during a large bulk I/O transfer (the initial SYNC).

2. From the slave's point of view, when the master stops sending data or PINGs.

3. From the master's point of view, when the slave stops replying (stops acknowledging).

You can set the limit for these timeouts, but make sure it is larger than repl-ping-slave-period; otherwise a timeout will be detected every time traffic between master and slave is low.

The copy code is as follows:

Repl-timeout 60

We can control whether TCP_NODELAY is disabled during master-slave synchronization. With TCP_NODELAY disabled (the option below set to yes), the master uses fewer TCP packets and less bandwidth to send data to the slave, at the cost of some extra replication latency, roughly 40 milliseconds. With TCP_NODELAY left enabled (the option set to no), replication latency is lower but more bandwidth is used. If you are not familiar with TCP_NODELAY, it is worth reading up on it (it controls Nagle's algorithm).

The copy code is as follows:

Repl-disable-tcp-nodelay no

We can also set the synchronization queue length. The queue length (backlog) is a buffer in the master redis that is used by the master redis to cache data that should be sent to the slave redis during disconnection from the slave redis. In this way, when you reconnect from redis, you don't have to resynchronize all the data, you only need to synchronize this part of the incremental data.

The copy code is as follows:

Repl-backlog-size 1mb

If the master redis cannot connect to the slave redis after waiting for a period of time, the data in the buffer queue will be cleaned up. We can set the length of time for the main redis to wait. If set to 0, it means that it will never clean up. The default is 1 hour.

The copy code is as follows:

Repl-backlog-ttl 3600

We can also set a priority for each slave redis. If the master stops working properly, a high-priority slave will be promoted to master. The lower the number, the higher the priority: for example, if a master has three slaves with priorities 10, 100 and 25, the slave with priority 10 will be chosen first for promotion. A priority of 0 means the slave will never be selected. The default priority is 100.

The copy code is as follows:

Slave-priority 100

With the two options below, the master redis stops accepting write requests when fewer than a given number of slaves are connected with a replication lag no greater than a given number of seconds. This works because each slave normally sends a PING to the master every second, and the master records when it last heard from each slave, so it knows how every slave is doing.

The copy code is as follows:

Min-slaves-to-write 3

Min-slaves-max-lag 10

The example above means: if fewer than 3 slaves are connected with a lag of at most 10 seconds, the master redis no longer accepts external write requests. Setting either of the two options to 0 turns the feature off. By default min-slaves-to-write is 0 (disabled) and min-slaves-max-lag is 10.

[teach you to understand redis configuration-Security]

We can ask the redis client to authenticate the password before sending the request to redis-server. I'm sure you'll use this feature when your redis-server is in an untrusted network environment. Due to the high performance of redis, you can complete as many as 150000 password attempts per second, so you'd better set a password that is complex enough, otherwise it is easy to crack.

The copy code is as follows:

Requirepass zhimakaimen

Here we set the password to "zhimakaimen" ("open sesame") through requirepass.
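Once requirepass is set, clients must authenticate with AUTH before other commands are accepted; with redis-cli that looks roughly like this (the exact error text can vary between versions):

127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth zhimakaimen
OK
127.0.0.1:6379> ping
PONG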

Redis allows us to rename redis instructions, such as renaming some of the more dangerous commands to avoid misexecution. For example, you can change the CONFIG command to a very complex name, which avoids external calls and meets the needs of internal calls:

The copy code is as follows:

Rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c89

We can even disable the CONFIG command, which is to change the name of CONFIG to an empty string:

The copy code is as follows:

Rename-command CONFIG ""

It is important to note, however, that if you use AOF for data persistence, or if you need to communicate with the slave redis, changing the name of the instruction may cause problems.

[teach you to understand redis configuration-restrictions]

We can set how many clients redis may serve at the same time; the default is 10000. If redis cannot raise the process file-descriptor limit to match, the limit becomes the current file-descriptor limit minus 32, because redis reserves some descriptors for its own internal use.

If this limit is reached, redis rejects new connection requests and sends a "max number of clients reached" to those connection requesters in response.

The copy code is as follows:

Maxclients 10000

We can even set the amount of memory that redis can use. Once the memory usage limit is reached, redis will attempt to remove internal data, and removal rules can be specified through maxmemory-policy.

If redis cannot remove data from memory according to the removal rule, or if we set "do not allow removal", then redis will return an error message for instructions that need to request memory, such as SET, LPUSH, and so on. However, instructions with no memory requests will still respond normally, such as GET, etc.

The copy code is as follows:

maxmemory <bytes>

It is important to note that if your redis is a master (that is, it has slaves attached), then when setting the memory limit you should leave some extra memory in the system for the slave synchronization buffers; you only do not need to account for this if the removal policy is set to "no removal".

For memory removal rules, redis provides up to six removal rules. They are:

1. volatile-lru: evict keys using the LRU algorithm, but only among keys that have an expire set

2. allkeys-lru: evict any key using the LRU algorithm

3. volatile-random: evict random keys among keys that have an expire set

4. allkeys-random: evict random keys

5. volatile-ttl: evict the keys with the shortest remaining TTL, that is, those closest to expiring

6. noeviction: evict nothing; return an error for write operations

Whichever of the above removal rules is used, redis returns an error message for the write request if there is no appropriate key to remove.

The copy code is as follows:

Maxmemory-policy volatile-lru

Both the LRU algorithm and the minimum-TTL algorithm are approximations rather than exact algorithms, so you can tune the sample size. By default redis samples three keys and evicts the least recently used of them; you can change that sample size:

The copy code is as follows:

Maxmemory-samples 3
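Putting these items together, a cache-style configuration might look like this (the 100mb limit is purely illustrative):

maxmemory 100mb
maxmemory-policy allkeys-lru
maxmemory-samples 3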

Finally, we add that as of the current version (2.8.4), the write instructions supported by redis include the following:

The copy code is as follows:

set setnx setex append

incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd

sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby

zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby

getset mset msetnx exec sort

[teach you to understand redis configuration-append mode]

By default, redis persists data to disk asynchronously. This pattern has been proven to be effective in most applications, but when some problems occur, such as a power outage, this mechanism can cause a few minutes of write requests to be lost.

As described in the first half of this article, the append-only file (AOF) is a better way to maintain data consistency: with the default fsync policy, a server power outage loses at most one second of writes, and if only the redis process itself crashes while the operating system keeps running, at most a single write is lost.

We suggest using the AOF and RDB mechanisms at the same time; there is no conflict between them. For a longer discussion of data consistency, see the persistence documentation on redis.io.

The copy code is as follows:

Appendonly no

We can also set the name of the aof file:

The copy code is as follows:

Appendfilename "appendonly.aof"

The fsync () call tells the operating system to write the cached instructions to disk immediately. Some operating systems will do it "immediately", while others will do it "as soon as possible".

Redis supports three different modes:

1.no: fsync () is not called. Instead, it is up to the operating system to decide the timing of the sync. In this mode, redis will have the fastest performance.

2.always: fsync () is called after each request is written. In this mode, redis is relatively slow, but the data is the most secure.

3.everysec: call fsync () once per second. This is a tradeoff between performance and security.

The default is everysec. For more about the durability tradeoffs, again see the persistence documentation on redis.io.

The copy code is as follows:

Appendfsync everysec

When the fsync mode is set to always or everysec, if the background persistence process needs to perform a large disk IO operation, then redis may get stuck when fsync () is called. This hasn't been fixed yet, because even if we were to execute fsync () in another new thread, it would block synchronous write calls.

To alleviate this problem, we can use the following option so that fsync() is not called in the main process while a BGSAVE or BGREWRITEAOF is in progress. This means that while another process is saving, redis's durability is effectively the same as "appendfsync no". If your redis has latency problems, set this option to yes; otherwise keep it at no, which is the safest choice for data integrity.

The copy code is as follows:

No-appendfsync-on-rewrite no

We allow redis to automatically rewrite aof. When aof grows to a certain size, redis implicitly calls BGREWRITEAOF to rewrite log files to reduce file size.

Here's how redis works: redis records the aof size of the last override. If the redis has not been rewritten since startup, the size of the aof file at startup will be used as the benchmark. This baseline value is compared with the current aof size. If the current aof size exceeds the set growth ratio, an override is triggered. In addition, you need to set a minimum size to prevent rewriting from being triggered when the aof is very small.

The copy code is as follows:

Auto-aof-rewrite-percentage 100

Auto-aof-rewrite-min-size 64mb

If auto-aof-rewrite-percentage is set to 0, this rewrite feature is turned off.

[teach you to read redis configuration-LUA script]

The maximum running time of lua scripts needs to be strictly limited. Note that the unit is milliseconds:

The copy code is as follows:

Lua-time-limit 5000

If this value is set to 0 or negative, there will be neither an error nor a time limit.
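For context, Lua scripts are executed with the EVAL command; a trivial, self-contained example:

127.0.0.1:6379> eval "return 1 + 1" 0
(integer) 2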

[teach you to read redis configuration-slow log]

The redis slow log records queries whose execution time exceeds a configured threshold. This execution time does not include I/O such as talking to the client or sending the reply; it is only the time spent actually executing the command.

For slow logs, you can set two parameters, one is the execution time, in microseconds, and the other is the length of the slow log. When a new command is written to the log, the oldest one is removed from the command log queue.

The unit is microseconds, that is, 1000000 represents one second. A negative number disables slow logging, while 0 forces each command to be logged.

The copy code is as follows:

Slowlog-log-slower-than 10000

The maximum length of the slow log can be set to whatever value you like; there is no hard upper limit, but keep in mind that the log consumes memory. You can reclaim that memory with SLOWLOG RESET.

The copy code is as follows:

Slowlog-max-len 128
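The log itself is read and cleared with the SLOWLOG command, for example (output shown for an instance where nothing slow has been logged yet):

127.0.0.1:6379> slowlog get 2
(empty list or set)
127.0.0.1:6379> slowlog len
(integer) 0
127.0.0.1:6379> slowlog reset
OK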

[teach you to understand redis configuration-event notification]

Redis can notify clients when certain events happen in the keyspace. This is the keyspace-notifications feature, controlled by the notify-keyspace-events directive (it is disabled by default); see the notifications documentation on redis.io for the full list of event classes.
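As a hedged sketch, events are delivered over Pub/Sub. For example, to be told about expired keys in database 0, you could enable the E (keyevent) and x (expired) classes and subscribe to the corresponding channel:

# in redis.conf: enable keyevent notifications for expired keys
notify-keyspace-events Ex

// in a client: listen for expiration events on database 0
127.0.0.1:6379> psubscribe __keyevent@0__:expired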

[teach you to understand redis configuration-Advanced configuration]

Some configuration items about hash data structures:

The copy code is as follows:

Hash-max-ziplist-entries 512

Hash-max-ziplist-value 64

Some configuration items about list data structures:

The copy code is as follows:

List-max-ziplist-entries 512

List-max-ziplist-value 64

Configuration items for collection data structures:

The copy code is as follows:

Set-max-intset-entries 512

Configuration items for ordered collection data structures:

The copy code is as follows:

Zset-max-ziplist-entries 128

Zset-max-ziplist-value 64

The configuration item that controls whether redis actively (incrementally) rehashes its main dictionaries:

The copy code is as follows:

Activerehashing yes

For controls on client output buffering:

The copy code is as follows:

Client-output-buffer-limit normal 0 0 0

Client-output-buffer-limit slave 256mb 64mb 60

Client-output-buffer-limit pubsub 32mb 8mb 60

Configuration items for frequency:

The copy code is as follows:

Hz 10

Configuration items for rewriting aof

The copy code is as follows:

Aof-rewrite-incremental-fsync yes

That concludes this introduction to redis. It covers a lot of ground, but all of it is fairly basic. This article does not touch on redis clusters, redis internals, the redis source code, client libraries and so on; those will follow in later articles. Stay tuned :)
