How to Install and Configure Redis and Integrate It with Spring Boot

2025-04-02 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article introduces how to install and configure Redis and how to integrate it with Spring Boot. Many people have questions about these steps in daily work, so the sections below walk through a simple, practical set of operations. I hope it helps answer your doubts; follow along and study!

Install Redis

```bash
# download Redis
wget https://download.redis.io/releases/redis-6.0.9.tar.gz
# extract
tar -zxvf redis-6.0.9.tar.gz
# install the gcc build environment
yum -y install gcc-c++
cd redis-6.0.9
# build and install
make && make install
```

Redis startup script

Copy redis_init_script from the utils directory to /etc/init.d; it is the startup script.

```bash
cp utils/redis_init_script /etc/init.d
cd /etc/init.d
vim /etc/init.d/redis_init_script
```

```sh
#!/bin/sh
#
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.
#
### BEGIN INIT INFO
# Provides:          redis_6379
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Redis data structure server
# Description:       Redis data structure server. See https://redis.io
### END INIT INFO

# redis default port
REDISPORT=6379
# redis-server location
EXEC=/usr/local/bin/redis-server
# redis-cli location
CLIEXEC=/usr/local/bin/redis-cli
# pid file location (default port appended)
PIDFILE=/var/run/redis_${REDISPORT}.pid
# redis configuration file
CONF="/usr/local/redis/conf/redis.conf"

# $1 is start or stop
case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed"
        else
            echo "Starting Redis server..."
            $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running"
        else
            PID=$(cat $PIDFILE)
            echo "Stopping ..."
            $CLIEXEC -p $REDISPORT shutdown
            while [ -x /proc/${PID} ]
            do
                echo "Waiting for Redis to shutdown ..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac
```

```bash
# start redis
./redis_init_script start
# stop redis
./redis_init_script stop
```
Configure Redis to start on boot

Add the following chkconfig comments near the top of the script, right after the `#!/bin/sh` line:

```sh
#chkconfig: 2345 10 90
#description: Start and Stop redis
```

Save with :wq!, then register redis to start on boot:

```bash
chkconfig redis_init_script on
```

Redis configuration file parsing

```conf
# run in the background: yes = background, no = foreground
daemonize yes
# pid file
pidfile /var/run/redis_6379.pid
# redis workspace; must be a directory, not a filename
dir /usr/local/redis/working
# which ip addresses may access redis-server; 0.0.0.0 allows any address
bind 0.0.0.0
# set the redis connection password
requirepass 584521
```

Integrating Spring Boot with Redis

Introduce dependency

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```

Configure redis

```yaml
spring:
  redis:
    database: 0        # database index
    host: 127.0.0.1    # redis address
    port: 6379         # redis port
    password: 584521   # redis password
```

Interface test

```java
@ApiIgnore
@RestController
@RequestMapping("redis")
public class RedisController {

    @Autowired
    private RedisTemplate redisTemplate;

    @GetMapping("/set")
    public Object set(String key, String value) {
        redisTemplate.opsForValue().set(key, value);
        return "ok";
    }

    @GetMapping("get")
    public Object get(String key) {
        return redisTemplate.opsForValue().get(key);
    }

    @GetMapping("delete")
    public Object delete(String key) {
        redisTemplate.delete(key);
        return "ok";
    }
}
```

By default, RedisTemplate serializes keys and values with the JDK serialization mechanism.
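Spring aside, the effect of JDK serialization is easy to see in plain Java: the stored bytes are a binary stream starting with the magic number 0xACED, not readable text, which is why values written through a default RedisTemplate look garbled in redis-cli. A minimal, Spring-free sketch:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Demonstrates what JDK serialization (the RedisTemplate default) actually
// stores: a binary stream with a 0xACED magic header, not plain text.
public class JdkSerializationDemo {
    public static byte[] jdkSerialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o); // writes stream header + type info + data
        }
        return bos.toByteArray();
    }
}
```

This is why many projects switch the key serializer to a string serializer and the value serializer to a JSON-based one.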

Redis persistence: the RDB mechanism (Redis DataBase)

At regular intervals, Redis writes a snapshot of the data in memory to a temporary file on disk. On recovery, the snapshot file is read back into memory: if redis goes down and restarts, there is of course nothing left in memory, so on restart redis restores its data from the RDB file.

Open the file with vim redis.conf and review these settings:

```conf
# redis workspace
dir /usr/local/redis/working
# rdb persistence file name
dbfilename dump.rdb
# save <seconds> <changes>: snapshot if at least <changes> keys changed within <seconds>
save 900 1
save 300 10
save 60 10000
# stop accepting writes when a background save fails
stop-writes-on-bgsave-error yes
# compress rdb files; disabling it saves cpu at the cost of disk space
rdbcompression yes
# verify the rdb file with a checksum; costs roughly 10% performance
rdbchecksum yes
```

Advantages of RDB

Full backup

Can be transmitted remotely.

Backups run in a child process, so the main process performs no disk I/O for them.

Shortcomings of RDB

RDB persistence only runs when a save rule triggers. If the latest data has not yet triggered a save when redis goes down, the data after restart will be inconsistent with what was in memory.

Redis AOF mechanism (Append Only File)

By default, Redis uses RDB as its persistence mechanism.

AOF configuration options

```conf
# enable aof
appendonly yes
# aof persistence file name
appendfilename "appendonly.aof"
# aof sync policy:
#   always   - fsync on every write operation; safest but takes a lot of resources
#   everysec - fsync once per second
#   no       - never fsync explicitly; leave it to the OS
appendfsync everysec
# avoid fsync while a rewrite is in progress; yes may leave the file inconsistent
no-appendfsync-on-rewrite no
# growth ratio (vs. the last rewrite) at which the aof file is rewritten
auto-aof-rewrite-percentage 100
# minimum aof size before the first rewrite triggers
auto-aof-rewrite-min-size 64mb
# on recovery, tolerate a truncated last command (e.g. after a power cut):
# yes logs a warning and continues, no makes the restore fail
aof-load-truncated yes
```

An AOF rewrite means that once the aof file reaches a certain size, the entire memory state is rewritten into a fresh aof file, so the file reflects the latest state and does not grow far beyond the actual data in memory (the frequent-modification problem). auto-aof-rewrite-percentage is the growth of the current aof file relative to the last rewrite. auto-aof-rewrite-min-size is the minimum size the initial aof file must reach before the first rewrite; subsequent rewrites are based on the size after the previous rewrite, not on this variable. The variable only takes effect when redis starts from scratch; after a restore, lastSize equals the initial aof file size.

If you accidentally run flushdb or flushall, you can stop redis-server, open the aof file, delete the flushdb/flushall commands directly, and restart the redis service.

RDB and AOF can be enabled at the same time; when both files exist on startup, Redis loads the AOF (the more complete record) rather than the RDB.

Redis master-slave architecture

The original single Redis becomes a Master with multiple Slaves, a read-write separation architecture: the Master serves as the write library and the Slaves as read libraries, i.e. writes go to the Master and most reads go to the Slaves. Each Slave makes a full copy of the Master's data.

The Master starts first, then the Slaves. A Slave pings the Master node, and after the ping notification the Master sends its data to the Slave. The Master's initial replication is a full copy.

The Master dumps all of its in-memory data into an RDB file and transmits it to the Slave node over the network.

After the Slave receives the RDB file, it saves it to its own disk and then loads it. This full copy only happens the first time the Slave starts up.

Subsequent operations on the Master are streamed directly to the Slave nodes; transferring them does not block writes.

To configure a one-master, two-slave setup, prepare three Redis machines: one master and two slaves.

```bash
# start redis-cli
redis-cli
# view redis role information
info replication
```

```
# Replication
role:master          # current redis role
connected_slaves:0   # number of connected slaves
master_replid:9d6bd1e0965ba650aed518034318a11c243c2d8c
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
```

On each of the two slaves, edit redis.conf:

```conf
# replicaof <master ip> <master port>
replicaof 192.168.1.191 6379
# the master's redis password
masterauth 584521
# the slave can only read data
replica-read-only yes
```

Then on each slave:

```bash
# stop the redis service
./redis_init_script stop
# delete the dump.rdb and aof files
rm -rf dump.rdb *.aof
# start the redis service
./redis_init_script start
```

Disk-backed: Redis forks a child process that writes an RDB file to the hard drive; the file is then transferred to the Redis Slave nodes.

Diskless: Redis forks a child process that writes the RDB directly to the Slave sockets without touching the disk. After waiting a short period for Slaves to connect, Redis transfers the snapshot over the sockets to multiple Slave nodes at once.

After everything starts successfully, check each Redis node's role with the info replication command.

Diskless replication

```conf
# use diskless transfer when the disk is slow but the network is fast
repl-diskless-sync no
# how long to wait for slaves to connect before starting the socket transfer
repl-diskless-sync-delay 5
```


Redis cache expiration handling and memory eviction mechanisms

Active periodic deletion

```conf
# how many times per second redis runs its periodic checks
hz 10
```

Redis periodically checks a random sample of keys with expiry set, and deletes the ones that have expired. (The number of checks per second is the hz setting in redis.conf.)

Passive lazy deletion

When a client requests a key, redis checks whether that key has expired; if it has, redis deletes it and returns nil. This strategy is CPU-friendly and costs little, but the memory footprint stays high, since expired keys linger until they are touched.
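The idea is simple enough to sketch in a few lines. This is a minimal illustration of lazy expiration (not Redis source code): expired entries stay in memory until a read notices the expired timestamp.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of passive ("lazy") expiration: entries carry a deadline,
// and deletion only happens when a read finds the deadline has passed.
public class LazyExpiryCache {
    private static class Entry {
        final String value;
        final long expiresAtMillis;
        Entry(String value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> store = new HashMap<>();

    public void set(String key, String value, long ttlMillis) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // On read, check the deadline; delete on the spot and return null
    // (the equivalent of Redis returning nil).
    public String get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAtMillis) {
            store.remove(key); // lazy deletion happens here
            return null;
        }
        return e.value;
    }

    public int size() { return store.size(); }
}
```

Note that size() still counts entries nobody has read since they expired, which is exactly the memory cost the article describes.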

What if the Redis memory is full?

```conf
# how much memory redis may consume (bytes, unless a unit suffix is given)
maxmemory 2048
```

If a piece of data has rarely been accessed recently, we can assume it is unlikely to be accessed in the future. So when space runs out, the data accessed with the lowest frequency is evicted first (LFU).

If a piece of data has not been accessed at all in the recent period, we can assume it is unlikely to be accessed in the future. So when space runs out, the data that has gone unaccessed the longest is evicted first (LRU).

LRU (Least Recently Used) is a common caching algorithm, widely used in distributed caching systems such as Redis and Memcached.

LFU (Least Frequently Used) is also a common caching algorithm.
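The LRU idea can be sketched with java.util.LinkedHashMap's access-order mode. This illustrates the eviction behavior behind allkeys-lru; Redis itself uses an approximated LRU based on random sampling, not an exact linked list like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal exact-LRU cache: LinkedHashMap in access-order mode moves each
// accessed entry to the tail, so the head is always the least recently used.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }
}
```

Usage: with capacity 2, putting a and b, touching a, then putting c evicts b, since b is the entry that has gone unaccessed the longest.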

Redis provides the following eviction policies (as documented in redis.conf):

```conf
# volatile-lru    -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru     -> Evict any key using approximated LRU.
# volatile-lfu    -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu     -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random  -> Remove a random key, any key.
# volatile-ttl    -> Remove the key with the nearest expire time (minor TTL).
# noeviction      -> Don't evict anything, just return an error on write operations.
```

When memory usage reaches maxmemory, the memory is considered full and the configured eviction policy takes effect.

Redis Sentinel mode

In the one-master, two-slave setup, once the Master node goes down we can no longer write data: the slave nodes only serve reads, so no data can be written until a new master exists.

Configure the sentinel.conf file in the Redis package

```conf
# which ips may connect
# bind 127.0.0.1
# protected mode: yes enables it, no disables it
protected-mode no
# port number
port 26379
# run as a background daemon
daemonize yes
# pid file; the sentinel runs as a separate process from redis
pidfile /var/run/redis-sentinel.pid
# sentinel log file
logfile /usr/local/redis/logs/sentinel/redis-sentinel.log
# sentinel workspace
dir /usr/local/redis/sentinel
# sentinel monitor <master-name> <ip> <port> <quorum>
sentinel monitor xh-master 192.168.1.191 6379 2
# sentinel auth-pass <master-name> <password>, e.g.
# sentinel auth-pass mymaster MySUPER--secret-0123passw0rd
sentinel auth-pass xh-master 584521
# how long (in ms) without a reply before the sentinel considers the master down
sentinel down-after-milliseconds xh-master 10000
# how many slaves may resynchronize with the new master at the same time
sentinel parallel-syncs xh-master 1
# failover timeout in ms
sentinel failover-timeout xh-master 180000
```

Next, copy sentinel.conf directly to the other two Redis nodes:

```bash
scp ./sentinel.conf root@192.168.1.192:/usr/local/redis/
scp ./sentinel.conf root@192.168.1.193:/usr/local/redis/
```

Start redis-sentinel.

```bash
# starting with no arguments reports an error: the sentinel requires a config file
redis-sentinel
# ... # Sentinel started without a config file. Exiting...

# specify the configuration file to launch
redis-sentinel /usr/local/redis/sentinel.conf
```

Start the sentinel the same way on slave 1 and slave 2.

After startup is complete, if we manually stop the redis-server service on the master, Redis Sentinel elects a new master from the remaining two slave nodes.

When the original master node is restarted, it is no longer a master; it has been demoted to a slave. You can confirm this with info replication.

Solving the synchronization problem after the original Master recovers

Careful readers will notice that after the original Master (191) comes back as a Slave, its synchronization status is not OK; the state is master_link_status:down. Why? We only set masterauth on 192 and 193, which is used to authenticate when synchronizing data from the master; at the beginning, 191 was the master, so it was not affected. Once 191 is demoted to slave, it cannot synchronize from the new master because it has no auth, so info replication shows the synchronization status as down. The fix is simply to set masterauth to 584521 in 191's redis.conf as well.

In general, when master data cannot be synchronized to a slave, check the following:

1. Network communication: make sure the nodes can reach each other via ping over the internal network.
2. Disable the firewall, or open the required ports (in a virtual machine it is simplest to disable the firewall permanently; on cloud servers, ensure private network interconnection).
3. Unify all passwords; do not miss a node.

```bash
# view master node information under xh-master
sentinel master xh-master
# view slave node information under xh-master
sentinel slaves xh-master
# view sentinel node information under xh-master
sentinel sentinels xh-master
```

Integrating Spring Boot with Redis in Sentinel mode

Configure the yaml file

```yaml
spring:
  redis:
    database: 1
    password: 584521
    sentinel:
      master: xh-master   # the configured master name
      # the ip:port of each redis sentinel
      nodes: 192.168.1.191:26379,192.168.1.192:26379,192.168.1.193:26379
```

Redis Cluster

The capacity of a single master is limited, and there will be bottlenecks when the data reaches a certain extent. At this time, it can be horizontally expanded to multi-master clusters.

Redis Cluster supports multiple masters, each with its own slaves; it supports massive data and achieves high availability and high concurrency. Sentinel mode is in fact also a kind of cluster: it improves read concurrency, but it has fault-tolerance gaps. For example, replication from master to slave is asynchronous, so a slave's data is never quite as fresh as the master's; synchronization takes time, and 1-2 seconds of writes can be lost. When the failed master recovers and is demoted to slave, that new data is gone.

Every node knows its relationship to every other node, knows its own role, and knows it lives in a cluster environment; the nodes interact and communicate with each other via ping/pong messages. These relationships are saved to a configuration file that every node carries, which we will set up when building the cluster.

If the client wants to establish a connection with the cluster, it only needs to establish a relationship with one of them.

A node's failure must be detected by more than half of the nodes before it is objectively marked offline and a master-slave switchover occurs, the same mechanism we saw earlier in Sentinel mode.

Redis Cluster divides the keyspace into many slots (hash slots), which are used to locate stored data.

Set up a Redis Cluster

Modify the Redis configuration file on node 201:

```conf
# enable cluster mode: yes on, no off
cluster-enabled yes
# cluster node configuration file
cluster-config-file nodes-6379.conf
# cluster node timeout in milliseconds
cluster-node-timeout 15000
# enable aof persistence
appendonly yes
```

After modifying the configuration file, delete the old aof and rdb files to avoid startup errors, then restart:

```bash
rm -rf *.aof
rm -rf *.rdb
# stop redis
/etc/init.d/redis_init_script stop
# start redis
/etc/init.d/redis_init_script start
```

Repeat the same steps on nodes 202, 203, 204, 205 and 206.

Create the cluster with redis-cli

```bash
# if a password is set, remember to pass it with -a
redis-cli -a 584521 --cluster create \
  192.168.1.201:6379 192.168.1.202:6379 192.168.1.203:6379 \
  192.168.1.204:6379 192.168.1.205:6379 192.168.1.206:6379 \
  --cluster-replicas 1
```

The result is three masters and three slaves. M marks a Master and S a Slave: 205 is assigned to 201, 206 to 202, and 204 to 203.

```
# the command finally asks whether to apply the configuration
Can I set the above configuration? (type 'yes' to accept): yes
```

Check the cluster information

```bash
redis-cli -a 584521 --cluster check 192.168.1.201:6379
```

Redis Slots concept

A total of 16384 hash slots are allocated; the check above reports:

[OK] All 16384 slots covered

How are the slots allocated?

The 16384 slots are distributed evenly among the three Masters.

How does a key map to a slot?

Redis hashes the key of each piece of stored data and takes the result modulo 16384. The formula is slot = CRC16(key) % 16384.
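The hash function Redis Cluster uses is CRC16 in the CCITT/XMODEM variant. A minimal sketch of the slot calculation, ignoring the `{...}` hash-tag rule that real cluster clients also apply:

```java
import java.nio.charset.StandardCharsets;

// Computes the Redis Cluster hash slot of a key:
// slot = CRC16(key) % 16384, with CRC16-CCITT (XMODEM), polynomial 0x1021.
public class RedisSlot {
    static int crc16(byte[] bytes) {
        int crc = 0x0000;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF; // keep it a 16-bit value
            }
        }
        return crc;
    }

    public static int slot(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }
}
```

The Redis Cluster specification gives CRC16("123456789") = 0x31C3, i.e. slot 12739, which is a handy value for checking an implementation.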

Go to the console of the cluster

```bash
# enter the cluster console
redis-cli -c -a 584521 -h 192.168.1.202 -p 6379
# view node information
cluster nodes
```

Integrating Spring Boot with Redis Cluster

Configure the yaml file

```yaml
spring:
  redis:
    password: 584521
    cluster:
      nodes: 192.168.1.201:6379,192.168.1.202:6379,192.168.1.203:6379,192.168.1.204:6379,192.168.1.205:6379,192.168.1.206:6379
```

Cache penetration

Cache penetration refers to requests for data that exists in neither the cache nor the database, yet users keep making the requests, for example querying an id of -1, or an absurdly large id that does not exist. Because the cache never has the data, every such request falls through to the database, putting excessive pressure on it.

Add verification at the interface layer, such as user authentication, and logically validate the request parameters.

For data that the cache misses and the database also cannot find, write an entry to the cache anyway, as a key-null pair, with a short expiry time. If the expiry is set too long, the key cannot be used normally once real data appears.
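The key-null idea can be sketched with plain maps standing in for Redis and the database (the class and field names here are illustrative, not from the article):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Minimal sketch of the "cache the null" defence against cache penetration:
// a miss in the database is cached too, so repeated requests for a
// nonexistent id stop hitting the database.
public class NullCachingLoader {
    private final Map<String, Optional<String>> cache = new HashMap<>();
    private final Map<String, String> database;
    int databaseHits = 0; // counts fall-throughs to the database

    public NullCachingLoader(Map<String, String> database) {
        this.database = database;
    }

    public String get(String key) {
        Optional<String> cached = cache.get(key);
        if (cached != null) {
            return cached.orElse(null); // cache hit, possibly a cached "null"
        }
        databaseHits++;
        String value = database.get(key);
        // cache the miss as well (in a real system, with a short TTL)
        cache.put(key, Optional.ofNullable(value));
        return value;
    }
}
```

In a real deployment the cached null would carry the short expiry time described above, so the entry disappears on its own once real data might exist.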

Cache breakdown

Cache breakdown means the cache is missing data that the database still has, usually because a hot key expired. With too many concurrent users requesting it while the cache cannot serve it, all the requests fall onto the database, putting excessive pressure on it.

Set hotspot data to never expire

Lock before loading: lock the key, and when the data is missing from the cache, release only one thread to read it from the database.
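A minimal sketch of the lock-and-load idea, with maps standing in for Redis and the database. Here the per-key locking comes from ConcurrentHashMap.computeIfAbsent, which guarantees the loader function runs at most once per missing key while concurrent callers wait:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of "lock and load" against cache breakdown: when a hot key
// is missing, a single thread rebuilds it; other callers for the same key
// block until the value is in the cache.
public class SingleFlightCache {
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger dbLoads = new AtomicInteger(); // database load counter

    private String loadFromDatabase(String key) {
        dbLoads.incrementAndGet();
        return "value-of-" + key; // pretend database lookup
    }

    public String get(String key) {
        String v = cache.get(key);
        if (v != null) return v;
        // computeIfAbsent locks per key: for a given missing key the
        // loader runs once, and concurrent callers wait for its result
        return cache.computeIfAbsent(key, this::loadFromDatabase);
    }
}
```

In production the same effect is usually achieved with a distributed lock (e.g. SET key value NX PX on Redis itself), since the threads competing for the hot key live in different application instances.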

Cache avalanche

A cache avalanche is when a large amount of cached data reaches its expiration time at once while query volume is huge, putting excessive pressure on the database and even bringing it down. The difference from cache breakdown: breakdown is many queries for the same expired piece of data, whereas an avalanche is many different pieces of data expiring together, so a flood of lookups miss the cache and go to the database.

The expiration time of cached data is set randomly to prevent a large number of data expiration at the same time.

If the cache database is distributed, distribute the hot spot data evenly in different cache databases.

Set hotspot data to never expire.
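The random-expiry mitigation can be sketched in a few lines; the base TTL and jitter range below are illustrative values, not from the article:

```java
import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch of randomized ("jittered") TTLs against cache avalanche:
// instead of giving every entry the same lifetime, add a random offset so
// keys written at the same moment do not all expire at the same moment.
public class TtlJitter {
    // e.g. base 600s with up to 300s of jitter -> ttl in [600, 900)
    public static long jitteredTtlSeconds(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds);
    }
}
```

Each cached entry then gets its own expiry, e.g. `EXPIRE key jitteredTtlSeconds(600, 300)`, spreading expirations out over the jitter window.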

That concludes the study of installing and configuring Redis and integrating it with Spring Boot. I hope it has answered your doubts; pairing the theory with practice will help it stick, so go and try it!
