This article explains how to use docker to build a redis master-slave setup in sentinel mode and how to integrate it into a springboot project. Many people have questions about this in day-to-day work, so the steps below are kept simple and practical; hopefully they answer those questions. Follow along to work through them.
First, build redis on the host
The architecture uses 3 redis nodes (1 master and 2 slaves) and 3 sentinel nodes.
Containers run in bridge mode by default, so the master address the sentinels report is the container's private network IP, which clients outside Docker cannot reach. The services are therefore orchestrated in host network mode (network_mode: "host").
Host IP: 192.168.1.254
1. (redis orchestration file) master-slave/docker-compose.yml
Version: "3" services: master: image: redis:latest container_name: redis-master command: redis-server-- requirepass 123456-- port 6379 ports:-"6379" network_mode: "host" slave1: image: redis-slave-1 command: redis-server-port 6380-slaveof 192.168.1.254 6379-requirepass 123456-masterauth 123456 depends_on:-master ports :-"6380" network_mode: "host" slave2: image: redis:latest container_name: redis-slave-2 command: redis-server-- port 6381-- slaveof 192.168.1.254 6379-- requirepass 123456-- masterauth 123456 depends_on:-master ports:-"6381" network_mode: "host"
Start it from this directory with: docker-compose up -d
This gives three redis instances: one master (6379) and two slaves (6380 and 6381).
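To double-check that replication is up, you can query the master (a quick sanity check, assuming the container names and password used above):

    docker exec -it redis-master redis-cli -a 123456 info replication

The output should report role:master and connected_slaves:2.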
2. (sentinel orchestration file) sentinel/docker-compose.yml
# An example sentinel.conf can be downloaded from http://download.redis.io/redis-stable/sentinel.conf
version: "3"
services:
  sentinel1:
    image: redis:latest
    container_name: redis-sentinel-1
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    ports:
      - "26379"
    volumes:
      - "/root/redis/sentinel1.conf:/usr/local/etc/redis/sentinel.conf"
    network_mode: "host"
  sentinel2:
    image: redis:latest
    container_name: redis-sentinel-2
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    ports:
      - "26380"
    volumes:
      - "/root/redis/sentinel2.conf:/usr/local/etc/redis/sentinel.conf"
    network_mode: "host"
  sentinel3:
    image: redis:latest
    container_name: redis-sentinel-3
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    ports:
      - "26381"
    volumes:
      - "/root/redis/sentinel3.conf:/usr/local/etc/redis/sentinel.conf"
    network_mode: "host"
Start it from this directory with: docker-compose up -d
There are three sentinels with ports of 26379, 26380 and 26381 respectively.
The three sentinel configuration files are identical except for the port.
/root/redis/sentinel1.conf
# port: the other two nodes use 26380 and 26381 respectively
port 26379
dir /tmp
sentinel monitor mymaster 192.168.1.254 6379 2
sentinel auth-pass mymaster 123456
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 10000
sentinel deny-scripts-reconfig yes
# Line 3: Sentinel monitors a master named mymaster at 192.168.1.254:6379; when at least 2 sentinels agree that it is unreachable, the master is considered down and a failover is started.
# Line 4: the authentication password for the monitored master.
# Line 5: if the master does not answer the sentinel's PING within 30000 ms, it is considered unavailable.
# Line 6: parallel-syncs is how many slaves may be reconfigured to replicate from the new master at the same time after a failover. The lower the number, the longer the whole failover takes, but slaves pause briefly while resyncing, so if they are still serving old data you may not want them all to resync at once. Setting it to 1 ensures that only one slave at a time is unreachable.
# Line 7: the failover timeout; if a failover does not complete within 10 seconds, it is considered failed.
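Once the sentinels are running, you can ask one of them which node it currently considers the master (a quick check, assuming the container names and ports above):

    docker exec -it redis-sentinel-1 redis-cli -p 26379 sentinel get-master-addr-by-name mymaster

Before any failover it should answer with 192.168.1.254 and 6379.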
This is the list of containers after startup:
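(The screenshot is not reproduced here; running docker ps on the host should show six containers in the Up state: redis-master, redis-slave-1, redis-slave-2 and the three sentinel containers, assuming the container names from the compose files above.)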
Second, integrate into a springboot project
1. Add the dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
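Note that spring-boot-starter-data-redis pulls in the Lettuce client by default (the reconnect logs later in this article come from io.lettuce). If you want the spring.redis.jedis.pool settings in the next step to actually take effect, you would presumably also need to swap in Jedis, for example by excluding lettuce-core from the starter and adding (a sketch, not part of the original setup):

<!-- hypothetical addition: use Jedis instead of the default Lettuce client -->
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
</dependency>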
2. Configure application.yml
spring:
  redis:
    sentinel:
      master: mymaster
      nodes:
        - "192.168.1.254:26379"
        - "192.168.1.254:26380"
        - "192.168.1.254:26381"
    host: 192.168.1.254
    password: 123456
    jedis:
      pool:
        min-idle: 8
        max-active: 100
        max-wait: 3000
        max-idle: 100
3. Use RedisTemplate to operate redis
@Autowired
RedisTemplate redisTemplate;

@Test
public void testRedisMasterSlave() throws Exception {
    ValueOperations valueOperations = redisTemplate.opsForValue();
    ExecutorService es = Executors.newFixedThreadPool(3);
    for (int j = 0; j < 3; j++) {
        es.submit(() -> {
            for (int i = 0; i < 1000; i++) {
                try {
                    String threadName = Thread.currentThread().getName();
                    // key: thread name + counter, value: counter, 30-minute TTL
                    valueOperations.set(threadName + i, i + "", 30L, TimeUnit.MINUTES);
                    TimeUnit.MILLISECONDS.sleep(200L);
                } catch (InterruptedException e) {
                    System.out.println("error:" + e.getMessage());
                }
            }
        });
    }
    es.shutdown();
    es.awaitTermination(30L, TimeUnit.MINUTES);
}
Here three threads write to redis concurrently. While they are writing, stop the master and watch the console output.
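For example (assuming the container names from the compose files above), stop the master and then ask a sentinel which node is master now:

    docker stop redis-master
    docker exec -it redis-sentinel-1 redis-cli -p 26379 sentinel get-master-addr-by-name mymaster

After the down-after-milliseconds window (30 s here) plus the election, the second command should report 6380 or 6381 instead of 6379. Meanwhile the test application's console shows: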
2019-10-06 18:52:51.949  INFO 128972 --- [pool-1-thread-1] io.lettuce.core.EpollProvider         : Starting without optional epoll library
2019-10-06 18:52:51.951  INFO 128972 --- [pool-1-thread-1] io.lettuce.core.KqueueProvider        : Starting without optional kqueue library
2019-10-06 18:53:55.979  INFO 128972 --- [xecutorLoop-1-8] i.l.core.protocol.ConnectionWatchdog  : Reconnecting, last destination was /192.168.1.254:6379
2019-10-06 18:53:57.992  WARN 128972 --- [ioEventLoop-4-4] i.l.core.protocol.ConnectionWatchdog  : Cannot reconnect: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: /192.168.1.254:6379
... (the Reconnecting / Cannot reconnect pair repeats several times between 18:54:02 and 18:54:29) ...
2019-10-06 18:54:29.085  INFO 128972 --- [...]             i.l.core.protocol.ReconnectionHandler : Reconnected to 192.168.1.254:6380
You can see that from 18:53 the old master (6379) can no longer be reached, and at 18:54:29 the client reconnects to a different node (6380), which shows that Sentinel fails over to a new master automatically.
Looking at the data in redis afterwards, all 3000 entries are present and none are missing, which shows that no writes were lost during this master-standby switch.
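A quick way to check the count (a sketch, assuming the new master ended up on 6380 and the password above) is:

    docker exec -it redis-slave-1 redis-cli -p 6380 -a 123456 dbsize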
Then restart the redis-master container that was just stopped. It starts, but it cannot rejoin replication and keeps logging an error:
2019-10-05T16:00:57.899358510Z 1:S 05 Oct 2019 16:00:57.897 * Connecting to MASTER 192.168.1.254:6380
2019-10-05T16:00:57.899381586Z 1:S 05 Oct 2019 16:00:57.897 * MASTER <-> REPLICA sync started
2019-10-05T16:00:57.899385377Z 1:S 05 Oct 2019 16:00:57.897 * Non blocking connect for SYNC fired the event.
2019-10-05T16:00:57.899388194Z 1:S 05 Oct 2019 16:00:57.897 * Master replied to PING, replication can continue...
2019-10-05T16:00:57.899391119Z 1:S 05 Oct 2019 16:00:57.898 * (Non critical) Master does not understand REPLCONF listening-port: -NOAUTH Authentication required.
2019-10-05T16:00:57.899401303Z 1:S 05 Oct 2019 16:00:57.898 * (Non critical) Master does not understand REPLCONF capa: -NOAUTH Authentication required.
2019-10-05T16:00:57.899404228Z 1:S 05 Oct 2019 16:00:57.898 * Partial resynchronization not possible (no cached master)
2019-10-05T16:00:57.899406828Z 1:S 05 Oct 2019 16:00:57.898 # Unexpected reply to PSYNC from master: -NOAUTH Authentication required.
2019-10-05T16:00:57.899409536Z 1:S 05 Oct 2019 16:00:57.898 * Retrying with SYNC...
2019-10-05T16:00:57.899412136Z 1:S 05 Oct 2019 16:00:57.898 # MASTER aborted replication with an error: NOAUTH Authentication required
At this point the old master has been demoted and connects to the new master (6380) as a replica, but authentication fails: its startup command only set requirepass and never set masterauth, so it cannot log in to the new, password-protected master. I had expected Sentinel to persist the new slaveof configuration on the demoted node automatically, but it did not. Is that because the redis containers were started without a configuration file that Sentinel could rewrite?
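Based on that diagnosis, a likely fix (a sketch, not verified here) is to give the master the same masterauth as the slaves in master-slave/docker-compose.yml, so that once it is demoted it can authenticate against whichever node was promoted:

    command: redis-server --requirepass 123456 --masterauth 123456 --port 6379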
The reason has since been found and worked through in full.
This concludes the study of how to use docker to build redis master-slave sentinel mode and integrate it into a springboot project. I hope it has answered your questions; theory paired with practice is the best way to learn, so go and try it for yourself.