Redis is a non-relational, high-performance key-value database. It is often used as a cache in front of a relational database to speed up server access, especially in high-concurrency scenarios. To improve the availability of Redis, we will first walk through the redis.conf configuration file and then set up master-slave replication.
1. Detailed explanation of the redis.conf configuration file
1) Basic configuration
daemonize no # whether to run as a background (daemon) process
databases 16 # number of databases created (database 0 is selected by default)
save 900 1 # flush a snapshot to disk only when both conditions hold: at least 1 key has changed within 900 seconds (see the runtime example after this list)
save 300 10 # at least 10 keys have changed within 300 seconds
save 60 10000 # at least 10000 keys have changed within 60 seconds
stop-writes-on-bgsave-error yes # stop accepting writes when a background save fails
rdbcompression yes # compress the rdb file with LZF
rdbchecksum yes # verify checksums when saving and loading rdb files
dbfilename dump.rdb # name of the rdb file
dir ./ # working directory to which the rdb file is written
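As a quick check, the snapshot schedule can be read back and a snapshot forced at runtime; a minimal sketch against a server using the three save rules above:
127.0.0.1:6379> config get save # the three rules come back as one string
1) "save"
2) "900 1 300 10 60 10000"
127.0.0.1:6379> bgsave # force a snapshot in the background right away
Background saving started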
2) Master-slave configuration
slaveof <masterip> <masterport> # make this instance a slave of the given machine
masterauth <master-password> # password used to connect to the master server
slave-serve-stale-data yes # whether the slave still answers queries while the link to the master is down or replication is in progress
slave-read-only yes # make the slave read-only
repl-ping-slave-period 10 # interval (in seconds) at which the slave pings the master
repl-timeout 60 # master-slave timeout after which the link is considered down; must be larger than repl-ping-slave-period
slave-priority 100 # if the master stops working, the slave with the lowest priority value is chosen among the slaves and promoted to master; a priority of 0 means the slave can never be promoted
repl-disable-tcp-nodelay no # whether the master merges small writes into larger chunks before sending them to the slave
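Once these directives are in place, info replication on the master is the quickest way to confirm the link is up; an illustrative sketch (the slave address and offsets will differ on your machines):
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.153.143,port=6379,state=online,offset=183,lag=0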
3) Security
requirepass foobared # require a password
rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 # in a shared environment you can rename sensitive commands such as CONFIG
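As a sketch of how these two directives combine in practice (the password is the example one used later in this article; renaming a command to the empty string disables it entirely):
requirepass sky9899 # example password from this article
rename-command CONFIG "" # an empty name disables the command entirely
rename-command FLUSHALL "" # also worth hiding on shared servers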
4) Limits
maxclients 10000 # maximum number of client connections
maxmemory <bytes> # maximum memory usage
maxmemory-policy volatile-lru # eviction policy once the limit is reached (see the runtime example after this list):
volatile-lru -> evict expiring keys using the LRU algorithm
allkeys-lru -> evict any key using the LRU algorithm (expiring or not)
volatile-random -> randomly evict keys that have an expiry set
allkeys-random -> randomly evict any key (expiring or not)
volatile-ttl -> evict the keys closest to expiry
noeviction -> evict nothing; return an error on writes
Note that LRU and TTL eviction are approximate algorithms: Redis samples N keys and evicts the best candidate among them.
maxmemory-samples 3 # N, the sample size used above
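These limits can also be applied to a running server without a restart; a minimal sketch (the 100 MB cap is just an example):
127.0.0.1:6379> config set maxmemory 104857600 # 100 MB, given in bytes
OK
127.0.0.1:6379> config set maxmemory-policy allkeys-lru
OK
127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"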
5) Append-only log (AOF)
appendonly no # whether to use the append-only log (see the runtime sketch after this list)
appendfsync no # let the OS buffer and flush writes; fastest, most data at risk
appendfsync always # no buffering, fsync on every write; slow, least data loss
appendfsync everysec # compromise: fsync once per second
no-appendfsync-on-rewrite no # if yes, writes from other threads are held in memory during a rewrite and flushed together (faster, but easier to lose data)
auto-aof-rewrite-percentage 100 # rewrite the aof file when it has grown N% beyond its size after the last rewrite
auto-aof-rewrite-min-size 64mb # minimum aof file size before a rewrite is triggered
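AOF can likewise be switched on at runtime, which is handy when experimenting with these settings; a minimal sketch:
127.0.0.1:6379> config set appendonly yes # start logging writes to the aof file
OK
127.0.0.1:6379> bgrewriteaof # compact the aof file in the background
Background append only file rewriting started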
6) Slow query log
slowlog-log-slower-than 10000 # log queries whose response time exceeds 10000 microseconds
slowlog-max-len 128 # keep at most 128 entries
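To watch the slow log fill up, you can temporarily log every command by setting the threshold to 0; a minimal sketch (the entry count shown is illustrative):
127.0.0.1:6379> config set slowlog-log-slower-than 0 # log every command (testing only)
OK
127.0.0.1:6379> slowlog len # how many entries have accumulated
(integer) 3
127.0.0.1:6379> slowlog reset # clear the log when finished
OK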
7) Server commands
time # returns the Unix timestamp plus microseconds
dbsize # returns the number of keys in the current database
bgrewriteaof # rewrite the aof file in the background
bgsave # fork a child process to dump the data in the background
save # dump the data, blocking the server process
lastsave # returns the Unix timestamp of the last successful save to disk
slaveof host port # become a slave of host:port (existing data is cleared, then the new master's content is copied)
slaveof no one # become a master (existing data is kept; typically used after the original master fails)
flushdb # clear all data in the current database
flushall # clear all data in all databases (what if it is run by mistake?)
shutdown [save/nosave] # shut down the server, saving data and flushing the aof file (if configured)
slowlog get # fetch the slow query log
slowlog len # number of slow query log entries
slowlog reset # clear the slow query log
info [section] # server information and statistics
config get option # read a setting (supports * wildcards)
config set option value # change a setting at runtime
config rewrite # write the current runtime values back to the configuration file
config resetstat # reset the statistics reported by the info command
debug object key # debugging: inspect the internals of a key
debug segfault # simulate a segmentation fault, crashing the server (for testing)
object refcount|encoding|idletime key # inspect how a key is stored
monitor # stream every command the server receives (for debugging)
client list # list all connections
client kill # kill a connection, e.g. client kill 127.0.0.1:43501
client getname # get the connection name (default nil)
client setname "name" # set a connection name, useful for debugging
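A short illustrative session combining a few of these commands (timestamps and counts will differ):
127.0.0.1:6379> time
1) "1709280000" # Unix timestamp in seconds
2) "402012" # microseconds within the current second
127.0.0.1:6379> dbsize
(integer) 2
127.0.0.1:6379> client setname "debug-session" # label this connection
OK
127.0.0.1:6379> client getname
"debug-session"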
8) Connection commands
auth password # log in with the password (if one is set)
ping # test whether the server is alive
echo "some content" # test whether the server echoes data correctly
select 0/1/2... # select a database
quit # close the connection
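Put together, a typical connection sequence looks like this (the password and database number are just examples):
[root@redis_master bin]# ./redis-cli
127.0.0.1:6379> auth sky9899
OK
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> echo "some content"
"some content"
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> quit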
2. Common commands on the Redis cache server command line:
127.0.0.1:6379> config get * # get all configuration settings of the Redis server
127.0.0.1:6379> config set loglevel notice # there are 4 log levels to choose from
127.0.0.1:6379> config set requirepass "sky9899" # set the redis access password
127.0.0.1:6379> auth sky9899
redis-cli -h host -p port -a password # connect to a remote Redis database
[root@redis_slave bin]# redis-cli -h 192.168.153.142 -p 6379 -a sky9899 # example
127.0.0.1:6379> role # returns the role (master or slave) of this instance
127.0.0.1:6379> info # all kinds of information and statistics about the redis server
127.0.0.1:6379> slaveof 192.168.153.143 6379 # make this server a slave of the given server
3. Redis cluster in practice
Redis master-slave replication works as follows: when a user writes data on the master side, Redis's sync mechanism sends the data to the slave, and the slave performs the same operations, keeping the two datasets consistent.
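If the master later fails, the slave can be promoted by hand with the slaveof no one command described earlier; a minimal recovery sketch, run on the slave (output values are illustrative):
127.0.0.1:6379> slaveof no one # promote this slave to master; its data is kept
OK
127.0.0.1:6379> info replication # confirm the new role
# Replication
role:master
connected_slaves:0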
Redis master library configuration: redis.conf
daemonize no
pidfile /var/run/redis.pid
port 6379
tcp-backlog 511
# bind 127.0.0.1
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
Slave library (192.168.153.143) configuration: redis.conf
daemonize no
pidfile /var/run/redis.pid
port 6379
slaveof 192.168.153.142 6379 # specify the master library IP and port number
tcp-backlog 511
# bind 127.0.0.1
timeout 0
tcp-keepalive 0
loglevel notice
logfile ""
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./ # working directory; the rdb file is written here, or you can specify a different directory
slave-serve-stale-data yes
slave-read-only yes
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
4. Test results:
1) Operations on the Redis master library (192.168.153.142)
[root@redis_master bin]# ./redis-cli
127.0.0.1:6379> set sky9899 www.sky9899.com
OK
127.0.0.1:6379> set sky9890 www.sky9890.com
OK
127.0.0.1:6379> get sky9899
"www.sky9899.com"
127.0.0.1:6379> get sky9890
"www.sky9890.com"
2) Operations on the Redis slave library (192.168.153.143)
[root@redis_slave bin]# ./redis-cli
127.0.0.1:6379> get sky9899
"www.sky9899.com"
127.0.0.1:6379> get sky9890
"www.sky9890.com"
127.0.0.1:6379>
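Beyond reading the keys back, info replication on each side confirms the link status; illustrative output for this setup:
[root@redis_master bin]# ./redis-cli info replication # on 192.168.153.142
# Replication
role:master
connected_slaves:1
[root@redis_slave bin]# ./redis-cli info replication # on 192.168.153.143
# Replication
role:slave
master_host:192.168.153.142
master_port:6379
master_link_status:up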
Summary: the results above show that data written on the master library is readable from the slave library, so the master-slave configuration is successful.