2025-01-18 Update From: SLTechnology News&Howtos
System: CentOS 6.6 x64
Version: redis-3.2.4
Installation directory: /opt/
Master: 172.16.15.103
Slave: 172.16.15.104
1. Download and install
Install the build dependencies:
# yum install gcc tcl ruby -y
# wget http://download.redis.io/releases/redis-3.2.4.tar.gz
# tar xf redis-3.2.4.tar.gz
# mv redis-3.2.4 /opt/redis
# cd /opt/redis
# make
# make test
2. Kernel configuration
Set vm.overcommit_memory=1 so background saves (which fork the process) do not fail when memory looks overcommitted:
echo "vm.overcommit_memory=1" >> /etc/sysctl.conf
/sbin/sysctl -p
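The append above can be made idempotent so repeated runs do not duplicate the line. A minimal sketch; `/tmp/sysctl.conf.example` is a stand-in for `/etc/sysctl.conf` so the example is safe to try anywhere:

```shell
# Append the setting only if it is not already present, then show it.
# /tmp/sysctl.conf.example stands in for /etc/sysctl.conf here.
conf=/tmp/sysctl.conf.example
grep -q '^vm.overcommit_memory=1' "$conf" 2>/dev/null || \
  echo "vm.overcommit_memory=1" >> "$conf"
grep '^vm.overcommit_memory' "$conf"
```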
3. iptables/SELinux settings
Open TCP port 6379 in the firewall (reload with service iptables restart afterwards). If SELinux is enforcing, allow Redis access as well, or set it to permissive for testing:
# cat /etc/sysconfig/iptables
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6379 -j ACCEPT
4. Create the data and log directories
# mkdir -p /opt/redis/{log,data}
5. Redis master-slave configuration
Master: 172.16.15.103
# cat redis.conf
bind 172.16.15.103
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /opt/redis/log/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /opt/redis/data
requirepass 1qaz@WSX
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
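For reference, the three `save` directives in the configuration above define the RDB snapshot schedule; a snapshot is triggered as soon as any one rule fires:

```
save 900 1      # snapshot after 900 s if at least 1 key changed
save 300 10     # snapshot after 300 s if at least 10 keys changed
save 60 10000   # snapshot after 60 s if at least 10000 keys changed
```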
Slave: 172.16.15.104
# cat redis.conf
bind 172.16.15.104
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /opt/redis/log/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /opt/redis/data
slaveof 172.16.15.103 6379
masterauth 1qaz@WSX
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
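A note on naming: from Redis 5.0 onward, `slaveof` and the `slave-*` options are renamed to `replicaof` and `replica-*` (the old names remain as aliases). On a newer release the replication lines above would read:

```
replicaof 172.16.15.103 6379
masterauth 1qaz@WSX
```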
6. Start the service (on both master and slave):
# cd /opt/redis
# nohup src/redis-server redis.conf &
# ps -ef | grep redis
# src/redis-cli -h 172.16.15.103 -a 1qaz@WSX ping
7. Testing
Create a key on the master (103):
# /opt/redis/src/redis-cli -h 172.16.15.103 -a 1qaz@WSX
> set test 123456
Read it back from the slave (104):
# /opt/redis/src/redis-cli -h 172.16.15.104
> get test
Performance test:
# /opt/redis/src/redis-benchmark
Stop the service:
# /opt/redis/src/redis-cli -h 172.16.15.103 -a 1qaz@WSX -p 6379 shutdown
Force data to disk (Redis persists asynchronously by default; SAVE blocks the server, while BGSAVE snapshots in the background):
# /opt/redis/src/redis-cli -h 172.16.15.103 -a 1qaz@WSX -p 6379 save
Viewing Redis resource and statistics information (INFO):
# redis-cli -h 127.0.0.1 -a passwd
> info
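INFO output is a series of sections (`# Section` headers) with colon-separated `key:value` lines, so individual fields are easy to extract in the shell. A minimal sketch using an inlined sample rather than a live server:

```shell
# Extract the "role" field from INFO-style output.
# The sample text stands in for `redis-cli info replication`.
sample='# Replication
role:master
connected_slaves:1'
echo "$sample" | awk -F: '$1 == "role" { print $2 }'
```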
Redis fuzzy (pattern) search:
keys *
Switch to database 2:
select 2
To delete all keys that start with user:
# redis-cli keys "user*"
1) "user1"
2) "user2"
# redis-cli keys "user*" | xargs redis-cli del
(integer) 2
# Deleted successfully
# Batch-delete keys matching a wildcard pattern using a Linux pipe and xargs:
redis-cli keys "s*" | xargs redis-cli del
# To target a specific database, add the -n <db> option; the following deletes keys starting with s in database 2:
redis-cli -n 2 keys "s*" | xargs redis-cli -n 2 del
redis-cli keys "*" | xargs redis-cli del
# If redis-cli is not on the PATH, specify its full path
# Example: /opt/redis/src/redis-cli keys "*" | xargs /opt/redis/src/redis-cli del
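The shape of the pipeline is worth seeing in isolation: `keys` emits one key per line, and `xargs` collects them all as arguments to a single `del` command. Demonstrated here with `printf` standing in for the key listing and `echo` standing in for redis-cli, so the composed command line is printed instead of executed:

```shell
# printf stands in for `redis-cli keys "user*"`; echo stands in for
# redis-cli, so the final command is printed rather than run.
printf 'user1\nuser2\n' | xargs echo DEL
# prints: DEL user1 user2
```

Note that KEYS scans the whole keyspace and blocks the server while doing so; on large production datasets the incremental SCAN command is the recommended alternative.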
Delete all keys in the current database:
flushdb
Delete the keys of all databases:
flushall