Download, compile, and install Redis 3.0.0:
wget http://download.redis.io/releases/redis-3.0.0.tar.gz && tar zxvf redis-3.0.0.tar.gz && cd redis-3.0.0 && make PREFIX=/usr/local/redis MALLOC=libc install
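To confirm the build and install succeeded, a quick sanity check (paths follow the PREFIX used above) is to print the versions of the installed binaries:
/usr/local/redis/bin/redis-server --version
/usr/local/redis/bin/redis-cli --version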
mkdir -p /usr/local/redis/run
mkdir -p /usr/local/redis/logs
mkdir -p /usr/local/redis/rdb
mkdir -p /usr/local/redis/etc
mkdir -p /usr/local/redis/nodes
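Equivalently, the whole layout can be created in one command with bash brace expansion:
mkdir -p /usr/local/redis/{run,logs,rdb,etc,nodes}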
1. Create the cluster configuration file, one per instance and distinguished by port; the example below is for port 6381 and is saved as /usr/local/redis/etc/6381.conf:
# Redis does not run as a daemon by default; set this to yes to run it as a daemon
daemonize yes
# when running as a daemon, Redis writes its pid to /var/run/redis.pid by default; when running multiple redis instances, give each one its own pid file and port
pidfile /usr/local/redis/run/redis_6381.pid
# port for this instance
port 6381
tcp-backlog 511
# client connection timeout in seconds; idle connections are closed after the timeout
timeout 300
tcp-keepalive 0
# logging level; four values are available: debug, verbose, notice, warning
loglevel notice
# log file path; by default Redis logs to the terminal it was started from, and /dev/null can be used to discard logs
logfile "/usr/local/redis/logs/6381.log"
# number of databases; use the SELECT command to switch between them
databases 16
# RDB snapshot policy: save if at least 1 key changed within 900 seconds
save 900 1
# save if at least 10 keys changed within 300 seconds
save 300 10
# save if at least 10000 keys changed within 60 seconds
save 60 10000
stop-writes-on-bgsave-error yes
# whether to compress the RDB snapshot
rdbcompression yes
rdbchecksum yes
# file name of the RDB snapshot
dbfilename dump6381.rdb
# directory where the RDB snapshot is written; the default is ./
dir /usr/local/redis/rdb
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
# limit on simultaneous client connections; beyond this value redis stops accepting new connections, and clients that try to connect receive an error
maxclients 10000
# maximum memory; when it is exceeded, Redis tries to evict keys according to maxmemory-policy (with volatile-ttl below, only keys with an EXPIRE set are candidates)
maxmemory 1gb
# volatile-lru: evict keys with an expire set using LRU; allkeys-lru: evict any key using LRU; volatile-ttl: evict keys with an expire set, shortest remaining TTL first; volatile-random: evict random keys with an expire set
maxmemory-policy volatile-ttl
# for eviction, LRU and minimal TTL are not exact but approximate (sampled) algorithms: Redis checks a sample of keys instead of all of them; the default sample size is 3 and can be changed here
maxmemory-samples 3
# by default, Redis saves data to disk asynchronously (RDB snapshots). If your application can tolerate losing the most recent writes in extreme situations such as a crash, that is already sufficient. Otherwise, enable 'appendonly' mode: Redis then appends every write to the appendonly.aof file, which is read on startup to rebuild the dataset in memory.
appendonly no
appendfilename "appendonly6381.aof"
# appendfsync no: never fsync, just let the OS decide when to flush; fastest but least safe. always: fsync after every write to the append-only log; poor performance but very safe. everysec: fsync once per second; a compromise.
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled yes
cluster-config-file /usr/local/redis/nodes/6381.conf
cluster-node-timeout 5000
slowlog-log-slower-than 10000
slowlog-max-len 1024
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
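Each instance needs its own copy of this file, differing only in the port number (which also appears in the pid file, log file, dump file, AOF file, and cluster config file names). A small sketch, assuming the file above is saved as /usr/local/redis/etc/6381.conf and the remaining instances use the listed ports (adjust to your own):
# derive per-port configs from the 6381 template
for port in 6379 6380; do
  sed "s/6381/${port}/g" /usr/local/redis/etc/6381.conf > /usr/local/redis/etc/${port}.conf
done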
2. Start the redis instance; with daemonize yes the server backgrounds itself, so no trailing & is needed:
/usr/local/redis/bin/redis-server /usr/local/redis/etc/6381.conf
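With one config file per port, all local instances can be started in a loop (a sketch matching the layout above; adjust the port list to your own):
for port in 6379 6380 6381; do
  /usr/local/redis/bin/redis-server /usr/local/redis/etc/${port}.conf
done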
3. Copy redis-trib.rb:
cp /root/redis-3.0.0/src/redis-trib.rb /usr/local/redis/bin
4. Create the cluster:
/usr/local/redis/bin/redis-trib.rb create 10.144.8.86:7000 10.144.8.86:7001 10.144.8.86:7002
With 6 instances, adding the --replicas 1 option to create yields 3 master nodes and 3 slave nodes.
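For example, a six-instance cluster with one replica per master would be created like this (ports 7003-7005 are illustrative additions to the three nodes above):
/usr/local/redis/bin/redis-trib.rb create --replicas 1 10.144.8.86:7000 10.144.8.86:7001 10.144.8.86:7002 10.144.8.86:7003 10.144.8.86:7004 10.144.8.86:7005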
The command above fails at first, because redis-trib.rb is a Ruby script and needs a Ruby environment.
Error: /usr/bin/env: ruby: No such file or directory
Therefore ruby must be installed; installing with yum is recommended here:
yum install ruby
Running the create command from step 4 again then fails with an error indicating that the rubygems component is missing; install it with yum.
Error:
./redis-trib.rb:24:in `require': no such file to load -- rubygems (LoadError)
from ./redis-trib.rb:24
yum install rubygems
Running the create command a third time fails again, this time because the redis gem (the Ruby interface to Redis) is missing; install it with gem.
Error:
/usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- redis (LoadError)
from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
from ./redis-trib.rb:25
gem install redis
5. Check the cluster with the check subcommand of the redis-trib.rb tool:
/usr/local/redis/bin/redis-trib.rb check 127.0.0.1:6379
6. Read and write to the Redis Cluster with the redis client. redis-cli supports cluster mode, but its usage differs slightly: add the -c option at startup.
/usr/local/redis/bin/redis-cli -c -h 127.0.0.1 -p 6379
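A typical session looks roughly like the following; with -c the client follows the cluster's MOVED redirects transparently (the slot number is the real slot for the key "foo", the target address is illustrative):
127.0.0.1:6379> set foo bar
-> Redirected to slot [12182] located at 10.144.8.86:7001
OK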
References:
http://blog.csdn.net/xu470438000/article/details/42972123
http://blog.csdn.net/myrainblues/article/details/25881535
Adding a Redis Cluster node
1: first start the node to be added
2: run the following command to add the new node to the cluster
cd /usr/local/redis3.0/src/
./redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000
3: run redis-cli -c -p 7000 cluster nodes to view the newly added node
4: after being added, the new node can become either a master node or a slave node
4.1: make the node a master node: use the redis-trib program to move some of the cluster's hash slots to the new node; once it owns hash slots it is a real master node.
Run the following command to move hash slots within the cluster:
cd /usr/local/redis3.0/src
./redis-trib.rb reshard 127.0.0.1:7000
The tool prompts for how many hash slots to move; 1000 here.
Then it asks which node should receive these hash slots:
enter the ID of the node we just added,
f32dc088c881a6b930474fc5b52832ba2ff71899
Then it asks which nodes the hash slots should be taken from;
entering all means the 1000 slots are drawn from all existing master nodes.
Finally, type yes and the redis cluster starts moving the hash slots.
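The interactive exchange looks roughly like this (prompt wording from redis-trib.rb, abridged; the ID is the one entered above):
How many slots do you want to move (from 1 to 16384)? 1000
What is the receiving node ID? f32dc088c881a6b930474fc5b52832ba2ff71899
Source node #1: all
Do you want to proceed with the proposed reshard plan (yes/no)? yes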
At this point a new master node has been added. Check the state of the nodes in the current cluster with:
redis-cli -c -p 7000 cluster nodes
4.2: make the node a slave node
With the new node already in the cluster, a single command makes it the slave of 127.0.0.1:7001; the node ID at the end of the command is that of 127.0.0.1:7001.
redis-cli -c -p 7006 cluster replicate 0b00721a509444db793d28448d8f02168b94bd38
Confirm that 127.0.0.1:7006 has become a slave of 127.0.0.1:7001 with the following command:
redis-cli -p 7000 cluster nodes | grep slave | grep 0b00721a509444db793d28448d8f02168b94bd38
If the command prints a line for 127.0.0.1:7006, the node was added successfully.
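Alternatively, redis-trib.rb can attach a new node as a slave in a single step at add time; a sketch, assuming your redis-trib.rb build supports the --slave and --master-id options (the ID is that of the intended master, 127.0.0.1:7001 above):
./redis-trib.rb add-node --slave --master-id 0b00721a509444db793d28448d8f02168b94bd38 127.0.0.1:7006 127.0.0.1:7000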
Deleting a Redis Cluster node
1: if the node to delete is a master node, its hash slots must be moved away first. Here we delete the 127.0.0.1:7006 node, which holds 1000 hash slots.
First transfer the node's hash slots to another node by running:
cd /usr/local/redis3.0/src
./redis-trib.rb reshard 127.0.0.1:7000
The tool prompts for how many hash slots to move: 1000 here, because the 127.0.0.1:7006 node holds 1000 hash slots.
Then it asks for the ID of the node that should receive these hash slots; use the node ID of 127.0.0.1:7001.
Then it asks which nodes the hash slots should be taken from: enter the ID of 127.0.0.1:7006 here, and finally enter done to finish the input.
As a final step, delete the node with the following command; the first argument is any live node in the cluster and the second is the node ID of 127.0.0.1:7006 (f32dc088c881a6b930474fc5b52832ba2ff71899 above):
cd /usr/local/redis3.0/src/
./redis-trib.rb del-node 127.0.0.1:7000 f32dc088c881a6b930474fc5b52832ba2ff71899
2: if the node is a slave node, no resharding is needed; just delete it with the same command, passing the slave's node ID:
cd /usr/local/redis3.0/src/
./redis-trib.rb del-node 127.0.0.1:7000 <node-id-of-the-slave>
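Either way, a quick check that the node is gone (no output means the cluster no longer lists a node on port 7006):
redis-cli -c -p 7000 cluster nodes | grep 7006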