Twemproxy test architecture
1. Twemproxy is a Redis proxy developed by Twitter. Through Twemproxy, multiple servers can be used to scale Redis horizontally, which helps avoid a Redis single point of failure.
Using Twemproxy requires more hardware resources and costs some Redis performance (about 20% in Twitter's testing), in exchange for improving the HA of the whole system.
2. Twemproxy is simple and fast to deploy. Clients read and write against the proxy directly, and the proxy forwards requests to the backend Redis instances. A single proxy is not suitable for very high-traffic systems; in that case, split the application traffic and put an LVS cluster in front to load-balance multiple Twemproxy instances and improve the proxy layer's availability and scalability.
Advantages:
A lightweight Redis and memcached proxy. It reduces the number of connections to the cache servers and provides sharding, at a cost of no more than about 20% in performance. This is largely thanks to pipelining: Redis supports pipelined batch requests.
Twemproxy establishes a connection to each Redis server, and each connection maintains two FIFO queues, through which pipelined access to Redis is implemented. Merging the traffic of many clients onto a single connection not only reduces the number of connections to the Redis server but also improves access performance.
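As a rough illustration of what pipelining means on the client side (this is not Twemproxy's internal mechanism, just a sketch against one of the backend instances above), redis-cli can replay a batch of plain-text commands over a single connection:
# build a small batch of commands and send them over one connection
cat > batch.txt <<'EOF'
SET key1 value1
SET key2 value2
GET key1
EOF
cat batch.txt | redis-cli -h 10.207.101.101 -p 6001 --pipe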
Disadvantages:
Although nodes can be removed dynamically, the data on a removed node is lost. And when nodes are added dynamically, Twemproxy does not redistribute existing data; the author has said on the mailing list that you need to write your own script to migrate it.
Twemproxy (nutcracker):
IP: 10.207.101.101
IP: 10.207.101.102
VIP: 10.207.101.100
HA (keepalived):
IP: 10.207.101.101
IP: 10.207.101.102
VIP: 10.207.101.100
Redis:
IP: 10.207.101.101, ports 6001/6002/6003
IP: 10.207.101.102, ports 6001/6002/6003
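For reference, a minimal sketch of bringing up the three backend Redis instances on each host. The paths and per-port config files below (e.g. /opt/redis/conf/redis-6001.conf) are illustrative assumptions, not taken from the original setup:
# hypothetical paths; adjust to your installation
for port in 6001 6002 6003; do
    /opt/redis/bin/redis-server /opt/redis/conf/redis-${port}.conf
done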
3. Deployment
wget http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz
wget https://codeload.github.com/twitter/twemproxy/zip/master
yum install gcc gcc-c++ tcl ruby -y
tar -xf autoconf-2.69.tar.gz
cd autoconf-2.69/
./configure
make && make install
unzip master.zip
cd twemproxy-master
autoreconf -fvi
./configure --enable-debug=full
make
make install
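A quick sanity check after the build; nutcracker is usually installed under /usr/local/sbin by default, but the exact path may vary with your configure options:
which nutcracker
nutcracker -V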
4. Configuration example:
# vim /etc/nutcracker.yml
alpha:
  listen: 0.0.0.0:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:              # addresses and ports of the two redis servers
   - 10.207.101.101:6001:1
   - 10.207.101.102:6001:1
# these are all redis master addresses
Note:
Each parameter value in the .yml configuration file needs a space after the ":" delimiter.
Parameters at different levels must be indented consistently with spaces (YAML does not accept tabs for indentation); otherwise the nutcracker process will not start.
With auto_eject_hosts: true, writes may return "(error) ERR Connection refused" after a Redis instance goes down. This is related to setting server_retry_timeout too small; the default value of 30000 msec is a good choice.
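For reference, a sketch of a pool that covers all six backend instances from the architecture above; the pool name, per-server weights, and the retry/failure values (30000 msec per the note above) are illustrative assumptions:
alpha:
  listen: 0.0.0.0:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 30000
  server_failure_limit: 2
  servers:
   - 10.207.101.101:6001:1
   - 10.207.101.101:6002:1
   - 10.207.101.101:6003:1
   - 10.207.101.102:6001:1
   - 10.207.101.102:6002:1
   - 10.207.101.102:6003:1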
5. Start the Twemproxy service
nutcracker -t nutcracker.yml
Checking the configuration syntax shows OK:
# nutcracker -t twemproxy/conf/nutcracker.yml
nutcracker: configuration file 'conf/nutcracker.yml' syntax is ok
The statistics of the running service can be viewed at http://ip:22222.
You can also use the nc command to view Twemproxy's status:
# nc 10.207.101.101 22222 | python -m json.tool
6. Set up and start:
# cat start.sh
nutcracker -d -c /opt/twemproxy-master/conf/nutcracker.yml -p /opt/twemproxy-master/run/redisproxy.pid -o /opt/twemproxy-master/run/redisproxy.log
Nutcracker usage and command options
Options:
-h, --help: show the help text and command options
-V, --version: show the nutcracker version
-t, --test-conf: test the configuration file for syntax errors
-d, --daemonize: run as a daemon
-D, --describe-stats: print the stats description
-v, --verbose=N: set the logging level (default: 5, min: 0, max: 11)
-o, --output=S: set the log output file (default: stderr)
-c, --conf-file=S: set the configuration file path (default: conf/nutcracker.yml)
-s, --stats-port=N: set the stats monitoring port (default: 22222)
-a, --stats-addr=S: set the stats monitoring IP (default: 0.0.0.0)
-i, --stats-interval=N: set the stats aggregation interval (default: 30000 msec)
-p, --pid-file=S: set the pid file path (default: off)
-m, --mbuf-size=N: set the mbuf chunk size in bytes (default: 16384 bytes)
# cat stop.sh
ps -ef | grep redis | grep -v grep | awk '{print $2}' | sed -e "s/^/kill -9 /g" | sh -
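A gentler alternative (an assumption, not part of the original scripts) that stops only the proxy process via the pid file written by start.sh:
# stop only the nutcracker process recorded in the pid file
kill "$(cat /opt/twemproxy-master/run/redisproxy.pid)"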
7. Check the process
ps -ef | grep nutcracker | grep -v grep
8. Deploy keepalived for HA to avoid a single point of failure at the proxy
yum install keepalived -y
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_twem {
    # script "killall -0 redis"
    script "/etc/keepalived/check_twem.sh"
    interval 2
    weight -3
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.27.101.100/24 dev eth0 label eth0:1
    }
    track_script {
        check_twem
    }
}
virtual_server 172.27.101.100 22121 {
    delay_loop 6
    protocol TCP
    real_server 172.27.101.101 22121 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 22121
        }
    }
    real_server 172.27.101.102 22121 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 22121
        }
    }
}
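On the second proxy node the keepalived configuration is essentially the same; under the usual MASTER/BACKUP pattern only the state and priority change. A hedged sketch of the differing lines (the priority value is an illustrative assumption):
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    # remaining settings identical to the MASTER node
}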
9. nutcracker process detection script: check_twem.sh
# cat /etc/keepalived/check_twem.sh
#!/bin/bash
twem=$(ps -C nutcracker --no-heading | wc -l)
if [ "${twem}" = "0" ]; then
    sh /opt/twemproxy-master/start.sh
    sleep 2
    twem=$(ps -C nutcracker --no-heading | wc -l)
    if [ "${twem}" = "0" ]; then
        /etc/init.d/keepalived stop
    fi
fi
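The check script must be executable before keepalived can run it; a brief usage sketch, assuming keepalived is managed through its stock init script as in the script above:
chmod +x /etc/keepalived/check_twem.sh
/etc/init.d/keepalived restart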
10. SET test:
Test through twemproxy:
# redis-benchmark -h 172.27.101.100 -p 22121 -c 11 -t set -d 11 -l -q
SET: 38167.94 requests per second
Test the backend redis directly:
# redis-benchmark -h 172.27.101.101 -p 6001 -c 11 -t set -d 11 -l -q
Test the backend redis directly:
# redis-benchmark -h 172.27.101.102 -p 6001 -c 11 -t set -d 11 -l -q
SET: 53191.49 requests per second
11. GET test:
Test through twemproxy:
# redis-benchmark -h 172.27.101.100 -p 22121 -c 11 -t get -d 11 -l -q
GET: 37453.18 requests per second
Test the backend redis directly:
# redis-benchmark -h 172.27.101.101 -p 6001 -c 11 -t get -d 11 -l -q
GET: 62111.80 requests per second
View the key value distribution:
# redis-cli info | grep db0
Db0:keys=51483,expires=0,avg_ttl=0
# redis-cli info | grep db0
Db0:keys=48525,expires=0,avg_ttl=0
12. Basic redis-cli operations
Fuzzy key search:
keys *
select 2
Deleting all keys that start with user can be done like this:
# redis-cli keys "user*"
1) "user1"
2) "user2"
# redis-cli keys "user*" | xargs redis-cli del
(integer) 2
# deleted successfully
# to delete keys matching a wildcard in batch, use the Linux pipe and xargs:
redis-cli keys "s*" | xargs redis-cli del
# to target a specific database, use the -n <db number> option; the following deletes keys starting with s in database 2:
redis-cli -n 2 keys "s*" | xargs redis-cli -n 2 del
redis-cli keys "*" | xargs redis-cli del
# if redis-cli is not on the PATH, specify the full path to redis-cli
# for example: /opt/redis/redis-cli keys "*" | xargs /opt/redis/redis-cli del
# delete all keys in the current database
flushdb
# delete keys from all databases
flushall
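Note that KEYS blocks the server and is risky on large production datasets. A hedged alternative sketch using redis-cli's --scan option (available in recent redis-cli versions), which iterates keys incrementally instead:
redis-cli --scan --pattern "user*" | xargs -r redis-cli del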
13. Testing notes
read, writev and mbuf
All requests and responses are held in mbufs. The default mbuf size is 16 KB (configurable between 512 B and 16 MB with -m or --mbuf-size=N). Every connection gets at least one mbuf, which means the number of concurrent connections nutcracker can support depends on the mbuf size: a small mbuf allows more connections, while a large mbuf lets us read or write more data to and from the socket buffer. For high-concurrency scenarios, a relatively small mbuf (512 B or 1 KB) is recommended.
mbuf-size=N
Each client connection consumes at least one mbuf, and a proxied request involves at least two connections (client -> proxy and proxy -> server), so it needs at least two mbufs. With 1000 client connections that is 1000 * 2 * 16 KB = 32 MB; if each connection carries 10 fragmented requests, it grows to 320 MB, and with 10000 connections it would consume about 3.2 GB of memory. In such a scenario it is better to shrink the mbuf: with a 512-byte mbuf, the same 1000 connections * 2 * 10 * 512 B is only about 10 MB, which is why a small mbuf is recommended for high-concurrency workloads.
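A small sketch of the same back-of-the-envelope estimate; the formula (connections * 2 * in-flight requests * mbuf size) comes from the reasoning above, and the numbers are just the example values:
# memory ≈ connections * 2 * in-flight requests * mbuf size
connections=10000; inflight=10; mbuf=16384
echo "$(( connections * 2 * inflight * mbuf / 1024 / 1024 )) MiB"   # prints 3125 MiB, i.e. roughly 3.2 GB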
Key length:
memcached caps key length at 250 bytes; Redis has no similar limit. However, nutcracker requires a key to be stored in contiguous memory, and because all requests and responses live in mbufs, the Redis key length is limited by the mbuf size: if your Redis instances need to operate on very long keys, you must increase the mbuf size.
14. Simple deletion
# Test data
redis> ZRANGE page_rank 0 -1 WITHSCORES
1) "bing.com"
2) "8"
3) "baidu.com"
4) "9"
5) "google.com"
6) "10"
# remove a single element
redis> ZREM page_rank google.com
(integer) 1
redis> ZRANGE page_rank 0 -1 WITHSCORES
1) "bing.com"
2) "8"
3) "baidu.com"
4) "9"
# remove multiple elements
redis> ZREM page_rank baidu.com bing.com
(integer) 2
redis> ZRANGE page_rank 0 -1 WITHSCORES
(empty list or set)
# remove elements that do not exist
redis> ZREM page_rank non-exists-element
(integer) 0
15. Command reference
http://doc.redisfans.com/