1. Introduction to Redis
Redis is a key-value storage system; the official site is http://redis.io.
It is similar to memcached, but supports data persistence.
It also supports more value types: in addition to strings, it provides hashes, lists (linked lists), sets and sorted sets (ordered collections).
Redis uses two persistence file formats: full data (RDB) and incremental request (AOF). The full-data format writes the in-memory data to disk so the file can be read and loaded on the next start. The incremental-request file serializes the write operations; data is recovered by reading the file and replaying those operations.
Redis storage involves three parts: memory storage, disk storage and the log file.
II. Redis installation
cd /usr/local/src
wget https://codeload.github.com/antirez/redis/tar.gz/2.8.21
mv 2.8.21 redis-2.8.21.tar.gz
tar xf redis-2.8.21.tar.gz
cd redis-2.8.21
make && make PREFIX=/usr/local/redis install
mkdir /usr/local/redis/etc
Error:
In file included from adlist.c:34:
zmalloc.h:50:31: error: jemalloc/jemalloc.h: No such file or directory
zmalloc.h:55:2: error: #error "Newer version of jemalloc required"
make[1]: *** [adlist.o] Error 1
make[1]: Leaving directory `/usr/local/src/redis-2.6.6/src'
make: *** [all] Error 2
Solution:
make MALLOC=libc
III. Redis configuration
vim /usr/local/redis/etc/redis.conf    # configuration file
daemonize yes
pidfile /usr/local/redis/var/redis.pid
port 6379
timeout 300
loglevel debug
logfile /usr/local/redis/var/redis.log
databases 16
save 900 1
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb
dir /usr/local/redis/var/
appendonly no
appendfsync always
The following is the meaning of the main configuration parameters of redis.conf:
daemonize: whether to run in the background as a daemon
pidfile: location of the pid file
port: the port to listen on
timeout: request timeout
loglevel: log level
logfile: location of the log file
databases: number of databases to open
save * *: how often a snapshot is saved. The first number is a period in seconds and the second is a number of write operations; a snapshot is saved automatically when that many writes occur within that period. Multiple conditions can be set.
rdbcompression: whether to compress the snapshot
dbfilename: data snapshot file name (file name only, not a directory)
dir: the directory where the data snapshot is saved (this one is the directory)
appendonly: whether to enable the append-only log (AOF); every write operation is logged, which improves data safety but affects performance
appendfsync: how the append-only log is synced to disk (three options: call fsync on every write, call fsync once per second, or never call fsync and let the system flush on its own)
Next, write a redis startup script:
vi /etc/init.d/redis    # add the following:
#!/bin/sh
#
# redis        init file for starting up the redis daemon
#
# chkconfig:   - 20 80
# description: Starts and stops the redis daemon.

# Source function library.
. /etc/rc.d/init.d/functions

name="redis-server"
basedir="/usr/local/redis"
exec="$basedir/bin/$name"
pidfile="$basedir/var/redis.pid"
REDIS_CONFIG="$basedir/etc/redis.conf"

[ -e /etc/sysconfig/redis ] && . /etc/sysconfig/redis

lockfile=/var/lock/subsys/redis

start() {
    [ -f $REDIS_CONFIG ] || exit 6
    [ -x $exec ] || exit 5
    echo -n $"Starting $name: "
    daemon --user ${REDIS_USER-redis} "$exec $REDIS_CONFIG"
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $name: "
    killproc -p $pidfile $name
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    stop
    start
}

reload() {
    false
}

rh_status() {
    status -p $pidfile $name
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        restart
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart}"
        exit 2
esac
exit $?
Because the script starts redis as the redis user, you need to create that user first:
useradd -s /sbin/nologin redis
mkdir /usr/local/redis/var
chmod 777 /usr/local/redis/var
chmod 755 /etc/init.d/redis
chkconfig --add redis
chkconfig redis on
service redis start
4. Redis data types
String is the simplest type; you can think of it as the same model as Memcached, where one key corresponds to one value. The operations it supports are similar to those of Memcached, plus some extra features, and the value can store binary objects.
/usr/local/redis/bin/redis-cli
127.0.0.1:6379> mset key1 szk key2 love key3 yc
OK
127.0.0.1:6379> mget key1 key2 key3
1) "szk"
2) "love"
3) "yc"
List is a linked-list structure whose main operations are push, pop, getting all values in a range, and so on. In these operations the key is understood as the name of the list. Using the list structure we can easily implement features such as a timeline of the latest messages. Another application of lists is a message queue: the push operation stores tasks in the list, and worker threads use the pop operation to fetch tasks for execution.
127.0.0.1:6379> lpush list1 aaa
(integer) 2
127.0.0.1:6379> lpush list1 "122"
(integer) 3
127.0.0.1:6379> rpop list1
"1"
127.0.0.1:6379> rpop list1
"aaa"
127.0.0.1:6379> rpop list1
"122"
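As a sketch of the message-queue pattern mentioned above (the queue name tasks and the job payloads are only illustrative), a producer pushes jobs on one side and a worker pops them from the other:
127.0.0.1:6379> lpush tasks "resize:img_01.png"   # producer adds a job at the head of the list
(integer) 1
127.0.0.1:6379> lpush tasks "resize:img_02.png"
(integer) 2
127.0.0.1:6379> rpop tasks                        # worker takes the oldest job from the tail
"resize:img_01.png"
127.0.0.1:6379> brpop tasks 5                     # blocking variant: wait up to 5 seconds for a job
1) "tasks"
2) "resize:img_02.png"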
Set is a collection, similar to the concept of a set in mathematics. The operations include adding and removing elements and taking the intersection, union and difference of multiple sets; the key is understood as the name of the set. For example, in a Weibo-style application you can put everyone a user follows in one set and all of the user's fans in another. Because Redis conveniently provides intersection, union and difference operations on sets, features such as mutual follows, shared interests and second-degree friends are easy to implement, and for all of these set operations you can either return the result to the client or store it in a new set. QQ has a social feature called "friend tags"; it can be implemented with Redis sets by storing each user's tags in a set.
127.0.0.1:6379> sadd set1 zbc
(integer) 1
127.0.0.1:6379> sadd set1 szk
(integer) 1
127.0.0.1:6379> smembers set1
1) "zbc"
2) "szk"
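A minimal sketch of the "common follows" idea described above (the user and key names are illustrative, not from the article):
127.0.0.1:6379> sadd following:userA szk yc zbc    # people userA follows
(integer) 3
127.0.0.1:6379> sadd following:userB szk abc
(integer) 2
127.0.0.1:6379> sinter following:userA following:userB   # followed by both users
1) "szk"
127.0.0.1:6379> sunion following:userA following:userB   # followed by at least one of them
1) "szk"
2) "yc"
3) "zbc"
4) "abc"
127.0.0.1:6379> sinterstore common:AB following:userA following:userB   # store the intersection in a new set
(integer) 1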
Sorted set is an ordered set. Compared with set it has an extra weight parameter, the score, so the elements in the collection can be ordered by score. For example, a sorted set can store the marks of a whole class: the member can be the student number and the score the exam result, so the data is kept sorted as it is inserted.
127.0.0.1:6379> zadd mset2 2 "cde 123"
(integer) 1
127.0.0.1:6379> zadd mset2 4 "a123a"
(integer) 1
127.0.0.1:6379> zadd mset2 24 "123-aaa"
(integer) 1
127.0.0.1:6379> zrange mset2 0 -1
1) "cde 123"
2) "a123a"
3) "123-aaa"
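A sketch of the class-score example from the paragraph above (student numbers and marks are made up for illustration):
127.0.0.1:6379> zadd scores 61 "1001" 95 "1002" 78 "1003"   # member = student number, score = exam mark
(integer) 3
127.0.0.1:6379> zrange scores 0 -1 withscores               # lowest mark first
1) "1001"
2) "61"
3) "1003"
4) "78"
5) "1002"
6) "95"
127.0.0.1:6379> zrevrange scores 0 2                        # top three, highest first
1) "1002"
2) "1003"
3) "1001"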
Hash: in Memcached, structured information such as a user's nickname, age, gender and points is usually packed into a hashmap, serialized on the client side and stored as a single string value (often in JSON format). A Redis hash instead stores such an object as individual field-value pairs.
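A small sketch of storing such a profile in a hash (the key and field names are illustrative):
127.0.0.1:6379> hmset user:1000 nickname szk age 23 gender m points 150
OK
127.0.0.1:6379> hget user:1000 nickname       # read one field without deserializing the whole object
"szk"
127.0.0.1:6379> hincrby user:1000 points 10   # update one field atomically
(integer) 160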
5. Redis persistence
Redis provides two persistence methods, RDB (Redis DataBase) and AOF (Append Only File).
RDB, in short, is to generate snapshots of the data stored in redis at different points in time and store them on media such as disks.
AOF, on the other hand, implements persistence from a different point of view, that is, all write instructions executed by redis are recorded, and data recovery can be achieved by repeating these write instructions from front to back the next time redis is restarted.
In fact, both RDB and AOF can be used at the same time. In this case, if redis is restarted, AOF will be preferred for data recovery. This is because the data recovery in AOF is more complete.
If you don't have the need for data persistence, you can also turn off RDB and AOF, so that redis will become a pure memory database, just like memcached.
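Both mechanisms can also be triggered by hand from redis-cli, which is handy when testing a configuration (a sketch; the timestamp shown is made up, and the snapshot location depends on the dir setting shown earlier):
127.0.0.1:6379> bgsave            # write an RDB snapshot (dump.rdb) in the background
Background saving started
127.0.0.1:6379> lastsave          # unix timestamp of the last successful snapshot
(integer) 1424242424
127.0.0.1:6379> bgrewriteaof      # rewrite and compact the AOF file in the background
Background append only file rewriting started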
VI. General configuration of Redis
daemonize no            # by default redis does not run as a daemon; this option controls how redis runs
pidfile /path/to/redis.pid   # when running as a daemon, redis writes a pid file, by default /var/run/redis.pid
bind 192.168.1.200      # the IP to bind to; several addresses can be given
port 6379               # listening port
unixsocket /tmp/redis.sock   # redis can also listen on a unix socket
unixsocketperm 755      # permissions of the unix socket, for example 755
timeout 0               # if a client sends no request for this many seconds, the server may close the connection; 0 means never close it
tcp-keepalive 0         # TCP keepalive policy in seconds; set to 60, the server sends an ACK to idle clients every 60 seconds to check whether they are still alive, and closes the connections of clients that do not respond; 0 disables the check
loglevel notice         # log level: debug, verbose, notice or warning
logfile ""              # log file path
syslog-ident redis      # identity used if logs go to syslog (enabled through syslog-enabled)
syslog-facility local0  # syslog facility: USER or local0-local7
databases 16            # total number of databases; select n switches to database n (0-15)
7. Redis snapshot configuration (rdb persistence)
save 900 1              # trigger persistence if at least 1 key changes within 15 minutes
save 300 10             # trigger persistence if at least 10 keys change within 5 minutes
save 60 10000           # trigger persistence if at least 10000 keys change within 60 seconds
save ""                 # this disables rdb persistence
stop-writes-on-bgsave-error yes   # writing the rdb snapshot to disk can fail; by default redis then stops accepting writes immediately; set this to no if you would rather keep writing
rdbcompression yes      # whether to compress the snapshot
rdbchecksum yes         # whether to checksum the snapshot data
dir ./                  # directory where snapshot files are stored
VIII. Redis security-related configuration
vim /usr/local/redis/etc/redis.conf    # set a password for redis-server
# add the following configuration
requirepass szk
/usr/local/redis/bin/redis-cli -a szk  # -a specifies the password when logging in
rename-command CONFIG szk.config       # rename the CONFIG command to szk.config to avoid accidental use; enabling this is not recommended when AOF persistence is used
rename-command CONFIG ""               # or rename it to an empty string, which disables the CONFIG command entirely
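A quick check after restarting with the settings above (the password and the renamed command are this article's examples; output is illustrative):
/usr/local/redis/bin/redis-cli
127.0.0.1:6379> get key1              # without authentication this fails with a NOAUTH error
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth szk
OK
127.0.0.1:6379> szk.config get dir    # the renamed command works; the plain CONFIG name is no longer recognized
1) "dir"
2) "/usr/local/redis/var"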
IX. Configuration related to Redis restrictions
maxclients 10000        # maximum number of client connections
maxmemory               # maximum memory usage (in bytes)
maxmemory-policy volatile-lru   # eviction rule applied when maxmemory is reached
maxmemory-samples 3     # the LRU and minimal-TTL algorithms are not exact; they work on a sample of keys. By default redis checks 3 keys and evicts the least recently used of them; this option changes the sample size.
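For example, a pure-cache instance might be capped like this (the 256mb figure and the allkeys-lru policy are only an illustration, not from the article):
maxmemory 256mb                 # start evicting once memory usage reaches 256 MB
maxmemory-policy allkeys-lru    # evict the least recently used key, whether or not it has a TTL
maxmemory-samples 5             # sample 5 keys per eviction instead of the default 3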
X. Configuration related to Redis AOF persistence
appendonly no           # set to yes to enable aof persistence
appendfilename "appendonly.aof"   # name of the aof file, saved in the directory specified by the dir parameter
appendfsync everysec    # fsync() policy; there are three options: no (never call fsync), always (fsync on every write) and everysec (fsync once per second). The first is the fastest, the second is the safest but performs worst; the default everysec balances performance and safety.
no-appendfsync-on-rewrite no   # setting this to yes skips fsync while a rewrite is in progress, which avoids disk IO blocking when the write load is very heavy
auto-aof-rewrite-percentage 10 # when to trigger an aof rewrite; the value is a ratio, 10 meaning a rewrite is triggered once the aof file has grown by 10%
auto-aof-rewrite-min-size 64mb # the rewrite is also subject to the condition that the aof file is at least 64MB
11. Configuration related to Redis slow logs
For slow logs you can set two parameters: one is the execution-time threshold, in microseconds, and the other is the length of the slow log. When a new command is written to the log, the oldest entry is removed from the queue.
slowlog-log-slower-than 10000   # log commands that take longer than 10000 microseconds (10 ms)
slowlog-max-len 128             # maximum number of entries kept in the slow log
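The slow log is inspected at runtime through redis-cli; a minimal sketch (the output depends on what has actually been slow):
127.0.0.1:6379> slowlog get 10    # show the 10 most recent slow entries
127.0.0.1:6379> slowlog len       # how many entries are currently stored
127.0.0.1:6379> slowlog reset     # clear the slow log
OK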
12. Redis master-slave configuration
Follow the steps described earlier to install redis and start it.
The master's configuration file does not need to be changed.
Add one line to the slave's configuration file: slaveof 192.168.1.200 6379
masterauth szk   # if a password is set on the master, add this line as well
Start master and slave respectively
tail /usr/local/redis/var/redis.log
[3966] 18 Feb 15:02:58.330 * MASTER <-> SLAVE sync: receiving 192 bytes from master
[3966] 18 Feb 15:02:58.330 * MASTER <-> SLAVE sync: Flushing old data
[3966] 18 Feb 15:02:58.330 * MASTER <-> SLAVE sync: Loading DB in memory
[3966] 18 Feb 15:02:58.330 * MASTER <-> SLAVE sync: Finished with success
[3966] 18 Feb 15:03:03.344 - DB 0: 7 keys (0 volatile) in 8 slots HT.
[3966] 18 Feb 15:03:03.344 - 1 clients connected (0 slaves), 466840 bytes in use
[3966] 18 Feb 15:03:08.396 - DB 0: 7 keys (0 volatile) in 8 slots HT.
[3966] 18 Feb 15:03:08.397 - 1 clients connected (0 slaves), 466848 bytes in use
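The roles can also be confirmed with INFO (a sketch; only the relevant fields are mentioned):
/usr/local/redis/bin/redis-cli -a szk info replication   # on the master: role:master, connected_slaves:1
/usr/local/redis/bin/redis-cli info replication          # on the slave: role:slave, master_link_status:up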
Test:
/usr/local/redis/bin/redis-cli -a szk    # on the master
127.0.0.1:6379> set key1 szk
OK
127.0.0.1:6379> get key1
"szk"
/usr/local/redis/bin/redis-cli           # on the slave
127.0.0.1:6379> get key1
"szk"
13. Other Redis master-slave related configuration
slave-read-only yes     # the slave is read-only
repl-ping-slave-period 10   # how often the slave pings the master, here every 10 seconds
repl-timeout 60         # if the slave's pings to the master get no reply for this many seconds, the connection times out
repl-disable-tcp-nodelay no   # when set to yes, replication uses smaller TCP packets and less bandwidth but adds delay; leaving it at no is recommended
repl-backlog-size 1mb   # size of the replication backlog; the backlog is a buffer on the master: after a slave disconnects, the master keeps writing data into this buffer, and when the slave reconnects it resynchronizes from it
repl-backlog-ttl 3600   # how long the backlog is kept after the master and slave disconnect; the default is 1 hour
slave-priority 100      # when there are several slaves each can be given a priority; the smaller the number, the higher the priority. In a cluster/sentinel setup, when a slave is promoted to master the one with the highest priority is chosen.
min-slaves-to-write 3   # used together with the next option: the master temporarily stops accepting writes if fewer than 3 slaves have a replication lag of no more than 10 seconds; setting either value to 0 disables the feature (the first value defaults to 0)
min-slaves-max-lag 10
14. Common operations of string
127.0.0.1:6379> set key1 szk        # assign key1 the value szk
OK
127.0.0.1:6379> get key1            # get the value
"szk"
127.0.0.1:6379> set key1 yc         # one key has one value; assigning again overwrites the previous value
OK
127.0.0.1:6379> get key1
"yc"
127.0.0.1:6379> setex key3 10 1     # set a key together with an expiration time; ttl key3 shows the remaining time
OK
127.0.0.1:6379> mset key1 1 key2 2 key3 3   # set several keys at once
OK
127.0.0.1:6379> mget key1 key2 key3
1) "1"
2) "2"
3) "3"
15. Common operations of Hash
127.0.0.1:6379> hset hash2 name szk
(integer) 1
127.0.0.1:6379> hset hash2 age 23
(integer) 1
127.0.0.1:6379> hset hash2 job it
(integer) 1
127.0.0.1:6379> hgetall hash2
1) "name"
2) "szk"
3) "age"
4) "23"
5) "job"
6) "it"
127.0.0.1:6379> hmset hash3 name yc age 24 job teacher   # create several fields at once
OK
127.0.0.1:6379> hgetall hash3
1) "name"
2) "yc"
3) "age"
4) "24"
5) "job"
6) "teacher"
127.0.0.1:6379> hdel hash3 job     # delete one field
(integer) 1
127.0.0.1:6379> hgetall hash3
1) "name"
2) "yc"
3) "age"
4) "24"
127.0.0.1:6379> hkeys hash3        # list all field names
1) "name"
2) "age"
127.0.0.1:6379> hvals hash3        # list all values
1) "yc"
2) "24"
127.0.0.1:6379> hlen hash3         # how many fields the hash has
(integer) 2
16. Common operations of list
127.0.0.1:6379> lpush list1 a        # insert from the left
(integer) 3
127.0.0.1:6379> lpush list1 b        # insert from the left
(integer) 4
127.0.0.1:6379> lpush list1 c        # insert from the left
(integer) 5
127.0.0.1:6379> lrange list1 0 -1    # list from left to right; the element pushed last comes first
1) "c"
2) "b"
3) "a"
127.0.0.1:6379> lpop list1           # take one element from the left
"c"
127.0.0.1:6379> lrange list1 0 -1
1) "b"
2) "a"
127.0.0.1:6379> rpush list1 1        # insert from the right
(integer) 5
127.0.0.1:6379> rpush list1 2        # insert from the right
(integer) 6
127.0.0.1:6379> rpush list1 3        # insert from the right
(integer) 7
127.0.0.1:6379> lrange list1 0 -1    # elements inserted from the right end up at the tail
1) "b"
2) "a"
3) "1"
4) "2"
5) "3"
127.0.0.1:6379> linsert list1 before 3 5   # insert a 5 before the element 3
(integer) 8
127.0.0.1:6379> lrange list1 0 -1
1) "b"
2) "a"
3) "1"
4) "2"
5) "5"
6) "3"
127.0.0.1:6379> lset list1 7 6       # replace the element at index 7 (indexes start at 0) with 6
OK
127.0.0.1:6379> lrange list1 0 -1
1) "b"
2) "a"
3) "456"
4) "123"
5) "1"
6) "2"
7) "5"
8) "6"
127.0.0.1:6379> lindex list1 7       # view the element at index 7
"6"
127.0.0.1:6379> llen list1           # how many elements the list has
(integer) 8
XVII. Common operations of set data
127.0.0.1:6379> sadd set1 1      # add an element
(integer) 1
127.0.0.1:6379> sadd set1 2
(integer) 1
127.0.0.1:6379> sadd set1 3
(integer) 1
127.0.0.1:6379> sadd set1 4
(integer) 1
127.0.0.1:6379> smembers set1    # view all elements in the set
1) "zbc"
2) "1"
3) "szk"
4) "2"
5) "3"
6) "4"
127.0.0.1:6379> spop set1        # take a random element and remove it
"szk"
127.0.0.1:6379> sdiff set1 seta  # difference of the sets, taken relative to set1
1) "zbc"
2) "4"
127.0.0.1:6379> sdiffstore set3 seta set1   # difference of the sets, stored in set3
(integer) 2
127.0.0.1:6379> SMEMBERS set3    # tab completion shows the command in uppercase
1) "2"
2) "szk"
XVIII. Common operations of zset
127.0.0.1:6379> zadd zset1 1 abc        # add members to an ordered set
(integer) 1
127.0.0.1:6379> zadd zset1 10 aabc
(integer) 1
127.0.0.1:6379> zadd zset1 5 aaa
(integer) 1
127.0.0.1:6379> zadd zset1 88 bbb
(integer) 1
127.0.0.1:6379> zadd zset1 888 szk
(integer) 1
127.0.0.1:6379> ZRANGE zset1 0 -1       # show all elements, ordered by score
1) "abc"
2) "aaa"
3) "aabc"
4) "bbb"
5) "szk"
127.0.0.1:6379> ZRANGE zset1 0 -1 withscores   # also show the scores
1) "abc"
2) "1"
3) "aaa"
4) "5"
5) "aabc"
6) "10"
7) "bbb"
8) "88"
9) "szk"
10) "888"
127.0.0.1:6379> ZREM zset1 abc          # delete the specified element
(integer) 1
127.0.0.1:6379> ZRANGE zset1 0 -1 withscores
1) "aaa"
2) "5"
3) "aabc"
4) "10"
5) "bbb"
6) "88"
7) "szk"
8) "888"
127.0.0.1:6379> zrevrank zset1 szk      # rank of the element, starting at 0, in descending score order
(integer) 0
127.0.0.1:6379> zrank zset1 szk         # same, but in ascending score order
(integer) 3
127.0.0.1:6379> zcard zset1             # number of elements in the set
(integer) 4
127.0.0.1:6379> zcount zset1 1 20       # number of elements whose score is in the range
(integer) 2
127.0.0.1:6379> zrangebyscore zset1 1 100 withscores   # elements whose score is in the range 1-100
1) "aaa"
2) "5"
3) "aabc"
4) "10"
5) "bbb"
6) "88"
127.0.0.1:6379> zrangebyscore zset1 0 10   # elements whose score is in the range 0-10, sorted by score
1) "aaa"
2) "aabc"
XIX. Key values and server commands
127.0.0.1:6379> keys *            # list all keys
1) "key3"
2) "seta"
3) "hash2"
4) "list1"
5) "key2"
6) "zset1"
7) "mset2"
8) "set2"
9) "set1"
10) "key1"
11) "hash3"
12) "set3"
127.0.0.1:6379> keys key*
1) "key3"
2) "key2"
3) "key1"
127.0.0.1:6379> EXISTS list1      # check whether list1 exists
(integer) 1
127.0.0.1:6379> del key1          # delete key1
(integer) 1
127.0.0.1:6379> EXISTS key1
(integer) 0
127.0.0.1:6379> EXPIRE key3 10    # set an expiration time
(integer) 1
127.0.0.1:6379> get key3
"3"
127.0.0.1:6379> ttl key3          # view the remaining time to live: -1 means no expiration is set, -2 means the key no longer exists
(integer) -2
127.0.0.1:6379> EXISTS key3
(integer) 0
127.0.0.1:6379> select 0          # switch databases; there are 16 by default
OK
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> set key1 111   # create a new key
OK
127.0.0.1:6379[1]> keys *
1) "key1"
127.0.0.1:6379[1]> move set1 2    # move to database 2
(integer) 0
127.0.0.1:6379[1]> select 2       # switch to database 2
OK
127.0.0.1:6379[2]> keys *
1) "key1"
127.0.0.1:6379[2]> EXPIRE key1 200   # set an expiration time
(integer) 1
127.0.0.1:6379[2]> ttl key1
(integer) 193
127.0.0.1:6379[2]> PERSIST key1      # remove the expiration time
(integer) 1
127.0.0.1:6379[2]> ttl key1
(integer) -1
127.0.0.1:6379[2]> RANDOMKEY         # return a random key
"key1"
127.0.0.1:6379[2]> RENAME key1 szk   # rename a key
OK
127.0.0.1:6379[2]> keys *
1) "szk"
127.0.0.1:6379[2]> type szk          # view the type of a key
string
XX. Service-related operations
127.0.0.1:6379[2]> DBSIZE           # number of keys in the current database
(integer) 1
127.0.0.1:6379[2]> select 0
OK
127.0.0.1:6379> DBSIZE
(integer) 10
127.0.0.1:6379> info                # view information about the redis service
# Server
redis_version:2.8.21
redis_git_sha1:00000000
(output omitted)
127.0.0.1:6379> flushdb             # remove all keys in the current database
OK
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> flushall            # remove all keys in all databases
OK
127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> keys *
(empty list or set)
21. Using Redis in PHP
cd /usr/local/src
wget http://pecl.php.net/get/redis-2.2.5.tgz
tar xf redis-2.2.5.tgz
cd redis-2.2.5
/usr/local/php/bin/phpize
./configure --with-php-config=/usr/local/php/bin/php-config
make && make install
mv /usr/local/php/lib/php/extensions/no-debug-zts-20100525/redis.so /usr/lib64/
vim /usr/local/php/php.ini
extension_dir = /usr/lib64/
extension = redis.so
/usr/local/php/bin/php -m | grep redis
redis
# after the extension is loaded successfully, restart nginx and check the phpinfo page
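A quick sanity check from the command line (a sketch; it assumes the requirepass szk set earlier, so drop the auth() call if no password is configured; newer phpredis versions may return true instead of +PONG):
/usr/local/php/bin/php -r '$r = new Redis(); $r->connect("127.0.0.1", 6379); $r->auth("szk"); var_dump($r->ping());'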
22. Using Redis for session sharing
Add the following to php.ini:
session.save_handler = "redis"
session.save_path = "tcp://127.0.0.1:6379"
Or add it to the apache virtual host:
php_value session.save_handler "redis"
php_value session.save_path "tcp://127.0.0.1:6379"
Or add it to the corresponding pool in php-fpm.conf:
php_value[session.save_handler] = redis
php_value[session.save_path] = "tcp://127.0.0.1:6379"
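After a PHP page that calls session_start() has been requested, the session should show up as a key in redis; a quick check (the PHPREDIS_SESSION: prefix is the phpredis default and may differ between versions):
/usr/local/redis/bin/redis-cli
127.0.0.1:6379> keys PHPREDIS_SESSION*   # one key per active PHP session should be listed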