2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
1. Introduction to Redis
Redis is an in-memory key-value database created by the Italian developer Salvatore Sanfilippo (known online as antirez). Its full name is Remote Dictionary Server, and the software is written in C. Like Memcache, Redis is a key-value storage system, but it largely makes up for Memcache's shortcomings by supporting a much richer set of value types: string, list, set, zset (sorted set), and hash. These data types support operations such as push/pop, add/remove, and set intersection, union, and difference, and on top of that Redis offers several different sorting methods.
Because Redis uses memory as its data storage medium, its read and write throughput is far higher than a disk-based database's. Taking a 256-byte string as an example, reads can reach about 110,000 operations per second and writes about 81,000 per second. Redis storage involves three parts: memory, disk, and log files. After a restart, Redis can reload data from disk back into memory, which is controlled through the configuration file; this is what makes Redis persistent. Because data exchange with Redis is so fast, it is often used on servers to hold data that is fetched frequently, greatly reducing the cost of reading it from disk each time and, more importantly, greatly improving speed.
Memcache can only cache data in memory and cannot periodically write it to disk; as soon as the machine loses power or restarts, memory is cleared and the data is lost. Memcache is therefore only suitable for caching data that does not need to be persisted.
Key features that set Redis apart from many of its competitors:
1. The Redis database lives entirely in memory; disk is used only for persistence.
2. Compared with many other key-value data stores, Redis has a relatively rich set of data types.
3. Redis can replicate data to any number of slave servers.
4. Redis can sustain more than 100K reads/writes per second.
5. A single Redis value can be up to 1GB, whereas Memcached limits a single value to 1MB.
Redis advantages
Exceptionally fast: Redis is very fast, executing roughly 110,000 SETs per second and roughly 81,000 GETs per second.
Rich data types: Redis supports data types most developers already know, such as lists, sets, sorted sets, and hashes. This makes many problems easy to solve, because we know which problems each data type handles best.
Atomic operations: all Redis operations are atomic, which guarantees that if two clients access the Redis server at the same time, they always see a consistent, fully updated value.
Multi-purpose tool: Redis can be used for many things, such as caching, messaging, and queues (Redis natively supports publish/subscribe), as well as any short-lived application data, such as web application sessions and web page hit counters.
2. Installing Redis
[root@localhost app]# ls
redis-3.2.8.tar.gz
[root@localhost app]# tar zxvf redis-3.2.8.tar.gz
[root@localhost app]# cd redis-3.2.8
[root@localhost redis-3.2.8]# yum -y install gcc gcc-c++
[root@localhost redis-3.2.3]# make    # use "make MALLOC=libc" if no memory allocator is specified and the build fails with: zmalloc.h:50:31: fatal error: jemalloc/jemalloc.h: No such file or directory (the allocator manages memory fragmentation)
Hint: It's a good idea to run 'make test' ;)
make[1]: Leaving directory `/app/redis-3.2.3/src'
[root@localhost redis-3.2.3]#
After make finishes, the build prints "It's a good idea to run 'make test'", suggesting we run make test. Running make test tells us whether the build has any problems; if all tests pass, the build is fine:
[root@localhost redis-3.2.3]# cd src/
[root@localhost src]# make test
You need tcl 8.5 or newer in order to run the Redis test
make: *** [test] Error 1
[root@localhost src]# cd /app/
[root@localhost app]# wget
[root@localhost app]# tar zxvf tcl8.6.1-src.tar.gz
[root@localhost app]# cd tcl8.6.1/unix/
[root@localhost unix]# ./configure
[root@localhost unix]# make && make install
[root@localhost unix]# cd /app/redis-3.2.3/
[root@localhost redis-3.2.3]# make clean
[root@localhost redis-3.2.3]# make
[root@localhost redis-3.2.3]# cd src/
[root@localhost src]# make test
  215 seconds - integration/replication-3
  212 seconds - integration/replication-4
  98 seconds - unit/hyperloglog
  151 seconds - unit/obuf-limits
\o/ All tests passed without errors
Cleaning up: may take some time... OK
[root@localhost src]# make install    # or "make PREFIX=/app/redis install"; note PREFIX must be uppercase
Hint: It's a good idea to run 'make test' ;)
    INSTALL install
[root@localhost src]#
Note: the following errors are likely to occur at the make test step:
Error one
!!! WARNING The following tests failed:
*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
Cleanup: may take some time... OK
make: *** [test] Error 1
[root@localhost src]#
There are two ways to avoid this:
1. Edit tests/integration/replication-psync.tcl in the unpacked source directory and change `after 100` to `after 500`; this parameter appears to be the number of milliseconds to wait.
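The change in step 1 can also be applied non-interactively with sed instead of opening vim; a minimal sketch that demonstrates the substitution on a throwaway stand-in file (in practice, point sed at tests/integration/replication-psync.tcl in the source tree):

```shell
# Demonstrate the "after 100" -> "after 500" edit on a stand-in file.
tmp=$(mktemp)
printf '        after 100\n' > "$tmp"   # stand-in for the line in replication-psync.tcl
sed -i 's/after 100/after 500/' "$tmp"
cat "$tmp"
rm -f "$tmp"
```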
[root@localhost redis-3.2.3]# vim tests/integration/replication-psync.tcl
    if ($reconnect) {
        for {set j 0} {$j < $duration*10} {incr j} {
            after 500
            # catch {puts "MASTER [$master dbsize] keys, SLAVE [$slave dbsize] keys"}
[root@localhost redis-3.2.3]#

2. Run make test pinned to a single CPU with taskset:
taskset -c 1 make test

Error 2

[exception]: Executing test client: NOREPLICAS Not enough good slaves to write..
NOREPLICAS Not enough good slaves to write.
......
Killing still running Redis server 63439
Killing still running Redis server 63486
Killing still running Redis server 63519
Killing still running Redis server 63546
Killing still running Redis server 63574
Killing still running Redis server 63591
I/O error reading reply
......
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 4)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3]"
    (file "tests/helpers/bg_complex_data.tcl" line 10)
Killing still running Redis server 21198
make: *** [test] Error 1
[root@localhost src]# vim ../tests/integration/replication-2.tcl
start_server {tags {"repl"}} {
    start_server {} {
        test {First server should have role slave after SLAVEOF} {
            r -1 slaveof [srv 0 host] [srv 0 port]
            after 10000    # changed to 10000
            s -1 role
        } {slave}

Error 3

[err]: Slave should be able to synchronize with the master in tests/integration/replication-psync.tcl
Replication not started.

For this error, simply rerunning make test worked; I only hit it once.

3. Configuring Redis

Start the server:

[root@localhost src]# redis-server ../redis.conf
(Redis 3.2.3 startup banner: standalone mode, port 6379, PID 35217)
35217:M 29 Mar 11:30:21.454 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
35217:M 29 Mar 11:30:21.454 # Server started, Redis version 3.2.3
35217:M 29 Mar 11:30:21.454 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
35217:M 29 Mar 11:30:21.454 * The server is now ready to accept connections on port 6379
^C35217:signal-handler (1490758240) Received SIGINT scheduling shutdown...
35217:M 29 Mar 11:30:40.180 # User requested shutdown...
35217:M 29 Mar 11:30:40.180 * Saving the final RDB snapshot before exiting.
35217:M 29 Mar 11:30:40.191 * DB saved on disk
35217:M 29 Mar 11:30:40.191 * Removing the pid file.
35217:M 29 Mar 11:30:40.191 # Redis is now ready to exit, bye bye...
[root@localhost src]#

Running redis-server directly like this starts the Redis service in the foreground (as shown above); if the current Linux session is closed, the Redis service closes with it. Normally, the Redis service should be started in the background, with a startup configuration file specified. Edit the conf file and set the daemonize property to yes (meaning run in the background):

[root@localhost src]# vim ../redis.conf
daemonize yes
[root@localhost src]# redis-server ../redis.conf
[root@localhost src]# netstat -anotp | grep :6379
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 35415/redis-server off (0.00/0/0)
[root@localhost src]# redis-cli shutdown    # stop the redis service
[root@localhost src]# netstat -anotp | grep :6379

Configuration

For easier management, move the conf file and the commonly used binaries from the Redis source tree into a single directory:

[root@localhost app]# mkdir -p redis6379/{log,conf,data,bin}
[root@localhost src]# cp redis-server redis-benchmark redis-cli mkreleasehdr.sh redis-check-aof /app/redis6379/bin/
[root@localhost src]# cp ../redis.conf /app/redis6379/conf/
[root@localhost src]# pwd
/app/redis-3.2.3/src
[root@localhost src]# cd /app/redis6379/bin/
[root@localhost bin]# redis-server ../conf/redis.conf
[root@localhost bin]# netstat -antp | grep :6379
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 36334/redis-server
[root@localhost bin]# cp /etc/sysctl.conf{,.bak}
[root@localhost bin]# vim /etc/sysctl.conf
vm.overcommit_memory = 1
[root@localhost bin]# sysctl -p

redis-benchmark is the Redis performance testing tool. Test Redis read/write performance:

[root@localhost bin]# ./redis-benchmark -h localhost -p 6979 -c 100 -n 100000    # 100 concurrent connections, 100000 requests
......
====== MSET (10 keys) ======
100000 requests completed in 0.94 seconds    # 100000 requests completed in 0.94 seconds
100 parallel clients    # 100 concurrent clients per request
3 bytes payload    # 3 bytes written each time
keep alive: 1    # keep 1 connection
99.92%
[root@localhost bin]# ./redis-cli -h 10.15.97.136 -p 6979 -a ywbz97.136 -n 15    # log in to database 15
[root@localhost bin]# ./redis-cli -h 10.15.97.136 -p 6979 -a ywbz97.136 info | grep used_memory    # filter the used_memory attributes
Warning: Using a password with '-a' option on the command line interface may not be safe.
used_memory:902799
[root@localhost bin]#

When used_memory_rss approaches maxmemory, or used_memory_peak exceeds maxmemory, increase maxmemory, otherwise performance will degrade.

maxmemory: do not use more memory than the configured limit. Once memory usage reaches the limit, Redis deletes keys according to the selected eviction policy (see maxmemory-policy). If Redis cannot delete keys under the policy, or the policy is set to "noeviction", Redis replies with an out-of-memory error to commands that need more memory, such as SET and LPUSH, but continues to respond normally to read-only commands such as GET. This option is quite useful when using Redis as an LRU cache, or when setting a hard memory limit for an instance (with the "noeviction" policy). When slaves are connected to an instance that has hit its memory limit, the output buffers needed to feed the slaves are not counted against used memory, so that requesting a deleted key does not trigger network problems / resynchronization events, which would leave the slaves receiving a stream of delete commands until the database is empty. In short, when slaves attach to a master, it is advisable to set the master's memory limit a bit lower, so there is enough system memory left over for the output buffers (this does not matter if the policy is "noeviction"). After the maximum memory is reached, Redis first tries to evict keys that have expired or are about to expire; if the limit is still reached after that, no further writes are possible.

Set the memory allocation policy vm.overcommit_memory = 1; otherwise the Redis script will report errors when redis is restarted or stopped, and will not be able to sync data to disk automatically before the service stops. /proc/sys/vm/overcommit_memory accepts the values 0, 1, and 2:
0: the kernel checks whether there is enough available memory for the application's request; if there is, the allocation is allowed, otherwise it fails and an error is returned to the application.
1: the kernel allows all physical memory to be allocated, regardless of the current memory state.
2: the kernel allows allocating more memory than the total of all physical memory plus swap space.

Note that when Redis dumps its data it forks a child process, and in theory the child occupies the same amount of memory as the parent; for example, if the parent uses 8G, another 8G must be allocated for the child. If memory cannot cover this, it often crashes the redis server or drives IO load up and performance down. The better memory allocation policy here is therefore 1 (the kernel allows all physical memory to be allocated regardless of the current memory state).

When the redis service shuts down, cached data is automatically dumped to disk, at the path set by the dbfilename dump.rdb entry in redis.conf.

Overview of the redis configuration file

[root@localhost conf]# vim redis.conf
# 1k  => 1000 bytes
# 1kb => 1024 bytes
# 1m  => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g  => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
# units are case insensitive so 1GB 1Gb 1gB are all the same.
# When a memory size needs to be configured, formats such as 1k, 5GB, 4m can be used; units are case-insensitive (1gb, 1Gb, 1GB, 1gB are all the same).
daemonize yes    # whether to run as a background process; default is no
pidfile /var/run/redis_6379.pid    # pid file path when running as a background process; default is /var/run/redis.pid
port 6379    # listening port; default 6379
# unixsocket /tmp/redis.sock    # path of the unix socket used to listen for connections; no default, so if not specified Redis does not listen on a unix socket
# unixsocketperm 700
timeout 300    # close a connection after the client has issued no commands for this many seconds; default 0 (never time out)
loglevel notice    # log level, 4 possible values: debug (development/testing), verbose (more detail), notice (production), warning (only warnings and errors)
logfile /app/redis6379/log/redis.log    # log file; the default is stdout
# syslog-enabled no    # whether to also log to syslog
# syslog-ident redis    # syslog identifier; has no effect if syslog-enabled is no
# syslog-facility local0    # syslog facility, must be USER or LOCAL0-LOCAL7
slave-serve-stale-data yes    # when the link between slave and master is broken, or while the slave is still syncing with the master: if yes, the slave still answers client requests (possibly with stale data); if no, the slave replies "SYNC with master in progress" to everything except the INFO and SLAVEOF commands
databases 16    # number of databases; default 16
# snapshotting
save 900 1    # snapshot if at least 1 key changed within 900s (15 minutes)
save 300 10    # snapshot if at least 10 keys changed within 300s (5 minutes)
save 60 10000    # snapshot if at least 10000 keys changed within 60s
# if you do not need to write to disk, comment out all "save" lines
rdbcompression yes    # compress data when storing to the local database; default yes. Set to no to save some CPU in the child process, at the cost of a possibly larger dataset on disk
dbfilename dump.rdb    # local database file name; default dump.rdb
rdbchecksum yes    # whether to checksum the rdb file
dir /app/redis6379/data    # local database storage path; default ./ (the working directory)
tcp-keepalive 60    # tcp heartbeat; 60 seconds is a reasonable value to detect dead peers
# master-slave replication
# slaveof <masterip> <masterport>
# slaveof 10.0.0.12 6379    # when this machine is a slave, set the IP and port of the master
# masterauth <master-password>
# masterauth justin    # if the master requires password authentication, the slave specifies the master's password here
# repl-ping-slave-period 10    # slaves send PING to the master at this interval; default 10 seconds
# repl-timeout 60    # timeout for bulk data transfer during master-slave replication, data requests to the master, and PING responses; must be greater than repl-ping-slave-period, otherwise transfers between master and slave will time out sooner than expected
# requirepass foobared
# requirepass justin    # set the redis password
# maxclients 10000    # maximum number of simultaneous client connections; 0 means unlimited (bounded by the number of file descriptors the Redis process can open). Once this limit is reached, Redis closes all new connections with the error "max number of clients reached"
# maxmemory <bytes>    # maximum memory limit for Redis. Redis loads data into memory at startup; after the maximum memory is reached, Redis first tries to evict expired or expiring keys, and if the limit is still reached, write operations fail while read operations still succeed. Redis's vm mechanism stores keys in memory and values in the swap area
appendonly no    # whether to log after every update operation. Redis writes data to disk asynchronously by default; if this is not enabled, a power failure may lose the most recent writes, because redis only syncs data files according to the save conditions above, so some data exists only in memory for a while
# maxmemory-policy noeviction    # eviction policy: volatile-lru (default): LRU-evict only keys that have an expiry set; allkeys-lru: LRU-evict any key; volatile-random: randomly evict keys that have an expiry set; allkeys-random: randomly evict any key; volatile-ttl: evict the keys closest to expiry (by TTL); noeviction: evict nothing, return an error on write operations
# maxmemory-samples 5    # sample 5 keys at random (the default) and evict the least recently used among them
appendfilename "appendonly.aof"    # AOF file name; default appendonly.aof
appendfsync everysec    # AOF fsync policy, 3 possible values: no (let the operating system flush the cache to disk), always (call fsync() after every update operation), everysec (fsync once per second)
no-appendfsync-on-rewrite no    # whether to skip fsync while redis rewrites the AOF log file via BGREWRITEAOF once it grows to the configured percentage
vm-enabled yes    # whether to use virtual memory; default no. The VM mechanism pages data: Redis swaps rarely accessed cold data out to disk, and frequently accessed pages are swapped back from disk into memory
vm-swap-file /tmp/redis.swap    # virtual memory file path; default /tmp/redis.swap; must not be shared by multiple Redis instances
vm-max-memory 0    # memory limit for VM; 0 means no limit. 60-80% of available memory is recommended. All data beyond vm-max-memory is stored in virtual memory; however small vm-max-memory is set, all index data (Redis's keys) stays in memory, i.e. with vm-max-memory set to 0 all values actually live on disk. Default 0
vm-page-size 32    # page size, tuned to the cached content; default 32 bytes. Redis swap files are divided into many pages; an object can span multiple pages, but a page cannot be shared by multiple objects. Set vm-page-size according to the size of the stored data: the author suggests 32 or 64 bytes for many small objects, larger pages for large objects, or the default if unsure
vm-pages 134217728    # number of pages in the swap file. The page table (a bitmap marking pages as free or in use) is kept in memory, and every 8 pages on disk consume 1 byte of memory; vm-page-size * vm-pages equals the swap file size
vm-max-threads 4    # maximum number of VM IO threads; 0 means all swap file operations are serial, which can cause long delays; best not to exceed the machine's core count; default 4
include /path/to/local.conf    # include other config files, useful when multiple Redis instances on the same host share a common config while each instance keeps its own specific file
maxmemory sets the maximum memory; its value is in bytes, so watch the conversion. If maxmemory is set, an eviction policy should generally be set as well. The default policy, maxmemory-policy volatile-lru, uses LRU (Least Recently Used) to delete keys that have an expiry set. If keys are SET without an expiry, the data will fill maxmemory and write operations will no longer be possible.
volatile-lru -> evict by the LRU algorithm among keys that have an expiry set.
allkeys-lru -> evict any key by the LRU algorithm.
volatile-random -> randomly evict among keys that have an expiry set.
allkeys-random -> randomly evict any key, without distinction.
volatile-ttl -> evict the keys closest to expiry (by TTL).
noeviction -> evict nothing; return an error on write operations.
maxmemory 16777216    # sets the maximum memory; note that maxmemory is in bytes, so 16777216 is 16MB. It is generally recommended to give Redis at most 3/4 of the physical memory.
If Redis needs more memory than the configured maximum, write commands fail with the error: OOM command not allowed when used memory > 'maxmemory'.
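Since maxmemory takes a raw byte count, it is easy to be off by a factor of 1024; a quick shell sanity check (the helper function names are just for illustration):

```shell
# Convert human-readable sizes to the byte values maxmemory expects.
mb_to_bytes() { echo $(( $1 * 1024 * 1024 )); }
gb_to_bytes() { echo $(( $1 * 1024 * 1024 * 1024 )); }

mb_to_bytes 16    # prints 16777216 -- so the value above is 16MB, not 16GB
gb_to_bytes 16    # prints 17179869184
```

Alternatively, redis.conf accepts unit suffixes (e.g. `maxmemory 16gb`), per the units section at the top of the file.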
Redis has two persistence mechanisms.
1. RDB
RDB periodically snapshots what is in memory according to the configured policy; the snapshots are saved as binary files. This is Redis's default persistence mode.
save 100 10    # persist only after at least 10 keys have changed within 100s
stop-writes-on-bgsave-error yes    # stop accepting writes when a snapshot fails to save
rdbcompression yes    # compress the rdb data
rdbchecksum yes    # checksum the data to ensure the snapshot file is correct
dbfilename dump.rdb    # snapshot file name
dir /var/lib/redis    # where the snapshot is stored
2. Append-only file (AOF)
AOF forces the command for every operation to be saved to disk, which is more durable, but it is not well suited to write-heavy workloads and is not recommended as the default. The configuration is as follows:
appendonly no    # enable the append-only file
appendfilename "appendonly.aof"    # AOF file name; default appendonly.aof
no-appendfsync-on-rewrite yes    # while the log is being rewritten, do not append commands directly but buffer them, to avoid contending with the rewrite for disk IO
Force a synchronization of data to disk: redis-cli save, or redis-cli -p 6380 save (target a specific server by port number).
Use the info command to view Redis memory usage:
[root@localhost redis6379]# cd bin/
[root@localhost bin]# ./redis-cli
127.0.0.1:6379> info
# Server
redis_version:3.2.3
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:9f41557dffea5d14
redis_mode:standalone
os:Linux 2.6.32-642.el6.x86_64 x86_64
arch_bits:64    # 64-bit system
multiplexing_api:epoll    # Redis event handling mechanism
gcc_version:4.4.7
process_id:1381    # current server process id
run_id:94e9454ecd837dc2b3fc6e41499ecc058679daae    # random identifier of the Redis server (used by Sentinel and clustering)
tcp_port:6379
uptime_in_seconds:723584    # uptime in seconds
uptime_in_days:8    # uptime in days
hz:10
lru_clock:16736529
executable:/app/redis6379/bin/redis-server
config_file:/app/redis6379/conf/redis.conf
# Clients
connected_clients:1    # number of client connections
connected_slaves:0    # number of slave server connections
client_longest_output_list:0    # longest output list among currently connected clients
client_biggest_input_buf:0    # largest input buffer among currently connected clients
blocked_clients:0    # clients waiting on blocking commands (BLPOP, BRPOP, BRPOPLPUSH)
# Memory
used_memory:289912    # total memory allocated by the Redis allocator (memory occupied by redis data), in bytes
used_memory_human:283.12K    # the same total in human-readable form
used_memory_rss:1196032    # total memory allocated to Redis from the operating system's point of view (physical memory occupied by redis); consistent with the output of top, ps, and so on
used_memory_rss_human:1.14M
used_memory_peak:289912    # peak physical memory used by redis (in bytes)
used_memory_peak_human:283.12K
total_system_memory:505806848
total_system_memory_human:482.38M
used_memory_lua:37888    # memory used by the Lua engine (in bytes)
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:4.13    # memory fragmentation ratio, the ratio between used_memory_rss and used_memory
mem_allocator:jemalloc-4.0.3    # memory allocator version, the allocator chosen when Redis was compiled; can be libc, jemalloc, or tcmalloc
# Ideally, used_memory_rss should be only slightly higher than used_memory.
# When rss > used with a large gap between them, there is memory fragmentation (internal or external); mem_fragmentation_ratio shows how bad it is.
# When used > rss, part of Redis's memory has been swapped out to swap space by the operating system, and operations may show significant latency.
# When Redis frees memory, the allocator may or may not return it to the operating system. If memory is freed but not returned, used_memory may not match the Redis footprint reported by the operating system; check used_memory_peak to verify whether this is happening.
# Persistence: RDB and AOF information
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0    # whether a background save is in progress
rdb_last_save_time:1492407953
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:1    # whether append-only mode is enabled
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_current_size:0
aof_base_size:0
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0
# Stats: general statistics
total_connections_received:1    # total connections accepted
total_commands_processed:1    # number of commands processed by the server
instantaneous_ops_per_sec:0
total_net_input_bytes:31
total_net_output_bytes:5935693
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0    # total number of expired keys
evicted_keys:0    # total number of evicted keys
keyspace_hits:0    # key hits
keyspace_misses:0    # key misses
pubsub_channels:0    # subscription information
pubsub_patterns:0
latest_fork_usec:0    # duration of the most recent fork
migrate_cached_sockets:0
# Replication: master/slave replication information
role:master    # master means master server, slave means slave server
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:386.43    # CPU usage
used_cpu_user:218.25
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Commandstats: Redis command statistics
# Cluster: Redis Cluster information
cluster_enabled:0
# Keyspace: database-related statistics
db0:keys=2,expires=0,avg_ttl=0    # number of keys saved in database 0, plus expiry information
127.0.0.1:6379>
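mem_fragmentation_ratio is simply used_memory_rss divided by used_memory, so it can be recomputed from INFO output; a small sketch using the sample numbers shown above:

```shell
# Recompute mem_fragmentation_ratio from (sample) INFO output.
cat > /tmp/info_sample.txt <<'EOF'
used_memory:289912
used_memory_rss:1196032
EOF
awk -F: '/^used_memory:/{u=$2} /^used_memory_rss:/{r=$2} END {printf "%.2f\n", r/u}' /tmp/info_sample.txt
```

With these sample values the script prints 4.13, matching the ratio Redis reported.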
Configure Redis as a service
[root@localhost conf]# cp /usr/local/redis-3.2.8/utils/redis_init_script /etc/rc.d/init.d/redis
[root@localhost conf]# cat /etc/rc.d/init.d/redis
#!/bin/bash
# chkconfig: 2345 10 90
# description: Redis server is an open source, advanced key-value store.
# source function library
source /etc/rc.d/init.d/functions
port="6379"
pidfile="/var/run/redis_$port.pid"
lockfile="/var/lock/subsys/redis-server"
rootpath="/app/redis$port"
config="$rootpath/conf/redis.conf"
binpath="$rootpath/bin"
[ -r "$SYSCONFIG" ] && source "$SYSCONFIG"
redis_status() {
    status -p $pidfile redis-server
}
start() {
    if [ -e $pidfile ]; then
        echo "Redis Server already running."
        exit 1
    else
        echo -n "Starting Redis Server."
        $binpath/redis-server $config
        value=$?
        [ $value -eq 0 ] && touch $lockfile && echo "OK"
        return $value
    fi
}
stop() {
    echo -n "Stop Redis Server."
    killproc redis-server
    # $binpath/redis-cli save && $binpath/redis-cli shutdown
    value=$?
    [ $value -eq 0 ] && rm -rf $lockfile $pidfile
    return $value
}
restart() {
    stop
    start
}
case "$1" in
start)
    start;;
stop)
    stop;;
restart)
    restart;;
status)
    redis_status;;
*)
    echo $"Usage: $0 {start|stop|restart|status}"
esac
[root@localhost conf]# chkconfig --add redis
[root@localhost conf]# chkconfig --level 2345 redis on
[root@localhost conf]# chmod +x /etc/rc.d/init.d/redis
4. Testing Redis
[root@localhost src]# redis-cli    # redis-cli -h host -p port -a password
127.0.0.1:6379> set justin "WeChat ID:ityunwei2017"    # insert a key-value pair into redis with key justin and value "WeChat ID:ityunwei2017"
OK
127.0.0.1:6379> get justin    # fetch the value by key
"WeChat ID:ityunwei2017"
127.0.0.1:6379> exists justin    # check whether the key exists
(integer) 1
127.0.0.1:6379> exists justin1
(integer) 0
127.0.0.1:6379> del justin    # delete the key
(integer) 1
127.0.0.1:6379> exists justin
(integer) 0
127.0.0.1:6379> keys *    # list keys
127.0.0.1:6379> dbsize    # total number of keys
127.0.0.1:6379> quit
[root@localhost src]# redis-benchmark -h 127.0.0.1 -p 6379 -n 10 -c 50
redis-benchmark -h 127.0.0.1 -p 6379 -n 10 -c 50 sends 10 requests to the redis server with 50 concurrent clients; -n is the number of requests, -c the number of concurrent clients.
Redis does not implement its own memory pool, nor does it add anything of its own on top of the standard system memory allocator, so the performance and fragmentation behavior of the system allocator have some impact on Redis's performance. At compile time, Redis first checks whether tcmalloc should be used and, if so, replaces the standard libc functions with the corresponding tcmalloc implementations; it then checks for jemalloc; only if neither is enabled does it fall back to the memory management functions of the standard libc.
Since recent versions, jemalloc has been bundled into the Redis source package, so it can be used directly. To use tcmalloc, you need to install it yourself.
Compared with the malloc of the standard glibc library, TCMalloc has much higher efficiency and speed in memory allocation, which can greatly improve the performance of MySQL server in the case of high concurrency and reduce the system load.
tcmalloc (Thread-Caching Malloc) is part of gperftools (http://code.google.com/p/gperftools/downloads/list), so what we actually need to install is gperftools. 64-bit operating systems must first install the libunwind library (32-bit systems should not install it).
The libunwind library provides basic stack unwinding functions for programs based on 64-bit CPU and operating systems, including API for output stack tracing, API for programmatically unstacking stack, and API that supports C++ exception handling mechanism.
Compiling environment
[root@justin ~]# yum -y install gcc gcc-c++ openssl openssl-devel pcre pcre-devel
Install the tcmalloc package
wget http://download.savannah.gnu.org/releases/libunwind/libunwind-0.99-alpha.tar.gz
tar zxvf libunwind-0.99-alpha.tar.gz
cd libunwind-0.99-alpha/
CFLAGS=-fPIC ./configure
make CFLAGS=-fPIC
make CFLAGS=-fPIC install
Install the gperftools package
wget http://google-perftools.googlecode.com/files/google-perftools-1.8.1.tar.gz
tar zxvf google-perftools-1.8.1.tar.gz
cd google-perftools-1.8.1/
./configure --disable-cpu-profiler --disable-heap-profiler --disable-heap-checker --disable-debugalloc --enable-minimal
make && make install
sudo echo "/usr/local/lib" > /etc/ld.so.conf.d/usr_local_lib.conf    # if this file does not exist, create it yourself
sudo /sbin/ldconfig
cd /usr/local/lib
ln -sv libtcmalloc_minimal.so.4.1.2 libtcmalloc.so
Install the Redis package
cd redis-4.11
make PREFIX=/opt/redis USE_TCMALLOC=yes FORCE_LIBC_MALLOC=yes
make install
For the three memory allocators, tcmalloc, jemalloc, and libc, performance and fragmentation rate can be compared by writing the same amount of data with the redis-benchmark tool bundled with Redis:
1. With small payloads, that is, when each value is small, the fragmentation rate is lowest with tcmalloc at 1.01, while the libc allocator reaches 1.31.
2. Raising the payload size to 1k via the -d parameter of redis-benchmark, tcmalloc is still lowest at 1.02, while the libc allocator shows 1.04.
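The fragmentation rates quoted above correspond to the mem_fragmentation_ratio field in the output of Redis's INFO memory command: the resident set size seen by the OS divided by the bytes the allocator handed to Redis. A minimal sketch of the calculation (the byte counts below are made-up illustrative snapshots, not measured values):

```python
def fragmentation_ratio(used_memory_rss: int, used_memory: int) -> float:
    """Mirror Redis's mem_fragmentation_ratio: RSS seen by the OS
    divided by the bytes the allocator reports as handed to Redis."""
    return used_memory_rss / used_memory

# Hypothetical snapshots after loading the same data set under two allocators:
tcmalloc = fragmentation_ratio(used_memory_rss=105_906_176, used_memory=104_857_600)
libc = fragmentation_ratio(used_memory_rss=137_363_456, used_memory=104_857_600)
print(round(tcmalloc, 2))  # 1.01
print(round(libc, 2))      # 1.31
```

A ratio close to 1.0 means little allocator overhead; values well above 1 indicate fragmentation.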
IV. Configure multiple instances in Redis
Running multiple instances is fairly simple: copy the configuration file to produce distinct instances. Here we generate 100 instances as an example.
First edit the default configuration file so that the port, log file name, data file name, and so on are all identified by the port number (name them however you prefer):
[root@localhost ~]# vim /app/redis/etc/redis.conf
...
port 6379
pidfile /var/run/redis_6379.pid
logfile /app/redis/logs/redis_6379.log
dbfilename 6379.rdb
...
[root@localhost ~]# for n in `seq 8000 8099`; do cp redis.conf redis$n.conf && sed -i "s/6379/$n/g" redis$n.conf; done
[root@localhost ~]# for n in `seq 8000 8099`; do /app/redis/bin/redis-server /app/redis/etc/redis$n.conf; done
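The copy-and-sed loop above can also be sketched in a few lines of Python, assuming the template configuration uses port 6379 throughout (`write_instance_confs` is a hypothetical helper for illustration, not part of Redis):

```python
import os
import tempfile

def write_instance_confs(template: str, ports, out_dir: str):
    """Copy a base redis.conf for each port, substituting the template
    port 6379 everywhere it appears (port, pidfile, logfile, dbfilename),
    just like the sed loop above."""
    for port in ports:
        conf = template.replace("6379", str(port))
        with open(os.path.join(out_dir, f"redis{port}.conf"), "w") as f:
            f.write(conf)

template = "port 6379\npidfile /var/run/redis_6379.pid\ndbfilename 6379.rdb\n"
out = tempfile.mkdtemp()
write_instance_confs(template, range(8000, 8003), out)
print(sorted(os.listdir(out)))  # ['redis8000.conf', 'redis8001.conf', 'redis8002.conf']
```

Each generated file can then be passed to redis-server exactly as in the shell loop above.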
V. Redis master-slave configuration
The master-slave replication function of Redis is very powerful, a master can have multiple slave, and a slave can have multiple slave, thus forming a powerful multi-level server cluster architecture.
1. The Redis master needs no special configuration; configure it as normal.
2. The Redis slave needs the master specified in its configuration file.
I configure multiple instances on one host as master and slave.
[root@localhost etc]# vim redis6380.conf
# slaveof <masterip> <masterport>
slaveof 127.0.0.1 6379
# if the master has an authentication password set, masterauth must be configured as well
masterauth justin
Start the redis of master and slave and execute the info command to view the result:
Master
[root@localhost etc]# ../bin/redis-cli -h 127.0.0.1 -p 6379 -a justin
127.0.0.1:6379> info
# Replication
role:master                # this node is a master
connected_slaves:3         # connection information for the 3 slaves
slave0:ip=127.0.0.1,port=6380,state=online,offset=2314790043,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=2314790043,lag=1
slave2:ip=127.0.0.1,port=6382,state=online,offset=2314790043,lag=1
master_replid:10f3abe30f7ff3991dd6a15ccccd4ebdce1bb35a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:2314790043
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2313741468
repl_backlog_histlen:1048576
Slave
[root@localhost etc]# ../bin/redis-cli -p 6380
127.0.0.1:6380> info
# Replication
role:slave                     # this node is a slave
master_host:127.0.0.1
master_port:6379
master_link_status:up          # up means connected to the master, otherwise down
master_last_io_seconds_ago:1   # seconds since the last interaction with the master
master_sync_in_progress:0      # whether a full sync with the master is in progress
slave_repl_offset:2320002171
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:10f3abe30f7ff3991dd6a15ccccd4ebdce1bb35a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:2320002171
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2318953596
repl_backlog_histlen:1048576
At this point, a value set in the master can be fetched with get in a slave, indicating the configuration succeeded.
Slave:6380 master:6379
[root@localhost bin]# ./redis-cli -p 6380 -a abcdef
127.0.0.1:6380> keys *
(empty list or set)
127.0.0.1:6380> quit
[root@localhost bin]# ./redis-cli -p 6379 -a 123456
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> set justin 51cto
OK
127.0.0.1:6379> get justin
"51cto"
127.0.0.1:6379> quit
[root@localhost bin]# ./redis-cli -p 6379 -a 123456
127.0.0.1:6379> keys *
1) "justin"
127.0.0.1:6379> quit
[root@localhost bin]# ./redis-cli -p 6380 -a abcdef
127.0.0.1:6380> keys *
1) "justin"
127.0.0.1:6380> set justin1 51cto        # a slave cannot be written, only read
(error) READONLY You can't write against a read only slave.
127.0.0.1:6380> quit
[root@localhost bin]# ./redis-cli -p 6379 -a 123456
127.0.0.1:6379> del justin
(integer) 1
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> exit
[root@localhost bin]# ./redis-cli -p 6380 -a abcdef
127.0.0.1:6380> keys *
(empty list or set)
127.0.0.1:6380>
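The behaviour demonstrated in this session can be modelled with a toy class: writes go to the master and propagate to its slaves, while a read-only slave rejects writes. This is a deliberately naive stand-in for illustration, not the real replication protocol:

```python
class ToyRedis:
    """Toy model of the master/slave behaviour shown above: writes to the
    master propagate to slaves; a read-only slave rejects writes.
    (Hypothetical illustration only, not how Redis replication works internally.)"""

    def __init__(self, read_only=False):
        self.data = {}
        self.read_only = read_only
        self.slaves = []

    def set(self, key, value):
        if self.read_only:
            raise RuntimeError("READONLY You can't write against a read only slave.")
        self.data[key] = value
        for s in self.slaves:       # naive, synchronous "replication"
            s.data[key] = value

    def get(self, key):
        return self.data.get(key)

master = ToyRedis()
slave = ToyRedis(read_only=True)
master.slaves.append(slave)
master.set("justin", "51cto")
print(slave.get("justin"))  # 51cto
```

A write attempted on the slave raises the same READONLY error the real session shows.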
VI. Maximum cache setting
Example: maxmemory 100mb
Units: mb, gb.
By default there is no limit; if new data pushes usage beyond available memory, redis can crash and the system becomes unavailable. It is best to set maxmemory to 3/4 of physical memory, or even less, because replication and other services also need memory. Once maxmemory is set, you also need to choose a cache eviction policy.
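The "3/4 of physical memory" rule of thumb from the paragraph above is easy to compute; `suggested_maxmemory` is a hypothetical helper name used only for this sketch:

```python
def suggested_maxmemory(physical_bytes: int, fraction: float = 0.75) -> int:
    """Rule of thumb from the text: cap Redis at ~3/4 of RAM, leaving
    headroom for replication buffers and other services on the host."""
    return int(physical_bytes * fraction)

gib = 1024 ** 3
# On an 8 GiB host, the suggested cap is 6 GiB = 6144 MB:
print(suggested_maxmemory(8 * gib) // (1024 ** 2), "mb")  # 6144 mb
```

The resulting byte count (or a rounded mb/gb value) goes into the maxmemory directive.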
When the maxmemory limit is reached, the exact behavior that Redis will take is configured by the maxmemory-policy configuration directive.
(1) noeviction: return an error when the memory limit is reached and a client tries to execute a command that would use more memory (most write commands; DEL and a few others are exceptions).
(2) allkeys-lru: evict the least recently used (LRU) keys to make room for new data.
(3) volatile-lru: evict the least recently used (LRU) keys, but only among keys with an expiration set, to make room for new data.
(4) allkeys-random: evict random keys to make room for new data.
(5) volatile-random: evict random keys, but only among keys with an expiration set, to make room for new data.
(6) volatile-ttl: evict keys with an expiration set, preferring those with the shortest remaining TTL, to make room for new data.
If access follows a power-law distribution, that is, some data is accessed far more often than the rest, use allkeys-lru; if access is evenly distributed, that is, all data is accessed at the same frequency, use allkeys-random.
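The allkeys-lru idea can be sketched with an ordered map: when the key budget is exhausted, the least recently used key is evicted to make room. Note that real Redis approximates LRU by sampling a few keys rather than keeping an exact recency list; this is only an illustrative model:

```python
from collections import OrderedDict

class AllKeysLRU:
    """Minimal sketch of the allkeys-lru policy: evict the least
    recently used key once the key budget is exhausted.
    (Real Redis uses approximate, sampled LRU, not an exact list.)"""

    def __init__(self, max_keys: int):
        self.max_keys = max_keys
        self.store = OrderedDict()   # oldest (LRU) entry sits at the front

    def set(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)          # overwrite refreshes recency
        elif len(self.store) >= self.max_keys:
            self.store.popitem(last=False)       # evict the LRU key
        self.store[key] = value

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)              # reads refresh recency too
        return self.store[key]

cache = AllKeysLRU(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")          # "a" becomes the most recently used key
cache.set("c", 3)       # budget exceeded: "b" (the LRU key) is evicted
print(sorted(cache.store))  # ['a', 'c']
```

volatile-lru differs only in restricting the eviction candidates to keys that carry an expiration.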
VII. Warnings in the log when redis starts
1. The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128
The TCP backlog setting of 511 could not be applied because /proc/sys/net/core/somaxconn is set to the lower value of 128. The backlog parameter controls the size of the queue of connections that have completed the three-way handshake and are waiting to be accepted.
echo 511 > /proc/sys/net/core/somaxconn (add this line to /etc/rc.local to make it persistent)
2. overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory=1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
The overcommit_memory parameter is set to 0! Background saves may fail when memory is low. It is recommended to set vm.overcommit_memory to 1 in /etc/sysctl.conf.
Echo "vm.overcommit_memory=1" > / etc/sysctl.conf
3. You have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
Transparent huge pages are enabled, which can cause redis latency and memory-usage problems. Run echo never > /sys/kernel/mm/transparent_hugepage/enabled to fix the problem.
echo never > /sys/kernel/mm/transparent_hugepage/enabled (add this line to /etc/rc.local to persist across reboots)