2025-03-29 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
This article mainly explains how to operate Redis transactions from Python. The content is simple, clear, and easy to learn and understand. Please follow along to study how Python operates Redis transactions.
Five big data types and application scenarios

1. string: simple key-value type; the value can be a string or a number. Use scene: routine counting (number of Weibo posts, number of fans, etc.)
2. hash: a mapping table of string fields and values; especially suitable for storing objects some of whose fields may change (such as user information)
3. list: ordered, repeatable list. Use scene: message queues, etc.
4. set: unordered, non-repeatable list. Use scene: storing and computing relationships (e.g. on Weibo, followers and fans are stored in sets, and relationships can be computed through intersection, union, difference, etc.)
5. sorted set: a set with a score for each element. Use scene: all kinds of rankings

Transactions
Characteristics
1. Isolated operation: all commands in a transaction are serialized and executed in order, and will not be interrupted by commands sent by other clients.
2. No guarantee of atomicity: if one command in a Redis transaction fails to execute, the other commands are still executed; there is no rollback mechanism.
Transaction command
1. MULTI      # open a transaction (like MySQL BEGIN)
2. command 1  # queued
3. command 2  # queued
4. EXEC       # commit: execute the queued commands (like MySQL COMMIT)
5. DISCARD    # cancel the transaction (like MySQL ROLLBACK)
Use steps
# start a transaction
127.0.0.1:6379> MULTI
OK
# command 1 is queued
127.0.0.1:6379> INCR n1
QUEUED
# command 2 is queued
127.0.0.1:6379> INCR n2
QUEUED
# submit to the database for execution
127.0.0.1:6379> EXEC
1) (integer) 1
2) (integer) 1
Command error handling in transaction
# 1. The command's syntax is incorrect, so queueing the command fails and the transaction is discarded automatically. This kind of error occurs before the command is called — for example a wrong number of arguments or an unknown command name. Handling: DISCARD is executed automatically when a syntax error occurs.
127.0.0.1:6379[7]> MULTI
OK
127.0.0.1:6379[7]> get a
QUEUED
127.0.0.1:6379[7]> getsss a
(error) ERR unknown command 'getsss'
127.0.0.1:6379[7]> EXEC
(error) EXECABORT Transaction discarded because of previous errors.

# 2. The command's syntax is correct, but the operation does not match the value's type (for example a List command on a String value), so the command fails after EXEC and the transaction cannot be rolled back. There is no special handling for errors after EXEC: even if some commands in the transaction fail, the other commands are still executed.
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> set num 10
QUEUED
127.0.0.1:6379> LPOP num
QUEUED
127.0.0.1:6379> EXEC
1) OK
2) (error) WRONGTYPE Operation against a key holding the wrong kind of value
127.0.0.1:6379> get num
"10"
Think about it: why doesn't Redis support rollback? (Redis's own documentation argues that failures inside a transaction come from programming errors that should be caught during development, and that doing without rollback keeps Redis internally simple and fast.)
Pipeline

Definition: execute Redis commands in batches to reduce communication IO (network round trips)

Note: this is a client-side technique
Example
import redis

# create a connection pool and connect to redis
pool = redis.ConnectionPool(host='127.0.0.1')
r = redis.Redis(connection_pool=pool)

pipe = r.pipeline()
pipe.set('fans', 50)
pipe.incr('fans')
pipe.incrby('fans', 100)
pipe.execute()
Performance comparison
import redis

# create a connection pool and connect to redis
pool = redis.ConnectionPool(host='127.0.0.1')
r = redis.Redis(connection_pool=pool)

def withpipeline(r):
    p = r.pipeline()
    for i in range(1000):
        key = 'test1' + str(i)
        value = i + 1
        p.set(key, value)
    p.execute()

def withoutpipeline(r):
    for i in range(1000):
        key = 'test2' + str(i)
        value = i + 1
        r.set(key, value)
Python operates redis transactions
with r.pipeline(transaction=True) as pipe:
    pipe.multi()
    pipe.incr("books")
    pipe.incr("books")
    values = pipe.execute()

WATCH — optimistic lock
Function: during a transaction you can watch specified keys. When the transaction is committed, if the value of each watched key has not been modified, the commit succeeds; otherwise it fails.
# terminal 1
> watch books
OK
> multi
OK
> incr books
QUEUED
# before EXEC, open another terminal and modify the watched key:
> incr books
(integer) 1
# back in terminal 1:
> exec
(nil)    # the transaction failed
Python operation watch
# operate on an account concurrently and double the current balance
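The example the comment above describes is missing from the source, so here is a minimal sketch of the WATCH pattern. `double_balance` is written against the shape of redis-py's transactional pipeline API (watch / multi / execute, with an error raised on conflict); the `FakePipeline` and `WatchError` classes below are in-memory stand-ins I added so the sketch runs without a live Redis server — with redis-py you would use `r.pipeline(transaction=True)` and catch `redis.WatchError` instead.

```python
class WatchError(Exception):
    """Stand-in for redis.WatchError: a watched key changed before EXEC."""
    pass

class FakePipeline:
    """Minimal in-memory stand-in for redis-py's transactional pipeline."""
    def __init__(self, store):
        self.store = store      # shared dict playing the role of the database
        self.watched = {}       # key -> value observed at WATCH time
        self.queued = []        # commands queued after MULTI
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False
    def watch(self, key):
        self.watched[key] = self.store.get(key)
    def get(self, key):
        return self.store.get(key)
    def multi(self):
        self.queued = []
    def set(self, key, value):
        self.queued.append((key, value))
    def execute(self):
        for key, seen in self.watched.items():   # abort if a watched key changed
            if self.store.get(key) != seen:
                self.watched.clear()
                raise WatchError(key)
        for key, value in self.queued:
            self.store[key] = value
        self.watched.clear()

def double_balance(pipe, key='account:balance'):
    """Double the value at `key` under an optimistic lock, retrying on conflict."""
    while True:
        try:
            pipe.watch(key)                  # start watching the key
            balance = int(pipe.get(key))     # read the current value
            pipe.multi()                     # start queueing commands
            pipe.set(key, balance * 2)
            pipe.execute()                   # raises WatchError if key changed
            return balance * 2
        except WatchError:
            continue                         # another writer got there first; retry

store = {'account:balance': 100}
with FakePipeline(store) as pipe:
    print(double_balance(pipe))              # 200
print(store['account:balance'])              # 200
```

The retry loop is the essential part: with optimistic locking a conflict is not an error to surface to the user, just a signal to re-read and try again.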
Persistence definition
Store data from a volatile device (memory) on a permanent storage device (hard disk)
Why do you need persistence?
Because all the data is in memory, it must be persisted.

RDB mode (enabled by default)
1. Saves the real data
2. Saves all the database data contained in the server to the hard disk as a binary file
3. Default file name: /var/lib/redis/dump.rdb
Two ways to create rdb files
Method 1: use the SAVE or BGSAVE command in the redis terminal
127.0.0.1:6379> SAVE
OK

# characteristics of SAVE
1. While the SAVE command executes, the Redis server is blocked and cannot process command requests sent by clients; it resumes processing requests only after SAVE finishes.
2. If an RDB file already exists, the server automatically replaces the old RDB file with the new one.

127.0.0.1:6379> BGSAVE
Background saving started

# BGSAVE saves persistently in the background; the execution process is:
1. The client sends BGSAVE to the server
2. The server immediately returns 'Background saving started' to the client
3. The server fork()s a child process to do the work
4. The parent server process continues to provide service
5. After the child process has created the RDB file, it notifies the Redis server

# related configuration in /etc/redis/redis.conf
line 263: dir /var/lib/redis      # rdb file storage path
line 253: dbfilename dump.rdb     # file name

# comparing the two commands: SAVE is faster than BGSAVE, because BGSAVE consumes additional memory by creating a child process.
# addition: you can see what Redis has been doing by looking at the log file.
# log file: search for 'logfile' in the configuration file — logfile /var/log/redis/redis-server.log
Method 2: set the configuration file so saving happens automatically when conditions are met (most used)
# redis configuration file defaults
line 218: save 900 1      # if at least 900 seconds have passed since the last RDB file was created, and the server's databases have been modified at least 1 time, automatically execute BGSAVE
line 219: save 300 10     # likewise for 300 seconds and at least 10 modifications
line 220: save 60 10000   # likewise for 60 seconds and at least 10000 modifications

1. As long as any one of the three conditions is met, the server automatically executes BGSAVE
2. After each RDB file is created, the time counter and modification counter the server keeps for automatic persistence are reset to zero and start counting again, so the effects of multiple save conditions do not accumulate

# these settings can also be executed on the command line [not recommended]
redis> save 60 10000
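The save conditions described above amount to a simple OR over (seconds, changes) pairs. A minimal sketch of that decision logic (the function name and counters are illustrative, not Redis internals):

```python
# Auto-save rule: BGSAVE triggers when, for ANY (seconds, changes) pair,
# at least `seconds` have passed since the last RDB save AND at least
# `changes` modifications have happened since then.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]   # redis.conf: save 900 1, ...

def should_bgsave(seconds_since_last_save, changes_since_last_save,
                  rules=SAVE_RULES):
    return any(seconds_since_last_save >= s and changes_since_last_save >= c
               for s, c in rules)

print(should_bgsave(70, 15000))   # True: matches "save 60 10000"
print(should_bgsave(100, 5))      # False: no rule satisfied
print(should_bgsave(950, 1))      # True: matches "save 900 1"
```

After a real BGSAVE both counters reset, which is why, as the text notes, the conditions never accumulate across saves.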
Shortcomings of RDB
1. To create an RDB file, the data of all the server's databases must be saved, which is a very resource- and time-consuming operation, so it takes the server a while to create a new RDB file. That is, RDB files cannot be created too frequently, or server performance suffers seriously.
2. Data may be lost.

AOF (Append Only File)
1. Commands are stored, not the real data
2. Not enabled by default

# how to enable (modify the configuration file)
1. /etc/redis/redis.conf
   line 672: appendonly yes                  # change no to yes
   line 676: appendfilename "appendonly.aof"
2. restart the service: sudo /etc/init.d/redis-server restart
The principle and advantages of AOF persistence
# principle
1. Whenever a command that modifies the database is executed, that command is appended to the AOF file.
2. Because all the database-modifying commands the server has executed are stored in the AOF file, given an AOF file the server can restore the database by re-executing all the commands it contains.

# advantage: users can tune AOF persistence to their own needs, so that Redis loses no data at all in an unexpected outage, or loses at most one second of data — far less than with RDB persistence.
Special instructions
# Although the server writes each executed database-modifying command to the AOF file, this does not by itself mean AOF persistence loses no data. In today's common operating systems, when a program calls the write function to write content to a file, the system usually does not write the content directly to the hard disk; to improve efficiency, the content is put into a memory buffer, and it is only actually written to the hard disk once the buffer fills up (or is flushed).

# therefore
1. AOF persistence is only safe for a command once that command has actually been written to the hard disk; only then can it not be lost in an outage.
2. The number of commands AOF persistence loses in an outage depends on when the commands were written to the hard disk.
3. The earlier a command is written to the hard disk, the less data is lost in an unexpected outage, and vice versa.
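The layering described above can be seen in a few lines of Python: `write()` may leave data in a user-space buffer, `flush()` hands it to the OS page cache, and `os.fsync()` is the step that actually forces it onto the disk — the step the appendfsync policies control. (The file name here is just for the demo.)

```python
import os

path = 'aof_demo.log'
with open(path, 'w') as f:
    f.write("SET msg 'hello'\n")   # sits in Python's user-space buffer
    f.flush()                      # hand it to the OS page cache
    os.fsync(f.fileno())           # force the OS to write it to the disk

with open(path) as f:
    data = f.read()
os.remove(path)                    # clean up the demo file
print(data, end='')
```

Until the `fsync` call, a power loss could discard the buffered command even though `write()` returned successfully — exactly the window the three policies trade off.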
Policies — configuration file
# Open the configuration file /etc/redis/redis.conf and find the relevant policies:
line 701: appendfsync always    # every time the server executes a write command, it writes the buffered commands to the hard disk; even if the server shuts down unexpectedly, no successfully executed command data is lost
line 702: appendfsync everysec  # (default) the server writes the buffered commands to the hard disk once per second; even in an unexpected outage, at most 1 second of data is lost
line 703: appendfsync no        # the server does not actively write commands to the hard disk; the operating system decides when to flush the buffer, so the number of lost commands is uncertain

# speed comparison: always is slow; everysec and no are very fast. The default is everysec.
AOF rewriting
Consider: won't the AOF file accumulate a lot of redundant commands?
To keep the size of the AOF file within a reasonable range and avoid unbounded growth, Redis provides the AOF rewrite feature, through which the server can generate a new AOF file:
- the database data recorded by the new AOF file is exactly the same as that recorded by the original AOF file
- the new AOF file uses as few commands as possible to record the database data, so the new AOF file is usually much smaller
- during AOF rewriting, the server is not blocked and can handle client command requests normally
Example
# original AOF file
SELECT 0
SADD myset peiqi
SADD myset qiaozhi
SADD myset danni
SADD myset lingyang
INCR number
INCR number
DEL number
SET message 'hello world'
SET message 'hello tarena'
RPUSH mylist 1 2 3
RPUSH mylist 5
LPOP mylist

# after the AOF file is rewritten
SELECT 0
SADD myset peiqi qiaozhi danni lingyang
SET message 'hello tarena'
RPUSH mylist 2 3 5
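The rewrite idea in the example above — replay the log to get the final state, then emit the fewest commands that reproduce it — can be shown with a toy model. This handles string keys only (SET/INCR/DEL) and is my simplification, not Redis's actual rewrite code:

```python
# Toy AOF rewriter for string keys: replay the logged commands to compute
# the final state, then emit one SET per key that survives.
def rewrite_aof(commands):
    state = {}
    for cmd, *args in commands:
        if cmd == 'SET':
            state[args[0]] = args[1]
        elif cmd == 'INCR':
            state[args[0]] = int(state.get(args[0], 0)) + 1
        elif cmd == 'DEL':
            state.pop(args[0], None)
    # minimal log: the current value of each remaining key, once
    return [('SET', key, str(value)) for key, value in state.items()]

old = [('INCR', 'number'), ('INCR', 'number'), ('DEL', 'number'),
       ('SET', 'message', 'hello world'), ('SET', 'message', 'hello tarena')]
print(rewrite_aof(old))   # [('SET', 'message', 'hello tarena')]
```

Five logged commands collapse to one: `number` was incremented twice and then deleted, so it leaves no trace, and only the last SET of `message` matters.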
AOF rewrite — triggers
1. The client sends the BGREWRITEAOF command to the server:
127.0.0.1:6379> BGREWRITEAOF
Background append only file rewriting started
2. Modify the configuration file so the server executes BGREWRITEAOF automatically:
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# explanation
1. A rewrite is only triggered when the AOF file has grown by more than 100% (doubled) since the last rewrite and is at least 64mb
# first rewrite: around 64mb
# second rewrite: around 128mb
# third rewrite: around 256mb
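A rough model of how those two settings combine (an assumption about the decision logic, not Redis's exact internals): the file must be at least `auto-aof-rewrite-min-size`, and must have grown by `auto-aof-rewrite-percentage` relative to its size after the previous rewrite.

```python
MB = 1024 * 1024

def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * MB):
    if current_size < min_size:
        return False                  # too small to bother rewriting
    # growth relative to the size right after the last rewrite
    growth = (current_size - base_size) * 100 // max(base_size, 1)
    return growth >= percentage

print(should_rewrite_aof(128 * MB, 64 * MB))  # True: doubled since last rewrite
print(should_rewrite_aof(100 * MB, 64 * MB))  # False: grew only ~56%
print(should_rewrite_aof(32 * MB, 16 * MB))   # False: below the 64mb floor
```

This also explains the 64mb → 128mb → 256mb progression above: each rewrite resets the base, so the next trigger point doubles.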
Comparison of RDB and AOF persistence
RDB persistence:
- full backup: saves the entire database each time
- long interval between saves
- data restore speed is fast
- the SAVE command blocks the server, but a manually or automatically triggered BGSAVE does not

AOF persistence:
- incremental backup: saves one database-modifying command at a time
- short interval between writes, by default one second
- data restore speed is slower: there are many redundant commands, so restoring is slow
- does not block the server, neither in normal operation nor during AOF rewriting
# If you use Redis to store real data where nothing may be lost, you need always. In practice some Redis instances serve as caches and others store real data, and you can run multiple Redis services so that different businesses use different persistence. Sina, for example, runs four Redis services on each server and thousands across the whole business, each business with its own persistence level.
Data recovery (no manual operation required)
If both dump.rdb and appendonly.aof exist, which one is used for recovery? appendonly.aof is consulted first.
Summary of common configuration of configuration files
# set a password
1. requirepass password
# open remote connections
2. bind 127.0.0.1 ::1    # comment this out
3. protected-mode no     # change the default yes to no
# rdb persistence — default configuration
4. dbfilename 'dump.rdb'
5. dir /var/lib/redis
# rdb persistence — automatic trigger conditions
6. save 900 1
7. save 300 10
8. save 60 10000
# aof persistence on
9. appendonly yes
10. appendfilename 'appendonly.aof'
# aof persistence policy
11. appendfsync always
12. appendfsync everysec    # default
13. appendfsync no
# aof rewrite trigger
14. auto-aof-rewrite-percentage 100
15. auto-aof-rewrite-min-size 64mb
# set as a slave server
16. slaveof
Storage path of Redis-related files
1. Configuration file: /etc/redis/redis.conf
2. Backup files: /var/lib/redis/*.rdb | *.aof
3. Log file: /var/log/redis/redis-server.log
4. Startup file: /etc/init.d/redis-server
# /etc/ stores configuration files
# /etc/init.d/ stores service startup files

Redis master-slave replication
Definition
1. A Redis service can have multiple replicas; the service itself is called the master, and the replicas are called slaves
2. The master constantly synchronizes its own data updates to the slaves, keeping master and slaves in sync
3. Only the master can execute write commands; slaves can only execute read commands
Purpose

Share the read load (high concurrency)
Principle
The slave servers execute the read commands sent by clients, such as GET, LRANGE, SMEMBERS, HGET, ZRANGE, etc. Clients can connect to slaves to execute read requests, reducing the read pressure on the master.
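A hypothetical client-side router sketch for the split just described: read commands go to a replica, everything else goes to the master. The names here (`pick_server`, `READ_COMMANDS`) are illustrative, not a redis-py API:

```python
import random

# Read commands like those listed above can be served by any replica;
# writes must go to the master.
READ_COMMANDS = {'GET', 'LRANGE', 'SMEMBERS', 'HGET', 'ZRANGE'}

def pick_server(command, master, slaves):
    if command.upper() in READ_COMMANDS and slaves:
        return random.choice(slaves)   # spread reads across the replicas
    return master                      # writes (and the no-replica case) hit master

print(pick_server('SET', 'master:6379', ['slave:6380', 'slave:6381']))  # master:6379
print(pick_server('GET', 'master:6379', []))                            # master:6379
```

Note the caveat this glosses over: replication is asynchronous, so a read routed to a slave may briefly see stale data.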
Implementation methods
Method 1 (Linux command line implementation)
redis-server --slaveof <master-ip> <master-port> --masterauth <master-password>
# start the slave server
redis-server --port 6300 --slaveof 127.0.0.1 6379
# connect with a client
redis-cli -p 6300
127.0.0.1:6300> keys *    # the data replicated from the original redis on port 6379 is found
127.0.0.1:6300> set mykey 123
(error) READONLY You can't write against a read only slave.
# a slave server can only read data, not write it
Method 2 (Redis command line implementation)
# two commands
1. > slaveof IP PORT
2. > slaveof no one
# start the server
redis-server --port 6301
# connect with a client
tarena@tedu:~$ redis-cli -p 6301
127.0.0.1:6301> keys *
1) "myset"
2) "mylist"
127.0.0.1:6301> set mykey 123
OK
# switch to being a slave of 6379
127.0.0.1:6301> slaveof 127.0.0.1 6379
OK
127.0.0.1:6301> set newkey 456
(error) READONLY You can't write against a read only slave.
127.0.0.1:6301> keys *
1) "myset"
2) "mylist"
# switch back to being a master
127.0.0.1:6301> slaveof no one
OK
127.0.0.1:6301> set name hello
OK
Method 3 (using configuration files)
# every redis service has a configuration file corresponding to it
# two redis services:
1. 6379 -> /etc/redis/redis.conf
2. 6300 -> /home/tarena/redis_6300.conf
# modify the configuration file
vi redis_6300.conf
slaveof 127.0.0.1 6379
port 6300
# start the redis service
redis-server /home/tarena/redis_6300.conf
# client connection test
redis-cli -p 6300
127.0.0.1:6300> hset user:1 username guods
(error) READONLY You can't write against a read only slave.
Question: what if master hangs up?
1. A master can have multiple slaves
2. When a slave goes offline, read-request processing capacity degrades
3. When the master goes offline, write requests cannot be executed
4. One of the slaves can run the SLAVEOF no one command to become master, and the other slaves run SLAVEOF to point at the new master and synchronize data from it
# the above process is manual; to get automatic failover, Sentinel is needed
Demo
1. Start a redis on port 6400 and make it a slave of 6379
redis-server --port 6400
redis-cli -p 6400
redis> slaveof 127.0.0.1 6379
2. Start a redis on port 6401 and make it a slave of 6379
redis-server --port 6401
redis-cli -p 6401
redis> slaveof 127.0.0.1 6379
3. Stop the 6379 redis
sudo /etc/init.d/redis-server stop
4. Promote the 6400 redis to master
redis-cli -p 6400
redis> slaveof no one
5. Make the 6401 redis a slave of the 6400 redis
redis-cli -p 6401
redis> slaveof 127.0.0.1 6400
# this is manual, inefficient, and slow — is there an automatic way? Sentinel
Redis Sentinel

1. Sentinel constantly checks whether the master and its slaves are healthy
2. Each Sentinel can monitor any number of masters and the slaves under each master
Case demonstration
1. Environment construction
# 3 redis services in total
1. Start the 6379 redis server
sudo /etc/init.d/redis-server start
2. Start a 6380 redis server and make it a slave of 6379
redis-server --port 6380
tarena@tedu:~$ redis-cli -p 6380
127.0.0.1:6380> slaveof 127.0.0.1 6379
OK
3. Start a 6381 redis server and make it a slave of 6379
redis-server --port 6381
tarena@tedu:~$ redis-cli -p 6381
127.0.0.1:6381> slaveof 127.0.0.1 6379
OK
2. Install and configure the Sentinel
# 1. Install redis-sentinel
sudo apt install redis-sentinel
verification: sudo /etc/init.d/redis-sentinel stop
# 2. Create a new configuration file sentinel.conf
port 26379
sentinel monitor tedu 127.0.0.1 6379 1
# 3. Start the sentinel
method 1: redis-sentinel sentinel.conf
method 2: redis-server sentinel.conf --sentinel
# 4. Terminate the master's redis service and check whether a slave is promoted to master
sudo /etc/init.d/redis-server stop
# 6381 is promoted to master and the other two become its slaves
# set a new value on 6381 and view it on 6380
127.0.0.1:6381> set name tedu
OK
# start 6379 again and observe the log: 6381 stays master and 6379 becomes its slave
# master-slave replication + sentinel is basically enough
sentinel.conf explained
# the port sentinel listens on; default is 26379
port 26379
# tell sentinel to monitor a master at address ip:port. The master-name can be chosen freely; quorum is a number indicating how many sentinels must consider a master down before it is treated as failed
sentinel monitor <master-name> <ip> <port> <quorum>
# if the master has a password, add this configuration
sentinel auth-pass <master-name> <password>
# how long a master must be unreachable before it is considered down; the default is 30 seconds
sentinel down-after-milliseconds <master-name> <milliseconds>
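The quorum parameter boils down to a vote count. A toy illustration (my naming, not Sentinel's internals): the master is treated as objectively down only when at least `quorum` sentinels report it subjectively down.

```python
def objectively_down(subjective_votes, quorum):
    """True when at least `quorum` sentinels consider the master down.
    `subjective_votes` holds one bool per sentinel's own view."""
    return sum(1 for down in subjective_votes if down) >= quorum

print(objectively_down([True, True, False], 2))   # True: 2 of 3 agree
print(objectively_down([True, False, False], 2))  # False: only 1 vote
```

This is why quorum is usually set to a majority of the sentinels: a single sentinel with a flaky network link cannot trigger a failover on its own.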
Getting the master from Python
from redis.sentinel import Sentinel

# create a Sentinel connection
sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
# get a connection to the current master
master = sentinel.master_for('tedu', socket_timeout=0.1, db=1)
# get a connection to a slave
slave = sentinel.slave_for('tedu', socket_timeout=0.1, db=1)
# use ordinary redis commands
master.set('mymaster', 'yes')
print(slave.get('mymaster'))

Thank you for reading. The above covers how to operate Redis transactions from Python. After studying this article you should have a deeper understanding of the topic; specific usage still needs to be verified in practice.