Shulou (shulou.com) — SLTechnology News & Howtos, Development. Reported 06/02; updated 2025-01-18.
This article walks through common Redis interview questions and answers. The material is straightforward and practical; readers preparing for an interview may find it a useful review.
What is Redis?
Interviewer: First, tell me what Redis is.
Me: (a summary of Redis's definition and characteristics) Redis is an open-source (BSD-licensed), high-performance key-value in-memory database written in C. It can be used as a database, a cache, or message middleware.
It is a NoSQL (not-only SQL, i.e. non-relational) database.
I paused, then continued. As an in-memory database, Redis offers:
Excellent performance: data lives in memory, so reads and writes are very fast, on the order of 100,000 QPS.
A single-threaded command-processing core (hence no lock contention), built on an I/O multiplexing mechanism.
Rich data types: strings, hashes, lists, sets, sorted sets, and more.
Data persistence: in-memory data can be saved to disk and reloaded on restart.
Master-slave replication and Sentinel for high availability.
Use as a distributed lock.
Use as message middleware, with publish/subscribe support.
Five data types
Interviewer: That's a good summary — it seems you came well prepared. You just mentioned that Redis supports five data types. Can you briefly describe them? Me: Of course, but before that, I think it helps to understand how Redis's internal memory management represents these five types.
With that, I took a pen and drew a picture for the interviewer:
Me: First of all, Redis internally uses a redisObject structure to represent every key and value. Its main fields are shown in the figure above: type records the concrete data type of the value object, and encoding records how that type is stored inside Redis. For example, type=string means the value holds an ordinary string, and its encoding may then be raw or int.
I paused, then briefly ran through the five types:
① String is the most basic Redis type — one key maps to one value, much like Memcached. The value can be a plain string or a number. Strings are binary-safe, meaning a Redis string can hold any data, such as a JPEG image or a serialized object. A string value can store up to 512 MB.
② Hash is a collection of field-value pairs — a mapping table of string fields to string values — and is particularly well suited to storing objects. Common commands: hget, hset, hgetall, etc.
③ List is a simple list of strings kept in insertion order. Elements can be added at the head (left) or tail (right): lpush, rpush, lpop, rpop, lrange (fetch a range of the list), and so on. Application scenarios: List has many uses and is one of Redis's most important data structures; for example, Twitter-style following lists and follower lists can be implemented with it. Implementation: Redis's List is a doubly linked list, which supports reverse lookup and traversal (convenient, at the cost of extra memory). Its push and pop operations make it usable as a message queue, and Redis also provides APIs to query or delete a segment of the list directly.
④ Set is an unordered collection of strings, implemented with a hash table, so its elements are unordered and unique. Common commands: sadd, spop, smembers, sunion, etc. Application scenario: Set offers List-like functionality, except that it deduplicates automatically and can test whether a member belongs to the collection.
⑤ Zset (sorted set), like Set, is a collection of unique string elements. Common commands: zadd, zrange, zrem, zcard, etc. Usage scenario: a sorted set attaches an extra score parameter to each member and keeps members sorted by it — insertion is ordered, i.e. automatic sorting. When you need an ordered, duplicate-free collection, choose the sorted set. Compared with Set, each element carries a double-precision weight (score), and Redis orders the members from small to large by score. Implementation: internally Redis uses a hash map plus a skip list (skiplist). The hash map stores the member-to-score mapping, while all members are stored in the skip list, sorted by the scores held in the hash map. The skip-list structure yields efficient lookups and is relatively simple to implement.
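As an illustration of the hash-map-plus-skip-list design described above, here is a minimal, self-contained Java sketch (class and method names are my own, not Redis's): a member-to-score map gives O(1) score lookup, while a skip-list-backed set keeps members ordered by score for range queries.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListSet;

class MiniZSet {
    // member -> score: mirrors the hash map Redis keeps for O(1) ZSCORE lookups
    private final Map<String, Double> scores = new HashMap<>();
    // members ordered by (score, member): mirrors the skip list Redis keeps for range queries
    private final ConcurrentSkipListSet<String> ordered = new ConcurrentSkipListSet<>(
            Comparator.comparingDouble((String m) -> scores.get(m))
                      .thenComparing(Comparator.naturalOrder()));

    // ZADD-like: insert a member or update its score
    void zadd(String member, double score) {
        if (scores.containsKey(member)) {
            ordered.remove(member); // remove while the old score is still in place
        }
        scores.put(member, score);
        ordered.add(member);
    }

    // ZSCORE-like: O(1) via the hash map
    Double zscore(String member) {
        return scores.get(member);
    }

    // ZRANGE 0 -1 equivalent: all members in ascending score order, via the skip list
    List<String> zrangeAll() {
        return new ArrayList<>(ordered);
    }
}
```

Note the order of operations in zadd: the member must be removed from the ordered view before its score changes, because the skip list navigates by the current score.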
Summary of data type application scenarios:
Interviewer: It sounds like you've done quite a bit of work with this — you must have used Redis as a cache, right? Me: Yes. Interviewer: Can you tell me how you use it? Me: I use it with Spring Boot. There are generally two approaches: one is to use RedisTemplate directly, and the other is to integrate Redis through Spring Cache (that is, via annotations).
Redis caching
Whether using RedisTemplate directly or integrating through Spring Cache, pom.xml needs the following dependencies:
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-pool2</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.session</groupId>
        <artifactId>spring-session-data-redis</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
spring-boot-starter-data-redis: since Spring Boot 2.x the underlying client is Lettuce rather than Jedis. commons-pool2: the Redis connection pool; an error is reported at startup if it is missing. spring-session-data-redis: brings in Spring Session backed by Redis, for shared sessions.
Configuration file application.yml:
server:
  port: 8082
  servlet:
    session:
      timeout: 30m
spring:
  cache:
    type: redis
  redis:
    host: 127.0.0.1
    port: 6379
    password:
    # Redis has 16 logical databases by default; which one to use is set here (default 0)
    database: 0
    lettuce:
      pool:
        # maximum number of connections in the pool (a negative value means no limit); default 8
        max-active: 100
Create the entity class User.java:
import java.io.Serializable;

public class User implements Serializable {

    private static final long serialVersionUID = 662692455422902539L;

    private Integer id;
    private String name;
    private Integer age;

    public User() {}

    public User(Integer id, String name, Integer age) {
        this.id = id;
        this.name = name;
        this.age = age;
    }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }

    @Override
    public String toString() {
        return "User{" + "id=" + id + ", name='" + name + '\'' + ", age=" + age + '}';
    }
}
How to use RedisTemplate
By default, Spring Boot only auto-configures a RedisTemplate<String, String> (i.e. StringRedisTemplate), which can only store strings, so a customized template is needed.
Add the configuration class RedisCacheConfig.java:
@Configuration
@AutoConfigureAfter(RedisAutoConfiguration.class)
public class RedisCacheConfig {

    @Bean
    public RedisTemplate<String, Object> redisCacheTemplate(LettuceConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        template.setConnectionFactory(connectionFactory);
        return template;
    }
}
Test class:
@RestController
@RequestMapping("/user")
public class UserController {

    public static Logger logger = LogManager.getLogger(UserController.class);

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Autowired
    private RedisTemplate<String, Object> redisCacheTemplate;

    @RequestMapping("/test")
    public void test() {
        redisCacheTemplate.opsForValue().set("userkey", new User(1, "Zhang San", 25));
        User user = (User) redisCacheTemplate.opsForValue().get("userkey");
        logger.info("current object from cache: {}", user.toString());
    }
}
Then visit http://localhost:8082/user/test in a browser and watch the application log:
Image
Integrate Redis with Spring Cache
Spring Cache is quite flexible: it can use SpEL (Spring Expression Language) to define cache keys and various conditions, provides an out-of-the-box in-memory cache, and also integrates with mainstream caches such as EhCache, Redis, and Guava. Define the interface UserService.java:
public interface UserService {
    User save(User user);
    void delete(int id);
    User get(Integer id);
}
Implementation class UserServiceImpl.java:
@Service
public class UserServiceImpl implements UserService {

    public static Logger logger = LogManager.getLogger(UserServiceImpl.class);

    private static Map<Integer, User> userMap = new HashMap<>();

    static {
        userMap.put(1, new User(1, "Xiao Zhan", 25));
        userMap.put(2, new User(2, "Wang Yibo", 26));
        userMap.put(3, new User(3, "Yang Zi", 24));
    }

    @CachePut(value = "user", key = "#user.id")
    @Override
    public User save(User user) {
        userMap.put(user.getId(), user);
        logger.info("entered save, storing: {}", user.toString());
        return user;
    }

    @CacheEvict(value = "user", key = "#id")
    @Override
    public void delete(int id) {
        userMap.remove(id);
        logger.info("entered delete, removed successfully");
    }

    @Cacheable(value = "user", key = "#id")
    @Override
    public User get(Integer id) {
        logger.info("entered get, current object: {}",
                userMap.get(id) == null ? null : userMap.get(id).toString());
        return userMap.get(id);
    }
}
To keep the demo simple, a Map (userMap) stands in for the database. The heart of this are three annotations:
@Cacheable
@CachePut
@CacheEvict
Test class: UserController
@RestController
@RequestMapping("/user")
public class UserController {

    public static Logger logger = LogManager.getLogger(UserController.class);

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Autowired
    private RedisTemplate<String, Object> redisCacheTemplate;

    @Autowired
    private UserService userService;

    @RequestMapping("/test")
    public void test() {
        redisCacheTemplate.opsForValue().set("userkey", new User(1, "Zhang San", 25));
        User user = (User) redisCacheTemplate.opsForValue().get("userkey");
        logger.info("current object from cache: {}", user.toString());
    }

    @RequestMapping("/add")
    public void add() {
        User user = userService.save(new User(4, "Li Xian", 30));
        logger.info("added user: {}", user.toString());
    }

    @RequestMapping("/delete")
    public void delete() {
        userService.delete(4);
    }

    @RequestMapping("/get/{id}")
    public void get(@PathVariable("id") String idStr) throws Exception {
        if (StringUtils.isBlank(idStr)) {
            throw new Exception("id is empty");
        }
        Integer id = Integer.parseInt(idStr);
        User user = userService.get(id);
        logger.info("user fetched: {}", user.toString());
    }
}
Note: to use the cache, annotate the startup class to enable caching:
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
@EnableCaching
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
① First call the add API.
Image
② Then call the query API to fetch the user with id=4:
Image
The data is fetched from the cache, because the add step already put the id=4 user into Redis. ③ Next call the delete method, which removes the id=4 user and evicts its cache entry:
Image
④ Call the query API again for id=4:
Image
This time there is no cache entry, so execution enters the get method and reads from userMap.
Cache annotations
① @Cacheable caches a method's result based on its parameters:
key: the cache key; may be empty. If specified, it is written as a SpEL expression; if not, the key is built from all of the method's parameters.
value: the cache name; at least one must be specified, e.g. @Cacheable(value = "user") or @Cacheable(value = {"user1", "user2"}).
condition: the caching condition; may be empty. Written in SpEL and evaluating to true or false, the result is cached only when it is true.
② @CachePut also caches a method's result based on its parameters but, unlike @Cacheable, it invokes the real method on every call. Its parameters are as described above. ③ @CacheEvict clears cache entries according to its conditions:
key: as above.
value: as above.
condition: as above.
allEntries: whether to clear the entire cache; defaults to false. If true, all entries are evicted immediately after the method call.
beforeInvocation: whether to evict before the method executes; defaults to false. If true, the cache is cleared before execution; by default, if the method throws an exception, nothing is evicted.
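To make the difference between @Cacheable and @CachePut concrete, here is a plain-Java sketch of their semantics — no Spring involved, and all names are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CacheSemantics {
    final Map<Integer, String> cache = new ConcurrentHashMap<>();
    int methodCalls = 0; // counts invocations of the "real" method

    String compute(int id) {
        methodCalls++;
        return "value-" + id;
    }

    // @Cacheable semantics: invoke the real method only on a cache miss
    String cacheable(int id) {
        return cache.computeIfAbsent(id, this::compute);
    }

    // @CachePut semantics: always invoke the real method, then refresh the cache
    String cachePut(int id) {
        String v = compute(id);
        cache.put(id, v);
        return v;
    }

    // @CacheEvict semantics: remove the entry
    void cacheEvict(int id) {
        cache.remove(id);
    }
}
```

A repeated cacheable call runs the underlying method once; repeated cachePut calls run it every time.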
Cache issues
Interviewer: Your demo is easy to follow. Do you know what problems come up when using a cache in real projects?
Me: Cache/database consistency. Inconsistency between the cache and the database arises very easily in a distributed environment, so if the project requires strong consistency, do not use a cache. We can only adopt strategies that reduce the probability of inconsistency — a suitable cache-update strategy, updating the cache promptly after updating the database, and retrying when a cache update fails — but we cannot guarantee strong consistency between the two.
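The "update the database, then refresh the cache, retrying on failure" strategy can be sketched with in-memory stand-ins (all names here are hypothetical; real code would talk to an actual database and to Redis):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CacheAside {
    final Map<Integer, String> db = new ConcurrentHashMap<>();    // stand-in for the database
    final Map<Integer, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis

    /** Update the DB first, then evict the cache entry, retrying the eviction a few times. */
    void update(int id, String value) {
        db.put(id, value);
        for (int attempt = 0; attempt < 3; attempt++) {
            if (evict(id)) return;
        }
        // in a real system: enqueue the failed eviction for a later retry
    }

    boolean evict(int id) {
        cache.remove(id);
        return true; // a real Redis DEL can fail; this stand-in always succeeds
    }

    /** Cache-aside read: serve from cache, filling it from the DB on a miss. */
    String read(int id) {
        return cache.computeIfAbsent(id, db::get);
    }
}
```

This narrows the inconsistency window but does not eliminate it, matching the answer above: the probability of inconsistency drops, strong consistency is not achieved.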
Interviewer: Do you understand Redis cache avalanche?
Me: As far as I know, e-commerce home pages and hot data are generally cached. The cache is usually refreshed by scheduled tasks, or updated on cache misses. Scheduled refreshes have a problem. For example: suppose every home-page key has a 12-hour TTL and is refreshed at noon. At midnight a surge of users arrives — say 6,000 requests per second. The cache could have absorbed 5,000 of those, but at that moment every key in the cache has expired, so all 6,000 requests per second fall on the database, which cannot cope and may simply go down. If there is no contingency plan, the DBA anxiously restarts the database, only to see it immediately killed again by the incoming traffic. That is what I understand as a cache avalanche.
I thought to myself: with a large-scale simultaneous expiry, Redis is effectively gone, and that volume of requests hitting the database directly is close to catastrophic. If the user service's database goes down, almost every interface in other services that depends on it starts failing; without circuit breakers or similar safeguards, everything collapses almost instantly, and no matter how often you restart, returning users knock it over again — and by the time it is finally back up, they have gone to bed cursing the product.
The interviewer nodded: Well, not bad. So how do you deal with this situation?
Me: Cache avalanche is easy to mitigate. When batch-loading data into Redis, add a random offset to each key's expiration time, so the keys cannot all expire at the same moment:
setRedis(key, value, time + Math.random() * 10000);
If Redis is deployed as a cluster, distributing hot data evenly across the nodes also prevents a total failure. Alternatively, set hot data to never expire and update the cache whenever the underlying data is updated (for example, when operations staff update the home page, refresh the cache then rather than relying on a TTL). This works well for relatively static data such as an e-commerce home page.
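A minimal, self-contained sketch of the jitter idea (the method name is hypothetical; a real implementation would pass the result to Redis's EXPIRE or SETEX):

```java
import java.util.concurrent.ThreadLocalRandom;

class TtlJitter {
    /**
     * Returns a TTL in seconds: the base TTL plus a random offset of up to
     * maxJitterSeconds, so that keys written together do not expire together.
     */
    static long jitteredTtlSeconds(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }
}
```

Usage might look like `setex(key, TtlJitter.jitteredTtlSeconds(12 * 3600, 600), value)` (hypothetical call): a 12-hour TTL spread over an extra 10-minute window.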
Interviewer: Do you know about cache penetration and cache breakdown? Can you tell me how they differ from an avalanche?
Me: Yes. Cache penetration refers to requests for data that exists in neither the cache nor the database, issued continuously by users (or attackers). For example: if our database ids auto-increment from 1, requests for id=-1 or for ids that do not exist bypass the cache every time; a sustained attack of this kind puts great pressure on the database and can seriously damage it.
I went on: cache breakdown resembles an avalanche but is narrower. An avalanche is a large-scale cache expiry crushing the database; breakdown concerns a single very hot key that carries a large volume of concurrent requests. The moment that key expires, the sustained concurrency falls directly on the database, punching a hole through the cache at that one point.
The interviewer looked pleased: How do you solve each of them?
Me: For cache penetration I add validation at the interface layer — user authentication, parameter checks — rejecting obviously invalid requests immediately, for example basic sanity checks on the id.
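A minimal sketch of the interface-layer id validation mentioned above (the helper class is hypothetical, not from the original project):

```java
class RequestGuard {
    /** Rejects ids that cannot exist: auto-increment ids start at 1. */
    static boolean isValidId(String idStr) {
        if (idStr == null || idStr.isEmpty()) return false;
        try {
            return Long.parseLong(idStr) >= 1; // negative or zero ids cannot be in the table
        } catch (NumberFormatException e) {
            return false; // not a number at all
        }
    }
}
```

Requests failing this check are rejected before they ever touch the cache or the database, which blunts the id=-1 style attack described above.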