2025-01-18 Update · From: SLTechnology News & Howtos (shulou.com) > Development
This article introduces how to use Spring Boot with a Redis Bloom filter to stop malicious traffic from penetrating the cache. Many people run into exactly this problem in real projects, so let's walk through how to handle it. I hope you read it carefully and get something out of it!
The details are as follows:
What is malicious traffic penetration
Suppose Redis holds the registered email addresses of our users, with the email as the key; each one corresponds to some fields of the User table in the DB.
Generally speaking, when a legitimate request comes in we first check in Redis whether the user is a member, because reading from the cache returns quickly. Only if the member is not in the cache do we look it up in the DB.
Now imagine thousands of requests from different IPs (don't think it can't happen — we saw it in 2018 and 2019, because this attack is very cheap to mount) hitting your site with keys that don't exist in Redis. Picture the flow:
The request arrives at the web server
The request is dispatched to the application layer -> microservice layer
The request goes to Redis for the data; the key does not exist in Redis
So the request falls through to the DB layer, a connection is established and a query runs
Tens of millions or even hundreds of millions of DB connection requests — never mind whether Redis can hold up, the DB will be blown apart in an instant. This is "Redis penetration" (often mentioned alongside "cache breakdown"): it can overwhelm your cache and your DB together and set off a chain of "avalanche" effects.
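The request path above is the classic cache-aside lookup. Here is a minimal, self-contained sketch of it (the two maps stand in for Redis and the User table; all names are illustrative): note how every query for a non-existent key falls straight through to the "DB", which is exactly what the attacker exploits.

```java
import java.util.HashMap;
import java.util.Map;

public class CacheAsideDemo {
    static Map<String, String> redis = new HashMap<>(); // stands in for the Redis cache
    static Map<String, String> db = new HashMap<>();    // stands in for the User table
    static int dbQueries = 0;                           // counts how often the DB is hit

    static String findUserByEmail(String email) {
        String cached = redis.get(email);
        if (cached != null) {
            return cached;                              // cache hit: fast path
        }
        dbQueries++;                                    // cache miss: fall through to the DB
        String fromDb = db.get(email);
        if (fromDb != null) {
            redis.put(email, fromDb);                   // backfill the cache
        }
        return fromDb;                                  // null for keys that exist nowhere
    }

    public static void main(String[] args) {
        db.put("jack@163.com", "Jack");
        findUserByEmail("jack@163.com");                // one legitimate miss, then cached
        findUserByEmail("jack@163.com");                // served from cache, DB untouched
        // 1000 requests for keys that exist nowhere: every single one reaches the DB
        for (int i = 0; i < 1000; i++) {
            findUserByEmail("no-such-user-" + i + "@163.com");
        }
        System.out.println(dbQueries);                  // prints 1001
    }
}
```

Because a non-existent key is never backfilled, nothing shields the DB from repeats of such requests — that is the gap the Bloom filter closes.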
How to prevent it
Use a Bloom filter: put all the key query fields of the user table into a Redis Bloom filter. Some people will say: that's crazy — I have 40 million members! So what? Holding 40 million members this way costs next to nothing, and some sites have 80 or even 100 million. The point is that I am not asking you to put the raw data directly into Redis, but into the Bloom filter. The keys and values are not stored as-is; what the filter stores is something like this:
A Bloom filter is a space-efficient probabilistic data structure proposed by Burton Howard Bloom in 1970. It is typically used to test whether an element belongs to a set. It is extremely space-efficient, but it allows false positives.
False positives and false negatives
Because a BloomFilter trades a certain amount of accuracy for space efficiency, it introduces the problem of false positives.
False positive
When a BloomFilter reports that an element is in the set, there is a certain error rate. This is the false positive probability, usually abbreviated fpp.
False negatives
If a BloomFilter reports that an element is not in the set, the element is definitely not in the set. The false negative probability is therefore 0.
A BloomFilter uses a bit array of m bits and k hash functions. Adding an element: hash the element k times to map it to k positions in the bit array, and set the bit at each of those positions to 1.
Querying whether an element exists: hash the element k times to get k positions. If the bits at all k positions are 1, the element is considered present; otherwise it is absent.
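The add/query steps just described can be shown with a toy bit-array filter. This is a didactic sketch, not the Redis module: it derives k positions from two cheap mixes of `String.hashCode()` (Kirsch-Mitzenmacher double hashing) rather than real murmur hashing, so the exact false-positive behavior differs, but the no-false-negative property holds.

```java
import java.util.BitSet;

public class ToyBloomFilter {
    private final BitSet bits;
    private final int m;   // number of bits
    private final int k;   // number of hash functions

    public ToyBloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    // Derive the i-th position from two base hashes: pos_i = h1 + i * h2 (mod m).
    private int position(String item, int i) {
        int h1 = item.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x5bd1e995; // second, independent-ish mix
        int combined = h1 + i * h2;
        if (combined < 0) combined = ~combined;           // keep the index non-negative
        return combined % m;
    }

    public void add(String item) {
        for (int i = 0; i < k; i++) bits.set(position(item, i)); // set k bits to 1
    }

    public boolean mightContain(String item) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(position(item, i))) return false;      // any 0 bit => definitely absent
        }
        return true;                                             // all bits 1 => probably present
    }
}
```

Every added element is always reported present; an element never added is reported present only in the rare case that all of its k bit positions were set by other elements.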
Because it stores nothing but bits, the data volume is tiny. How tiny? While writing this post I pushed 1 million emails into Redis, and the bloom filter occupied less than 3 MB.
A Bloom filter has a few key parameters from which you can estimate, for a given number of entries and target false positive rate, how much memory it will consume. There is a calculator for this at https://krisives.github.io/bloom-calculator/: enter 1 million entries and a false positive rate of 0.001%, and it works out how much memory Redis needs to allocate.
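That calculator simply evaluates the standard sizing formulas: m = -n·ln(p)/(ln 2)² bits and k = (m/n)·ln 2 hash functions. A quick check against the numbers quoted in this article (1,000,000 entries; assuming the "0.001%" in the text means p = 0.00001):

```java
public class BloomSizing {
    // Optimal number of bits m for n items at target false-positive rate p.
    static long optimalBits(long n, double p) {
        return (long) Math.ceil(-n * Math.log(p) / (Math.log(2) * Math.log(2)));
    }

    // Optimal number of hash functions k for n items in m bits.
    static int optimalHashes(long n, long m) {
        return Math.max(1, (int) Math.round((double) m / n * Math.log(2)));
    }

    public static void main(String[] args) {
        long m = optimalBits(1_000_000, 0.00001);
        int k = optimalHashes(1_000_000, m);
        // about 24 million bits, i.e. under 3 MB, and 17 hash functions
        System.out.printf("bits=%d (~%.2f MB), hashes=%d%n", m, m / 8.0 / 1024 / 1024, k);
    }
}
```

The result — roughly 24 million bits, under 3 MB — is consistent with the "1 million emails in less than 3 MB" observation above.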
So how do you deal with the false positive rate? Quite simply: when a key is mis-judged, the business side or operations reports it, and all you have to do is add it to a small whitelist. Compared with 1 million entries, a whitelist of 1,000 is nothing. And the bloom filter answers extremely fast — the key check returns to the caller within 80-100 milliseconds.
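A hedged sketch of that whitelist idea (all names are illustrative, and the `Predicate` stands in for the Redis bloom lookup): the filter's "definitely absent" answer is always trusted, while a small in-memory set overrides the handful of keys reported as mis-judged.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

public class GuardedMembershipCheck {
    private final Predicate<String> bloomMightContain;     // stands in for bf.exists against Redis
    private final Set<String> whitelist = new HashSet<>(); // keys reported as mis-judged

    public GuardedMembershipCheck(Predicate<String> bloomMightContain) {
        this.bloomMightContain = bloomMightContain;
    }

    // Ops adds a key the filter is known to handle wrongly.
    public void reportFalsePositive(String key) {
        whitelist.add(key);
    }

    // true = let the request continue to cache/DB; false = reject immediately
    public boolean allow(String key) {
        if (!bloomMightContain.test(key)) return false; // definitely not a member: reject cheaply
        if (whitelist.contains(key)) return false;      // known false positive: treat as absent
        return true;                                    // probably a member: proceed
    }
}
```

The whitelist stays tiny because false positives are rare by construction, so the extra lookup costs essentially nothing.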
Another use of the Bloom filter
Suppose a Python crawler has collected 400 million URLs that need de-duplicating.
That is exactly the kind of scenario the Bloom filter is built for.
Let's start our journey to Redis BloomFilter.
Install Bloom Filter for Redis
Redis only gained Bloom filter support from version 4.0 onward (via loadable modules), so this example uses Redis 5.
The bloom filter download address for Redis is here: https://github.com/RedisLabsModules/redisbloom.git
git clone https://github.com/RedisLabsModules/redisbloom.git
cd redisbloom
make   # compile
There are two ways to load bloom filter when Redis starts:
Manual loading:
redis-server --loadmodule ./redisbloom/rebloom.so
Loading automatically at every startup:
Edit the redis.conf file of Redis and add:
loadmodule /soft/redisbloom/redisbloom.so
Using Bloom Filter in Redis
Basic directives:
bf.reserve {key} {error_rate} {size}
127.0.0.1:6379> bf.reserve userid 0.01 100000
OK
The command above creates an empty Bloom filter with an expected error rate and an initial size. The filter's {error_rate} is between 0 and 1; to set 0.1% you would use 0.001. The closer this number is to 0, the more memory the filter consumes and the higher the CPU utilization.
bf.add {key} {item}
127.0.0.1:6379> bf.add userid '181920'
(integer) 1
The command above adds an element to the filter. If the key does not exist, the filter is created automatically.
bf.exists {key} {item}
127.0.0.1:6379> bf.exists userid '101310299'
(integer) 1
The command above checks whether the given value exists in the bloomfilter at the specified key: it returns 1 if it exists and 0 if it does not.
Use in conjunction with SpringBoot
Much of what you find online either drives Redis directly through jedis or shells out from Java to call the Redis bloom filter commands as an external process. Most of it either doesn't run or stays at hello-world level, so it cannot be used at production grade.
The code given here is fully usable by the reader.
The author is no mathematician, so the core algorithm borrows Google's guava package. The core code is as follows:
BloomFilterHelper.java
package org.sky.platform.util;

import com.google.common.base.Preconditions;
import com.google.common.hash.Funnel;
import com.google.common.hash.Hashing;

public class BloomFilterHelper<T> {

    private int numHashFunctions;
    private int bitSize;
    private Funnel<T> funnel;

    public BloomFilterHelper(Funnel<T> funnel, int expectedInsertions, double fpp) {
        Preconditions.checkArgument(funnel != null, "funnel cannot be null");
        this.funnel = funnel;
        bitSize = optimalNumOfBits(expectedInsertions, fpp);
        numHashFunctions = optimalNumOfHashFunctions(expectedInsertions, bitSize);
    }

    public int[] murmurHashOffset(T value) {
        int[] offset = new int[numHashFunctions];
        long hash64 = Hashing.murmur3_128().hashObject(value, funnel).asLong();
        int hash1 = (int) hash64;
        int hash2 = (int) (hash64 >>> 32);
        for (int i = 1; i <= numHashFunctions; i++) {
            int nextHash = hash1 + i * hash2;
            if (nextHash < 0) {
                nextHash = ~nextHash;
            }
            offset[i - 1] = nextHash % bitSize;
        }
        return offset;
    }

    /** Optimal bit array size m, from expected insertions n and false positive rate p. */
    private int optimalNumOfBits(long n, double p) {
        if (p == 0) {
            p = Double.MIN_VALUE;
        }
        return (int) (-n * Math.log(p) / (Math.log(2) * Math.log(2)));
    }

    /** Optimal number of hash functions k, from n and the bit size m. */
    private int optimalNumOfHashFunctions(long n, long m) {
        return Math.max(1, (int) Math.round((double) m / n * Math.log(2)));
    }
}

(BloomFilterHelper lives in the sky-common project, following the redis-practice -> sky-common -> nacos-parent dependency structure.)
Put the Spring Boot redis configuration in the org.sky.config package of the redis-practice project.
RedisConfig.java
package org.sky.config;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.*;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
@EnableCaching
public class RedisConfig extends CachingConfigurerSupport {

    /**
     * Use Redis as the default caching tool.
     */
    @Bean
    public CacheManager cacheManager(RedisTemplate<String, Object> redisTemplate) {
        RedisCacheManager rcm = new RedisCacheManager(redisTemplate);
        return rcm;
    }

    /**
     * RedisTemplate configuration.
     */
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        // configure the connection factory
        template.setConnectionFactory(factory);

        // use Jackson2JsonRedisSerializer for values (JDK serialization is the default)
        Jackson2JsonRedisSerializer<Object> jacksonSeial = new Jackson2JsonRedisSerializer<>(Object.class);
        ObjectMapper om = new ObjectMapper();
        // serialize all fields regardless of modifier: ANY covers private and public
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        // record type information; classes must not be final (String, Integer, etc. would fail)
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        jacksonSeial.setObjectMapper(om);

        // JSON serialization for values
        template.setValueSerializer(jacksonSeial);
        // StringRedisSerializer for keys
        template.setKeySerializer(new StringRedisSerializer());
        // hash key/value serialization
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(jacksonSeial);
        template.afterPropertiesSet();
        return template;
    }

    /** Operations on hash types. */
    @Bean
    public HashOperations<String, String, Object> hashOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForHash();
    }

    /** Operations on string types. */
    @Bean
    public ValueOperations<String, Object> valueOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForValue();
    }

    /** Operations on list types. */
    @Bean
    public ListOperations<String, Object> listOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForList();
    }

    /** Operations on set types. */
    @Bean
    public SetOperations<String, Object> setOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForSet();
    }

    /** Operations on sorted-set types. */
    @Bean
    public ZSetOperations<String, Object> zSetOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForZSet();
    }
}
This configuration not only lets Spring Boot pick up the redis settings in application.properties automatically, it also wraps the basic redis data-structure operations in convenient beans.
On top of that we need a set of Redis util helpers, which live in the sky-common project.
RedisUtil.java
package org.sky.platform.util;

import java.util.Collection;
import java.util.Date;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

import com.google.common.base.Preconditions;

@Component
public class RedisUtil {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    /** Default expiration, in seconds. */
    public static final long DEFAULT_EXPIRE = 60 * 60 * 24;
    /** No expiration. */
    public static final long NOT_EXPIRE = -1;

    public boolean existsKey(String key) {
        return redisTemplate.hasKey(key);
    }

    /** Rename a key; if newKey already exists, its original value is overwritten. */
    public void renameKey(String oldKey, String newKey) {
        redisTemplate.rename(oldKey, newKey);
    }

    /** Rename only if newKey does not exist; returns true on success. */
    public boolean renameKeyNotExist(String oldKey, String newKey) {
        return redisTemplate.renameIfAbsent(oldKey, newKey);
    }

    /** Delete a key. */
    public void deleteKey(String key) {
        redisTemplate.delete(key);
    }

    /** Delete several keys. */
    public void deleteKey(String... keys) {
        Set<String> kSet = Stream.of(keys).collect(Collectors.toSet());
        redisTemplate.delete(kSet);
    }

    /** Delete a collection of keys. */
    public void deleteKey(Collection<String> keys) {
        Set<String> kSet = keys.stream().collect(Collectors.toSet());
        redisTemplate.delete(kSet);
    }

    /** Set the time-to-live of a key. */
    public void expireKey(String key, long time, TimeUnit timeUnit) {
        redisTemplate.expire(key, time, timeUnit);
    }

    /** Expire a key at a specific date. */
    public void expireKeyAt(String key, Date date) {
        redisTemplate.expireAt(key, date);
    }

    /** Query the time-to-live of a key. */
    public long getKeyExpire(String key, TimeUnit timeUnit) {
        return redisTemplate.getExpire(key, timeUnit);
    }

    /** Make a key permanent. */
    public void persistKey(String key) {
        redisTemplate.persist(key);
    }

    /** Add a value through the given Bloom filter helper. */
    public <T> void addByBloomFilter(BloomFilterHelper<T> bloomFilterHelper, String key, T value) {
        Preconditions.checkArgument(bloomFilterHelper != null, "bloomFilterHelper cannot be null");
        int[] offset = bloomFilterHelper.murmurHashOffset(value);
        for (int i : offset) {
            redisTemplate.opsForValue().setBit(key, i, true);
        }
    }

    /** Check whether a value might exist, through the given Bloom filter helper. */
    public <T> boolean includeByBloomFilter(BloomFilterHelper<T> bloomFilterHelper, String key, T value) {
        Preconditions.checkArgument(bloomFilterHelper != null, "bloomFilterHelper cannot be null");
        int[] offset = bloomFilterHelper.murmurHashOffset(value);
        for (int i : offset) {
            if (!redisTemplate.opsForValue().getBit(key, i)) {
                return false;
            }
        }
        return true;
    }
}
RedisKeyUtil.java
package org.sky.platform.util;

public class RedisKeyUtil {

    /**
     * Key of the form: tableName:majorKey:majorKeyValue:column
     *
     * @param tableName     table name
     * @param majorKey      primary key name
     * @param majorKeyValue primary key value
     * @param column        column name
     */
    public static String getKeyWithColumn(String tableName, String majorKey, String majorKeyValue, String column) {
        StringBuffer buffer = new StringBuffer();
        buffer.append(tableName).append(":");
        buffer.append(majorKey).append(":");
        buffer.append(majorKeyValue).append(":");
        buffer.append(column);
        return buffer.toString();
    }

    /**
     * Key of the form: tableName:majorKey:majorKeyValue
     *
     * @param tableName     table name
     * @param majorKey      primary key name
     * @param majorKeyValue primary key value
     */
    public static String getKey(String tableName, String majorKey, String majorKeyValue) {
        StringBuffer buffer = new StringBuffer();
        buffer.append(tableName).append(":");
        buffer.append(majorKey).append(":");
        buffer.append(majorKeyValue);
        return buffer.toString();
    }
}
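For reference, here is the key layout the helper above produces, re-stated in condensed form so the example runs standalone (the table/column names are illustrative):

```java
public class RedisKeyDemo {
    // Condensed re-statement of RedisKeyUtil.getKeyWithColumn: table:pk:pkValue:column
    static String getKeyWithColumn(String table, String majorKey, String majorKeyValue, String column) {
        return table + ":" + majorKey + ":" + majorKeyValue + ":" + column;
    }

    public static void main(String[] args) {
        // prints user:email:jack@163.com:name
        System.out.println(getKeyWithColumn("user", "email", "jack@163.com", "name"));
    }
}
```

This colon-separated convention makes keys easy to group and scan by table or column in any redis client.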
Then there is BloomFilterHelper.java, which implements the BloomFilter logic used with redis; it also lives in the sky-common project, and since its source was posted above I won't repeat it here.
Finally, we put a UserVO in sky-common for demonstration.
UserVO.java
package org.sky.vo;

import java.io.Serializable;

public class UserVO implements Serializable {

    private String name;
    private String address;
    private Integer age;
    private String email = "";

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getAddress() {
        return address;
    }

    public void setAddress(String address) {
        this.address = address;
    }

    public Integer getAge() {
        return age;
    }

    public void setAge(Integer age) {
        this.age = age;
    }
}
Here are the pom.xml files for everything that depends on nacos-parent in our git repo. This time we added "spring-boot-starter-data-redis", which follows our global Spring Boot version:
Pom.xml of parent project
4.0.0 org.sky.demo nacos-parent 0.0.1-SNAPSHOT pom Demo project for Spring Boot Dubbo Nacos 1.8 1.5.15.RELEASE 2.7.3 4.0.1 2.8.0 1.1.20 27.0.1-jre 1.2.59 2.7.3 1.1.4 5.1.46 3.4.2 1.8.13 0.0.1-SNAPSHOT 1.8.14 -RELEASE 0.0.1-SNAPSHOT ${java.version} ${java.version} 3.8.1 3.2.3 3.1.2 UTF-8 UTF-8 Org.springframework.boot spring-boot-starter-web ${spring-boot.version} org.springframework.boot Spring-boot-dependencies ${spring-boot.version} pom import org.apache.dubbo Dubbo-spring-boot-starter ${dubbo.version} org.slf4j Slf4j-log4j12 org.apache.dubbo dubbo ${dubbo.version} org.apache.curator curator-framework ${curator-framework.version} Org.apache.curator curator-recipes ${curator-recipes.version} mysql mysql-connector-java ${mysql-connector-java.version} com.alibaba druid ${druid.version} Com.lmax disruptor ${disruptor.version} com.google.guava Guava ${guava.version} com.alibaba fastjson ${fastjson.version} Org.apache.dubbo dubbo-registry-nacos ${dubbo-registry-nacos.version} com.alibaba .nacos nacos-client ${nacos-client.version} org.aspectj aspectjweaver ${aspectj .version} org.springframework.boot spring-boot-starter-data-redis ${spring-boot.version} Org.apache.maven.plugins maven-compiler-plugin ${compiler.plugin.version} ${java.version} ${java.version} org.apache.maven.plugins maven-war-plugin ${war.plugin.version} org.apache.maven.plugins maven-jar-plugin ${jar.plugin.version}
Pom.xml file in sky-common
4.0.0 org.sky.demo skycommon 0.0.1-SNAPSHOT org.sky.demo nacos-parent 0.0.1-SNAPSHOT org.apache.curator curator-framework Org.apache.curator curator-recipes org.springframework.boot spring-boot-starter-test test Org.spockframework spock-core test org.spockframework spock-spring org.springframework.boot Spring-boot-configuration-processor true org.springframework.boot spring-boot-starter-log4j2 org.springframework.boot spring -boot-starter-web org.springframework.boot spring-boot-starter-logging Org.aspectj aspectjweaver com.lmax disruptor redis.clients jedis Com.google.guava guava com.alibaba fastjson org.springframework.boot Spring-boot-starter-data-redis
At this point our springboot + redis skeleton, util classes and bloomfilter components are all in place; next we focus on the demo project.
Demo project: redis-practice
Its pom.xml depends on nacos-parent and also references sky-common:
4.0.0 org.sky.demo redis-practice 0.0.1-SNAPSHOT Demo Redis Advanced Features org.sky.demo nacos-parent 0.0.1-SNAPSHOT org.springframework.boot Spring-boot-starter-jdbc org.springframework.boot spring-boot-starter-logging Org.apache.dubbo dubbo org.apache.curator curator-framework org.apache.curator Curator-recipes mysql mysql-connector-java com.alibaba druid Org.springframework.boot spring-boot-starter-test test org.spockframework spock-core test Org.spockframework spock-spring org.springframework.boot spring-boot-configuration-processor true org.springframework.boot Spring-boot-starter-data-redis org.springframework.boot spring-boot-starter-log4j2 org.springframework.boot spring-boot-starter-web Org.springframework.boot spring-boot-starter-logging Org.springframework.boot spring-boot-starter-tomcat org.aspectj aspectjweaver com.lmax Disruptor redis.clients jedis com.google.guava guava Com.alibaba fastjson org.sky.demo skycommon ${skycommon.version} org.springframework.boot Spring-boot-starter-data-redis src/main/java src/test/java org.springframework.boot spring-boot-maven-plugin Src/main/resources src/main/webapp META-INF/resources * / * src/main/resources true Application.properties application-$ {profileActive} .properties
Application.java for startup
package org.sky;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@EnableTransactionManagement
@ComponentScan(basePackages = {"org.sky"})
@EnableAutoConfiguration
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Then we made a controller called UserController, with two methods:
public ResponseEntity<String> addUser(@RequestBody String params) — accepts an external POST and inserts an email address into the redis bloomfilter
public ResponseEntity<String> findEmailInBloom(@RequestBody String params) — accepts an external POST and checks, against the millions of email records in the redis bloomfilter, whether the email in the submitted user information exists
These let us verify how much memory millions of records consume once stuffed into the redis bloom filter, and how fast a single lookup through the bloom filter is.
UserController.java
package org.sky.controller;

import java.util.HashMap;
import java.util.Map;

import javax.annotation.Resource;

import org.sky.platform.util.BloomFilterHelper;
import org.sky.platform.util.RedisUtil;
import org.sky.vo.UserVO;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.google.common.base.Charsets;
import com.google.common.hash.Funnel;

@RestController
@RequestMapping("user")
public class UserController extends BaseController {

    @Resource
    private RedisTemplate<String, Object> redisTemplate;

    @Resource
    private RedisUtil redisUtil;

    @PostMapping(value = "/addEmailToBloom", produces = "application/json")
    public ResponseEntity<String> addUser(@RequestBody String params) {
        ResponseEntity<String> response = null;
        String returnResultStr;
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON_UTF8);
        Map<String, Object> result = new HashMap<>();
        try {
            JSONObject requestJsonObj = JSON.parseObject(params);
            UserVO inputUser = getUserFromJson(requestJsonObj);
            BloomFilterHelper<String> myBloomFilterHelper = new BloomFilterHelper<>(
                    (Funnel<String>) (from, into) -> into.putString(from, Charsets.UTF_8)
                            .putString(from, Charsets.UTF_8),
                    1500000, 0.00001);
            redisUtil.addByBloomFilter(myBloomFilterHelper, "email_existed_bloom", inputUser.getEmail());
            result.put("code", HttpStatus.OK.value());
            result.put("message", "add into bloomFilter successfully");
            result.put("email", inputUser.getEmail());
            returnResultStr = JSON.toJSONString(result);
            logger.info("returnResultStr======>" + returnResultStr);
            response = new ResponseEntity<>(returnResultStr, headers, HttpStatus.OK);
        } catch (Exception e) {
            logger.error("add a new product with error:" + e.getMessage(), e);
            result.put("message", "add a new product with error:" + e.getMessage());
            returnResultStr = JSON.toJSONString(result);
            response = new ResponseEntity<>(returnResultStr, headers, HttpStatus.INTERNAL_SERVER_ERROR);
        }
        return response;
    }

    @PostMapping(value = "/checkEmailInBloom", produces = "application/json")
    public ResponseEntity<String> findEmailInBloom(@RequestBody String params) {
        ResponseEntity<String> response = null;
        String returnResultStr;
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON_UTF8);
        Map<String, Object> result = new HashMap<>();
        try {
            JSONObject requestJsonObj = JSON.parseObject(params);
            UserVO inputUser = getUserFromJson(requestJsonObj);
            BloomFilterHelper<String> myBloomFilterHelper = new BloomFilterHelper<>(
                    (Funnel<String>) (from, into) -> into.putString(from, Charsets.UTF_8)
                            .putString(from, Charsets.UTF_8),
                    1500000, 0.00001);
            boolean answer = redisUtil.includeByBloomFilter(myBloomFilterHelper, "email_existed_bloom",
                    inputUser.getEmail());
            logger.info("answer=====" + answer);
            result.put("code", HttpStatus.OK.value());
            result.put("email", inputUser.getEmail());
            result.put("exist", answer);
            returnResultStr = JSON.toJSONString(result);
            logger.info("returnResultStr======>" + returnResultStr);
            response = new ResponseEntity<>(returnResultStr, headers, HttpStatus.OK);
        } catch (Exception e) {
            logger.error("check email in bloom with error:" + e.getMessage(), e);
            result.put("message", "check email in bloom with error:" + e.getMessage());
            returnResultStr = JSON.toJSONString(result);
            response = new ResponseEntity<>(returnResultStr, headers, HttpStatus.INTERNAL_SERVER_ERROR);
        }
        return response;
    }

    private UserVO getUserFromJson(JSONObject requestObj) {
        String userName = requestObj.getString("username");
        String userAddress = requestObj.getString("address");
        String userEmail = requestObj.getString("email");
        int userAge = requestObj.getInteger("age");
        UserVO u = new UserVO();
        u.setName(userName);
        u.setAge(userAge);
        u.setEmail(userEmail);
        u.setAddress(userAddress);
        return u;
    }
}
Notice the use of BloomFilterHelper in UserController: I sized the Redis bloomfilter for 1.5 million entries. What happens if you store more data than the space you applied for in advance? The false positive rate goes up.
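That "over capacity raises the false positive rate" effect is easy to demonstrate with a toy bit-array filter: give it the same bit budget at two different load levels and measure how often keys that were never inserted get reported as present. (A didactic sketch — the exact rates depend on the hash mix used, but the direction of the effect does not.)

```java
import java.util.BitSet;

public class OverfillDemo {
    // Measure false positives when `inserted` keys occupy a filter of m bits / k hashes.
    static int falsePositives(int m, int k, int inserted, int probes) {
        BitSet bits = new BitSet(m);
        for (int n = 0; n < inserted; n++) setOrTest(bits, m, k, "member-" + n, true);
        int fp = 0;
        for (int n = 0; n < probes; n++) {
            // "absent-*" keys were never inserted, yet may be reported present
            if (setOrTest(bits, m, k, "absent-" + n, false)) fp++;
        }
        return fp;
    }

    // add (mutate=true) or query (mutate=false) a key; returns the membership verdict
    static boolean setOrTest(BitSet bits, int m, int k, String key, boolean mutate) {
        int h1 = key.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x5bd1e995; // second hash mix
        boolean all = true;
        for (int i = 0; i < k; i++) {
            int pos = h1 + i * h2;
            if (pos < 0) pos = ~pos;
            pos %= m;
            if (mutate) bits.set(pos);
            else if (!bits.get(pos)) all = false;
        }
        return all;
    }

    public static void main(String[] args) {
        int withinCapacity = falsePositives(8192, 5, 100, 10_000);  // lightly loaded
        int overCapacity   = falsePositives(8192, 5, 5000, 10_000); // far past its comfort zone
        System.out.println(withinCapacity + " false positives vs " + overCapacity);
    }
}
```

Once the bit array saturates, almost every probe finds all of its bits already set, so the filter degrades toward answering "probably present" for everything — which is why you size for your expected insertions up front.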
Let's run this project and see how it works.
Run the redis-practice project
We can run a small experiment with postman first.
We POST to "/addEmailToBloom" to insert the email "yumi@yahoo.com" into the redis bloom filter.
Next we POST to "/checkEmailInBloom" to verify that the email address exists.
Connecting to redis with a client confirms that the value really was inserted into the bloom filter.
Now let's use a load-testing tool to feed 1.2 million records into the Redis bloomfilter and see the real-world effect.
We use jmeter to push about 1.2 million records through "/addEmailToBloom", then look at how the system behaves once the bloom filter has absorbed 1.2 million emails via the Bloom algorithm.
I use apache-jmeter 5.0 here. To save effort I use jmeter's __RandomString function to generate a 16-character email dynamically. The user name and address are constants; only the email differs each time — a string of 16 random characters plus "@163.com".
The jmeter BeanShell snippet that builds the random 16-character email:
String useremail = "${__RandomString(16,abcdefghijklmnop)}" + "@163.com";
vars.put("random_email", useremail);
The jmeter test plan is set to 75 threads, running continuously for 30 minutes. (In practice the author ran three rounds; in this demo environment each 30-minute round inserted roughly 400,000 records.)
Jmeter post request
Then we use the jmeter command line to run the test plan:
jmeter -n -t add_randomemail_to_bloom.jmx -l add_email_to_bloom\report\03-result.csv -j add_email_to_bloom\logs\03-log.log -e -o add_email_to_bloom\html_report_3
The flags mean:
-n: run in non-GUI mode
-t: path to the jmeter test-plan file
-l: results file (CSV) to write
-j: log file to write
-e: generate an HTML report after the run, used together with -o
-o: directory for the HTML report; created if missing, and it must be empty if it already exists
Press Enter and the run starts.
It runs until the plan completes and control returns to the command prompt.
Let's look at the jmeter HTML report generated with -e -o. As mentioned, I ran it three times: the first run inserted 70,059 records in 10 minutes, the second more than 400,000 in 30 minutes, and the third more than 700,000 in 45 minutes — 1,200,790 emails in total.
The total memory these 1.2 million records consume in redis does not exceed 8 MB — see the zabbix recording of the demo environment below.
With the 1.2 million records loaded, we pick a random email from the log4j output — for example egpoghnfjekjajdo@163.com — and see how fast the redis bloomfilter finds it: 76 ms. I ran this many times; on average it is around 80 ms.
This example shows how little memory the emails occupy once hashed into the bloomfilter as bits, and how efficient the queries are.
In production we often load tens of millions or hundreds of millions of records into a bloomfilter and then use it for anti-penetration or de-duplication.
As long as any key absent from the bloomfilter is rejected straight back to the client as false, then combined with dynamically scaled nginx, CDN, WAF and interface-layer caching, it is quite feasible for the whole site to withstand six- or even seven-digit concurrency.
This concludes "SpringBoot + Redis Bloom filter to prevent malicious traffic from breaking through the cache". Thank you for reading.