2025-01-16 Update From: SLTechnology News&Howtos
This article explains how to implement a high-performance second-kill (flash sale) system in Java. The content is detailed, the steps are laid out in order, and each iteration builds on the previous one; I hope you get something out of it.
First, take a look at the final architecture diagram:
Let's briefly walk through the flow of a request according to this diagram, because however much we optimize later, this part stays the same:
The frontend request enters the Web layer; the corresponding code is the Controller.
The Controller then forwards the real work, such as inventory verification and order creation, to the Service layer; the RPC calls still use Dubbo, upgraded to the ** version.
Finally, the Service layer persists the data and the order is complete.
* system
Leaving aside the second kill scenario, a normal order issuing process can be simply divided into the following steps:
Check inventory
Deduct inventory
Create an order
Pay
Based on the architecture above, we have the following implementation. First, look at the structure of the actual project:
It's still the same as before:
An API module that the Service layer implements and the Web layer consumes.
The Web layer is a plain Spring MVC application.
The Service layer is where the data is actually persisted.
SSM-SECONDS-KILL-ORDER-CONSUMER is the Kafka consumer that will come up later.
The database also has only two simple tables to simulate placing an order:
```sql
CREATE TABLE `stock` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(50) NOT NULL DEFAULT '' COMMENT 'name',
  `count` int(11) NOT NULL COMMENT 'inventory',
  `sale` int(11) NOT NULL COMMENT 'sold',
  `version` int(11) NOT NULL COMMENT 'optimistic lock, version number',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

CREATE TABLE `stock_order` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sid` int(11) NOT NULL COMMENT 'inventory ID',
  `name` varchar(30) NOT NULL DEFAULT '' COMMENT 'commodity name',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'order creation time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=55 DEFAULT CHARSET=utf8;
```
Web layer Controller implementation:
```java
@Autowired
private StockService stockService;

@Autowired
private OrderService orderService;

@RequestMapping("/createWrongOrder/{sid}")
@ResponseBody
public String createWrongOrder(@PathVariable int sid) {
    logger.info("sid=[{}]", sid);
    int id = 0;
    try {
        id = orderService.createWrongOrder(sid);
    } catch (Exception e) {
        logger.error("Exception", e);
    }
    return String.valueOf(id);
}
```
The Web layer acts as a consumer and calls the Dubbo service exposed by OrderService.
In the Service layer, the OrderService implementation first implements the API (the interface is defined in the API module):
```java
@Service
public class OrderServiceImpl implements OrderService {

    @Resource(name = "DBOrderService")
    private com.crossoverJie.seconds.kill.service.OrderService orderService;

    @Override
    public int createWrongOrder(int sid) throws Exception {
        return orderService.createWrongOrder(sid);
    }
}
```
This simply delegates to the DBOrderService implementation, which is where the data actually lands, that is, where it is written to the database.
DBOrderService implementation:
```java
@Transactional(rollbackFor = Exception.class)
@Service(value = "DBOrderService")
public class OrderServiceImpl implements OrderService {

    @Resource(name = "DBStockService")
    private com.crossoverJie.seconds.kill.service.StockService stockService;

    @Autowired
    private StockOrderMapper orderMapper;

    @Override
    public int createWrongOrder(int sid) throws Exception {
        // check inventory
        Stock stock = checkStock(sid);
        // deduct inventory
        saleStock(stock);
        // create order
        int id = createOrder(stock);
        return id;
    }

    private Stock checkStock(int sid) {
        Stock stock = stockService.getStockById(sid);
        if (stock.getSale().equals(stock.getCount())) {
            throw new RuntimeException("insufficient inventory");
        }
        return stock;
    }

    private int saleStock(Stock stock) {
        stock.setSale(stock.getSale() + 1);
        return stockService.updateStockById(stock);
    }

    private int createOrder(Stock stock) {
        StockOrder order = new StockOrder();
        order.setSid(stock.getId());
        order.setName(stock.getName());
        int id = orderMapper.insertSelective(order);
        return id;
    }
}
```
Ten items of stock are initialized in advance. Manually calling the /createWrongOrder/1 API shows:
Inventory table
Order table
Everything seems fine and the data looks correct. But when testing concurrently with JMeter:
The test configuration is 300 concurrent threads, run for two rounds. Then look at the database:
All requests responded successfully and the inventory was indeed deducted, but 124 order records were generated. This is a classic oversell.
(Calling the API manually at this point does return "insufficient inventory", but by then it is too late.)
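The oversell comes from a non-atomic check-then-act: two requests can both read the same `sale` value, both pass the "still in stock" check, and both insert an order. A minimal stand-alone sketch (plain Java, no database; the numbers and names are illustrative, not from the project) forces exactly that interleaving with a barrier:

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class OversellDemo {
    static final int COUNT = 10;                            // total inventory
    static final AtomicInteger sale = new AtomicInteger(9); // 9 already sold, 1 left

    public static void main(String[] args) throws Exception {
        CyclicBarrier bothHaveRead = new CyclicBarrier(2);
        Runnable buyer = () -> {
            int seen = sale.get();                          // step 1: check inventory
            try { bothHaveRead.await(); } catch (Exception ignored) { }
            if (seen < COUNT) {                             // step 2: act on a stale read
                sale.incrementAndGet();                     // deduct inventory + "create order"
            }
        };
        Thread t1 = new Thread(buyer);
        Thread t2 = new Thread(buyer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Both threads saw sale = 9, so both passed the check: sale is now 11 > COUNT.
        System.out.println("sale = " + sale.get());         // prints "sale = 11"
    }
}
```

The barrier guarantees both threads finish reading before either writes, which is precisely the window JMeter's 300 threads kept hitting.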
Optimistic lock update
How do we avoid the phenomenon above? The simplest approach is, of course, an optimistic lock. Let's look at the implementation:
In fact, nothing else changes; the main change is in the Service layer:
```java
@Override
public int createOptimisticOrder(int sid) throws Exception {
    // check inventory
    Stock stock = checkStock(sid);
    // optimistic lock update of inventory
    saleStockOptimistic(stock);
    // create order
    int id = createOrder(stock);
    return id;
}

private void saleStockOptimistic(Stock stock) {
    int count = stockService.updateStockByOptimistic(stock);
    if (count == 0) {
        throw new RuntimeException("concurrent update inventory failed");
    }
}
```
Corresponding XML:
```sql
UPDATE stock
SET sale = sale + 1,
    version = version + 1
WHERE id = #{id,jdbcType=INTEGER}
  AND version = #{version,jdbcType=INTEGER}
```
Under the same test conditions, run the test again against /createOptimisticOrder/1:
This time both the inventory and the orders are correct.
Checking the logs:
Many concurrent requests fail with an error response, which is exactly the protection we wanted.
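The version column turns the update into an atomic compare-and-set: only the request whose version still matches wins, and losers fail fast instead of retrying. The same idea can be sketched in memory with `AtomicInteger.compareAndSet` (an illustrative stand-in for the SQL above, not project code; here the sale counter itself plays the role of the version column):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticLockDemo {
    static final int COUNT = 10;                      // total inventory
    static final AtomicInteger sale = new AtomicInteger(0);

    /** One purchase attempt; no retry on conflict, just fail fast. */
    static boolean tryBuy() {
        int s = sale.get();                           // read current state
        if (s >= COUNT) {
            return false;                             // sold out
        }
        // Atomic equivalent of "UPDATE ... WHERE version = #{version}":
        // succeeds only if nobody changed sale since we read it.
        return sale.compareAndSet(s, s + 1);
    }

    static int run(int buyers) throws InterruptedException {
        AtomicInteger wins = new AtomicInteger();
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < buyers; i++) {
            Thread t = new Thread(() -> {
                if (tryBuy()) {
                    wins.incrementAndGet();
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return wins.get();
    }

    public static void main(String[] args) throws Exception {
        int wins = run(300);                          // 300 concurrent buyers, as in the JMeter run
        // Never more than COUNT successes, regardless of interleaving.
        System.out.println("wins = " + wins + ", sale = " + sale.get());
    }
}
```

Some attempts that lose the race fail even while stock remains, which matches the "concurrent update inventory failed" errors seen in the logs.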
Improve throughput
To further improve the throughput and response time of the second kill, both the Web and Service applications here are scaled out:
The Web tier uses Nginx for load balancing.
Service also runs as multiple applications.
The effect can be seen directly when testing with JMeter.
Since I tested on a low-bandwidth Aliyun server with a modest configuration, and all the applications sit on the same machine, the performance advantage is not fully visible (Nginx forwarding also adds extra network overhead).
Shell script implements simple CI
Because the application is deployed in multiple places, I'm sure everyone has felt the pain of releasing and testing by hand.
I don't have the energy to build a full CI/CD pipeline this time, so I wrote a simple script for automated deployment, hoping it gives some inspiration to those with little experience in this area.
Build the Web:
```shell
#!/bin/bash

# build web consumer

appname="consumer"
echo "input=$appname"
PID=$(ps -ef | grep $appname | grep -v grep | awk '{print $2}')

# loop over the pids and kill them
for var in ${PID[@]};
do
    echo "loop pid=$var"
    kill -9 $var
done
echo "kill $appname success"

cd ..
git pull
cd SSM-SECONDS-KILL
mvn -Dmaven.test.skip=true clean package
echo "build war success"

cp /home/crossoverJie/SSM/SSM-SECONDS-KILL/SSM-SECONDS-KILL-WEB/target/SSM-SECONDS-KILL-WEB-2.2.0-SNAPSHOT.war /home/crossoverJie/tomcat/tomcat-dubbo-consumer-8083/webapps
echo "cp tomcat-dubbo-consumer-8083/webapps ok!"
cp /home/crossoverJie/SSM/SSM-SECONDS-KILL/SSM-SECONDS-KILL-WEB/target/SSM-SECONDS-KILL-WEB-2.2.0-SNAPSHOT.war /home/crossoverJie/tomcat/tomcat-dubbo-consumer-7083-slave/webapps
echo "cp tomcat-dubbo-consumer-7083-slave/webapps ok!"

sh /home/crossoverJie/tomcat/tomcat-dubbo-consumer-8083/bin/startup.sh
echo "tomcat-dubbo-consumer-8083/bin/startup.sh success"
sh /home/crossoverJie/tomcat/tomcat-dubbo-consumer-7083-slave/bin/startup.sh
echo "tomcat-dubbo-consumer-7083-slave/bin/startup.sh success"

echo "start $appname success"
```
Build the Service:
```shell
#!/bin/bash

# build service provider

appname="provider"
echo "input=$appname"
PID=$(ps -ef | grep $appname | grep -v grep | awk '{print $2}')

# if [ $? -eq 0 ]; then
#     echo "process id:$PID"
# else
#     echo "process $appname not exit"
#     exit
# fi

# loop over the pids and kill them
for var in ${PID[@]};
do
    echo "loop pid=$var"
    kill -9 $var
done
echo "kill $appname success"

cd ..
git pull
cd SSM-SECONDS-KILL
mvn -Dmaven.test.skip=true clean package
echo "build war success"

cp /home/crossoverJie/SSM/SSM-SECONDS-KILL/SSM-SECONDS-KILL-SERVICE/target/SSM-SECONDS-KILL-SERVICE-2.2.0-SNAPSHOT.war /home/crossoverJie/tomcat/tomcat-dubbo-provider-8080/webapps
echo "cp tomcat-dubbo-provider-8080/webapps ok!"
cp /home/crossoverJie/SSM/SSM-SECONDS-KILL/SSM-SECONDS-KILL-SERVICE/target/SSM-SECONDS-KILL-SERVICE-2.2.0-SNAPSHOT.war /home/crossoverJie/tomcat/tomcat-dubbo-provider-7080-slave/webapps
echo "cp tomcat-dubbo-provider-7080-slave/webapps ok!"

sh /home/crossoverJie/tomcat/tomcat-dubbo-provider-8080/bin/startup.sh
echo "tomcat-dubbo-provider-8080/bin/startup.sh success"
sh /home/crossoverJie/tomcat/tomcat-dubbo-provider-7080-slave/bin/startup.sh
echo "tomcat-dubbo-provider-7080-slave/bin/startup.sh success"

echo "start $appname success"
```
After that, whenever there is an update, I just execute these two scripts and they rebuild and redeploy automatically. They use only basic Linux commands, which I'm sure everyone can follow.
Optimistic lock update + distributed rate limiting
The results above seem fine, but there is actually still a long way to go. 300 simulated concurrent requests are no problem, but what happens at 3,000, 30,000, or even 3,000,000?
Scaling out can support more requests, but can we solve the problem with the fewest resources?
Looking closely, we find that if there are only 10 items in stock, then no matter how many people come to buy, at most 10 of them can place an order successfully, so 99% of the requests are wasted.
As we all know, for most applications the database is the straw that breaks the camel's back. Use Druid's monitoring to see how the database was being hit before:
Because Service runs as two applications:
The database also holds more than 20 connections. How can this be optimized? Distributed rate limiting naturally comes to mind.
We keep the concurrency within a controllable range and fail fast beyond it, which protects the system to some extent.
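The component used below implements this with Redis, keying a counter by `System.currentTimeMillis() / 1000`. The core idea, a counter per one-second window that rejects requests once it exceeds the limit, can be sketched first in plain Java (an illustrative single-process stand-in, not the distributed implementation):

```java
/** Fixed-window rate limiter: at most `limit` requests per one-second window. */
public class FixedWindowLimiter {
    private final int limit;
    private long window = -1;   // current one-second window
    private int counter = 0;    // requests seen in this window

    public FixedWindowLimiter(int limit) {
        this.limit = limit;
    }

    public synchronized boolean tryAcquire(long nowMillis) {
        long w = nowMillis / 1000;      // same trick the Redis version uses for its key
        if (w != window) {              // a new second: start a fresh counter
            window = w;
            counter = 0;
        }
        counter++;
        return counter <= limit;        // fail fast once over the limit
    }

    public static void main(String[] args) {
        FixedWindowLimiter limiter = new FixedWindowLimiter(5);
        int passed = 0;
        for (int i = 0; i < 10; i++) {  // 10 requests arriving within the same second
            if (limiter.tryAcquire(1_000L)) {
                passed++;
            }
        }
        System.out.println("passed = " + passed); // prints "passed = 5"
    }
}
```

Moving the counter into Redis makes the same window shared across all Web instances, which is what makes the limit "distributed".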
① distributed-redis-tool upgraded to v1.0.3
Because all requests pass through Redis once this component is added, the Redis resources must also be used carefully.
② API updates
The updated usage is as follows:
```java
@Configuration
public class RedisLimitConfig {

    private Logger logger = LoggerFactory.getLogger(RedisLimitConfig.class);

    @Value("${redis.limit}")
    private int limit;

    @Autowired
    private JedisConnectionFactory jedisConnectionFactory;

    @Bean
    public RedisLimit build() {
        RedisLimit redisLimit = new RedisLimit.Builder(jedisConnectionFactory, RedisToolsConstant.SINGLE)
                .limit(limit)
                .build();
        return redisLimit;
    }
}
```
The builder now takes a JedisConnectionFactory, so it has to be used together with Spring.
It also specifies at initialization whether Redis is deployed as a cluster or standalone (a cluster is strongly recommended; even with rate limiting in place there is still some pressure on Redis).
③ Rate limiting implementation
Since the API has been updated, the implementation naturally needs to change as well:
```java
/**
 * limit traffic
 * @return true if the request is allowed
 */
public boolean limit() {
    // get connection
    Object connection = getConnection();
    Object result = limitRequest(connection);
    if (FAIL_CODE != (Long) result) {
        return true;
    } else {
        return false;
    }
}

private Object limitRequest(Object connection) {
    Object result = null;
    String key = String.valueOf(System.currentTimeMillis() / 1000);
    if (connection instanceof Jedis) {
        result = ((Jedis) connection).eval(script, Collections.singletonList(key),
                Collections.singletonList(String.valueOf(limit)));
        ((Jedis) connection).close();
    } else {
        result = ((JedisCluster) connection).eval(script, Collections.singletonList(key),
                Collections.singletonList(String.valueOf(limit)));
        try {
            ((JedisCluster) connection).close();
        } catch (IOException e) {
            logger.error("IOException", e);
        }
    }
    return result;
}

private Object getConnection() {
    Object connection;
    if (type == RedisToolsConstant.SINGLE) {
        RedisConnection redisConnection = jedisConnectionFactory.getConnection();
        connection = redisConnection.getNativeConnection();
    } else {
        RedisClusterConnection clusterConnection = jedisConnectionFactory.getClusterConnection();
        connection = clusterConnection.getNativeConnection();
    }
    return connection;
}
```
For a native Spring web application, use the @SpringControllerLimit(errorCode = 200) annotation.
Actual use on the Web side:
```java
/**
 * optimistic lock update of inventory, with rate limiting
 * @param sid
 * @return
 */
@SpringControllerLimit(errorCode = 200)
@RequestMapping("/createOptimisticLimitOrder/{sid}")
@ResponseBody
public String createOptimisticLimitOrder(@PathVariable int sid) {
    logger.info("sid=[{}]", sid);
    int id = 0;
    try {
        id = orderService.createOptimisticOrder(sid);
    } catch (Exception e) {
        logger.error("Exception", e);
    }
    return String.valueOf(id);
}
```
The Service side is unchanged; it still updates the database with the optimistic lock.
Load test again against /createOptimisticLimitOrderByRedis/1 to see the effect:
First, the results are still correct; second, database connections and concurrent requests drop noticeably.
Optimistic lock update + distributed rate limiting + Redis cache
A close look at the Druid monitoring data shows that one SQL statement is executed many times:
It is the real-time inventory query, run before each order to determine whether any stock is left.
This is an optimization point: this kind of data can be kept in memory, which is far more efficient than hitting the database.
Since our application is distributed, an in-heap cache is clearly inappropriate; Redis fits very well here.
This time the changes are mainly in the Service layer:
Every inventory check goes to Redis.
Deducting inventory updates Redis as well.
Inventory information has to be written into Redis in advance (either manually or automatically).
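The preload step just writes the keys that the Service later reads back. A minimal sketch of that shape, with a `Map` standing in for Redis (the key prefixes are illustrative guesses at the `RedisKeysConstant` values used below, not the project's actual constants):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StockPreloader {
    // Illustrative key prefixes, mirroring the RedisKeysConstant usage in the service code.
    static final String STOCK_COUNT = "STOCK_COUNT_";
    static final String STOCK_SALE = "STOCK_SALE_";
    static final String STOCK_VERSION = "STOCK_VERSION_";

    /** Write one stock row into the cache (here a Map stands in for Redis). */
    static void preload(Map<String, String> cache, int sid, int count, int sale, int version) {
        cache.put(STOCK_COUNT + sid, String.valueOf(count));
        cache.put(STOCK_SALE + sid, String.valueOf(sale));
        cache.put(STOCK_VERSION + sid, String.valueOf(version));
    }

    public static void main(String[] args) {
        Map<String, String> cache = new ConcurrentHashMap<>();
        preload(cache, 1, 10, 0, 0);   // 10 items of stock for commodity 1, nothing sold yet
        // The service then reads these keys instead of querying the database.
        System.out.println(cache.get(STOCK_COUNT + 1)); // prints "10"
    }
}
```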
The main code is as follows:
```java
@Override
public int createOptimisticOrderUseRedis(int sid) throws Exception {
    // check inventory, fetching from Redis
    Stock stock = checkStockByRedis(sid);
    // optimistic lock update of inventory, also updating Redis
    saleStockOptimisticByRedis(stock);
    // create order
    int id = createOrder(stock);
    return id;
}

private Stock checkStockByRedis(int sid) throws Exception {
    Integer count = Integer.parseInt(redisTemplate.opsForValue().get(RedisKeysConstant.STOCK_COUNT + sid));
    Integer sale = Integer.parseInt(redisTemplate.opsForValue().get(RedisKeysConstant.STOCK_SALE + sid));
    if (count.equals(sale)) {
        throw new RuntimeException("insufficient inventory, Redis currentCount=" + sale);
    }
    Integer version = Integer.parseInt(redisTemplate.opsForValue().get(RedisKeysConstant.STOCK_VERSION + sid));
    Stock stock = new Stock();
    stock.setId(sid);
    stock.setCount(count);
    stock.setSale(sale);
    stock.setVersion(version);
    return stock;
}

/**
 * optimistic lock update of the database, then update Redis
 * @param stock
 */
private void saleStockOptimisticByRedis(Stock stock) {
    int count = stockService.updateStockByOptimistic(stock);
    if (count == 0) {
        throw new RuntimeException("concurrent inventory update failed");
    }
    // increment the cached values
    redisTemplate.opsForValue().increment(RedisKeysConstant.STOCK_SALE + stock.getId(), 1);
    redisTemplate.opsForValue().increment(RedisKeysConstant.STOCK_VERSION + stock.getId(), 1);
}
```
Load test against /createOptimisticLimitOrderByRedis/1 to see the actual effect:
The data is still correct, and both database requests and concurrency have come down further.
Optimistic lock update + distributed rate limiting + Redis cache + Kafka async
How can throughput and performance be improved yet again? All of our examples above are synchronous requests; we can turn synchronous processing into asynchronous processing to improve performance.
Here we make the order-writing and inventory-update operations asynchronous, using Kafka for decoupling and queuing.
Whenever a request passes the rate limit, reaches the Service layer, and passes the inventory check, the order information is sent to Kafka, and the request can then return immediately.
The consumer program then persists the data. Because the flow is asynchronous, the user eventually needs to be told the purchase result via a callback or some other notification.
I won't post more code here; the consumer program essentially reimplements the earlier Service-layer logic, but with Spring Boot.
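The hand-off can be sketched without a broker: below, a `BlockingQueue` stands in for the Kafka topic, the producer side returns as soon as the order message is enqueued, and a consumer thread does the slow persistence (all names and numbers here are illustrative, not the project's):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncOrderDemo {
    /** Stand-in for the Kafka topic carrying order messages. */
    static final BlockingQueue<Integer> orderTopic = new LinkedBlockingQueue<>();
    /** Stand-in for the database the consumer writes orders into. */
    static final List<Integer> ordersLanded = new ArrayList<>();

    public static void main(String[] args) throws Exception {
        // Consumer side: drain messages and "land" them (write order, update stock).
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int sid = orderTopic.take();
                    if (sid < 0) {
                        break;              // poison pill to stop the demo
                    }
                    ordersLanded.add(sid);  // the slow database work happens here
                }
            } catch (InterruptedException ignored) { }
        });
        consumer.start();

        // Producer side: after the rate limit and stock check pass, just enqueue.
        for (int i = 0; i < 5; i++) {
            orderTopic.put(1);              // the request returns immediately after this
        }
        orderTopic.put(-1);                 // shut down the demo consumer
        consumer.join();

        System.out.println("orders landed = " + ordersLanded.size()); // prints "orders landed = 5"
    }
}
```

The caller's latency is now just the enqueue, which is why the user has to be notified of the final result asynchronously.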
That's all on how to implement a high-performance second-kill system in Java. I hope the content shared here has been helpful to you.