0. Summary of experience
1. When the overall CPU usage is high and is consumed mostly by business threads, the high CPU usage is not directly related to the JVM parameter sizing but to the specific business logic.
2. When the JVM heap is set too small, frequent GC causes more business-thread pauses, lower TPS and lower CPU usage.
3. When the JVM heap is set too large, GC frequency drops, TPS rises and CPU usage rises with it.
4. Dom4j is a powerful xml parsing tool, but when handling xml text with many nodes and levels, the parsing itself can still become the bottleneck of the business processing.
I. Background description
Recently a new project went live, and one HTTP interface in it needed to be stress tested to confirm that its performance is stable. The main business of this interface is to receive an HTTP request, take the xml message from the request parameters, parse it, and store it in a MySQL database. The business flow of the interface is therefore: receive the request, parse the xml, write it to MySQL.
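The original article does not show the interface code, but a minimal sketch of the flow it describes, with hypothetical class, table and connection-string names, and assuming a servlet-style handler, Dom4j for parsing and plain JDBC for the insert, might look like this:

// Hypothetical sketch of the interface flow described above; names such as
// XmlReceiveServlet, the "xml" request parameter, t_xml_message and the JDBC URL are illustrative only.
public class XmlReceiveServlet extends javax.servlet.http.HttpServlet {

    @Override
    protected void doPost(javax.servlet.http.HttpServletRequest req,
                          javax.servlet.http.HttpServletResponse resp)
            throws java.io.IOException {
        String xml = req.getParameter("xml");                        // 1. take the xml message from the request
        try {
            org.dom4j.Document doc =
                    org.dom4j.DocumentHelper.parseText(xml);          // 2. parse it with Dom4j
            String rootName = doc.getRootElement().getName();
            try (java.sql.Connection conn = java.sql.DriverManager
                         .getConnection("jdbc:mysql://127.0.0.1:3306/test", "user", "pwd");
                 java.sql.PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO t_xml_message (root_name, body) VALUES (?, ?)")) {
                ps.setString(1, rootName);
                ps.setString(2, xml);                                  // 3. store the message in MySQL
                ps.executeUpdate();
            }
            resp.getWriter().write("OK");
        } catch (org.dom4j.DocumentException | java.sql.SQLException e) {
            resp.setStatus(500);                                       // parsing or storage failed
        }
    }
}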
The server hosting the business interface has the same configuration as the one hosting the MySQL component: 4 cores, 8 GB of memory and a 50 GB ordinary hard disk, and the two are in the same private network segment. We estimated that the performance target should be around 200 concurrent users and 500 TPS.
During the stress test we focused on metrics such as TPS, GC frequency, CPU usage and interface response time.
II. Testing process
After deploying the project, we wrote the JMeter test script: 200 concurrent threads, all started within 10 seconds, running for 15 minutes. We then launched the JMeter script to run the test.
1. The first stress test
(1) JVM configuration
Garbage collection strategy: the CMS collector is used for the old generation and ParNew for the young generation; objects are promoted to the old generation after surviving at most 15 minor GCs; explicit GC calls (System.gc()) trigger a concurrent CMS collection instead of a stop-the-world Full GC; and CMS parallel remark is enabled.
JVM memory allocation: max/min heap 512 MB, Eden to Survivor ratio 8:2, PermGen initial size 64 MB, maximum 128 MB.
The JVM configuration parameters are as follows:
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:MaxTenuringThreshold=15 -XX:+ExplicitGCInvokesConcurrent
-XX:+CMSParallelRemarkEnabled -Xms512m -Xmx512m -XX:SurvivorRatio=8 -XX:PermSize=64m -XX:MaxPermSize=128m
(2) Performance metric monitoring
Observing the CPU usage of the java threads with the top command (us is CPU time spent in user space, sy is CPU time spent in the kernel):
TPS and interface response time reported by JMeter:
GC statistics output by jstat -gcutil {pid} {period_time}:
From the metrics above, we can see that the current CPU usage is very high: the business threads take up more than 90% of each CPU. The young generation is collected very frequently, about 8 times per second on average, yet TPS is only about 400.
Seeing this, our first thought was that insufficient JVM heap memory was causing the frequent GC, which in turn pushed CPU usage up. So we adjusted the heap memory parameters and ran a second stress test.
2. The second stress test
(1) JVM configuration
JVM memory allocation: max/min heap 2048 MB, Eden to Survivor ratio 8:2, PermGen initial size 512 MB, maximum 512 MB.
The JVM configuration parameters are as follows:
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:MaxTenuringThreshold=15 -XX:+ExplicitGCInvokesConcurrent
-XX:+CMSParallelRemarkEnabled -Xmx2048m -Xms2048m -Xmn1024m -XX:NewSize=640m -XX:MaxNewSize=640m
-XX:SurvivorRatio=8 -XX:PermSize=512m -XX:MaxPermSize=512m
(2) Performance metric monitoring
Observing the CPU usage of the java threads with the top command (us is CPU time spent in user space, sy is CPU time spent in the kernel):
TPS and interface response time reported by JMeter:
GC statistics output by jstat -gcutil {pid} {period_time}:
From the metrics above, after adjusting the JVM parameters and enlarging the heap, the young-generation GC frequency dropped to about 2 times per second on average and TPS rose to about 600. However, CPU usage is still very high and is almost entirely consumed by the business process.
From this result, enlarging the heap does reduce the GC frequency and raise TPS, but CPU usage barely changed. We suspected two possible causes:
First, some CPU-intensive computation in the business logic.
Second, locks in the business code, leaving a large number of threads waiting for a lock.
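As a generic illustration of the second guess (not taken from the project's code), worker threads contending for a single monitor end up in exactly the BLOCKED state that a thread dump reports:

// Generic illustration only: many worker threads funneling through one synchronized
// block; at any moment all but one of them show up as BLOCKED in a thread dump.
public class LockContentionDemo {
    private static final Object LOCK = new Object();

    static void handleRequest() {
        synchronized (LOCK) {                 // every thread must acquire the same monitor
            try {
                Thread.sleep(50);             // simulate work done while holding the lock
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 200; i++) {       // 200 "business" threads, matching the test's concurrency
            new Thread(LockContentionDemo::handleRequest, "worker-" + i).start();
        }
    }
}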
Based on this guess, we decided to dump a JVM thread snapshot to see whether we could find threads waiting on locks.
The command jstack -l {pid} > /log_dir/stack_log.txt writes the thread snapshot to the specified file.
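Besides the external jstack command, the same state information can be read from inside the JVM through the standard ThreadMXBean API; a minimal sketch that lists BLOCKED threads might look like this:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal sketch: list the threads that are currently BLOCKED (waiting for a monitor),
// the same state we searched for in the jstack snapshot file.
public class BlockedThreadReport {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (info.getThreadState() == Thread.State.BLOCKED) {
                System.out.println(info.getThreadName()
                        + " blocked on " + info.getLockName()
                        + " held by " + info.getLockOwnerName());
            }
        }
    }
}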
Searching the thread snapshot file for threads in the BLOCKED state, we found that most of the BLOCKED threads were:
From the thread snapshot, a large number of xml-parsing threads are in the BLOCKED state; the xml-parsing part of the business is blocking, which lowers the interface's processing efficiency.
We then commented out the other logic in the interface code, leaving only the xml-parsing code, and found that CPU usage was still above 90%. Conversely, once the xml-parsing code was commented out and the rest of the business code kept, CPU usage immediately dropped to about 70%, the GC count went up, and TPS dropped and then held steady.
From these results, the high CPU usage is not directly related to the JVM parameter sizing but to the parsing of the xml parameter: the xml message contains more than a dozen nodes nested across many levels, and parsing it produces large objects.
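For reference, the kind of Dom4j parsing involved, shown here as a minimal hypothetical sketch rather than the project's real message structure, recursively visits every element; each node becomes a Java object, which is where the parsing cost and the large object graphs come from:

import java.util.Iterator;
import org.dom4j.Document;
import org.dom4j.DocumentHelper;
import org.dom4j.Element;

// Hypothetical sketch: Dom4j turns every xml node into a Java object, so a message
// with many nodes and levels produces a large object graph and noticeable parse cost.
public class Dom4jParseDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<order><head><id>1</id></head><body><item><sku>A</sku></item></body></order>";
        Document doc = DocumentHelper.parseText(xml);
        walk(doc.getRootElement(), 0);
    }

    static void walk(Element element, int depth) {
        System.out.println(depth + " : " + element.getName() + " = " + element.getTextTrim());
        for (Iterator<?> it = element.elementIterator(); it.hasNext(); ) {
            walk((Element) it.next(), depth + 1);    // descend into every child node
        }
    }
}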
When the JVM heap is set too small, frequent GC pauses the business threads, TPS drops, and CPU usage ends up low.
When the JVM heap is set too large, GC frequency drops, TPS rises, and CPU usage immediately climbs above 95%.
Because we use Dom4j for xml parameter parsing and the parsing itself cannot easily be optimized, we can only adjust the JVM parameters and the concurrency.
Finally, in order to balance CPU usage, TPS and GC, and taking the actual business scenario into account, we set the JVM heap to 1.5 GB and limited TPS to 200.