2025-04-03 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report --
This article explains how to locate memory leaks. The walkthrough is simple and clear, and easy to follow along with.
A brief introduction to the producer-consumer model

We tried a variety of multithreading schemes in the previous section, and each ran into its own strange problems.

So we finally decided to implement the task with a producer-consumer model.
The implementation is as follows:
Here we use an AtomicLong to keep a simple count.

UserMapper.handle2(Arrays.asList(user)) is my colleague's earlier method, greatly simplified here but otherwise unmodified. Its parameter is a list, so for compatibility we simply wrap the single user with Arrays.asList().
import com.github.houbb.thread.demo.dal.entity.User;
import com.github.houbb.thread.demo.dal.mapper.UserMapper;
import com.github.houbb.thread.demo.service.UserService;

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Paged query
 *
 * @author binbin.hou
 * @since 1.0.0
 */
public class UserServicePageQueue implements UserService {

    // Page size
    private final int pageSize = 10000;

    private static final int THREAD_NUM = 20;

    private final Executor executor = Executors.newFixedThreadPool(THREAD_NUM);

    private final ArrayBlockingQueue<User> queue = new ArrayBlockingQueue<>(2 * pageSize, true);

    // Simulated injection
    private UserMapper userMapper = new UserMapper();

    /**
     * Running total
     */
    private AtomicLong counter = new AtomicLong(0);

    // Consumer thread task
    public class ConsumerTask implements Runnable {
        @Override
        public void run() {
            while (true) {
                try {
                    // Block until an element can be taken
                    User user = queue.take();
                    userMapper.handle2(Arrays.asList(user));
                    long count = counter.incrementAndGet();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    // Initialize the consumers: start THREAD_NUM threads to work the queue
    private void startConsumer() {
        for (int i = 0; i < THREAD_NUM; i++) {
            ConsumerTask task = new ConsumerTask();
            executor.execute(task);
        }
    }

    /**
     * Handle all users
     */
    public void handleAllUser() {
        // Start the consumers
        startConsumer();

        // Reset the counter
        counter = new AtomicLong(0);

        // Paged query
        int total = userMapper.count();
        int totalPage = total / pageSize;
        for (int i = 1; i <= totalPage; i++) {
            List<User> userList = userMapper.selectPage(i, pageSize);
            putToQueue(userList);
        }
    }

    // Put one page of users into the queue; while the queue is still above the
    // limit, sleep so the producer does not run too far ahead of the consumers
    private void putToQueue(List<User> userList) {
        for (User user : userList) {
            try {
                queue.put(user);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        final int limit = pageSize;
        while (true) {
            if (queue.size() >= limit) {
                try {
                    // Adjust according to the actual situation
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            } else {
                break;
            }
        }
    }
}

Test verification

Of course, this method ran in the integration environment without any problems.

So we went straight to production to verify it. It started out fast, and then it slowed down.

One look at the GC log: struck by the same thing twice, FULL GC.

Damn it, can a Saint Seiya really be defeated by the same move twice?

How FULL GC shows up

The most direct symptom of a full GC is usually that the program is slow.

At that point you just need to enable GC log printing and check whether full GC is occurring.

The real trap is that performance problems generally cannot be caught by ordinary testing, unless you run a load test.

And the load test has to satisfy two conditions at the same time:

(1) The data volume is large enough, or in other words the QPS is high enough, and the pressure is sustained.

(2) Resources are scarce enough; that is, you want the horse to run but also want it not to eat.

As luck would have it, we hit both at once.

So the question becomes: how do we locate the cause of the FULL GC?

Memory leaks

The program was not slow from the start; it started fast, then slowed down, and then fell into constant FULL GC.

This naturally suggests a memory leak.

How do you locate a memory leak? You can break it into the following steps:

(1) Read the code and look for obvious leaks, then fix and verify. If that does not solve it, list the places that might be at fault and go to step 2.

(2) Dump the heap during FULL GC, analyze which data has grown too large, and combine the result with step 1 to fix it.

Next, let's walk through a simplified record of this process.

Problem location: reading the code

The basic producer-consumer part was checked over first and seemed fine.

So the next step was to look at the colleague's method that the consumer calls.

The core purpose of the method

(1) Iterate over the input list and execute the business logic.

(2) Write the results of the current batch to a file.

Method implementation

A simplified version is as follows:

/**
 * Simulated user handling
 *
 * @param userList user list
 */
public void handle2(List<User> userList) {
    String targetDir = "D:\\data\\";
    // In theory, each thread reads and writes only its own file
    String fileName = Thread.currentThread().getName() + ".txt";
    String fullFileName = targetDir + fileName;
    FileWriter fileWriter = null;
    BufferedWriter bufferedWriter = null;
    User userExample;
    try {
        fileWriter = new FileWriter(fullFileName);
        bufferedWriter = new BufferedWriter(fileWriter);
        StringBuffer stringBuffer = null;
        for (User user : userList) {
            stringBuffer = new StringBuffer();
            // Business logic
            userExample = new User();
            userExample.setId(user.getId());
            // Skip processing if the query result already exists
            List<User> userCountList = queryUserList(userExample);
            if (userCountList != null && userCountList.size() > 0) {
                return;
            }
            // Other processing logic
            // Record the final result
            stringBuffer.append("user ")
                    .append(user.getId())
                    .append(" synchronization completed");
            bufferedWriter.newLine();
            bufferedWriter.write(stringBuffer.toString());
        }
        // The processing result is written to the file
        bufferedWriter.newLine();
        bufferedWriter.flush();
        bufferedWriter.close();
        fileWriter.close();
    } catch (Exception exception) {
        exception.printStackTrace();
    } finally {
        try {
            if (null != bufferedWriter) {
                bufferedWriter.close();
            }
            if (null != fileWriter) {
                fileWriter.close();
            }
        } catch (Exception e) {
        }
    }
}
What can you say about code like this? It is probably ancestral code, handed down through the generations. I don't know whether you have seen code like it, or written it yourself.

We can set the file handling aside; the core part is really just this:
User userExample;
for (User user : userList) {
    // Business logic
    userExample = new User();
    userExample.setId(user.getId());
    // Skip processing if the query result already exists
    List<User> userCountList = queryUserList(userExample);
    if (userCountList != null && userCountList.size() > 0) {
        return;
    }
    // Other processing logic
}
What do you think is wrong with the above code?
Where could there be a memory leak?
How should it be improved?
Looking at the heap

If reading the code has pinpointed the suspicious spots, the next step is to look at the heap and verify your conjecture.

How to inspect the heap

There are many ways to inspect JVM memory, and here we take the jmap command as an example.
(1) Find the pid of the Java process

You can run jps or ps ux, etc.; choose whichever you like.

We tested locally on Windows (actual production usually runs on Linux):

D:\Program Files\Java\jdk1.8.0_192\bin> jps
11168 Jps
3440 RemoteMavenServer36
4512
11660 Launcher
11964 UserServicePageQueue
UserServicePageQueue is the test program we ran, so the pid is 11964.
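As an aside: on Java 9 or later a process can also print its own pid without any external tool, which is handy in scripts. A minimal sketch (the class name is just for the demo; note the article's environment is JDK 8, where this API is not available):

```java
public class ShowPid {
    public static void main(String[] args) {
        // Java 9+: ask the current process handle for its pid
        System.out.println(ProcessHandle.current().pid());
    }
}
```

jps and ps remain the right tools when you need the pid of some other process.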
(2) Run jmap to get the heap histogram

Command:

jmap -histo 11964

The output looks like this:
D:\Program Files\Java\jdk1.8.0_192\bin> jmap -histo 11964

 num     #instances         #bytes  class name
----------------------------------------------
   1:        161031       20851264  [C
   2:        157949        3790776  java.lang.String
   3:          1709        3699696  [B
   4:          3472        3688440  [I
   5:        139358        3344592  com.github.houbb.thread.demo.dal.entity.User
   6:        139614        2233824  java.lang.Integer
   7:         12716         508640  java.io.FileDescriptor
   8:         12714         406848  java.io.FileOutputStream
   9:          7122         284880  java.lang.ref.Finalizer
  10:         12875         206000  java.lang.Object
...
Of course, there are many more entries below; on Linux you can pipe the output through the head command to keep only the top of the list.

And if the server does not support that, you can redirect the histogram to a file instead:

jmap -histo 11964 >> dump.txt

Analyzing the histogram
We can immediately spot something unreasonable:

[C refers to char arrays (char[]), of which there are 161031.

java.lang.String is the string class, with 157949 instances.

And, of course, there is the User object, with 139358 instances.
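The bracketed names are the JVM's internal array type descriptors, and you can print them yourself to confirm the mapping (a small self-contained demo):

```java
public class ArrayTypeNames {
    public static void main(String[] args) {
        // The names jmap prints for arrays are the JVM's internal descriptors
        System.out.println(new char[0].getClass().getName()); // [C  (char[])
        System.out.println(new byte[0].getClass().getName()); // [B  (byte[])
        System.out.println(new int[0].getClass().getName());  // [I  (int[])
    }
}
```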
We page 10,000 records at a time, and the queue holds at most 19,999 waiting elements, so this many live objects is clearly unreasonable.

So why does the code create so many char arrays and Strings?

The first thing that jumps out of the code is the file writing, which has nothing to do with the business logic.
Many of you have surely thought of using try-with-resources (TWR) to simplify the code, but there are two questions:

(1) Can all of the processing results actually end up recorded in the final file?

(2) Is there a better way?

For question 1, the answer is no. Although we created a separate file for each thread, testing showed that the file gets overwritten: new FileWriter(fullFileName) truncates the file every time the method is called.
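For reference, here is a sketch of the same write path with try-with-resources; the names are illustrative, and the second FileWriter argument (append = true) is what keeps one call from truncating what the previous call wrote:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AppendWriteDemo {

    // Append one result line per call; try-with-resources closes the writers
    // even on exceptions, and append mode preserves earlier batches.
    static void record(Path file, String line) throws IOException {
        try (BufferedWriter writer = new BufferedWriter(new FileWriter(file.toFile(), true))) {
            writer.write(line);
            writer.newLine();
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("sync-result", ".txt");
        record(file, "user 1 synchronization completed");
        record(file, "user 2 synchronization completed");
        // Both lines survive because of append mode
        System.out.println(Files.readAllLines(file).size());
        Files.deleteIfExists(file);
    }
}
```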
In fact, instead of writing files ourselves, it is more elegant to record the results through a logging framework.

So the code was finally simplified as follows:

User userExample;
for (User user : userList) {
    // Business logic
    userExample = new User();
    userExample.setId(user.getId());
    // Skip processing if the query result already exists
    List<User> userCountList = queryUserList(userExample);
    if (userCountList != null && userCountList.size() > 0) {
        // Log it
        return;
    }
    // Other processing logic
    // Log the result
}
So why are there so many User objects?

Let's look at the core business code again:

User userExample;
for (User user : userList) {
    // Business logic
    userExample = new User();
    userExample.setId(user.getId());
    // Skip processing if the query result already exists
    List<User> userCountList = queryUserList(userExample);
    if (userCountList != null && userCountList.size() > 0) {
        return;
    }
    // Other processing logic
}
Here, to judge whether a record already exists, we build a User query-by-example object (a common mybatis pattern) and then check the size of the returned list.

There are two problems with this:

(1) It is better to use a count query than to fetch a whole list just to check its size.

(2) The scope of User userExample should be kept as small as possible.
The adjustments are as follows:
for (User user : userList) {
    // Business logic
    User userExample = new User();
    userExample.setId(user.getId());
    // Skip processing if the query result already exists
    int count = selectCount(userExample);
    if (count > 0) {
        return;
    }
    // Other business logic
}

The adjusted code
The System.out.println calls here stand in for a logger, purely for demonstration purposes.

/**
 * Simulated user handling
 *
 * @param userList user list
 */
public void handle3(List<User> userList) {
    System.out.println("input parameter: " + userList);
    for (User user : userList) {
        // Business logic
        User userExample = new User();
        userExample.setId(user.getId());
        // Skip this user if the record already exists
        int count = selectCount(userExample);
        if (count > 0) {
            System.out.println("record already exists, skip processing");
            continue;
        }
        // Other business logic
        System.out.println("business logic processing result");
    }
}

Production verification
After all the changes were made, we redeployed and verified, and everything went smoothly.
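One detail worth a second look between the old and new versions of the loop: the original code bailed out with return at the first existing user, abandoning the rest of the batch, while handle3 uses continue and only skips that one user. A self-contained sketch of the difference (the boolean flags stand in for the "already exists" query):

```java
import java.util.Arrays;
import java.util.List;

public class ReturnVsContinue {

    // 'return' aborts the whole batch at the first existing record
    static int handledWithReturn(List<Boolean> exists) {
        int handled = 0;
        for (boolean alreadyExists : exists) {
            if (alreadyExists) {
                return handled;
            }
            handled++;
        }
        return handled;
    }

    // 'continue' skips only the existing record and processes the rest
    static int handledWithContinue(List<Boolean> exists) {
        int handled = 0;
        for (boolean alreadyExists : exists) {
            if (alreadyExists) {
                continue;
            }
            handled++;
        }
        return handled;
    }

    public static void main(String[] args) {
        List<Boolean> batch = Arrays.asList(false, true, false, false);
        System.out.println(handledWithReturn(batch));   // only the first user is handled
        System.out.println(handledWithContinue(batch)); // three of the four are handled
    }
}
```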
Thank you for reading. That is the whole of "how to locate memory leaks". I believe you now have a deeper understanding of how to locate a memory leak; of course, the specifics still need to be verified in practice.