This article records the troubleshooting of a tomcat process whose CPU usage became excessive because of too many TCP connections.
Problem description
On a Linux system, the CPU usage of a tomcat web service was very high; top showed more than 200%. Requests went unanswered, and repeated restarts did not change the behavior.
Problem troubleshooting
1. Obtain process information
The jvm process can be quickly identified with the jps command provided by the jdk:
jps
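As an illustration (the process ids and output below are hypothetical), jps -l also prints the main class of each jvm, which makes it easier to pick out the tomcat instance:
jps -l
# 12345 org.apache.catalina.startup.Bootstrap    <- the tomcat process (illustrative output)
# 23456 sun.tools.jps.Jps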
2. View jstack information
jstack pid
A large number of log4j threads were found in the BLOCKED state, waiting on a lock.
org.apache.log4j.Category.callAppenders(org.apache.log4j.spi.LoggingEvent) @bci=12, line=201 (Compiled frame)
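To gauge how widespread this is, a quick sketch (the dump path is arbitrary) is to capture one stack dump and count the threads sitting in callAppenders and the threads blocked overall:
jstack pid > /tmp/stack.txt
grep -c "org.apache.log4j.Category.callAppenders" /tmp/stack.txt   # threads inside callAppenders
grep -c "BLOCKED" /tmp/stack.txt                                   # threads blocked overall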
Searching for related information showed that log4j 1.x has a known deadlock problem in this code path.
Given this finding, the log4j configuration was adjusted to log only at the ERROR level, and tomcat was restarted. The blocked threads disappeared from the stack dump, but the process CPU usage was still high.
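As an illustration only, restricting log4j 1.x to the ERROR level in log4j.properties might look like the minimal sketch below (the appender name and layout are assumptions; keep whatever appenders are actually configured):
log4j.rootLogger=ERROR, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n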
3. Further investigation
To analyze the CPU usage of each thread, a script shared by a community expert was used; it calculates the CPU usage of every thread in a java process.
#!/bin/bash
# Print the stacks of the busiest threads of a java process, annotated with their CPU usage.
typeset top=${1:-10}
typeset pid=${2:-$(pgrep -u $USER java)}
typeset tmp_file=/tmp/java_${pid}_$$.trace

$JAVA_HOME/bin/jstack $pid > $tmp_file
ps H -eo user,pid,ppid,tid,time,%cpu --sort=%cpu --no-headers \
    | tail -$top \
    | awk -v "pid=$pid" '$2==pid{print $4"\t"$6}' \
    | while read line
do
    typeset nid=$(echo "$line" | awk '{printf("0x%x", $1)}')   # thread id in hex, as it appears in the jstack output
    typeset cpu=$(echo "$line" | awk '{print $2}')
    awk -v "cpu=$cpu" '/nid='"$nid"'/,/^$/{print $0"\t"(isF++?"":"cpu="cpu"%");}' $tmp_file
done

rm -f $tmp_file
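If the script is saved as, say, busy_java_threads.sh (the filename is an assumption), it can be run against the tomcat pid to print the ten busiest thread stacks together with their CPU share:
chmod +x busy_java_threads.sh
./busy_java_threads.sh 10 pid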
Applicability of the script
Because the %CPU figures in ps come from /proc/stat, the data is not real-time; it depends on how often the OS refreshes it, typically every 1s. That is why the statistics you see may not match the jstack output exactly. Still, the information is very useful for troubleshooting problems caused by a few threads under sustained load: those threads keep consuming CPU, so even with the time lag they are the ones responsible.
Besides this script, a simpler approach is to find the process id and inspect the resource usage of each thread in that process with the following command:
top -H -p pid
Take the PID (thread id) shown there, convert it to hexadecimal, and then look up that thread (by its nid) in the stack dump.
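A minimal sketch of that lookup (the thread id 12345 is illustrative):
printf "0x%x\n" 12345                    # convert the decimal thread id from top to hex, e.g. 0x3039
jstack pid | grep -A 20 "nid=0x3039"     # show the matching thread's stack from the dump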
Using the methods above, the combined CPU usage of the threads belonging to the tomcat process came to roughly 80%, far less than the 200%+ reported by top.
This suggests there is no single thread hogging the CPU for long periods; the load looks like CPU-intensive work spread across many threads with frequent switching. The next suspicion was that the JVM was short of memory and frequent GC was to blame.
jstat -gc pid
The jvm memory usage turned out to be unremarkable, and the GC counts had not spiked.
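A hedged sketch of how GC can be sampled over time (the interval and count are illustrative): if GC were the culprit, the YGC and FGC counters would climb rapidly between samples.
jstat -gc pid 1000 5    # print GC statistics every 1000 ms, 5 times; watch the YGC and FGC columns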
With memory ruled out, and since this is a network-facing program, the next step was to examine the network connections.
4. Locating the problem
Querying the TCP connections on tomcat's port revealed a large number of connections in the ESTABLISHED state, plus some in other states, more than 400 in total.
netstat -anp | grep port
Looking further into where these connections came from, they turned out to originate from the application side that calls this tomcat service: a large number of background threads were polling the service at high frequency, filling up tomcat's connection slots so that it could no longer accept requests.
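Two quick one-liners (port 8080 is illustrative) help summarize the situation: one counts connections per TCP state, the other counts connections per remote address to show who is doing the polling.
netstat -an | grep ':8080' | awk '{print $6}' | sort | uniq -c | sort -rn                  # connections per state
netstat -an | grep ':8080' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn    # connections per remote host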
netstat state descriptions:
LISTEN: listening for connection requests from a remote TCP port
SYN-SENT: waiting for a matching connection request after having sent one (check whether there are large numbers of these)
SYN-RECEIVED: waiting for the peer to acknowledge the connection request after having received and sent one (large numbers of these may indicate a SYN flood attack)
ESTABLISHED: an open connection
FIN-WAIT-1: waiting for a connection termination request from the remote TCP, or an acknowledgement of the termination request previously sent
FIN-WAIT-2: waiting for a connection termination request from the remote TCP
CLOSE-WAIT: waiting for a connection termination request from the local user
CLOSING: waiting for the remote TCP to acknowledge the connection termination
LAST-ACK: waiting for acknowledgement of the termination request previously sent to the remote TCP (not a good sign; if many appear, check whether the service is under attack)
TIME-WAIT: waiting long enough to be sure the remote TCP received the acknowledgement of its termination request
CLOSED: no connection state
5. Root cause analysis
The direct trigger: the client side was polling the service, requests failed, and the polling continued; the client side also kept spawning new background threads that joined the polling, until the tomcat connections on the server were exhausted.
This concludes the troubleshooting record of a tomcat process taking up too much CPU. For more content on this topic, please search earlier articles; thank you for your continued support.