2025-01-15 Update From: SLTechnology News&Howtos
What causes an OOM triggered by max-http-header-size? This article walks through the analysis and the fix in detail, in the hope of helping anyone facing the same problem find a simple, workable solution.
A record of an online OOM incident
A user reported that a service had stopped responding, so we started troubleshooting. The process ID was still alive, but no logs were being written. Checking memory usage with jstat -gcutil showed that Full GCs were occurring.
Check memory usage:

ps -ef | grep --color=auto <project name> | grep --color=auto -v "grep" | awk '{print $2}' | xargs -I {} jstat -gcutil {} 2000
S0: survivor space 0 usage (%)
S1: survivor space 1 usage (%)
E: Eden space usage (%)
O: old generation usage (%)
M: metaspace usage (%)
CCS: compressed class space usage (%)
YGC: number of young-generation GCs
FGC: number of old-generation (full) GCs
FGCT: time spent in full GCs
GCT: total GC time
When the young generation fills up, a normal (minor) GC is triggered, and it collects only the young generation. To be precise, "full" here means Eden is full; Survivor filling up by itself does not trigger a GC.
When the old generation fills up, a Full GC is triggered, which collects both the young generation and the old generation.
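The minor/Full GC counters that jstat reports can also be read from inside the JVM via the standard java.lang.management API. A minimal sketch (the class name GcStats is our own, not part of the article's project):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints per-collector collection counts and times -- the same numbers
// that jstat's YGC/FGC and YGCT/FGCT columns are derived from.
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d, timeMs=%d%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

This is handy when you want a service to export its own GC counters (for alerting) instead of attaching jstat from the outside.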
Generate a heap dump:

jmap -dump:format=b,file=812.hprof 15968

Package the file:

tar -czf 814.tar.gz 814.hprof

Analyze the memory with MAT (Memory Analyzer Tool).
The problem is visible at a glance.
View the Histogram.
Open the references view.
MAT analysis shows that one object has a large memory footprint, about 10 MB, so we focus on this class to see what it is doing.
The class org.apache.coyote.Response#outputBuffer is the buffer Tomcat uses to write the response to the socket.
This is where the problem lies: this single object occupies about 10 MB of memory!
Where did this value come from? A former colleague set it, and at first nobody knew why; the reason is given later.
server:
  max-http-header-size: 10000000
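For reference, Tomcat's default for this limit is 8 KB. A more conservative configuration might look like the sketch below (assuming Spring Boot 2.1+, where the property accepts DataSize units; the 64KB figure is only an illustrative upper bound, not a recommendation from the article):

```yaml
server:
  # Tomcat's default is 8 KB. Raise it only as far as the largest
  # legitimate header you must accept -- the buffer backing it is
  # allocated per connection, so megabyte values multiply quickly.
  max-http-header-size: 64KB
```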
Source code analysis (brace yourself). Let's walk through the Tomcat call chain.
org.apache.catalina.connector.CoyoteAdapter#service
Finally, Tomcat returns the result through the socket:
org.apache.coyote.http11.Http11OutputBuffer#commit
protected void commit() throws IOException {
    response.setCommitted(true);

    if (headerBuffer.position() > 0) {
        // Sending the response header buffer
        headerBuffer.flip();
        try {
            socketWrapper.write(isBlocking(), headerBuffer);
        } finally {
            headerBuffer.position(0).limit(headerBuffer.capacity());
        }
    }
}
In other words, each request carries a roughly 10 MB response buffer. Under a JMeter load test with 100 concurrent threads, the heap filled up immediately and the service stopped.
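Back-of-envelope arithmetic makes the failure mode obvious. The class below is only an illustration of the numbers from the incident, not code from the project:

```java
// Per-connection buffer cost at the misconfigured max-http-header-size.
public class HeaderBufferCost {
    public static void main(String[] args) {
        long headerBufferBytes = 10_000_000L; // max-http-header-size from the config
        int concurrentRequests = 100;         // the JMeter load used in the test
        long totalBytes = headerBufferBytes * concurrentRequests;
        System.out.printf("~%.2f MB per buffer, ~%.2f GB total%n",
                headerBufferBytes / 1e6, totalBytes / 1e9);
    }
}
```

Roughly 1 GB of buffer space for 100 in-flight requests: more than enough to exhaust a typical service heap.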
Why was max-http-header-size set to this value?
The project's API is exposed to third-party platforms, and user identity verification is carried in the request header. What happens if a third-party platform does not pass the value according to the specification? The system throws an error: Error parsing HTTP request header.
org.apache.coyote.http11.Http11Processor#service
public SocketState service(SocketWrapperBase<?> socketWrapper) throws IOException {
    RequestInfo rp = request.getRequestProcessor();
    rp.setStage(org.apache.coyote.Constants.STAGE_PARSE);

    // Setting up the I/O
    setSocketWrapper(socketWrapper);
    inputBuffer.init(socketWrapper);
    outputBuffer.init(socketWrapper);

    // Flags
    keepAlive = true;
    openSocket = false;
    readComplete = true;
    boolean keptAlive = false;
    SendfileState sendfileState = SendfileState.DONE;

    while (!getErrorState().isError() && keepAlive && !isAsync() &&
            upgradeToken == null && sendfileState == SendfileState.DONE &&
            !endpoint.isPaused()) {

        // Parsing the request header
        try {
            // this is the line that reports the error
            if (!inputBuffer.parseRequestLine(keptAlive)) {
                if (inputBuffer.getParsingRequestLinePhase() == -1) {
                    return SocketState.UPGRADING;
                } else if (handleIncompleteRequestLineRead()) {
                    break;
                }
            }
            ...
        } catch (IOException e) {
            if (log.isDebugEnabled()) {
                // print the error
                log.debug(sm.getString("http11processor.header.parse"), e);
            }
            setErrorState(ErrorState.CLOSE_CONNECTION_NOW, e);
            break;
        } catch (Throwable t) {
            ExceptionUtils.handleThrowable(t);
            UserDataHelper.Mode logMode = userDataHelper.getNextMode();
            if (logMode != null) {
                String message = sm.getString("http11processor.header.parse");
                switch (logMode) {
                    case INFO_THEN_DEBUG:
                        message += sm.getString("http11processor.fallToDebug");
                        //$FALL-THROUGH$
                    case INFO:
                        log.info(message, t);
                        break;
                    case DEBUG:
                        log.debug(message, t);
                }
            }
            setErrorState(ErrorState.CLOSE_CLEAN, t);
            getAdapter().log(request, response, 0);
        }
        ...
    }

The error is reported from org.apache.tomcat.util.net.NioChannel#read.
org.apache.tomcat.util.net.NioChannel#read
protected SocketChannel sc = null;

@Override
public int read(ByteBuffer dst) throws IOException {
    return sc.read(dst);
}
org.apache.coyote.http11.Http11InputBuffer#init
The error occurs because the data read by SocketChannel.read is larger than the receiving buffer can hold. The default buffer is 16 KB, and exceeding it triggers the error. When Tomcat hits this error it does not propagate the exception; it logs it and responds with a 400 status code.
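Tomcat's real parsing code is far more involved, but the failure mode can be sketched with a fixed-capacity buffer. The class HeaderBufferSketch below is purely hypothetical, standing in for Http11InputBuffer's fixed-size header buffer:

```java
import java.nio.ByteBuffer;

// Minimal sketch (not Tomcat's actual code): a fixed-size header buffer
// that rejects input the way Http11InputBuffer does when a request's
// headers exceed max-http-header-size.
public class HeaderBufferSketch {
    private final ByteBuffer buf;

    HeaderBufferSketch(int capacity) {
        this.buf = ByteBuffer.allocate(capacity);
    }

    // Appends header bytes; overflow corresponds to the logged
    // "Error parsing HTTP request header" and the 400 response.
    void append(byte[] headerBytes) {
        if (buf.remaining() < headerBytes.length) {
            throw new IllegalArgumentException("Request header is too large");
        }
        buf.put(headerBytes);
    }

    public static void main(String[] args) {
        HeaderBufferSketch small = new HeaderBufferSketch(16 * 1024); // 16 KB as in the article
        try {
            small.append(new byte[20 * 1024]); // an oversized third-party header
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The sketch makes the trade-off concrete: the buffer's capacity is both the limit on header size and the amount of memory committed per connection, so raising one raises the other.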
A former colleague found "Error parsing HTTP request header" in the logs, searched for it on Baidu, and promptly increased max-http-header-size. Once it was raised to 10 MB, the third-party platform could call the interface and receive a proper verification-failure response. After the third party fixed its verification handling, nobody reported the change back, and the setting stayed behind as a deep pit waiting to cause the OOM.
That is the whole story of the OOM caused by max-http-header-size. Hopefully the analysis above is of some help; if you still have questions, follow the industry information channel to learn more.