This article looks at how to deploy a Java microservice in a Docker container, analyzing a container memory-overrun problem in detail and presenting the corresponding solutions, in the hope of giving anyone facing the same problem a simpler path to a fix.
Locating the problem
Determining the direct cause of the Docker container's memory overrun is not difficult. Going into the container and running the top command, we find that the host is an 8-core, 16 GB machine, and Docker does not hide this information from the container, so the JVM believes it is running on a machine with 16 GB of memory. Looking at the demo service's Dockerfile, there is no memory limit on the JVM in the command that runs the service, so the JVM falls back on its default ergonomics: a maximum heap of roughly 1/4 of physical memory (this is not strictly accurate, since the default heap ratio actually varies with the amount of physical memory the JVM detects; see the JVM documentation for details). Meanwhile, the ServiceStage pipeline created from the template sets the Docker container's memory quota to 512 MB by default when it deploys the application stack, so the container exceeds its memory limit right at startup. As for why we had not hit this problem before: we had simply never used such a high-spec ECS server for pipeline deployment of an application stack.
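To see concretely what the JVM perceives, you can print its view of the environment from inside the container. The class below is only a minimal illustrative sketch (it is not part of the demo project); on an older JDK 8 image with no container support it reports the host's CPU count and a default maximum heap derived from the host's 16 GB of RAM rather than from the 512 MB container quota.

// Minimal illustrative sketch, not part of the original demo project.
public class JvmEnvProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // On JDK 8u111 inside a container, this still reports the host's 8 cores.
        System.out.println("availableProcessors = " + rt.availableProcessors());
        // With default ergonomics this is roughly 1/4 of the detected physical
        // memory, i.e. about 4 GB on a 16 GB host, far above the 512 MB quota.
        System.out.println("maxMemory (MB) = " + rt.maxMemory() / (1024 * 1024));
    }
}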
After looking into it, we found two solutions. One is to add the -Xmx parameter directly to the jar run command to specify the maximum heap size; the drawback is that this pins the JVM heap to a fixed value. The other is to add -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap to the jar run command so that the JVM can detect the cgroup limit set for the Docker container and size its heap accordingly; this feature is only available in JDK 8u131 and later.
Finally, we asked the ServiceStage pipeline team to improve the CSEJavaSDK demo creation template: the base image in the Dockerfile was upgraded from java:8u111-jre-alpine to openjdk:8u181-jdk-alpine, and the -Xmx256m parameter was added to the command that runs the service jar. That resolved the problem.
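If you want to double-check from inside the container that the flag really reached the JVM, a small probe like the one below can help. It is only an illustrative sketch (the class name is made up and it is not part of the demo project): it prints the JVM's launch arguments and the effective maximum heap.

import java.lang.management.ManagementFactory;

// Illustrative sketch: confirm which flags the JVM was actually started with.
public class JvmArgsProbe {
    public static void main(String[] args) {
        // Should contain -Xmx256m (or the cgroup-related flags) if the
        // Dockerfile change took effect.
        System.out.println("JVM input arguments: "
                + ManagementFactory.getRuntimeMXBean().getInputArguments());
        // With -Xmx256m this should report a value close to 256 MB.
        System.out.println("Effective max heap (MB): "
                + Runtime.getRuntime().maxMemory() / (1024 * 1024));
    }
}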
3. Further exploration
Although the problem was solved, out of curiosity I still wanted to put together my own demo, actually trigger the problem, and check whether the solutions found online really work. :)
3.1 Preparation work
Create a cloud project
First, you need to create a cloud project in Huawei Cloud ServiceStage.
In ServiceStage -> Application Development -> Microservice Development -> Project Management -> Create Cloud Project, choose "create based on template", select Java as the language and CSE-Java (SpringMVC) as the framework, choose "Cloud Container Engine (CCE)" as the deployment system, give the cloud project a name such as test-memo-consuming, and finally select the repository where the code will be stored to complete the creation of the cloud project.
After that, the cloud project automatically generates scaffolding code according to your choices, uploads it to the code repository you specified, and creates a pipeline for you that compiles and builds the code, packages and archives the Docker image, and deploys an application stack in the CCE cluster using the finished image.
Creating the cloud project and the pipeline is not the focus of this article, so I won't go into detail. :) The application stack can be deployed with multiple instances; for convenience of the experiment, I keep the default value of 1.
Log in to the container where the demo service is deployed and call the demo service's helloworld interface with curl; you can see that the service is working normally.
Add experimental code
To make the microservice instance consume more memory on demand, I added the following interface to the project code. When the /allocateMemory interface is called, the microservice instance keeps allocating memory until either the JVM throws an OOM error or the container is killed for exceeding its memory limit.
private HashMap<String, Long> cacheMap = new HashMap<>();

@GetMapping(value = "/allocateMemory")
public String allocateMemory() {
    LOGGER.info("allocateMemory() is called");
    try {
        // keep adding entries to the map until the heap is exhausted
        for (long i = 0; true; ++i) {
            cacheMap.put("key" + i, i);
        }
    } catch (OutOfMemoryError e) {
        LOGGER.error("allocateMemory() hit an OutOfMemoryError", e);
    }
    return "allocateMemory() finished";
}
At this point, the base image used to build the image package is openjdk:8u181-jdk-alpine, and the -Xmx256m parameter has been added to the command that starts the jar.
After the application stack is deployed successfully, call the /allocateMemory interface to make the microservice instance consume memory until the JVM throws an OOM error. You can select the application in "ServiceStage" -> "Application Launch" -> "Application Management" and click into its overview page to view the application's memory usage.
The point where the application's memory usage drops sharply from 800 MB+ is when I repackaged and redeployed; afterwards, because of the calls to the /allocateMemory interface, memory usage rose to nearly 400 MB and stabilized at that level, showing that the -Xmx256m parameter did what was expected.
3.2 Reproducing the problem
Now modify the Dockerfile in the demo project: change the base image back to java:8u111-jre-alpine and delete the -Xmx256m parameter from the startup command. Commit this as a noLimit_oldBase branch and push it to the code repository. Then edit the pipeline, change the code branch used by the source stage to noLimit_oldBase, save, and rerun the pipeline to package and deploy the new code to the application stack.
After looking up the endpoint IP of the new microservice instance in the instance list, call the /allocateMemory interface and watch the memory usage. The point where the memory usage jumps abruptly to about 450 MB is when the instance with the modified code finished deploying; the sudden decrease that follows is the container being killed for exceeding its memory limit as a result of the /allocateMemory call.
If you watch the container log beforehand with docker logs -f, you will see something like this:
INFO SCBEngine:154 - receive MicroserviceInstanceRegisterTask event, check instance Id...
INFO SCBEngine:154 - instance registry succeeds for the first time, will send AFTER_REGISTRY event.
WARN VertxTLSBuilder:116 - keyStore [server.p12] file not exist, please check
WARN VertxTLSBuilder:136 - trustStore [trust.jks] file not exist, please check
INFO DataFactory:62 - Monitor data sender started. Configured data providers is {com.huawei.paas.cse.tcc.upload.TransactionMonitorDataProvider,com.huawei.paas.monitor.HealthMonitorDataProvider,}
INFO ServiceCenterTask:51 - read MicroserviceInstanceRegisterTask status is FINISHED
INFO TestmemoconsumingApplication:57 - Started TestmemoconsumingApplication in 34.81 seconds (JVM running for 38.752)
INFO ... - ... from service center success. Service=default/CseMonitoring/latest, old revision=null, new revision=28475010.1
INFO AbstractServiceRegistry:266 - service id=8b09a7085f4011e89f130255ac10470c, instance id=8b160d485f4011e89f130255ac10470c, endpoints=[rest://100.125.0.198:30109?sslEnabled=true]
INFO SPIServiceUtils:76 - Found SPI service javax.ws.rs.core.Response$StatusType, count=0.
INFO TestmemoconsumingImpl:39 - allocateMemory() is called
Killed
You can see that the allocateMemory method is called and that, before the JVM even gets a chance to throw an OOM error, the whole container is killed.
A warning is in order here: don't assume that everything is fine just because your service container starts up successfully. Without explicit limits, the JVM keeps requesting heap memory as it runs, which can push the container's memory usage over the Docker container's quota!
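One way to catch this before the container is killed is to watch heap growth from inside the service. The snippet below is only a rough sketch under my own assumptions (the class name and logging interval are made up, and it is not part of the demo project): it periodically logs used, committed, and maximum heap so that growth toward the container quota shows up in the container log.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: periodically log heap usage so that memory growth is
// visible in the container log before the quota is hit.
public class HeapUsageLogger {
    private static final long MB = 1024 * 1024;

    public static void main(String[] args) {
        MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            MemoryUsage heap = memoryMXBean.getHeapMemoryUsage();
            System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() / MB, heap.getCommitted() / MB, heap.getMax() / MB);
        }, 0, 30, TimeUnit.SECONDS);
    }
}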
3.3 Making the JVM aware of the cgroup limit
As mentioned earlier, there is another way to solve the JVM memory-overrun problem: make the JVM automatically detect the cgroup limit of the Docker container and adjust its heap size accordingly, which sounds appealing. Let's try this method and see how well it works. :)
Go back to the master branch of the demo project code and, in the Dockerfile, replace the -Xmx256m parameter in the startup command with -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap. Commit this as a useCGroupMemoryLimitForHeap branch, push it to the code repository, and run the pipeline again to build and deploy.
After the demo service is deployed successfully, call the /allocateMemory interface again. The container's memory usage is shown above (the continuous curve on the far right): after memory rises to a certain point, the JVM throws an OOM error and stops requesting more heap, so this method is also effective. Look closely at the container's memory usage, though: the container uses less than 300 MB of memory while its quota is 512 MB, meaning that more than 200 MB sits idle and will never be used by the JVM. That is lower utilization than with -Xmx256m set directly, as above. :( My guess is that, because the JVM does not realize it is running inside a Docker container, it treats the environment as a physical machine with only 512 MB of RAM and proportionally caps its maximum heap, leaving the rest idle.
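To check that guess, you can compare the memory limit the container actually reports with the maximum heap the JVM settled on. The sketch below is illustrative only and assumes a cgroup v1 container (as used by these images); on cgroup v2 the limit lives at a different path.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Illustrative sketch, assuming cgroup v1: compare the container memory limit
// with the JVM's max heap to see whether the default ~1/4 ratio is applied.
public class CgroupHeapRatio {
    public static void main(String[] args) throws IOException {
        long cgroupLimit = Long.parseLong(Files.readAllLines(
                Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes")).get(0).trim());
        long maxHeap = Runtime.getRuntime().maxMemory();
        System.out.printf("cgroup limit=%dMB, max heap=%dMB, heap/limit=%.2f%n",
                cgroupLimit / (1024 * 1024), maxHeap / (1024 * 1024),
                (double) maxHeap / cgroupLimit);
    }
}

With -XX:+UseCGroupMemoryLimitForHeap and the JDK 8 default MaxRAMFraction of 4, a ratio around 1/4 would be consistent with the idle memory observed above.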
That is what we found about deploying Java microservices in Docker containers and how the memory-overrun problem can be solved. I hope the above is of some help.