
How to run Java in Docker


This article introduces how to run Java in Docker. Many people run into the problems described here in real deployments, so this walkthrough shows how to recognize and deal with them. I hope you find it useful!

Background: it is well known that when we execute a Java application without any tuning parameters (such as "java -jar myapplication-fat.jar"), the JVM automatically adjusts several parameters to achieve the best performance in the execution environment.

But many developers have found that if you let JVM ergonomics (the JVM's mechanism for automatically selecting and adjusting its behavior) use the default settings for the garbage collector, heap size, and runtime compiler, a Java process running in a Linux container (docker, rkt, runC, lxcfs, etc.) will not behave as expected.

The ultra-compact version:

a. The JVM is not aware of the container's memory limit; once the process exceeds the limit, the container kills it.

b. Giving the container more memory does not help; it only makes the failure bigger.

c. Solution: use an environment variable in the Dockerfile to define additional parameters for the JVM.

d. One step further: build the Java application on the base Docker image provided by the Fabric8 community, which always sizes the heap according to the container.

Full text:

We tend to treat a container as a virtual machine and let it define some virtual CPU and virtual memory. In fact, a container is more of an isolation mechanism: it isolates the resources of one process (CPU, memory, file system, network, etc.) from the resources of another process. This isolation is implemented with the cgroups feature of the Linux kernel.

However, many applications that gather information from the execution environment were written before cgroups existed. Tools such as "top", "free" and "ps", and even the JVM, are not optimized for running as heavily restricted Linux processes inside a container.

1. Existing problems

To demonstrate, I used "docker-machine create -d virtualbox --virtualbox-memory '1024' docker1024" to create a Docker daemon in a virtual machine with 1GB of RAM. Next, I ran three different Linux distributions in containers limited to 100MB of memory and executed the "free -h" command; the result is that they all report a total memory of 995MB.
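For reference, the experiment can be reproduced with roughly the following commands; the distribution image shown (centos) is my own choice, since the article does not name the three images, and the image must ship the free command:

    docker-machine create -d virtualbox --virtualbox-memory '1024' docker1024
    eval $(docker-machine env docker1024)
    # repeat for each distribution image you want to test
    docker run -it --rm -m 100m centos free -h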

Even in Kubernetes / OpenShift clusters, the results are similar.

I ran a Kubernetes Pod in a cluster with 15GB of memory and limited the Pod's memory to 512Mi (via the "kubectl run mycentos --image=centos -it --limits='memory=512Mi'" command), but the total memory shown was 14GB.
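Reconstructed, the Kubernetes version of the check might look like the following; the --limits flag matches the older kubectl syntax quoted above, and the free command is run at the prompt inside the Pod:

    kubectl run mycentos --image=centos -it --limits='memory=512Mi'
    # then, inside the Pod:
    free -h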

If you want to know why this happens, I suggest you read the blog post "Memory inside Linux containers - or why don't free and top work in a Linux container?" (https://fabiokung.com/2014/03/13/memory-inside-linux-containers/)

The Docker switches (--memory and --memory-swap) and the Kubernetes switch (--limits) instruct the Linux kernel to kill the process when it exceeds the limit; but the JVM is completely unaware of the limit, so bad things happen when the process exceeds it!

To simulate a process being killed after exceeding the specified memory limit, we can use the "docker run -it --name mywildfly -m=50m jboss/wildfly" command to run the WildFly application server in a container with a 50MB memory limit, and use the "docker stats" command to check the container limits.

But after a few seconds, execution of the WildFly container is interrupted with the message: "JBossAS process (55) received KILL signal"

The "docker inspect mywildfly-f'{{json.State}}'" command shows that the container has been killed due to OOM (insufficient memory). Notice the OOMKilled = true in the container "state".

2. How are Java applications affected?

Start a Java application in the Docker daemon with the parameters -XX:+PrintFlagsFinal and -XX:+PrintGCDetails defined in the Dockerfile.

Machine: 1GB of RAM. Container memory: limited to 150MB (which seems to be enough for this Spring Boot application).

These parameters allow us to read the initial JVM ergonomic parameters and learn more about garbage collection (GC) execution.
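The Dockerfile itself is not reproduced in this article, but a minimal sketch for a Spring Boot fat jar could look like this (the base image, jar name, and image tag are assumptions):

    FROM openjdk:8-jdk-alpine
    ADD target/myapplication-fat.jar /app.jar
    ENTRYPOINT exec java -XX:+PrintFlagsFinal -XX:+PrintGCDetails -jar /app.jar

Built and started with something like:

    docker build -t myorg/myapp .
    docker run -d --name mycontainer -p 8080:8080 -m 150m myorg/myapp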

Give it a try:

I have prepared an endpoint at "/api/memory/" that loads the JVM's memory with String objects to simulate a memory-consuming operation. Let's call it once:
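For example, against the 1GB daemon created earlier (assuming the application listens on port 8080):

    curl http://$(docker-machine ip docker1024):8080/api/memory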

The endpoint replies: "Allocated more than 80% (219.8 MiB) of the maximum allowed JVM memory size (241.7 MiB)"

Here we can ask at least two questions:

Why does the JVM allow a maximum of 241.7 MiB of memory?

If the container limits memory to 150MB, why does it let Java allocate nearly 220MB?

First, we need to review what the JVM ergonomics page says about "maximum heap size": it is 1/4 of physical memory. Because the JVM does not know that it is running inside a container, it allows the maximum heap size to be close to 260MB. Since we added the -XX:+PrintFlagsFinal flag during container initialization, we can check this value:
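For example, by grepping the container log for the flag value printed at startup (the container name mycontainer is the one used later in this article):

    docker logs mycontainer | grep -i MaxHeapSize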

Second, we need to understand that when we use the parameter "-m 150m" on the docker command line, the Docker daemon limits the container to 150MB of RAM and 150MB of swap. Therefore, the process can allocate 300MB, which explains why our process was not killed.

More combinations of the memory limit (--memory) and swap (--memory-swap) on the docker command line can be found here: https://docs.docker.com/engine/reference/run/#example-run-htop-inside-a-container
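As a quick sketch, if you want the RAM limit to be the hard ceiling, set both switches to the same value so no extra swap is allowed (the image name is a placeholder):

    # 150MB of RAM and 150MB of RAM + swap in total, i.e. no additional swap
    docker run -it -m 150m --memory-swap 150m myorg/myapp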

3. Is providing more memory a reliable fix?

Developers who do not understand the problem tend to conclude that the environment simply does not provide enough memory for the JVM. So the usual "solution" is to provide more memory, which actually makes things worse.

Let's assume that we change the daemon from 1GB to 8GB (created with "docker-machine create -d virtualbox --virtualbox-memory '8192' docker8192") and change the container memory limit from 150MB to 800MB:
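Reconstructed, that change looks roughly like this (the image name is a placeholder):

    docker-machine create -d virtualbox --virtualbox-memory '8192' docker8192
    eval $(docker-machine env docker8192)
    docker run -d --name mycontainer -p 8080:8080 -m 800m myorg/myapp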

Note that this time the "curl http://`docker-machine ip docker8192`:8080/api/memory" command does not even finish, because the MaxHeapSize calculated by the JVM in the 8GB environment is 2092957696 bytes (~2GB). Check it with "docker logs mycontainer | grep -i MaxHeapSize".

The application tries to allocate more than 1.6GB of memory, which exceeds the container's limit (800MB of RAM + 800MB of swap), so the process is killed.

Obviously, adding memory and letting the JVM pick its own parameters is not a good way to run Java in a container. When running a Java application inside a container, we should set the maximum heap size (the -Xmx parameter) based on the application's requirements and the container's limits.

4. Solution

A slight change to the Dockerfile allows the user to specify an environment variable that defines additional parameters for the JVM. Check the following lines:
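The original lines are not reproduced here, but the idea is along these lines; the base image and jar name are assumptions, and the shell-form ENTRYPOINT is what lets $JAVA_OPTIONS be expanded at startup:

    FROM openjdk:8-jdk-alpine
    ADD target/myapplication-fat.jar /app.jar
    ENV JAVA_OPTIONS ""
    ENTRYPOINT exec java $JAVA_OPTIONS -jar /app.jar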

Now we can use the JAVA_OPTIONS environment variable to tell the JVM the heap size. For this application, 300MB is enough. You can check the log later and see a value of 314572800 bytes (300MiB).

For Docker, you can use the "-e" switch to specify the environment variable.
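For example (container and image names are placeholders):

    docker run -d --name mycontainer -p 8080:8080 -m 800m -e JAVA_OPTIONS='-Xmx300m' myorg/myapp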

In Kubernetes, you can set the environment variable using the "--env=[key=value]" switch:
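A sketch, reusing the older kubectl run syntax quoted earlier in this article (the image name is a placeholder):

    kubectl run myapp --image=myorg/myapp --limits='memory=512Mi' --env="JAVA_OPTIONS=-Xmx300m"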

Go one step further

What if the heap size could be calculated automatically based on the container limit?

You can, using the base Docker image provided by the Fabric8 community. The image fabric8/java-jboss-openjdk8-jdk uses a script that reads the container limit and uses 50% of the available memory as the upper heap limit. Note that this 50% ratio can be overridden. The image also lets you enable/disable debugging, diagnostics, and more.

Let's take a look at what the Dockerfile looks like for this Spring Boot application:
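A sketch of what such a Dockerfile could look like; the JAVA_APP_JAR variable and the /deployments directory follow the fabric8 image's conventions as I understand them, so verify the exact names against the image documentation:

    FROM fabric8/java-jboss-openjdk8-jdk
    ENV JAVA_APP_JAR myapplication-fat.jar
    EXPOSE 8080
    ADD target/$JAVA_APP_JAR /deployments/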

Got it! Now, regardless of the container memory limit, our Java application always adjusts its heap size to the container, not to the daemon.

That's all for "how to run Java in Docker". Thank you for reading, and keep following the site for more practical articles!
