Java on the Advance: a Metamorphosis for the Cloud Native Era

2025-04-06 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/02 Report--

Author | Yi Li, Senior Technical Expert, Alibaba Cloud

Guide: With the advent of the cloud native era, what does it mean for Java developers? Some say cloud native has nothing to offer Java. The author of this article, however, believes that Java can still play the role of a "giant" in the cloud native era. Through a series of experiments, this article aims to broaden readers' horizons and provide food for thought.

In the field of enterprise software, Java is still the undisputed king, but it inspires both love and hate in developers. On the one hand, its rich ecosystem and mature tooling greatly improve development efficiency; on the other hand, at run time Java bears a reputation as a "memory devourer" and "CPU hog", and faces continuous challenges from languages old and new such as NodeJS, Python, and Golang.

In the technology community we often see people talking down Java, arguing that it no longer fits the trend of cloud native computing. Let's set those opinions aside for a moment and consider what cloud native actually demands of an application runtime.

- Smaller size: for microservice distributed architectures, a smaller size means less download bandwidth and faster distribution.
- Faster startup: for traditional monolithic applications, startup speed is less critical than running efficiency, because such applications are restarted and released relatively rarely. For microservice applications that require rapid iteration and horizontal scaling, however, faster startup means higher delivery efficiency and faster rollback; when releasing an application with hundreds of replicas, slow startup is a time killer. For Serverless applications, end-to-end cold start speed is even more critical: even if the underlying container technology can have resources ready in 100 milliseconds, users will feel the latency if the application cannot start within 500ms.
- Smaller resource footprint: a lower runtime footprint means higher deployment density and lower computing costs. Moreover, the JVM consumes a lot of CPU compiling bytecode at startup; reducing that consumption lowers resource contention and better protects the SLA of co-located applications.
- Support for horizontal scaling: the JVM's memory management makes it relatively inefficient with large heaps. Applications generally cannot improve performance simply by configuring a larger heap, and few Java applications can make effective use of 16GB of memory or more. Meanwhile, falling memory costs and the popularity of virtualization have made large-memory nodes commonplace. We therefore generally scale horizontally, deploying multiple replicas of an application, possibly several on a single compute node, to improve resource utilization.

Warm-up preparation

Most developers who have used the Spring framework will recognize Spring PetClinic. This article uses this famous sample application to demonstrate how to make a Java application smaller, faster, lighter, and more powerful!

We forked an example from IBM's Michael Thompson and made a few adjustments.

$ git clone https://github.com/denverdino/adopt-openj9-spring-boot
$ cd adopt-openj9-spring-boot

First, we build a Docker image for the PetClinic application. In the Dockerfile, we use OpenJDK as the base image, install Maven, download, compile, and package the Spring PetClinic application, and finally set the image's startup command.

$ cat Dockerfile.openjdk
FROM adoptopenjdk/openjdk8
RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/' /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y git maven
WORKDIR /tmp
RUN git clone https://github.com/spring-projects/spring-petclinic.git
WORKDIR /tmp/spring-petclinic
RUN mvn install
WORKDIR /tmp/spring-petclinic/target
CMD ["java", "-jar", "spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar"]

Build and run the image:

$ docker build -t petclinic-openjdk-hotspot -f Dockerfile.openjdk .
$ docker run --name hotspot -p 8080:8080 --rm petclinic-openjdk-hotspot
...
2019-09-11 01:58:23.156  INFO 1 --- [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-09-11 01:58:23.158  INFO 1 --- [main] o.s.s.petclinic.PetClinicApplication : Started PetClinicApplication in 7.458 seconds (JVM running for 8.187)

The application interface can be accessed through http://localhost:8080/.

Check the resulting Docker image: "petclinic-openjdk-hotspot" weighs in at 871MB, while the base image "adoptopenjdk/openjdk8" is only 300MB! The image is badly bloated.

$ docker images petclinic-openjdk-hotspot
REPOSITORY                  TAG     IMAGE ID      CREATED       SIZE
petclinic-openjdk-hotspot   latest  469f73967d03  26 hours ago  871MB

The reason: to build the Spring application, the image pulls in a series of compile-time dependencies such as Git and Maven, and produces a large number of temporary files. None of this is needed at run time.

The fifth factor of the well-known Twelve-Factor App methodology states it plainly: "Strictly separate build and run stages." Strictly separating the build and run stages not only improves application traceability and guarantees consistent delivery, it also reduces the size of the distributed artifact and reduces security risks.

Slimming the image

Docker provides multi-stage builds to help slim down images.

We divide the image construction into two phases:

In the "build" stage, we still use the JDK as the base image and build the application with Maven. In the final image, we use the JRE variant as the base image and copy the generated jar file directly from the "build" stage. The released image thus contains only what the runtime needs, with no compile-time dependencies, which greatly reduces its size.

$ cat Dockerfile.openjdk-slim
FROM adoptopenjdk/openjdk8 AS build
RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/' /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y \
    git \
    maven
WORKDIR /tmp
RUN git clone https://github.com/spring-projects/spring-petclinic.git
WORKDIR /tmp/spring-petclinic
RUN mvn install

FROM adoptopenjdk/openjdk8:jre8u222-b10-alpine-jre
COPY --from=build /tmp/spring-petclinic/target/spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar
CMD ["java", "-jar", "spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar"]

Check the new image: its size has dropped from 871MB to 167MB!

$ docker build -t petclinic-openjdk-hotspot-slim -f Dockerfile.openjdk-slim .
...
$ docker images petclinic-openjdk-hotspot-slim
REPOSITORY                       TAG     IMAGE ID      CREATED       SIZE
petclinic-openjdk-hotspot-slim   latest  d1f1ca316ec0  26 hours ago  167MB

Slimming the image greatly accelerates application distribution. Is there a way to optimize startup speed as well?

From JIT to AOT: speeding up startup

To address Java's startup bottleneck, we first need to understand how the JVM works. To achieve "write once, run anywhere", Java programs are compiled into architecture-neutral bytecode, which the JVM translates into native machine code at run time. This translation process largely determines how fast a Java application starts and runs. To improve execution efficiency, the JVM introduced the JIT compiler (Just-in-Time compiler), of which Sun/Oracle's HotSpot is the best-known implementation. HotSpot provides an adaptive optimizer that dynamically analyzes running code, finds the critical paths, and compiles and optimizes them. Its arrival greatly improved the execution efficiency of Java applications, and it became the default VM implementation from Java 1.4 onward. However, HotSpot only begins compiling bytecode after the process starts: execution is inefficient during startup, and the compilation and optimization work itself consumes substantial CPU, slowing startup further. Can we optimize this process and speed up startup?

Anyone familiar with Java's history will know IBM's J9 VM, the high-performance JVM behind IBM's enterprise software products, which helped IBM establish the dominance of its business application middleware. In September 2017, IBM donated J9 to the Eclipse Foundation, where it was renamed Eclipse OpenJ9 and began its open source journey.

OpenJ9 provides Shared Class Cache (SCC) and Ahead-of-Time (AOT) compilation technologies, which significantly reduce the startup time of Java applications.

The SCC is a memory-mapped file containing the J9 VM's bytecode execution analysis and already-compiled native code. With AOT compilation enabled, JVM compilation results are saved into the SCC and reused directly on subsequent JVM launches. Loading a precompiled implementation from the SCC is far faster and cheaper than JIT-compiling it at startup, so startup time improves markedly.

Let's build a Docker image for the application with AOT optimization enabled:

$ cat Dockerfile.openj9.warmed
FROM adoptopenjdk/openjdk8-openj9 AS build
RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/' /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y \
    git \
    maven
WORKDIR /tmp
RUN git clone https://github.com/spring-projects/spring-petclinic.git
WORKDIR /tmp/spring-petclinic
RUN mvn install

FROM adoptopenjdk/openjdk8-openj9:jre8u222-b10_openj9-0.15.1-alpine
COPY --from=build /tmp/spring-petclinic/target/spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar
# Start and stop the JVM to pre-warm the shared class cache
RUN /bin/sh -c 'java -Xscmx50M -Xshareclasses -Xquickstart -jar spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar &' ; sleep 20 ; ps aux | grep java | grep petclinic | awk '{print $1}' | xargs kill -1
CMD ["java", "-Xscmx50M", "-Xshareclasses", "-Xquickstart", "-jar", "spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar"]

The -Xshareclasses parameter enables the SCC, and -Xquickstart enables AOT.

In the Dockerfile we use a trick to warm up the SCC: during the image build we start the JVM to load the application with SCC and AOT enabled, then stop the JVM once the application is up. The generated SCC file is thereby baked into the Docker image.

Now build the Docker image and start the test application:

$ docker build -t petclinic-openjdk-openj9-warmed-slim -f Dockerfile.openj9.warmed-slim .
$ docker run --name hotspot -p 8080:8080 --rm petclinic-openjdk-openj9-warmed-slim
...
2019-09-11 03:35:20.192  INFO 1 --- [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-09-11 03:35:20.193  INFO 1 --- [main] o.s.s.petclinic.PetClinicApplication : Started PetClinicApplication in 3.691 seconds (JVM running for 3.952)

As you can see, startup time has dropped from 8.2s to around 4s, a reduction of nearly 50%.

In this scheme we shift the time-consuming compilation and optimization work to build time, and trade space for time by saving the precompiled SCC into the Docker image. When a container starts, the JVM loads the SCC directly via a memory-mapped file, improving both startup speed and resource consumption.

Another advantage of this approach: because Docker images are stored in layers, multiple application instances on the same host share the same SCC memory mapping, which greatly reduces memory consumption in high-density single-node deployments.

Let's compare resource consumption. First we launch four Docker application instances from the HotSpot VM-based image, then check their resource usage with docker stats after 30 seconds:

$ ./run-hotspot-4.sh
...
Wait a while ...
CONTAINER ID  NAME       CPU %  MEM USAGE / LIMIT     MEM %   NET I/O    BLOCK I/O  PIDS
0fa58df1a291  instance4  0.15%  597.1MiB / 5.811GiB   10.03%  726B / 0B  0B / 0B    33
48f021d728bb  instance3  0.13%  648.6MiB / 5.811GiB   10.90%  726B / 0B  0B / 0B    33
a3abb10078ef  instance2  0.26%  549MiB / 5.811GiB     9.23%   726B / 0B  0B / 0B    33
6a65cb1e0fe5  instance1  0.15%  641.6MiB / 5.811GiB   10.78%  906B / 0B  0B / 0B    33

Then we launch four instances from the OpenJ9 VM-based image and check their resource usage:

$ ./run-openj9-warmed-4.sh
...
Wait a while ...
CONTAINER ID  NAME       CPU %  MEM USAGE / LIMIT     MEM %  NET I/O      BLOCK I/O        PIDS
3a0ba6103425  instance4  0.09%  119.5MiB / 5.811GiB   2.01%  1.19kB / 0B  0B / 446MB       39
c07ca769c3e7  instance3  0.19%  119.7MiB / 5.811GiB   2.01%  1.19kB / 0B  16.4kB / 120MB   39
0c19b0cf9fc2  instance2  0.15%  112.1MiB / 5.811GiB   1.88%  1.2kB / 0B   22.8MB / 23.8MB  39
95a9c4dec3d6  instance1  0.15%  108.6MiB / 5.811GiB   1.83%  1.45kB / 0B  102MB / 414MB    39

Compared with HotSpot VM, the OpenJ9 instances' memory footprint drops from roughly 600MB on average to 120MB. Surprised?

Generally speaking, HotSpot's JIT can perform deeper and more comprehensive path optimization than AOT, yielding higher steady-state performance. To resolve this tension, OpenJ9's AOT SCC takes effect only during the startup phase; afterwards, the JIT continues to perform deep compilation optimizations such as branch prediction and code inlining.

For more technical background on OpenJ9 SCC and AOT, see:

https://www.ibm.com/developerworks/cn/java/j-class-sharing-openj9/index.html
https://www.ibm.com/developerworks/cn/java/j-optimize-jvm-startup-with-eclipse-openjj9/index.html

HotSpot has also made great strides with Class Data Sharing (CDS) and AOT, but IBM J9 is currently more mature in this area. We look forward to Alibaba's Dragonwell providing similar optimizations.

Food for thought: unlike statically compiled languages such as C/C++, Golang, and Rust, Java runs on a VM, which improves portability at the cost of some performance. Can we push AOT to the extreme and compile bytecode entirely into native code?

Native code compilation

To compile a Java application into native executable code, we first have to deal with the runtime dynamism of the JVM and of application frameworks. The JVM provides a flexible class loading mechanism, and Spring's dependency injection (DI) performs dynamic class loading and binding at run time. The Spring framework also makes heavy use of reflection, runtime annotation processors, and similar techniques. This dynamism improves the flexibility and usability of the application architecture, but it also slows application startup and makes AOT native compilation and optimization very complex.
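To make this dynamism concrete, here is a minimal, hypothetical Java sketch (illustrative only; the class names are invented and this is not PetClinic or Spring code). The implementation class is chosen from a string at run time via reflection, so an ahead-of-time compiler cannot statically determine which classes must be compiled and kept:

```java
// Hypothetical illustration of runtime dynamism: the concrete class is
// resolved by name only when load() runs, which is exactly what makes
// static (AOT) analysis of Java applications hard.
public class DynamicLoading {
    public interface Greeter {
        String greet();
    }

    public static class EnglishGreeter implements Greeter {
        public String greet() { return "hello"; }
    }

    // Reflective lookup and instantiation: the class name could equally
    // come from a config file, an annotation, or user input.
    public static Greeter load(String className) throws Exception {
        Class<?> clazz = Class.forName(className);
        return (Greeter) clazz.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        Greeter g = load("DynamicLoading$EnglishGreeter");
        System.out.println(g.greet());
    }
}
```

GraalVM's native-image can handle such patterns only when the reflected classes are registered ahead of time, which is why frameworks that avoid reflection are a better fit for native compilation.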

To address these challenges, the community has produced many interesting explorations, and Micronaut is one of the outstanding representatives. Unlike the Spring framework, Micronaut provides compile-time dependency injection and AOP processing, and minimizes the use of reflection and dynamic proxies. Micronaut applications start faster and use less memory. Even more interesting for our purposes, Micronaut can work with GraalVM to compile Java applications into native code that runs at full speed. Note: GraalVM is a new universal virtual machine from Oracle that supports multiple languages and can compile Java applications into native binaries.
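The contrast with runtime DI can be pictured in a plain-Java sketch (hypothetical names; this is not Micronaut's actual API or its generated code). Instead of scanning the classpath and reflecting at startup, a compile-time annotation processor emits wiring code during the build, so startup reduces to direct constructor calls:

```java
// Hypothetical sketch of compile-time dependency injection. In Micronaut,
// an annotation processor would generate code roughly equivalent to
// OwnerServiceFactory at build time; no reflection runs at startup.
public class CompileTimeDI {
    public static class OwnerRepository {
        public String findOwner() { return "George Franklin"; }
    }

    public static class OwnerService {
        private final OwnerRepository repo;

        // Constructor injection: dependencies are explicit parameters.
        public OwnerService(OwnerRepository repo) { this.repo = repo; }

        public String describe() { return "owner: " + repo.findOwner(); }
    }

    // What generated wiring amounts to: plain, reflection-free construction.
    public static class OwnerServiceFactory {
        public static OwnerService create() {
            return new OwnerService(new OwnerRepository());
        }
    }

    public static void main(String[] args) {
        OwnerService service = OwnerServiceFactory.create();
        System.out.println(service.describe());
    }
}
```

Because all wiring is ordinary code, a closed-world compiler such as GraalVM native-image can analyze and optimize it without extra reflection configuration.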

To begin our adventure, we take the Micronaut version of the PetClinic sample project provided by Mitz and make a few adjustments. (Using GraalVM 19.2.)

$ git clone https://github.com/denverdino/micronaut-petclinic
$ cd micronaut-petclinic

The Dockerfile for the image is as follows:

$ cat Dockerfile
FROM maven:3.6.1-jdk-8 as build
COPY ./ /micronaut-petclinic/
WORKDIR /micronaut-petclinic
RUN mvn package

FROM oracle/graalvm-ce:19.2.0 as graalvm
RUN gu install native-image
WORKDIR /work
COPY --from=build /micronaut-petclinic/target/micronaut-petclinic-*.jar .
RUN native-image --no-server -cp micronaut-petclinic-*.jar

FROM frolvlad/alpine-glibc
EXPOSE 8080
WORKDIR /app
COPY --from=graalvm /work/petclinic .
CMD ["/app/petclinic"]

Among them:

- In the "build" stage, Maven builds the Micronaut version of the PetClinic application.
- In the "graalvm" stage, native-image converts the PetClinic jar file into an executable binary.
- In the final stage, the native executable is copied into an Alpine Linux base image.

Build the application:

$docker-compose build

Start the test database

$docker-compose up db

Start the test application

$ docker-compose up app
micronaut-petclinic_db_1 is up-to-date
Starting micronaut-petclinic_app_1 ... done
Attaching to micronaut-petclinic_app_1
app_1  | 04:57:47.571 [main] INFO  org.hibernate.dialect.Dialect - HHH000400: Using dialect: org.hibernate.dialect.PostgreSQL95Dialect
app_1  | 04:57:47.649 [main] INFO  org.hibernate.type.BasicTypeRegistry - HHH000270: Type registration [java.util.UUID] overrides previous : org.hibernate.type.UUIDBinaryType@5f4e0f0
app_1  | 04:57:47.653 [main] INFO  o.h.tuple.entity.EntityMetamodel - HHH000157: Lazy property fetching available for: com.example.micronaut.petclinic.owner.Owner
app_1  | 04:57:47.656 [main] INFO  o.h.e.t.j.p.i.JtaPlatformInitiator - HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
app_1  | 04:57:47.672 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 159ms. Server Running: http://1285c42bfcd5:8080

The application now starts in a lightning-fast 159ms, roughly 1/50 of the HotSpot VM figure!

Micronaut and GraalVM are both evolving rapidly, and migrating a Spring application still involves a lot of work. In addition, GraalVM's tool chain for debugging and monitoring is not yet mature. But we can see the dawn: the worlds of Java and Serverless are no longer far apart. Space is limited here; readers interested in GraalVM and Micronaut can refer to:

https://docs.micronaut.io/latest/guide/index.html#graal
https://www.exoscale.com/syslog/how-to-integrate-spring-with-micronaut/

Summary and postscript

Like an advancing giant, Java technology continues to evolve in the cloud native era. Since JDK 8u191 and JDK 10, the JVM has become much better at recognizing the resource limits of Docker containers. Meanwhile, the community is exploring the boundaries of the Java stack in several directions. OpenJ9, a traditional VM, maintains high compatibility with existing Java applications while carefully optimizing startup speed and memory footprint, making it well suited to existing Spring-based microservice architectures. Micronaut and GraalVM, by contrast, take a different path: by changing the programming model and compilation process, they handle the application's dynamism as early as possible, at compile time, dramatically optimizing startup, with a promising future in the Serverless field. All of these design ideas are worth learning from.

In the cloud native era, we should be able to effectively split and recompose the development, delivery, and operations processes across the application lifecycle to improve collaboration efficiency; and across the vertical software stack, we should optimize at multiple levels, including the programming model, the application runtime, and the infrastructure, to radically simplify systems and improve their efficiency.

This article was written on the train journey to Alibaba Group's 20th anniversary celebration; the September 10 annual gathering was an unforgettable experience. Thanks to Mr. Ma, to Alibaba, and to this era; thanks to all the friends who help and support us, and to every technologist chasing a dream. Together we are building the cloud native future.

The Alibaba Cloud Native WeChat official account (ID: Alicloudnative) focuses on technology areas such as microservices, Serverless, containers, and Service Mesh, tracks popular cloud native trends and large-scale cloud native production practices, and is the official account that best understands cloud native developers.
