
Reshaping the Java language on the cloud

2025-01-16 Update From: SLTechnology News&Howtos


Music has no borders, but musicians do, and cloud native is similar: it does not prescribe a programming language, yet the language an application is written in largely determines how that application behaves once deployed. Java is now more than 20 years old; it has a wealth of excellent enterprise-level frameworks, embodies OOP philosophy, and emphasizes rigor, stability, and high performance over long-running workloads. In today's cloud scenarios, however, which demand fast, iterative delivery, language simplicity seems to be the primary requirement, and traditional Java can look a bit heavy. In this article, Yu Lei (nickname: Liang Xi), a technical expert on the Alibaba JVM team, shares how the team faces the group's huge business scale and complex business scenarios.

ElasticHeap

Java is often criticized for its resource consumption, most notably the memory occupied by the heap: even when no requests are being processed and no objects are being allocated, the process still retains the entire heap memory space so that the GC can allocate and operate on memory quickly. During this year's Double Eleven, AJDK's ZenGC/ElasticHeap fully supported hundreds of core-link applications and hundreds of thousands of instances.

JDK 12 introduced support for triggering concurrent marking at fixed intervals and shrinking the Java heap during remark to return memory to the OS. However, it did not solve the problem of the increased stop-the-world pause time, so memory could not be returned at every young GC. ElasticHeap moves the cost of repeated map/unmap calls and page-fault handling into concurrent asynchronous threads, so every young GC can return memory promptly, or quickly re-commit it when load returns.

ElasticHeap at Alibaba

Scenario 1: Predictable traffic peaks

Scenario 2: Running multiple Java instances on a single machine

The traffic received by multiple Java instances is relatively random and the peaks do not overlap, so in idle periods the overall memory footprint of the instances can be cut substantially and deployment density improved. Double Eleven verified this: the core trading systems ran ElasticHeap in low-power mode, greatly reducing their WSS (Working Set Size).
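The memory returned to the OS shows up inside the JVM as a drop in committed heap. The snippet below is a minimal sketch using only the standard MemoryMXBean API to watch used versus committed heap; it assumes a collector that can uncommit idle heap (ElasticHeap, or the periodic G1 uncommit added in JDK 12) and does not show any Dragonwell-specific flags.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Prints used vs. committed heap so you can watch committed memory shrink
// after a GC when the collector uncommits idle heap.
public class HeapFootprint {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 10; i++) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            System.out.printf("used=%d MB, committed=%d MB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20);
            Thread.sleep(5_000);
        }
    }
}
```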

Static compilation

Many new applications on the cloud choose Go, largely because Go programs have no runtime dependency: they are statically compiled, start quickly, and need no JIT warm-up. With so much Java code at Alibaba, how do we give Java the same capability? Java static compilation is a fairly radical AOT technology: a separate compilation phase turns a Java program into native code, which at run time needs no Java virtual machine or runtime environment, only operating-system library support. Static compilation thus combines the Java language with native executables, producing a directly bootable native program that behaves like Java while keeping the advantages of both. Working closely with the SOFAStack team, the JVM team first applied static compilation to middleware applications, cutting one application's startup time from 60 seconds to 3.8 seconds. The statically compiled application ran stably throughout Double Eleven without failures, GC pauses stayed within 100 milliseconds (within the range the business allows), and memory consumption and RT matched those of the traditional Java application.
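The article does not detail the toolchain, so the sketch below only illustrates the general idea of ahead-of-time compilation, using the publicly available GraalVM native-image tool as an analogue; it is not Alibaba's internal implementation, and the build commands in the comment are the standard GraalVM ones.

```java
// A trivial program used to illustrate Java static (AOT) compilation.
// With GraalVM's native-image tool (shown only as a public analogue of the
// approach described above) it can be built into a standalone executable
// that starts without class loading or JIT warm-up:
//
//   javac HelloWorld.java
//   native-image HelloWorld     # produces a standalone native executable
//
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("started as a native image, no JIT warm-up needed");
    }
}
```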

To sum up, statically compiled applications cut startup time from about 60 seconds to 3.8 seconds (roughly a 16x speed-up) while stability, resource consumption, RT, and other indicators remain essentially the same as for traditional Java applications.

Wisp2

Suppose you use the coolest Vert.x to build a simple web service and are ready to show off its performance, the QA team hands you a 1C 2G container for load testing, and you find you still cannot match the equivalent Go application. Investigation shows that the coroutine model performs far better in such few-core cases. Has time moved on and left Java behind? AJDK Wisp2 answers this question: Java can also have high-performance coroutines. This year was the first year Wisp2 was rolled out at scale. It supports coroutine scheduling throughout the Java runtime, so a blocking call such as Socket.getInputStream().read() becomes a much lighter coroutine switch. It is fully compatible with the Thread API: on a JDK with Wisp2 enabled, Thread.start() actually creates a coroutine (a lightweight thread). By comparison, Go exposes only the go keyword for coroutines and no thread interface; we likewise provide a single way to create coroutines, so applications can switch to coroutines transparently. Wisp2 supports work stealing, its scheduling strategy is particularly suited to web scenarios, and scheduling overhead stays very small under high load. On Double Eleven this year, Wisp supported hundreds of applications and 100,000 containers, 90% of which had been upgraded to Wisp2.
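Because Wisp2 keeps the Thread API, existing blocking-IO code does not need to change; only the JVM is switched. The sketch below is ordinary thread-per-connection Java; the enabling flags in the comment reflect my understanding of the Dragonwell option and should be treated as an assumption to check against the Dragonwell documentation.

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Ordinary blocking-IO, thread-per-connection code. On a Wisp2-enabled JDK
// (reportedly launched with -XX:+UnlockExperimentalVMOptions -XX:+UseWisp2;
// treat the exact flags as an assumption), Thread.start() creates a coroutine
// and the blocking read() below becomes a cheap coroutine switch instead of
// parking an OS thread.
public class EchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> {
                    try (Socket s = client; InputStream in = s.getInputStream()) {
                        byte[] buf = new byte[1024];
                        int n;
                        while ((n = in.read(buf)) != -1) { // blocking call: coroutine yield under Wisp2
                            s.getOutputStream().write(buf, 0, n);
                        }
                    } catch (Exception ignored) {
                    }
                }).start();
            }
        }
    }
}
```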

Near the peak, the CPU usage of the Wisp2 machines is about 7% lower (Wisp1 was lower still; Wisp2 is oriented toward RT, so its CPU is somewhat higher), which mainly comes from the sys CPU saved by lightweight scheduling. At the zero point the CPU usage of the two groups is equal, which also shows that what Wisp2 removes is scheduling overhead: when CPU is low and scheduling is not under pressure, there is no gap.

From the RT point of view, the RT of the Wisp2 machines is about 20% lower. One reason the reduction is so visible is that these machines are under very high CPU pressure, where the scheduling advantage of coroutines shows more easily. This advantage helps the system reach a higher water level and improves overall utilization without worrying that RT climbs too high and triggers a system avalanche.

FDO

A clear CPU spike appears in the minutes right after midnight on Double Eleven. Data analysis shows the main cause is that Double Eleven midnight triggers JIT compilation and deoptimization. For example, the program contains logic like: if (is1111(LocalDate.now())) { branch2 } else { branch3 }. If only branch3 was taken during warm-up, the JIT has reason to assume branch3 will keep being taken and will not compile branch2. At midnight we enter branch2, which forces the method to be deoptimized and recompiled. Let's look at how AJDK addresses this problem with profiling.

When the JDK runs code, Java methods are compiled dynamically with tiered compilation. At the highest tier (peak performance), the compiler makes optimistic assumptions based on the profile it has collected; once those assumptions are violated, deoptimization occurs. For example, a piece of code in a hot method that only executes on Double Eleven will not be compiled during warm-up, and when it finally runs on Double Eleven it triggers deoptimization of the whole method. Deoptimization has two negative effects: the affected methods fall back from efficient compiled execution to interpretation, which is over 100 times slower, and methods deoptimized during peak traffic are quickly recompiled, so the compiler threads consume CPU. Because Double Eleven traffic ramps up sharply in a short time and differs from the warm-up traffic, the harm done by deoptimization is especially visible.

FDO stands for feedback-directed optimization: compilation information from previous JVM runs is used to guide better compilation in the current run. Concretely, we took a two-level approach to reducing deoptimization. First, the deoptimization information from each run is recorded to a file; the next run reads this file and consults it when deciding whether to make an optimistic assumption, lowering the probability of deoptimizing. The data show that the most frequent deoptimizations are related to if/else branches, accounting for more than half of the total. Second, for paths where if/else deoptimization has previously occurred, we provide a way to turn off all the related optimistic assumptions on that path (a runnable sketch of this kind of branch appears after the first-minute results below).

FDO went live for this year's Double Eleven with two goals: first, to reduce the excessive CPU spike caused by the Double Eleven midnight traffic peak coinciding with the deoptimization/compilation peak; second, to fix low warm-up efficiency, where even after long warm-up, raising the load during stress tests was still accompanied by a lot of compilation and deoptimization. For the first goal, we collected deoptimization counts, C2 compilation counts, and CPU data for the first minute of the Double Eleven peak. With FDO enabled, C2 compilations during the peak dropped by about 45% and deoptimizations by about 70%; CPU in the first peak minute fell from about 67.5% to 63.1%, a reduction of roughly 7%.
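As promised above, here is a runnable sketch of the kind of branch that causes the problem; is1111, the pricing logic, and the dates are illustrative stand-ins expanded from the article's one-line pseudocode, not real trading code.

```java
import java.time.LocalDate;
import java.time.MonthDay;

// Illustrates the branch-profile problem described above: if only the "normal"
// branch runs during warm-up, the JIT may compile the hot method without the
// Double Eleven branch; the first time that branch executes, the method is
// deoptimized back to the interpreter and later recompiled, costing CPU at the
// worst possible moment. FDO records such deoptimizations and feeds them into
// the next run so the optimistic assumption is not made again.
public class PromotionPricing {
    static boolean is1111(LocalDate date) {
        return MonthDay.from(date).equals(MonthDay.of(11, 11));
    }

    static long price(long basePrice, LocalDate today) {
        if (is1111(today)) {
            return basePrice / 2;  // "branch2": only taken on Double Eleven
        } else {
            return basePrice;      // "branch3": the only branch seen during warm-up
        }
    }

    public static void main(String[] args) {
        // Warm-up traffic: branch3 only, so compiled code may omit branch2.
        for (int i = 0; i < 1_000_000; i++) {
            price(100, LocalDate.of(2019, 10, 1));
        }
        // At midnight on 11.11 the untaken branch is hit and triggers deoptimization.
        System.out.println(price(100, LocalDate.of(2019, 11, 11)));
    }
}
```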

The second goal can be verified with the CPU data for the first minute of the stress test. With FDO enabled, CPU utilization in the first minute of the stress test drops from 66.19% to 60.33%, about 10% lower.

Grace

ZProfiler has long been the group-wide go-to tool for investigating all kinds of problems in Java applications. Grace, its platform-based successor, adds a series of optimizations: it moves from the original single-machine version to a Master/Worker architecture and introduces a task queue so that user tasks queue up under heavy load instead of overloading the Workers. Maintainability, extensibility, and user experience have improved qualitatively, laying a solid foundation for taking the tooling platform to the cloud and to open source. The Heap Dump function has already been integrated; on top of the inherited ZProfiler functionality it adds further optimizations, upgrades the parsing engine, and supports a more complete OQL syntax.
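As a side note on the Heap Dump feature: the .hprof files that tools like ZProfiler/Grace (or Eclipse MAT) analyze can be produced from inside a running JVM with the standard HotSpot diagnostic MXBean. A minimal sketch, with an arbitrary output file name:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Writes an .hprof heap dump of the current JVM, the kind of file a
// heap-analysis tool then parses.
public class DumpHeap {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Second argument: dump only live objects (forces a full GC first).
        bean.dumpHeap("heap-" + System.currentTimeMillis() + ".hprof", true);
    }
}
```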

JDK 11

JDK 8, as a classic version, is in large-scale use. Migrating from JDK 6 and 7 had its pains, but the general feedback after upgrading is: "so worth it." The next stable release after OpenJDK 8 is OpenJDK 11, and the JVM team is naturally following this direction actively; JDK 11 now supports the Wisp2 and multi-tenant features available on JDK 8. Some clusters were moved to JDK 11 for this Double Eleven and have run stably. Will upgrading to JDK 11 bring the same surprises as upgrading to JDK 8 did? On JDK 11 we can experience the latest ZGC.

ZGC

JDK 11 introduces an important feature: the ZGC garbage collector. This collector promises to keep pause times within 10 ms on heaps ranging from tens of gigabytes to several terabytes. Many Java developers have long struggled with latency caused by GC pauses, so ZGC's short pauses will undoubtedly become a new favorite. ZGC is still experimental in OpenJDK, JDK 11 is not yet widespread in the industry, and JDK 11 supports ZGC only on Linux (ZGC for macOS and Windows is expected in JDK 14, due in March 2020), so many developers can only look on in wait-and-see mode. But why should that stop us from being the first to try it? The Alibaba JVM team and the database team started running database applications on ZGC and improved ZGC based on the results, including optimizing ZGC's page caching mechanism and its trigger timing. Since September the two teams have run online database applications on ZGC; they have run stably for two months and passed the Double Eleven exam. Feedback from the online deployment: first, JVM pause times stay within the official 10 ms; second, ZGC greatly improves the average RT and reduces RT spikes for the online clusters.

Summary

The features above show that AJDK has evolved beyond a traditional managed runtime. Going forward, it will keep improving the development experience of applications on the cloud, using innovation at the bottom of the stack to open up more possibilities for the applications above.

Using AJDK features on Dragonwell

The features battle-tested on Double Eleven will be open-sourced and delivered to users through Dragonwell, so stay tuned. Alibaba Dragonwell 8 is a free OpenJDK distribution with long-term support, including performance enhancements and security fixes. It currently supports the x86-64/Linux platform and can substantially improve stability, efficiency, and performance for large-scale Java deployments in data centers. Alibaba Dragonwell 8 is a friendly fork of OpenJDK under the same license; it is compatible with the Java SE standard, and users can develop and run Java applications with it. It is the open-source edition of Alibaba's internal OpenJDK build, which is customized for online e-commerce, finance, and logistics workloads and runs in Alibaba's super-large data centers on more than 1,000,000 servers. We are currently preparing the release of Alibaba Dragonwell 11, a Dragonwell release based on OpenJDK 11 with more features, more enablement for cloud scenarios, clearer modularity, and long-term support; we recommend keeping an eye on it and upgrading in time.
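For readers who want to try the ZGC setup described above outside Alibaba, ZGC on JDK 11 is enabled with experimental flags (shown in the comment); the tiny program below simply prints which collectors the running JVM is using, so you can confirm ZGC is active.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Launch on JDK 11 with:
//   java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC WhichGC
// (in later JDK releases ZGC is no longer experimental and the unlock flag
// is unnecessary).
public class WhichGC {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC in use: " + gc.getName());
        }
    }
}
```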

The original article is Alibaba Cloud content and may not be reproduced without permission.
