
What are the mobile JS engines?


This article mainly introduces the JS engines available on mobile. Many people have doubts about which mobile JS engine to pick, so I have gathered the material and sorted it into a simple, practical overview. I hope it helps answer the question "what mobile JS engines are there?" Let's get started.

Key points of JS engine selection

As the most popular scripting language in the world, JavaScript has many engine implementations: Apple's JavaScriptCore, V8 with the strongest performance, and the recently popular QuickJS. How do you pick the most suitable one among these JS engines? Personally, I think there are several considerations:

Performance: this goes without saying; the faster the better.

Package size: the JS engine adds to the app's package size.

Memory footprint: the less memory used, the better.

JavaScript syntax support: the more new syntax features supported, the better.

Ease of debugging: does it support debugging out of the box, or do you have to build and maintain the debugging toolchain yourself?

App store rules: mainly the iOS platform, which forbids apps from integrating virtual machines with JIT.

The trouble is that none of these points is independent of the others:

For example, V8 with JIT has the best performance, but the engine is large and its memory footprint is high; QuickJS, which wins on package size, lacks JIT and is on average 5-10 times slower than engines that have it.

Below, I will take the points just mentioned and walk through four JSVMs: JavaScriptCore, V8, Hermes and QuickJS, discussing their strengths and characteristics, and then their shortcomings.

JS engine feature showdown

1. JavaScriptCore


JavaScriptCore is WebKit's built-in default JS engine. It does not even have an independent Wikipedia entry and is only introduced in a third-level section [1] of the WebKit entry, which feels a bit unfair; after all, it is a veteran JS engine.

Since Apple open-sourced WebKit early on, the WebKit engine is used in Apple's own Safari browser and WebView. On iOS in particular, Apple's restrictions mean all web pages can only be loaded with WebKit, so WebKit holds a de facto monopoly on iOS. JSC, as part of the WebKit module, rides on that policy and likewise "basically" monopolizes the JS engine share on the iOS platform.

Monopoly aside, JSC's performance is actually fine.

Many people don't know that JSC actually shipped a JIT earlier than V8; it was the best JS engine more than a decade ago, before V8 overtook it. JSC also has a major advantage: since iOS 7 it has been exposed to developers as a system-level framework, which means that if your app uses JSC you only need to import it in the project, and the package size overhead is zero. In this respect, JSC has the biggest advantage of all the JS engines discussed today.
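
To see how light the integration is, here is a minimal sketch of embedding JSC through its C API, which the system JavaScriptCore framework exposes (illustration only; a real app would add error handling, and on iOS you would more often use the Objective-C/Swift JSContext wrapper):

// Minimal JSC embedding sketch; on iOS/macOS just link the system JavaScriptCore.framework.
#include <JavaScriptCore/JavaScriptCore.h>
#include <cstdio>

int main() {
    JSGlobalContextRef ctx = JSGlobalContextCreate(nullptr);   // one JS VM + global object
    JSStringRef source = JSStringCreateWithUTF8CString("6 * 7");
    JSValueRef result = JSEvaluateScript(ctx, source, nullptr, nullptr, 1, nullptr);
    printf("result = %.0f\n", JSValueToNumber(ctx, result, nullptr));   // prints: result = 42
    JSStringRelease(source);
    JSGlobalContextRelease(ctx);
    return 0;
}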

Although JSC's performance with JIT enabled is very good, JIT is only enabled by default inside Apple's Safari browser and WKWebView; if you embed JSC into your own project directly, JIT is disabled. Why is that? RednaxelaFX [2] gave a very professional explanation [3]:

JIT compilation requires the underlying system to support dynamic code generation, which for the operating system means supporting dynamic allocation of memory pages with "writable and executable" permissions. When an application has permission to allocate writable-executable memory pages, it is more vulnerable to attacks that dynamically generate and execute arbitrary code, making it easier for malicious code to exploit.

For security reasons, Apple forbids third-party apps from enabling JIT when using JSC. React Native's JavaScript Runtime page [4] also explains this. In practice, though, if JS is only used as a glue language without heavy CPU work, JSC without JIT is more than enough.
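
To make the quoted explanation concrete: a JIT has to allocate memory pages that are writable and executable at the same time, write machine code into them, and jump into that code. The following is a tiny POSIX sketch of that pattern, purely for illustration (it is not code from JSC or V8, and it is exactly what iOS forbids third-party apps from doing):

#include <sys/mman.h>
#include <cstdio>
#include <cstring>

int main() {
    // Ask the OS for a page that is readable, writable AND executable --
    // the capability Apple withholds from third-party apps.
    void* page = mmap(nullptr, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    // x86-64 machine code for: mov eax, 42; ret
    const unsigned char code[] = {0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3};
    memcpy(page, code, sizeof(code));

    int (*fn)() = reinterpret_cast<int (*)()>(page);
    printf("%d\n", fn());   // 42 -- code generated at run time has just executed
    munmap(page, 4096);
    return 0;
}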

The above discussion is all about the iOS system, and on the Android system, JSC's performance is not satisfactory.

JSC has not been well adapted to Android devices. Although JIT can be enabled there, the performance is still poor, which is one of the reasons Facebook decided to build Hermes. A detailed performance comparison can be found in the Hermes section of this article.

Finally, debugging support. On iOS we can debug directly with Safari's debugger; on Android I have not yet found a good way to debug on a real device.

Overall, JavaScriptCore has a very obvious home advantage on the iOS platform, where all its metrics are excellent, but on Android its performance suffers from a lack of optimization.

2. V8


V8 needs little introduction; it plays a big part in where JavaScript is today, and its JIT is the strongest in the industry (not just among JS engines). There are plenty of articles about V8, so I won't repeat them here; let's talk about how V8 behaves on mobile.

As another Google product, every Android phone ships with a Chromium-based WebView, and V8 is bundled with it. But V8 is bound too tightly to Chromium; unlike JavaScriptCore on iOS, it is not packaged as a system library that any app can call. As a result, if you want to use V8 on Android you have to package it yourself. The best-known community project is J2V8 [5], which provides Java bindings for V8.
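
At the C++ level, embedding V8 directly looks roughly like V8's own hello-world sample; this is the kind of flow J2V8 wraps behind its Java API. A condensed sketch is below (API details shift between V8 versions, so treat it as a reference shape rather than copy-paste code):

#include <libplatform/libplatform.h>
#include <v8.h>
#include <cstdio>
#include <memory>

int main() {
    // Process-wide one-time initialization.
    std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
    v8::V8::InitializePlatform(platform.get());
    v8::V8::Initialize();

    v8::Isolate::CreateParams params;
    params.array_buffer_allocator = v8::ArrayBuffer::Allocator::NewDefaultAllocator();
    v8::Isolate* isolate = v8::Isolate::New(params);
    {
        v8::Isolate::Scope isolate_scope(isolate);
        v8::HandleScope handle_scope(isolate);
        v8::Local<v8::Context> context = v8::Context::New(isolate);
        v8::Context::Scope context_scope(context);

        v8::Local<v8::String> source = v8::String::NewFromUtf8Literal(isolate, "6 * 7");
        v8::Local<v8::Script> script = v8::Script::Compile(context, source).ToLocalChecked();
        v8::Local<v8::Value> result = script->Run(context).ToLocalChecked();
        printf("%d\n", result->Int32Value(context).FromJust());   // 42
    }
    isolate->Dispose();
    v8::V8::Dispose();
    delete params.array_buffer_allocator;
    return 0;
}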

V8's performance needs no elaboration, and JIT can be enabled on Android, but these advantages come at a price: with JIT on, V8's memory consumption is high, and its package size is not small either (about 7 MB). For a Hybrid system that only uses JS to drive the UI, that is a bit extravagant.

Let's talk about the integration of V8 on iOS.

V8 introduced JIT-less V8 [6] in 2019, i.e. turning off JIT and executing JS with the Ignition interpreter only, so integrating V8 on iOS became possible, since Apple does allow embedding a virtual machine that only has an interpreter. However, I don't think bringing a JIT-less V8 to iOS is worthwhile: with only the interpreter enabled, V8's performance is similar to JSC's, while it still adds size overhead.
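
If you do want to experiment with that route, V8 can be forced into interpreter-only mode by setting the --jitless flag before initialization, for example (a two-line sketch that assumes the embedding setup shown earlier):

// Must run before v8::V8::Initialize(); afterwards V8 executes only with the Ignition interpreter.
v8::V8::SetFlagsFromString("--jitless");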

Another interesting but rarely mentioned V8 feature is heap snapshots, which V8 has supported since 2015 [7]; few people in the community talk about it.

What is the principle behind heap snapshots? Generally, after a JSVM starts, the first step is to parse the JS files, which is time-consuming. V8 can pre-generate a heap snapshot and load it directly into heap memory to obtain the initialized JS context quickly. The cross-platform framework NativeScript [8] uses this technique to speed up JS loading by a factor of three; technical details are in their blog post [9].

[Figure: V8 heap snapshots]
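
For embedders, a rough sketch of producing and consuming such a snapshot with v8::SnapshotCreator looks like the following (it assumes the platform has already been initialized as in the earlier sketch; constructor arguments and field names vary between V8 versions, so treat it as an outline rather than drop-in code):

#include <v8.h>

// Build-time step: run the expensive initialization JS once and freeze the resulting heap.
v8::StartupData MakeSnapshot() {
    v8::SnapshotCreator creator;                       // owns its own isolate
    {
        v8::Isolate* isolate = creator.GetIsolate();
        v8::HandleScope scope(isolate);
        v8::Local<v8::Context> context = v8::Context::New(isolate);
        v8::Context::Scope context_scope(context);
        // ... evaluate the framework / polyfill JS you want baked into the snapshot ...
        creator.SetDefaultContext(context);
    }
    return creator.CreateBlob(v8::SnapshotCreator::FunctionCodeHandling::kClear);
}

// Run-time step: hand the blob to new isolates so that JS never has to be re-parsed.
v8::Isolate* NewIsolateFromSnapshot(v8::StartupData* blob) {
    v8::Isolate::CreateParams params;
    params.snapshot_blob = blob;
    params.array_buffer_allocator = v8::ArrayBuffer::Allocator::NewDefaultAllocator();
    return v8::Isolate::New(params);
}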

Debugging V8 on a real device also requires a third-party library. Someone in the Android community has extended the Chrome debugging protocol onto J2V8 in the J2V8-Debugger [10] project. I have not found a comparable project for iOS, so there you may need to implement that extension yourself.

Overall, V8 really is the performance king among JSVMs, and on Android it can show its full power, but it is not recommended on iOS, where it plays away from home.

3. Hermes


Hermes is a JS engine that Facebook open-sourced in mid-2019. Its release notes [11] make clear that it is a JS engine built specifically for React Native; you could say it was designed for Hybrid UI systems from the start.

Hermes was launched to replace the original JS engine on the RN Android side, i.e. JavaScriptCore (because JSC performed too poorly on Android). Line up the timeline: after Facebook announced Hermes as open source [12] on 2019-07-12, maintenance of jsc-android [13] stopped for good at 2019-06-25 [14]. The signal is very clear: we are no longer maintaining JavaScriptCore for Android; everyone should use our Hermes.

Recently Hermes has also been slated to land on iOS with React Native 0.64, although the RN release blog post has not been published yet. You can read my earlier interpretation of the Apple developer agreement (section 3.3.2), so I won't say more about that here.

Hermes has two main characteristics: it does not support JIT, and it supports directly generating and loading bytecode. Let's take them one at a time.

There are two main reasons why Hermes does not support JIT. First, with JIT the JS engine needs a longer warm-up at startup, which to some extent lengthens the first-screen TTI [15] (time to interactive on first load); front-end pages nowadays all chase second-level rendering, and TTI is an important metric. Second, JIT increases package size and memory footprint; V8's JIT bears part of the blame for Chrome's high memory usage.

Because it lacks JIT, Hermes is at a disadvantage in CPU-intensive computation. So in a Hybrid system the best approach is to let JavaScript play its glue-language role: do CPU-intensive work (matrix transforms, parameter encryption, and so on) in Native, then pass the results to JS for rendering in the UI. That balances both performance and development efficiency.

The most striking thing about Hermes is its support for generating bytecode. My earlier post "What is the core technology of cross-end frameworks?" also mentioned that with Hermes's AOT approach, the Babel, Minify, Parse and Compile steps are all done on the developer's machine; you only ship the bytecode for Hermes to run. Let's demonstrate with a small demo.


First write a test.js file (its contents can be anything), then build Hermes from source; the build process follows the documentation [16] directly, so I'll skip it here.

First of all, Hermes supports directly interpreting and running JS code, which is the normal JS loading, compiling and running process.

hermes test.js

We can add the -emit-binary option to try generating bytecode:

hermes -emit-binary -out test.hbc test.js

A test.hbc bytecode file is then generated:

[Screenshot: the generated test.hbc bytecode file]

Finally, we can have Hermes load and run the test.hbc file directly:

hermes test.hbc

To evaluate Hermes's bytecode objectively: on the plus side, it skips the parse and compile steps inside the JS engine, which greatly speeds up JS loading and shows up on the UI as a noticeably shorter TTI. Another advantage is that Hermes bytecode was designed with mobile constraints in mind: it supports incremental loading instead of loading everything at once, which is friendlier to low- and mid-range Android phones with limited memory. The downside is that the bytecode is larger than the original JS file, but given how small the Hermes engine itself is, the size increase is acceptable.
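
On the embedding side, Hermes is exposed through the JSI C++ interface. The sketch below is an assumption-laden illustration: it assumes the hermes/hermes.h and jsi/jsi.h headers are available and that evaluateJavaScript accepts a bytecode buffer (which matches how React Native loads .hbc bundles); exact signatures may differ between Hermes versions:

#include <hermes/hermes.h>
#include <jsi/jsi.h>
#include <fstream>
#include <iterator>
#include <memory>
#include <string>

int main() {
    // Read the file produced above by: hermes -emit-binary -out test.hbc test.js
    std::ifstream in("test.hbc", std::ios::binary);
    std::string bytes((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>());

    std::unique_ptr<facebook::jsi::Runtime> runtime = facebook::hermes::makeHermesRuntime();
    // The same call accepts plain JS source or Hermes bytecode; the runtime
    // recognizes the bytecode magic header and skips parse/compile for .hbc input.
    runtime->evaluateJavaScript(
        std::make_shared<facebook::jsi::StringBuffer>(std::move(bytes)), "test.hbc");
    return 0;
}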

Regarding detailed Hermes performance tests, there are two good articles online. One is React Native Memory profiling: JSC vs V8 vs Hermes [17], which shows that Hermes performs very well on Android devices while JSC performs poorly:

[Figure: memory comparison of JSC vs V8 vs Hermes]

The other is Ctrip's article surveying Hermes, RN's new-generation JS engine, which shows that Hermes has the highest overall score (while JSC still lags behind):

[Figure: JSVM CPU performance comparison]

After talking about the performance, let's talk about Hermes's JS syntax support.

Hermes mainly supports ES6 syntax. Proxy was not supported when it was open-sourced, but v0.7.0 [18] already supports it. The team also has strong opinions and deliberately leaves out APIs they consider design dross, such as with and eval(). I personally agree with this kind of design trade-off.

Finally, let's talk about the debugging function of Hermes.

At present Hermes supports the Chrome debugging protocol, so we can debug the Hermes engine directly with Chrome's DevTools. For details, see Debugging JS on Hermes using Google Chrome's DevTools [19].

Overall, Hermes is a JS engine built specifically for mobile Hybrid UI systems. If you want to build your own Hybrid system, Hermes is a very good choice.

4. QuickJS


Before we formally introduce QuickJS, let's talk about its author: Fabrice Bellard.

There has always been a saying in the software industry that a top programmer is worth more than 20 mediocre ones. Fabrice Bellard is not just a top programmer; he is a genius, and in my view his creativity exceeds that of 20 top programmers. Let's walk his timeline [20] and see what he has created:

In 1997, he published the fastest formula for computing digits of pi, a variant of the Bailey-Borwein-Plouffe formula; the original's O(n^3) time complexity is optimized to O(n^2), making the calculation about 43% faster. This is his achievement in mathematics.

In 2000, he released FFmpeg, which is an achievement in the field of audio and video.

In 2000, 2001 and 2018, he won the International Obfuscated C Code Contest three times.

In 2002, he released TinyGL, which is his achievement in the field of graphics.

In 2005, he released QEMU, his achievement in the field of virtualization.

In 2011, he wrote JSLinux in JavaScript, a PC emulator that runs a Linux operating system in the browser.

In 2019, he released QuickJS, a JS virtual machine supporting the ES2020 specification.

When the gap between people spans a few orders of magnitude, emotions like envy turn into admiration; Bellard is that kind of person.

Back to the topic: let's look at the QuickJS project itself. QuickJS inherits the usual traits of Fabrice Bellard's work: small and powerful.

QuickJS is very small: just a handful of C files, with no messy third-party dependencies. Yet its functionality is very complete: JS syntax support reaches ES2020 [21], and Test262 [22] results show that QuickJS's syntax support is even better than V8's.

[Figure: Test262 syntax-compliance results]
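
Embedding it is just as simple; below is a minimal sketch against the QuickJS C API (quickjs.h, callable from C or C++):

#include "quickjs.h"
#include <cstdio>
#include <cstring>

int main() {
    JSRuntime* rt = JS_NewRuntime();
    JSContext* ctx = JS_NewContext(rt);          // context with the standard built-in objects

    const char* code = "6 * 7";
    JSValue val = JS_Eval(ctx, code, strlen(code), "<input>", JS_EVAL_TYPE_GLOBAL);

    int32_t result = 0;
    JS_ToInt32(ctx, &result, val);
    printf("%d\n", result);                      // 42

    JS_FreeValue(ctx, val);
    JS_FreeContext(ctx);
    JS_FreeRuntime(rt);
    return 0;
}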

What about the performance of QuickJS? QuickJS has a benchmark [23] that compares the performance of multiple JS engines against the same test case. Here are the test results:

JSVM_Benchmark

Combined with the above table and some personal tests, some simple conclusions can be drawn:

With JIT, V8's overall score is almost 35 times that of QuickJS, but among lightweight JS engines, QuickJS's performance is still dazzling.

In terms of memory footprint, QuickJS is far below V8; JIT is, after all, a memory hog, and QuickJS's design is very friendly to embedded systems. (Another +1 for Bellard's trophy cabinet.)

QuickJS and Hermes benchmark similarly. I have run some performance tests privately, and the two engines also perform very similarly.

Given QuickJS's positioning, I couldn't help wondering how its performance compares with Lua's.

Lua is a very small and nimble language that has long served as a glue language in games and in C/C++ development.

I wrote some test cases myself and found the execution efficiency of QuickJS and Lua to be roughly the same. Later I found a blog post, Lua vs QuickJS [24], whose author also ran some tests and reached the same conclusion: the two perform similarly, with Lua faster than QuickJS in some scenarios.

The official documentation mentions that QuickJS supports generating bytecode [25], which removes the compile and parse steps for JS files.

At first I assumed that QuickJS, like Hermes, could generate bytecode directly and hand it to QuickJS to interpret and execute. After building it myself, I found that QuickJS's mechanism is not quite the same as Hermes's: qjsc's -e and -c options first compile the JS file into bytecode and then splice it into a .c file, which looks like this:

#include "quickjs-libc.h"

const uint32_t qjsc_hello_size = 87;

/* the bytecode compiled from the JS file all lives in this array */
const uint8_t qjsc_hello[87] = {
    0x02, 0x04, 0x0e, 0x63, 0x6f, 0x6e, 0x73, 0x6f,
    0x6c, 0x65, 0x06, 0x6c, 0x6f, 0x67, 0x16, 0x48,
    0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x57, 0x6f, 0x72,
    0x6c, 0x64,
    /* ... remaining bytes ... */
};

int main(int argc, char **argv)
{
    JSRuntime *rt;
    JSContext *ctx;
    rt = JS_NewRuntime();
    ctx = JS_NewContextRaw(rt);
    JS_AddIntrinsicBaseObjects(ctx);
    js_std_add_helpers(ctx, argc, argv);
    js_std_eval_binary(ctx, qjsc_hello, qjsc_hello_size, 0);
    js_std_loop(ctx);
    JS_FreeContext(ctx);
    JS_FreeRuntime(rt);
    return 0;
}

Because this is a .c file, it has to be compiled again into a binary before it can run.

Judging by their bytecode designs, QuickJS and Hermes are positioned quite differently.

Although generating bytecode directly can greatly reduce the time spent parsing JS text, QuickJS leans toward embedded use: the generated bytecode is placed inside a C file and must be compiled before it can run. Hermes, built for React Native, designed its bytecode with distribution in mind (hot updates being one scenario), so the bytecode can be loaded and run directly without another compilation step.

The above is mainly about performance; now let's look at the development experience.

First, debugger support. As of this writing (2021-02-22), QuickJS has no official debugger, which means debugger statements are simply ignored. Someone in the community has built a VSCode-based debugger, vscode-quickjs-debug [26], but it requires some customization of QuickJS. I am still hoping for official support for a debugging protocol.

From an integration point of view, the community already has sample projects for iOS [27] and Android [28] that you can reference when integrating QuickJS into your own project.

Taken together, QuickJS is a JS engine with great potential: it pushes performance and size to the extreme while keeping very high JS syntax support. It is worth considering for mobile Hybrid UI architectures and game scripting systems.

Selection strategies

1. Single engine

Single engine means iOS and Android both use the same engine. This erases differences at the JS layer, so you are less likely to hit the odd bug where the same JS code runs fine on iOS but breaks on Android. Looking at the cross-end solutions on the market, there are roughly three options:

Unified use of JSC: the solution before React Native 0.60.

Unified use of Hermes: the design from React Native 0.64 onward.

Unified use of QuickJS: QuickJS is very small and can be used to build a very lightweight Hybrid system.

Notice that nobody uses V8 across the board. That is what I said earlier: V8 has no home advantage on iOS; with JIT turned off its performance is close to JSC's, while it still adds package size, so it is not very cost-effective.

2. Dual engines

Dual engines are also easy to understand: iOS and Android each use a different engine. The advantage is that each side plays to its home strengths; the disadvantage is that platform differences mean the two ends may not behave identically. Current solutions include:

iOS uses JSC, Android uses V8: Weex and NativeScript do this; it strikes a good balance between package size and performance.

iOS uses JSC, Android uses Hermes: React Native's current solution.

iOS uses JSC, Android uses QuickJS: Didi's cross-end framework Hummer [29] is designed this way.

Looking at these choices, iOS always ends up with its own JSC while Android varies, and each pairing plays to the strengths of its platform :)

3. Debugging

Whether you go single-engine or dual-engine, the day-to-day development experience after integration also matters. An engine that ships with debugger support is fine, but for one that doesn't implement a debugging protocol, the missing debugger hurts the experience.

But that doesn't mean there is nothing we can do. Generally we can take an indirect route, similar in spirit to React Native's Remote JS Debugging:

We can add a switch that forwards the JS code over WebSocket to a Web Worker in Chrome and debug it with Chrome's V8. The advantage is that you can debug business bugs; the disadvantage is that you are introducing yet another JS engine, so if you hit a bug in the engine implementation itself it becomes very hard to track down. Fortunately that situation is rare, and we shouldn't give up eating for fear of choking, right?

That concludes this study of "what are the mobile JS engines". I hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it. If you want to keep learning more, please continue to follow the site.
