
An Example Analysis of JIT, a New Feature of PHP 8


This article gives a detailed, example-based look at JIT, one of PHP 8's new features. The editor finds it very practical and shares it here for reference; I hope you get something out of it after reading.

PHP 8 alpha1 was released yesterday. What people are probably most curious about is JIT: how is it used, what should you pay attention to, and how much does it improve performance?

First of all, let's look at a picture:

The diagram on the left shows the Opcache flow before PHP 8, and the one on the right shows Opcache in PHP 8. Several key points stand out:

PHP 8's JIT is provided as part of Opcache.

Currently, PHP 8's JIT only supports x86-architecture CPUs.

JIT builds on top of the existing Opcache optimizations; it is not a replacement for them.

In fact, JIT reuses many of the basic data structures produced by the original Opcache optimizer, such as the data flow graph, the call graph, and SSA form. That part deserves a separate article if time allows; today we focus only on how to use it.

After downloading and installing PHP 8, in addition to the existing opcache settings, we need to add the following JIT settings to php.ini:

opcache.jit=1205
opcache.jit_buffer_size=64M
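To confirm that JIT is actually active, you can look at the JIT section that opcache_get_status() reports in PHP 8. The following is a minimal sketch, not from the original article; it assumes the opcache extension is loaded, and for command-line runs it also assumes opcache.enable_cli=1:

<?php
// Minimal check that the JIT buffer is enabled and in use.
// Requires opcache; for CLI runs, set opcache.enable_cli=1 as well.
$status = opcache_get_status(false);
if ($status !== false && isset($status['jit'])) {
    var_dump($status['jit']['enabled'], $status['jit']['on'], $status['jit']['buffer_size']);
} else {
    echo "JIT status not available (opcache disabled or JIT not configured).\n";
}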

The opcache.jit setting looks a little complicated. It is made up of four independent digits, read from left to right (note that this description is based on the current alpha1; some details may be fine-tuned in later versions); a small sketch that decodes such a value follows the list below:

Digit 1: whether to use AVX instructions when generating machine code (requires CPU support)
  0: do not use
  1: use

Digit 2: register allocation strategy
  0: no register allocation
  1: local (block-level) allocation
  2: global (function-level) allocation

Digit 3: JIT trigger strategy
  0: JIT when the PHP script is loaded
  1: JIT when a function is executed for the first time
  2: after one run, JIT the functions called the highest percentage of the time (opcache.prof_threshold * 100)
  3: JIT a function/method after it has executed more than N times (N is related to opcache.jit_hot_func)
  4: JIT a function/method when its doc comment contains @jit
  5: JIT a trace after it has executed more than N times (related to opcache.jit_hot_loop, opcache.jit_hot_return, etc.)

Digit 4: JIT optimization strategy; the larger the number, the more aggressive the optimization
  0: no JIT
  1: JIT only the jumps between oplines
  2: inline opcode handler calls
  3: function-level JIT based on type inference
  4: function-level JIT based on type inference and the call graph
  5: script-level JIT based on type inference and the call graph
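As mentioned above, here is a small sketch (purely illustrative, not part of PHP itself) that splits a four-digit opcache.jit value into the fields just described:

<?php
// Split an opcache.jit value such as 1205 into its four digits,
// in the left-to-right order described in the list above.
function explain_jit_setting(int $value): array {
    $digits = str_pad((string) $value, 4, '0', STR_PAD_LEFT);
    return [
        'cpu_specific (AVX)'  => (int) $digits[0],
        'register_allocation' => (int) $digits[1],
        'trigger'             => (int) $digits[2],
        'optimization_level'  => (int) $digits[3],
    ];
}

print_r(explain_jit_setting(1205));
// cpu_specific (AVX) = 1, register_allocation = 2, trigger = 0, optimization_level = 5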

Based on this, we can roughly draw the following conclusions:

Try to use a configuration of the form 12x5; at the moment this should give the best results.

For x (the trigger digit), 0 is recommended for command-line scripts; for web services, choose between 3 and 5 based on your own test results.

The @jit form may change once attributes are available (it could become an attribute rather than a doc-comment annotation); a sketch of the current doc-comment form follows.
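For reference, here is a sketch of what trigger strategy 4 looks like in practice. It assumes, for example, opcache.jit=1245 so that the trigger digit is 4; only functions whose doc comment contains @jit are then JIT-compiled:

<?php
/**
 * @jit
 */
function hot_sum(array $numbers): int {
    // With trigger strategy 4, only functions annotated with @jit are JIT-compiled.
    $total = 0;
    foreach ($numbers as $n) {
        $total += $n;
    }
    return $total;
}

echo hot_sum(range(1, 1000)), "\n"; // 500500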

Now let's measure the difference with and without JIT using Zend/bench.php. First, with JIT disabled (php -d opcache.jit_buffer_size=0 Zend/bench.php):

simple             0.008
simplecall         0.004
simpleucall        0.004
simpleudcall       0.004
mandel             0.035
mandel2            0.055
ackermann(7)       0.020
ary(50000)         0.004
ary2(50000)        0.003
ary3(2000)         0.048
fibo(30)           0.084
hash2(50000)       0.013
hash3(20000)       0.010
heapsort(20000)    0.027
matrix(20)         0.026
nestedloop(12)     0.023
sieve(30)          0.013
strcat(200000)     0.006
------------------------
Total              0.387

Based on the above, we chose opcache.jit=1205 because bench.php is a command-line script (php -d opcache.jit_buffer_size=64M -d opcache.jit=1205 Zend/bench.php):

simple             0.002
simplecall         0.001
simpleucall        0.001
simpleudcall       0.001
mandel             0.010
mandel2            0.011
ackermann(7)       0.010
ary(50000)         0.003
ary2(50000)        0.002
ary3(2000)         0.018
fibo(30)           0.031
hash2(50000)       0.011
hash3(20000)       0.008
heapsort(20000)    0.014
matrix(20)         0.015
nestedloop(12)     0.011
sieve(30)          0.005
strcat(200000)     0.004
------------------------
Total              0.157

As you can see, for Zend/bench.php the total time drops by nearly 60% when JIT is enabled (0.387 s down to 0.157 s), roughly a 2.5x performance improvement.
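As a quick arithmetic check on those two totals (a throwaway one-liner, not from the original benchmark):

php -r 'printf("time reduction: %.1f%%, speedup: %.2fx\n", (1 - 0.157 / 0.387) * 100, 0.387 / 0.157);'

which prints: time reduction: 59.4%, speedup: 2.46x.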

For research and learning, you can use opcache.jit_debug to inspect the assembly that JIT generates. For example, given:

function simple() {
    $a = 0;
    for ($i = 0; $i < 1000000; $i++)
        $a++;
}

we can see the generated code via php -d opcache.jit=1205 -d opcache.jit_debug=0x01:

JIT$simple: ; (/tmp/1.php)
    sub $0x10, %rsp
    xor %rdx, %rdx
    jmp .L2
.L1:
    add $0x1, %rdx
.L2:
    cmp $0x0, EG(vm_interrupt)
    jnz .L4
    cmp $0xf4240, %rdx
    jl .L1
    mov 0x10(%r14), %rcx
    test %rcx, %rcx
    jz .L3
    mov $0x1, 0x8(%rcx)
.L3:
    mov 0x30(%r14), %rax
    mov %rax, EG(current_execute_data)
    mov 0x28(%r14), %edi
    test $0x9e0000, %edi
    jnz JIT$$leave_function
    mov %r14, EG(vm_stack_top)
    mov 0x30(%r14), %r14
    cmp $0x0, EG(exception)
    mov (%r14), %r15
    jnz JIT$$leave_throw
    add $0x20, %r15
    add $0x10, %rsp
    jmp (%r15)
.L4:
    mov $0x45543818, %r15
    jmp JIT$$interrupt_handler

If we use opcache.jit=1201, we can get the following results:

JIT$simple: ; (/tmp/1.php)
    sub $0x10, %rsp
    call ZEND_QM_ASSIGN_NOREF_SPEC_CONST_HANDLER
    add $0x40, %r15
    jmp .L2
.L1:
    call ZEND_PRE_INC_LONG_NO_OVERFLOW_SPEC_CV_RETVAL_UNUSED_HANDLER
    cmp $0x0, EG(exception)
    jnz JIT$$exception_handler
.L2:
    cmp $0x0, EG(vm_interrupt)
    jnz JIT$$interrupt_handler
    call ZEND_IS_SMALLER_LONG_SPEC_TMPVARCV_CONST_JMPNZ_HANDLER
    cmp $0x0, EG(exception)
    jnz JIT$$exception_handler
    cmp $0x452a0858, %r15d
    jnz .L1
    add $0x10, %rsp
    jmp ZEND_RETURN_SPEC_CONST_LABEL

You can also try other debug values, such as opcache.jit_debug=0xff, which will output more information.
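For example, assuming the same test script at /tmp/1.php as above, a run with a wider debug mask could look like this:

php -d opcache.jit=1205 -d opcache.jit_debug=0xff /tmp/1.php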

This concludes the article "An Example Analysis of JIT, a New Feature of PHP 8". I hope the content above is helpful and teaches you something new; if you think the article is good, please share it so more people can see it.
