2025-01-19 Update | From: SLTechnology News & Howtos (shulou) | Development
Shulou (Shulou.com) 06/03 Report
This article shares practical Vue.js performance optimization techniques. The methods introduced here are simple, fast, and practical; let's walk through them one by one.
Contents
Functional components
Child component splitting
Local variables
Reuse DOM with v-show
KeepAlive
Deferred features
Time slicing
Non-reactive data
Virtual scrolling
Functional components
The first tip is functional components. You can check out this online example.
The component code before optimization is as follows:
export default {
  props: ['value'],
}
The optimized component code is as follows:
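The optimized code did not survive in this copy. Below is a minimal sketch of a functional version, written with an explicit render function so it runs without a template compiler; the tags and class names are assumptions carried over from the v-if/v-show examples later in this article, not the article's exact code:

```javascript
// Functional version (sketch): no instance, no reactive state, no lifecycle
// hooks. Vue treats it as a plain render function of the passed-in context,
// so patching it skips the recursive child-component initialization.
const FunctionalCell = {
  functional: true,
  props: ['value'],
  render (h, context) {
    return h('div', { class: 'cell' }, [
      context.props.value
        ? h('div', { class: 'on' })
        : h('section', { class: 'off' })
    ])
  }
}
```

In a single-file component, the same thing can be written as `<template functional>` with `props.value` accessed directly in the template.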
Then we render 800 instances of each version (before and after optimization) in the parent component, modify data within each frame to trigger component updates, and open Chrome's Performance panel to record their performance, obtaining the following results.
Before optimization:
After optimization:
Comparing these two screenshots, we can see that script execution takes more time before optimization than after. Since the JS engine runs on a single thread and JS execution blocks the UI thread, an overly long script blocks rendering and makes the page stutter. The optimized version's scripts execute in less time, so its performance is better.

So why does the functional component spend less time executing JS? It comes down to how functional components are implemented: you can think of one as a function that renders a piece of DOM from the contextual data you pass in.

A functional component differs from an ordinary object-based component in that it is not treated as a real component. During the patch process, if a vnode is a component vnode, the initialization of the child component is executed recursively; the render of a functional component, by contrast, produces ordinary vnodes, so there is no recursive child-component initialization, and the rendering overhead is much lower.

Consequently, functional components have no state, no reactive data, and no lifecycle hooks. You can think of one as a piece of DOM stripped out of an ordinary component's template and rendered through a function; it is reuse at the DOM level.
Child component splitting
The second technique is child component splitting. You can check out this online example.
The component code before optimization is as follows:
<template>
  <div>{{ heavy() }}</div>
</template>
<script>
export default {
  props: ['number'],
  methods: {
    heavy () {
      const n = 100000
      let result = 0
      for (let i = 0; i < n; i++) {
        result += Math.sqrt(Math.cos(Math.sin(42)))
      }
      return result
    }
  }
}
</script>

The optimized component code is as follows:

export default {
  components: {
    ChildComp: {
      methods: {
        heavy () {
          const n = 100000
          let result = 0
          for (let i = 0; i < n; i++) {
            result += Math.sqrt(Math.cos(Math.sin(42)))
          }
          return result
        },
      },
      render (h) {
        return h('div', this.heavy())
      }
    }
  },
  props: ['number']
}

Then we render 300 instances of each version in the parent component, modify data within each frame to trigger component updates, and open Chrome's Performance panel to record their performance, obtaining the following results.

Before optimization:

After optimization:
Comparing these two screenshots, we can see that the script execution time after optimization is clearly shorter than before, so the performance is better.

Why the difference? Look at the component before optimization. The example simulates a time-consuming task with the heavy function, and that function runs once on every render, so each component render spends a long time executing JavaScript.

The optimized version encapsulates the time-consuming heavy call inside the child component ChildComp. Because Vue updates at component granularity, even though each frame re-renders the parent through data changes, ChildComp never re-renders, since no reactive data inside it has changed. So the optimized component does not run the time-consuming task on every render, and naturally spends less time executing JavaScript.

However, I have raised a different view on this optimization; you can check the issue for details. I think a computed property is a better fit than child-component splitting for this scenario. Thanks to the caching built into computed properties, the time-consuming logic runs only on the first render, and a computed property avoids the extra overhead of rendering a child component.

In real-world work there are many scenarios where computed properties are used to optimize performance; after all, they also embody the space-for-time optimization idea.
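The computed-property alternative can be sketched like this (assuming the same heavy workload as the example above; the component shape and names are mine, not the article's):

```javascript
// Sketch: the expensive loop as a computed property. Vue caches a computed
// value until its reactive dependencies change; since this one has no
// per-frame dependencies, the loop runs once instead of on every re-render.
const HeavyCell = {
  props: ['number'],
  computed: {
    heavyResult () {
      const n = 100000
      let result = 0
      for (let i = 0; i < n; i++) {
        result += Math.sqrt(Math.cos(Math.sin(42)))
      }
      return result
    }
  },
  template: '<div>{{ heavyResult }}</div>'
}
```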
Local variables
The third technique is local variables. You can check out this online example.
The component code before optimization is as follows:
<template>
  <div>{{ result }}</div>
</template>
<script>
export default {
  props: ['start'],
  computed: {
    base () {
      return 42
    },
    result () {
      let result = this.start
      for (let i = 0; i < 1000; i++) {
        result += Math.sqrt(Math.cos(Math.sin(this.base))) + this.base * this.base + this.base + this.base * 2 + this.base * 3
      }
      return result
    },
  },
}
</script>

The optimized component code is as follows:

<template>
  <div>{{ result }}</div>
</template>
<script>
export default {
  props: ['start'],
  computed: {
    base () {
      return 42
    },
    result ({ base, start }) {
      let result = start
      for (let i = 0; i < 1000; i++) {
        result += Math.sqrt(Math.cos(Math.sin(base))) + base * base + base + base * 2 + base * 3
      }
      return result
    },
  },
}
</script>

Then we render 300 instances of each version in the parent component, modify data within each frame to trigger component updates, and open Chrome's Performance panel to record their performance, obtaining the following results.

Before optimization:

After optimization:

Comparing these two screenshots, we can see that the script execution time after optimization is clearly shorter than before, so the performance is better.

The key difference is the implementation of the computed property result. The unoptimized component accesses this.base many times during the computation, while the optimized component first caches this.base in the local variable base and then reads the local variable directly.

Why does this difference affect performance? Because this.base is a reactive property, every access to it triggers its getter, which in turn runs the dependency-collection logic. Run that logic often enough, as in this example where hundreds of loop iterations update hundreds of components and each computed recalculation repeatedly triggers dependency collection, and performance naturally drops.

From a requirements standpoint, collecting the dependency on this.base once is enough, so we only need to store the result of its getter in the local variable base; subsequent reads of base no longer trigger the getter or the dependency-collection logic, and performance improves accordingly.

This is a very practical optimization tip, because many developers habitually write this.xxx whenever they read a value in a Vue.js project, without noticing what happens behind that access. When the number of accesses is small, the cost is invisible; but once the property is accessed many times, for example inside a large loop as in this example, it becomes a performance problem.

When I optimized ZoomUI's Table component, I applied this local-variable technique while rendering the table body and wrote a benchmark for comparison: when rendering a 1000 * 10 table, ZoomUI Table's re-render on data update performed nearly twice as fast as ElementUI's Table.

Reuse DOM with v-show

The fourth technique: reuse DOM with v-show. You can check out this online example.

The component code before optimization is as follows:

The optimized component code is as follows:

Then we render 200 instances of each version in the parent component, modify data within each frame to trigger component updates, and open Chrome's Performance panel to record their performance, obtaining the following results.

Before optimization:

After optimization:

Comparing these two screenshots, we can see that the script execution time after optimization is clearly shorter than before, so the performance is better.

The main difference before and after optimization is that the v-show directive replaces the v-if directive to toggle the component's visibility. Although v-show and v-if look similar, both controlling whether a component is displayed, their internal implementations differ greatly.

The v-if directive is compiled into a ternary expression at compile time; it is conditional rendering. For example, the unoptimized component template compiles into the following render function:

function render() {
  with(this) {
    return _c('div', { staticClass: "cell" }, [
      (props.value)
        ? _c('div', { staticClass: "on" }, [_c('Heavy', { attrs: { "n": 10000 } })], 1)
        : _c('section', { staticClass: "off" }, [_c('Heavy', { attrs: { "n": 10000 } })], 1)
    ])
  }
}

When the value of the condition props.value changes, the corresponding component update is triggered. For nodes rendered by v-if, the old and new vnodes are different, so during the core diff the old vnode is removed and a new one is created; that means a new Heavy component is created and goes through its own initialization, vnode rendering, patch, and so on.

Therefore, with v-if every update creates new Heavy child components, and when many components update, this naturally creates performance pressure.

When we use the v-show directive instead, the optimized component template compiles into the following render function:

function render() {
  with(this) {
    return _c('div', { staticClass: "cell" }, [
      _c('div', {
        directives: [{ name: "show", rawName: "v-show", value: (props.value), expression: "props.value" }],
        staticClass: "on"
      }, [_c('Heavy', { attrs: { "n": 10000 } })], 1),
      _c('section', {
        directives: [{ name: "show", rawName: "v-show", value: (!props.value), expression: "!props.value" }],
        staticClass: "off"
      }, [_c('Heavy', { attrs: { "n": 10000 } })], 1)
    ])
  }
}

When the value of props.value changes, the component updates, but for nodes rendered by v-show the old and new vnodes are the same, so they only need to go through patchVnode. How, then, does it show and hide the DOM node? During patchVnode, the update hook of the v-show directive is executed internally, and it sets the style.display value of the element it applies to according to the value bound to v-show, thereby controlling visibility.

So compared with v-if, which keeps removing and creating DOM, v-show merely updates the display value of existing DOM. Its overhead is much smaller than v-if's, and the more complex the inner DOM structure, the larger the performance gap.

However, v-show's performance advantage over v-if applies to the update phase. In the initial render, v-if actually performs better, because it renders only one branch while v-show renders both branches and controls their visibility via style.display.

When using v-show, the components in all branches are rendered and their lifecycle hooks run; with v-if, the components in the branch that is not hit are not rendered and their lifecycle hooks do not run.

So you need to understand their principles and differences in order to use the right directive in the right scenario.

KeepAlive

The fifth technique: use the KeepAlive component to cache DOM. You can check out this online example.

The component code before optimization is as follows:

The optimized component code is as follows:

We click the button to switch between Simple page and Heavy Page, which renders different views; rendering Heavy Page is very time-consuming. We open Chrome's Performance panel to record performance, perform the operation above before and after optimization, and obtain the following results.

Before optimization:

After optimization:
Comparing these two screenshots, we can see that the script execution time after optimization is clearly shorter than before, so the performance is better.

Without the optimization, every time we click the button to switch the routing view, the component is re-rendered, and a rendered component goes through initialization, render, patch, and so on. If the component is complex or deeply nested, the whole rendering takes a long time.

With KeepAlive, the vnode and DOM of the component wrapped by KeepAlive are cached after the first render; the next time the component is rendered again, its vnode and DOM are fetched straight from the cache, skipping the whole chain of component initialization, render, and patch, which reduces script execution time and improves performance.

But using the KeepAlive component is not free: it consumes more memory for the cache. This is a typical space-for-time optimization.
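The before and after snippets did not survive in this copy. As a rough sketch (assuming vue-router renders the Simple/Heavy pages through a router view; this is my illustration, not the article's exact code), the optimization amounts to wrapping the view in KeepAlive:

```html
<!-- Before (sketch): the matched page component is destroyed and re-created
     on every switch -->
<router-view />

<!-- After (sketch): keep-alive caches the component instance and its DOM
     after the first render, so switching back skips init/render/patch -->
<keep-alive>
  <router-view />
</keep-alive>
```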
Deferred features
The sixth technique: use Deferred to split a component's rendering into delayed batches. You can check out this online example.
The component code before optimization is as follows:
I'm an heavy page
The optimized component code is as follows:
I'm an heavy page

import Defer from '@/mixins/Defer'

export default {
  mixins: [Defer()],
}
We click the button to switch between Simple page and Heavy Page, which renders different views; rendering Heavy Page is very time-consuming. We open Chrome's Performance panel to record performance, perform the same operation before and after optimization, and obtain the following results.
Before optimization:
After optimization:
Comparing these two recordings, we find that before optimization, when switching from Simple Page to Heavy Page, the page still shows Simple Page near the end of one long Render, which feels like a freeze. After optimization, the page starts showing Heavy Page content early in the recording, and the Heavy Page is rendered progressively.

The main difference before and after optimization is that the latter uses the Defer mixin. So how does it work? Let's find out:
export default function (count = 10) {
  return {
    data () {
      return {
        displayPriority: 0
      }
    },
    mounted () {
      this.runDisplayPriority()
    },
    methods: {
      runDisplayPriority () {
        const step = () => {
          requestAnimationFrame(() => {
            this.displayPriority++
            if (this.displayPriority < count) {
              step()
            }
          })
        }
        step()
      },
      defer (priority) {
        return this.displayPriority >= priority
      }
    }
  }
}
The main idea of Defer is to split one rendering of a component across multiple frames. Internally it maintains a displayPriority variable and increments it on each frame via requestAnimationFrame, up to at most count. A component using the Defer mixin can then wrap certain chunks with v-if="defer(xxx)" so that they render only once displayPriority has reached xxx.

When you have components that are expensive to render, using Deferred for progressive rendering is a good idea; it avoids the jank of a single render blocked by long JS execution.
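Outside Vue, the frame-by-frame gating can be sketched in plain JavaScript (requestAnimationFrame is stubbed with setTimeout here so the sketch runs in Node; that stub is my assumption, not part of the mixin):

```javascript
// Plain-JS sketch of the Defer idea: a counter is bumped once per "frame",
// and defer(priority) only turns true once that many frames have elapsed.
const raf = typeof requestAnimationFrame === 'function'
  ? requestAnimationFrame
  : cb => setTimeout(cb, 16) // Node stand-in for requestAnimationFrame

function createDefer (count = 10) {
  const state = { displayPriority: 0 }
  function step () {
    raf(() => {
      state.displayPriority++
      if (state.displayPriority < count) step()
    })
  }
  step()
  return priority => state.displayPriority >= priority
}

const defer = createDefer(3)
console.log(defer(1)) // false: no frame has elapsed yet
```

In the mixin, a template chunk guarded by v-if="defer(2)" renders on the second frame after mount.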
Time slicing
The seventh tip: use the time slicing technique. You can check out this online example.
The code before optimization is as follows:
fetchItems ({ commit }, { items }) {
  commit('clearItems')
  commit('addItems', items)
}
The optimized code is as follows:
async fetchItems ({ commit }, { items, splitCount }) {
  commit('clearItems')
  const queue = new JobQueue()
  splitArray(items, splitCount).forEach(chunk =>
    queue.addJob(done => {
      // commit the data in a time slice
      requestAnimationFrame(() => {
        commit('addItems', chunk)
        done()
      })
    })
  )
  await queue.start()
}
We first create 10000 fake items by clicking the Generate items button, then click the Commit items button to submit the data with Time-slicing turned off and on, while Chrome's Performance panel records their performance. We get the following results.
Before optimization:
After optimization:
Comparing these two recordings, we find that the total script execution time before optimization is actually less than after optimization, but in terms of actual feel, clicking the submit button before optimization freezes the page for about 1.2 seconds; after optimization the page no longer freezes completely, though it still renders with some jank.

So why does the page freeze before optimization? Because too much data is submitted at once, the internal JS execution time is too long, blocking the UI thread and freezing the page.

After optimization the page still stutters because we split the data at a granularity of 1000 items, and at that size re-rendering the components is still under pressure: we observe only a dozen or so fps, which reads as jank. A page usually feels smooth once it reaches 60 fps. If we split the data at a granularity of 100 items, fps can basically reach 50 or more; page rendering becomes smooth, although the total time to finish submitting all 10000 items grows longer.

When we use time slicing to avoid freezing the page, we usually also add a loading effect for the time-consuming task. In this example, we can turn on the loading animation and then submit the data. Comparing the two cases: before optimization, because too much data was submitted at once, JS ran continuously for a long time, blocked the UI thread, and the loading animation never displayed; after optimization, because the data is committed across multiple time slices, each individual JS run is shorter, and the loading animation gets a chance to display.

One caveat: although we use the requestAnimationFrame API to split the time slices, requestAnimationFrame by itself cannot guarantee a full frame rate. It only guarantees that the callback you pass in runs after the browser's next repaint. To maintain full frames, the JS executed within one tick must stay under about 17ms.
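The splitArray and JobQueue helpers the action above relies on are not shown in the article; the following are hypothetical minimal implementations of what it assumes:

```javascript
// splitArray cuts the payload into fixed-size chunks; JobQueue runs jobs one
// after another, each job calling done() to hand control to the next, and
// start() resolves once every job has been processed.
function splitArray (items, size) {
  const chunks = []
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size))
  }
  return chunks
}

class JobQueue {
  constructor () {
    this.jobs = []
  }
  addJob (job) {
    this.jobs.push(job)
  }
  start () {
    return new Promise(resolve => {
      const next = () => {
        const job = this.jobs.shift()
        if (!job) return resolve()
        job(next) // the job invokes done() (here: next) when finished
      }
      next()
    })
  }
}
```

In the real action, each job defers its commit with requestAnimationFrame, which is what spreads the work across frames.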
Non-reactive data
The eighth tip: use non-reactive data. You can check out this online example.
The code before optimization is as follows:
const data = items.map(item => ({
  id: uid++,
  data: item,
  vote: 0
}))
The optimized code is as follows:
const data = items.map(item => optimizeItem(item))

function optimizeItem (item) {
  const itemData = {
    id: uid++,
    vote: 0
  }
  Object.defineProperty(itemData, 'data', {
    // Mark as non-reactive
    configurable: false,
    value: item
  })
  return itemData
}
As in the previous example, we first create 10000 fake items by clicking the Generate items button, then click the Commit items button to submit the data with Partial reactivity turned off and on, while Chrome's Performance panel records their performance. We get the following results.
Before optimization:
After optimization:
Comparing these two screenshots, we can see that the script execution time after optimization is clearly shorter than before, so the performance is better.

The reason for the difference is that when the data is committed internally, the newly committed data is made reactive by default, and if a property's value is an object, Vue recursively makes its sub-properties reactive as well. When a lot of data is committed, this becomes a time-consuming process.

After optimization, we manually define the data property of each newly committed item with configurable: false. Because Object.defineProperty also leaves the property non-enumerable by default, Vue's walk, which collects the object's keys via Object.keys(obj), ignores data, and defineReactive is never executed for it. Since data points to an object, this cuts out the recursive reactive-conversion logic and the performance loss that goes with it. The larger the amount of data, the more noticeable this optimization becomes.
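The effect is easy to verify in plain Node (a sketch; no Vue involved):

```javascript
// Object.defineProperty defaults to enumerable: false, so a walk based on
// Object.keys (as in Vue 2's observer) never visits `data`; Vue's
// defineReactive additionally bails out on non-configurable properties.
const itemData = { id: 1, vote: 0 }
Object.defineProperty(itemData, 'data', {
  configurable: false,
  value: { title: 'hello' }
})

console.log(Object.keys(itemData)) // [ 'id', 'vote' ]: 'data' is invisible to the walk
console.log(itemData.data.title)   // 'hello': still readable in code and templates
```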
In fact, there are other ways to optimize along these lines. For example, some of the data we define in a component does not have to live in data at all: we do not use it in the template and we do not need to watch its changes, we just want to share it within the component's context. In that case we can simply mount the data on the component instance this, for example:
export default {
  created () {
    this.scroll = null
  },
  mounted () {
    this.scroll = new BScroll(this.$el)
  }
}
This lets us share the scroll object within the component's context, even though it is not a reactive object.
Virtual scrolling
The ninth tip: use virtual scrolling. You can check out this online example.
The code for the component before optimization is as follows:
The optimized code is as follows:
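The component code did not survive in this copy. As a sketch of the optimized direction only (the example's RecycleScroller case uses Guillaume Chau's vue-virtual-scroller library; the item fields and item-size below are my assumptions, not the article's exact code):

```html
<!-- Sketch: RecycleScroller renders only the rows inside the viewport and
     recycles their DOM nodes while scrolling -->
<RecycleScroller :items="items" :item-size="24" key-field="id" v-slot="{ item }">
  <div class="item">{{ item.data }}</div>
</RecycleScroller>
```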
Still in the same example, we need to open View list, then click the Generate items button to create 10000 fake items (note that the online example can only create at most 1000 items, and 1000 items cannot really show the optimization effect, so I changed the limit in the source code and ran it locally to create 10000), then click the Commit items button to submit the data in the Unoptimized and RecycleScroller cases respectively and scroll the page. We open Chrome's Performance panel to record their performance and get the following results.
Before optimization:
After optimization:
Comparing the two recordings, we find that without optimization, with 10000 items the fps is in the single digits while scrolling and only in the teens when idle, because the unoptimized scenario renders far too much DOM and the rendering itself is under heavy pressure. After optimization, even with 10000 items the fps stays above 30 while scrolling and reaches a full 60 frames when idle.

The difference comes from how virtual scrolling is implemented: only the DOM within the viewport is rendered. The total amount of rendered DOM is small, so the performance is naturally much better.

The virtual scrolling component is also written by Guillaume Chau; interested readers can study its source code. Its basic principle is to listen for scroll events, dynamically update which DOM elements need to be displayed, and compute their offsets within the view.

The virtual scrolling component is not without cost either: it has to compute in real time during scrolling, so there is some script execution overhead. If the list's data volume is not large, ordinary scrolling is good enough.
At this point, I believe you have a deeper understanding of these practical Vue.js performance optimization techniques. Now go try them out in practice!