This article mainly explains how to optimize front-end performance. The content is simple and clear, and easy to learn; follow along with the editor's ideas to study how to approach front-end performance optimization.
First, what are the causes of performance problems?
Performance problems usually do not have a single cause; in a large system iterated on for many years, they accumulate over time. We therefore need to analyze the system, find the crux of each problem, and tackle the bottlenecks one by one in order of priority. Taking our project as an example, the problems fall roughly into the following areas:
1 Oversized resource bundles
The Network tab of Chrome DevTools shows the actual size of the resources the page downloads (as shown below):
With the rapid evolution of the front end and years of project iteration, the build output has grown quickly as well. Because business needs come first, many developers introduce libraries and write code without considering performance, so the built bundles swell to dozens of MB. This brings two significant problems:
On a weak (or even average) network, downloading first-screen resources takes a long time.
Decompressing and parsing the resources is slow.
The first problem affects virtually all mobile users and wastes a great deal of bandwidth, which is both an implicit economic cost and an experience cost for users.
The second problem affects all users, some of whom may abandon the page because of the long wait.
The following figure shows the relationship between latency and user reaction:
2 Long-running code
At the code-execution level, performance problems introduced by project iteration generally come down to code quality, for reasons such as the following:
Unnecessary data-flow subscriptions
This scenario is easy to run into with hooks + redux, as in the following code:
```js
const FooComponent = () => {
  const data = useSelector(state => state.fullData);
  return <div>{data.bar.baz}</div>; // only .bar.baz is actually used
};
```
Assuming fullData is a large, frequently changing object, every change to fullData causes FooComponent to re-render, even though it depends only on its .bar.baz property.
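A minimal sketch of the fix, assuming the same hypothetical fullData shape from the example above: subscribe only to the value the component actually renders, so unrelated changes to fullData no longer trigger a re-render.

```js
import React from 'react';
import { useSelector } from 'react-redux';

const FooComponent = () => {
  // Subscribe only to the value this component actually renders.
  // useSelector re-renders the component only when the selected
  // value itself changes, so other writes to fullData are ignored.
  const baz = useSelector(state => state.fullData.bar.baz);
  return <div>{baz}</div>;
};
```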
The double-edged sword: cloneDeep
Many developers have used cloneDeep in their projects, especially projects iterated on for years, where mutable data handling or business-level dependencies are unavoidable and cloneDeep seems necessary. However, this method itself has serious performance pitfalls, as shown below:
```js
// a.tsx
export const a = {
  name: 'a',
};

// b.tsx
import { a } from './a';
saveData(_.cloneDeep(a)); // suppose the clone needs to be saved to the backend database
```
The code above is fine under normal iteration. But suppose that one day a is extended with a property holding a ReactNode reference; then when b.tsx executes, the deep clone walks the entire React element structure, and the browser may crash outright!
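One defensive pattern, sketched under the assumption that only plain data needs to be persisted: strip non-serializable values before cloning instead of deep-cloning the whole object. clonePlainData is a hypothetical helper of our own, not part of lodash.

```js
import React from 'react';
import _ from 'lodash';

// Hypothetical helper: drop non-serializable values (React elements,
// functions) before cloning, so cloneDeep never walks an element tree.
function clonePlainData(obj) {
  const plain = _.pickBy(
    obj,
    v => !React.isValidElement(v) && typeof v !== 'function'
  );
  return _.cloneDeep(plain);
}

saveData(clonePlainData(a)); // instead of saveData(_.cloneDeep(a))
```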
Hooks and memoization
The release of hooks gave React development more freedom, but it also brought quality problems that are easy to overlook. Since there are no longer the clearly delineated lifecycle methods of class components, and component state is controlled freely by the developer, it is important to understand how React renders hooks components. The following code can be optimized:
```js
const Foo = () => {
  // 1. Foo can be wrapped in React.memo to skip re-renders when props are unchanged
  const result = calc(); // 2. Logic executed directly in the body runs on every render; wrap it in useEffect/useMemo
  return <div>{result}</div>; // 3. In render, React.useMemo can limit recomputation to the necessary data dependencies
};
```
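A minimal optimized sketch of the same component, assuming calc is a pure, CPU-heavy function of some input prop:

```js
const Foo = React.memo(({ input }) => {
  // Recompute only when `input` changes, not on every render;
  // React.memo additionally skips renders when props are unchanged.
  const result = React.useMemo(() => calc(input), [input]);
  return <div>{result}</div>;
});
```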
Immutable Deep Set
When using a data flow, we rely heavily on lodash/fp functions to implement immutable updates. But the fp.defaultsDeep family of functions has a drawback: their implementation is roughly equivalent to deep-cloning the original object and then running fp.set, which can cause performance problems and also makes every nested level of the result a new object (no references are reused), as shown below:
```js
const a = { b: { c: { d: 123 }, c2: { d2: 321 } } };
const merged = fp.defaultsDeep({ b: { c3: 3 } }, a);
console.log(merged.b.c === a.b.c); // prints false
```
3 Troubleshooting path
For these sources of problems, the Performance flame chart in Chrome DevTools gives a clear view of where time is spent and where jank occurs during page load and rendering (as shown below):
Once we lock onto a time-consuming phase, we can drill further into the call tree view (below) to find the specific time-consuming function.
Admittedly, we usually do not find a single function that takes very long; rather, jank comes from functions of a few milliseconds each executing hundreds of times. So the Profiler, combined with the React DevTools plug-in, can help locate rendering problems:
As shown in the figure, the number of times each React component renders and its render time are clear at a glance.
Second, how do we solve these performance problems?
1 Resource bundle analysis
As a performance-conscious developer, you need to be sensitive to the content of the bundles you build. Here we use the stats output provided by webpack for bundle analysis.
First, run `webpack --profile --json > ./build/stats.json` to produce webpack's bundle dependency data, then run `webpack-bundle-analyzer ./build/stats.json` to open a build treemap in the browser (output differs by project; the figure below is just an example):
Of course, there is also a more intuitive method: Chrome's Coverage panel helps determine which code is actually used (as shown in the following figure):
Red indicates code that has not been executed
The best way to organize the build
Generally speaking, our basic approach to organizing build bundles is:
Build per entry point.
One or more shared bundles are used by multiple entries.
For complex business scenarios, the approach is:
Keep entry points lightweight.
Generate shared code automatically as chunks and establish the dependencies between them.
Dynamically import large resource bundles (async import), as sketched after this list.
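A minimal sketch of an async import, assuming a hypothetical heavy module (here './heavyMapModule' exposing renderMap) that is only needed on demand:

```js
// Loaded only when the user actually needs it; webpack automatically
// splits this module into its own async chunk.
async function showMap() {
  const { renderMap } = await import(/* webpackChunkName: "map" */ './heavyMapModule');
  renderMap(document.getElementById('map'));
}
```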
webpack 4 provides the splitChunks plug-in to handle code-splitting optimization; its default configuration is as follows:
```js
module.exports = {
  // ...
  optimization: {
    splitChunks: {
      chunks: 'async',
      minSize: 20000,
      minRemainingSize: 0,
      maxSize: 0,
      minChunks: 1,
      maxAsyncRequests: 30,
      maxInitialRequests: 30,
      automaticNameDelimiter: '~',
      enforceSizeThreshold: 50000,
      cacheGroups: {
        defaultVendors: {
          test: /[\\/]node_modules[\\/]/,
          priority: -10,
        },
        default: {
          minChunks: 2,
          priority: -20,
          reuseExistingChunk: true,
        },
      },
    },
  },
};
```
According to this configuration, chunks are split based on the following conditions:
The module is shared, or the module comes from node_modules.
The chunk must be larger than 20 KB.
At most 30 chunks may load in parallel for on-demand (async) loading, and at most 30 requests for the initial page load.
In theory, webpack's default code-splitting configuration is a good baseline, but if the project is complex or deeply coupled, we may still need to adjust the chunk-split configuration to match the actual build output, for example as sketched below.
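A hedged sketch of such an adjustment, assuming a hypothetical oversized vendor dependency (echarts is only an example) that we want isolated in its own chunk:

```js
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        // Put one known-heavy library in its own chunk so it is cached
        // independently and only loaded by pages that need it.
        echarts: {
          test: /[\\/]node_modules[\\/]echarts[\\/]/,
          name: 'vendor-echarts',
          priority: 10,
        },
      },
    },
  },
};
```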
Resolving tree-shaking failures
"more than 60% of the code in your project is not used!"
Tree-shaking was designed to solve exactly the problem in that sentence: removing code that is never used.
Webpack's production mode enables tree-shaking by default, so with the build configuration above the theory is "unused code should not end up in the bundle." The reality is often "the code you think is unused all ends up in the initial bundle." This usually happens in complex projects, and the cause is code side effects. Because webpack cannot determine whether certain code "needs its side effects," it conservatively includes such code in the bundle (as shown below):
So you need to be clear about whether your code has side effects. Judge by this definition: "a side effect is code that performs special behavior on import (modifying global objects, code that executes immediately, etc.) rather than just exposing one or more exports; for example, a polyfill, which affects the global scope and usually provides no export."
The solution is to tell webpack that the code has no side effects and can be dropped outright when it is not imported:
Mark sideEffects as false in package.json.
Or add sideEffects filtering to module.rules in the webpack configuration (both options are sketched below).
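A minimal sketch of both options; the file paths and test patterns are assumptions for illustration:

```js
// Option 1 - package.json of the package:
//   { "sideEffects": false }
// or, when some files do run code on import (polyfills, global CSS):
//   { "sideEffects": ["*.css", "./src/polyfill.js"] }

// Option 2 - webpack.config.js, via module.rules:
module.exports = {
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        include: /src/,
        sideEffects: false, // these modules are safe to drop when unused
      },
    ],
  },
};
```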
Module specification
Therefore, to get the best build output, we agreed on the following module conventions during coding:
[Must] Modules must be ES6 modules (i.e. export and import).
[Must] Third-party packages or data files (such as map data or demo data) over 400 KB must be dynamically loaded on demand (async import).
[Prohibited] Using export * as an output form is prohibited (it may break tree-shaking and is hard to trace).
[Recommended] Import specific files inside a package wherever possible rather than the whole package (e.g. import { Toolbar } from '@alife/foo/bar').
[Must] Third-party dependencies must be marked sideEffects: false in package.json (or in the webpack configuration).
2 Mutable data
In most cases, the debugging capabilities of the Performance panel and the React DevTools plug-in are enough to locate problems. For mutable data changes, however, here are some unconventional debugging methods drawn from practice:
The freeze location method
As we all know, one motivation behind the data-flow idea is that mutable data is hard to trace (you cannot tell which code changed it), yet many projects cannot avoid mutable changes entirely. This method tackles exactly that thorny class of problems. I will tentatively call it the "freeze location method," because the principle is to use Object.freeze to locate the offending mutation. It is simple but effective:
```js
const obj = { prop: 42 };
Object.freeze(obj);
obj.prop = 33; // Throws an error in strict mode
```
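A hedged sketch of applying this to nested state: recursively freeze the suspect object in a development build, so the first offending write throws with a stack trace pointing at the culprit. deepFreeze is our own helper, and store.getState().fullData is an assumed example of the state being watched.

```js
function deepFreeze(obj) {
  Object.getOwnPropertyNames(obj).forEach(name => {
    const value = obj[name];
    if (value && typeof value === 'object') deepFreeze(value);
  });
  return Object.freeze(obj);
}

// Dev-only: freeze the state you suspect is being mutated; the first
// illegal write throws in strict mode, and its stack reveals the culprit.
if (process.env.NODE_ENV === 'development') {
  deepFreeze(store.getState().fullData);
}
```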
Mutable traceability
This method likewise addresses the unpredictability of mutable changes, and serves several troubleshooting purposes:
Where a property is read.
Where a property is changed.
What the access chain to a property looks like.
In the example below, after wrapping an object with watchObject, setting any of its properties at any depth outputs the relevant change information (stack, changed content, etc.):
```js
const a = { b: { c: { d: 123 } } };
watchObject(a);
const c = a.b.c;
c.d = 0; // Prints: Modify: "a.b.c.d"
```
The principle behind watchObject is to wrap the object in a deep Proxy that intercepts get/set. For details, see https://gist.github.com/wilsoncook/68d0b540a0fea24495d83fc284da9f4b.
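A minimal sketch of the idea (the gist above is the real implementation; this simplified version is my own, only logs writes, and, unlike the example, returns a wrapped object that must be used in place of the original):

```js
function watchObject(obj, path = 'root') {
  return new Proxy(obj, {
    get(target, key, receiver) {
      const value = Reflect.get(target, key, receiver);
      // Wrap nested objects on access so deep writes are also trapped.
      return value && typeof value === 'object'
        ? watchObject(value, `${path}.${String(key)}`)
        : value;
    },
    set(target, key, value) {
      console.log(`Modify: "${path}.${String(key)}"`, value);
      return Reflect.set(target, key, value);
    },
  });
}

const watched = watchObject(a, 'a');
watched.b.c.d = 0; // logs: Modify: "a.b.c.d" 0
```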
Avoid Mutable
Usually a stack like React is paired with a corresponding data-flow solution, which is inherently opposed to mutation. So mutable data should be avoided as much as possible in coding, or the two should be separated by design (different stores); otherwise unexpected, hard-to-debug problems will appear.
3 Computation & rendering
Minimize data dependencies
As project components grow explosively, the data-flow store gets deeper and deeper, and many components depend on some attribute to trigger rendering. These dependencies should follow the minimization principle at design time, to avoid the frequent re-renders caused by depending on one large attribute, as described above.
Make rational use of cache
(1) Computation results
For necessary CPU-intensive computation logic, it is important to use caching mechanisms such as WeakMap to store final results or intermediate states, as sketched below.
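A minimal memoization sketch using WeakMap, assuming a hypothetical expensiveCalc keyed by an input object:

```js
const cache = new WeakMap();

function cachedCalc(input) {
  // Key by the input object itself; the cache entry is garbage-collected
  // together with the key, so there is no manual invalidation to manage.
  if (cache.has(input)) return cache.get(input);
  const result = expensiveCalc(input); // hypothetical CPU-heavy function
  cache.set(input, result);
  return result;
}
```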
(2) Component state
For hooks components, the following two principles should be followed (see the sketch after this list):
Memoize time-consuming logic wherever possible.
Keep memo dependencies free of redundancy.
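A small sketch of both principles together, assuming a hypothetical heavyTransform of a list: only items is a dependency, and the unrelated theme prop is deliberately left out of the deps array.

```js
const List = React.memo(({ items, theme }) => {
  // Memoize the heavy transform; `theme` only affects styling, so it is
  // deliberately not a dependency (no redundant recomputation).
  const rows = React.useMemo(() => heavyTransform(items), [items]);
  return (
    <ul className={theme}>
      {rows.map(r => <li key={r.id}>{r.label}</li>)}
    </ul>
  );
});
```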
Avoid CPU-intensive functions
Some utility functions' complexity grows with the magnitude of their input, while others are inherently CPU-expensive. Avoid such utilities where possible; where unavoidable, keep them under strict control by "constraining the input (whitelist)" and "moving the work off the main thread (Web Worker, etc.)".
For example, with _.cloneDeep, if it cannot be avoided, make sure the input's properties contain no large data such as component references.
Likewise, for the immutable deep-merge problem described above, constrain the inputs as much as possible, or refer to this self-developed immutable implementation: https://gist.github.com/wilsoncook/fcc830e5fa87afbf876696bf7a7f6bb1
```js
const a = { b: { c: { d: 123 }, c2: { d2: 321 } } };
const merged = immutableDefaultsDeep(a, { b: { c3: 3 } });
console.log(merged === a); // prints false
console.log(merged.b.c === a.b.c); // prints true
```

Thank you for reading. That concludes "how to optimize front-end performance." After studying this article, I believe you have a deeper understanding of the topic; the specific techniques still need to be verified in practice.