2025-01-15 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report
This article introduces schemes for optimizing the performance of Web applications, along with their advantages and disadvantages. Many people run into these situations in real projects, so let's walk through how to handle them. I hope you read it carefully and get something out of it!
Chrome Coverage analyzes code coverage
Before we explain these mechanisms, let's look at a Chrome tool, Coverage. It helps you find the JavaScript and CSS code that is used, or unused, on the current page.
To open the tool:
Open the browser DevTools console
Press Ctrl+Shift+P to open the command menu
Type "Show Coverage" in the command menu to reveal the Coverage tab
To query the code used when the page loads, click the reload button
To see the code used after interacting with the page, click the record button
Here, taking Taobao as an example, let's see how to use it.
The two results above are the analyses after clicking reload and record, respectively.
From left to right, the columns are:
URL of the requested resource
The JS and CSS contained in the resource
Total resource size
Currently unused resource size
There is a summary in the lower left-hand corner describing the size of the resources loaded on the current page and the percentage that is unused. You can see that the unused rate of Taobao's home page code is only 36%.
The purpose of introducing this feature is not to demand that you refactor the code base so that each page contains only the JS and CSS it needs; that is difficult, even impossible. But this metric can improve our understanding of the current project, which is the basis for improving performance.
Increasing code coverage brings the highest benefit of all the performance optimization mechanisms here: it means you load less code, execute less code, consume fewer resources, and cache fewer resources.
Webpack externals acquires external CDN resources
Generally speaking, we build SPA single-page projects with Vue or React and their corresponding component libraries. However, packaging this framework code directly into the project at build time is not a very wise choice.
We can add the following code directly to the project's index.html:
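The snippet referred to here did not survive extraction; a minimal sketch of what such index.html additions typically look like, assuming hypothetical CDN URLs and versions:

```html
<!-- load Vue and vue-router from a public CDN; URLs and versions are illustrative -->
<script src="https://cdn.jsdelivr.net/npm/vue@2.6.14/dist/vue.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/vue-router@3.5.3/dist/vue-router.min.js"></script>
```

Each script defines a global (Vue, VueRouter) that the externals configuration below maps the bare import names onto.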
You can then configure it like this in webpack.config.js:
```js
module.exports = {
  // ...
  externals: {
    'vue': 'Vue',
    'vue-router': 'VueRouter',
  },
};
```
The role of webpack externals is to not package Vue into the final build, but to acquire these external dependencies at run time instead. This works well for teams that cannot host resources themselves at the start of a project and need to rely on a CDN service.
Principle
When these projects are packaged as third-party libraries, they also export a global variable, so the library can be obtained and used directly from the browser's window object. That is:
```js
window.Vue // ƒ bn(t) { this._init(t) }
```
That's why we can use it directly in the html page.
```html
<div id="app">{{ message }}</div>
<script>
  // Vue is mounted on window, so we can use it directly on the page
  var app = new Vue({
    el: '#app',
    data: { message: 'Hello Vue' }
  })
</script>
```
At this point, we can read webpack's Authoring Libraries guide to learn how to develop third-party packages with webpack.
Advantages and disadvantages
Advantage
For dependency libraries like these, which can neither be code-split nor Tree Shaken, placing them on a public CDN is very beneficial.
Disadvantage
For libraries like Vue and React, a problem with the CDN service means the project cannot be used at all. You need to watch the announcements of the CDN providers you use (for example, notices that service is being discontinued) and add remediation logic of this kind to your code:
```js
window.Vue || /* fallback handling */
```
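A concrete version of that remediation might look like this (the CDN URL and local fallback path are hypothetical):

```html
<script src="https://cdn.example.com/vue.min.js"></script>
<script>
  // if the CDN failed, window.Vue is undefined: fall back to a local copy
  window.Vue || document.write('<script src="/static/vue.min.js"><\/script>')
</script>
```

The escaped `<\/script>` inside the string prevents the HTML parser from closing the outer script tag early.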
Webpack dynamic import increases code coverage
We can use webpack's dynamic import, calling getComponent only when the code is needed. Before that, webpack needs to be configured; see webpack's dynamic-imports documentation for details.
After the configuration is complete, we can write the following code.
```js
async function getComponent() {
  const element = document.createElement('div');
  /* chunks given the same webpackChunkName are packaged into one chunk */
  const { default: _ } = await import(/* webpackChunkName: "lodash" */ 'lodash');
  element.innerHTML = _.join(['Hello', 'webpack'], ' ');
  return element;
}

getComponent().then(component => {
  document.body.appendChild(component);
});
```
Advantages and disadvantages
Advantage
With dynamic import configured, the build produces multiple chunks that are loaded and executed only when needed. Resources the user will never reach (behind routing control or permission control) are not loaded, which directly increases code coverage.
Disadvantage
Tree Shaking can be understood as dead code elimination: code that is not needed is excluded from the build. However, when we use dynamic import we cannot also benefit from Tree Shaking, because the two are incompatible: webpack cannot make assumptions about how users will use dynamic imports.
```
            base code X
           /           \
      module A       module B
          |              |
  business code A  business code B
```
When multiple async chunks are used in the business code, business code A requires module A and business code B requires module B, but webpack cannot assume that modules A and B are mutually exclusive, nor that they are always loaded together. So it has to assume that A and B may be loaded at the same time, in which case the base code X would need two different export states, which is impossible. In this respect, dynamic import and Tree Shaking are hard to reconcile. For more information, see the webpack issue "Document why tree shaking is not performed on async chunks".
Of course, using dynamic import also brings some performance cost: one case is a local function call, while the other involves a network request and compilation. But this is not so much a flaw as a decision to make per project: which is more helpful to yours?
Use loadjs to assist in loading third-party CDN resources
We can use dynamic import in ordinary business code, but in today's front-end projects there are always libraries that we need yet rarely use, such as the ECharts charting library that only appears in a statistics module, or a rich text editor that only appears when editing documents or pages.
For these awkward libraries, we can instead use loadjs to load them when the page or component mounts. Since dynamic import of such third-party libraries gains nothing from Tree Shaking anyway, the effect is about the same, but loadjs can fetch public CDN resources. See loadjs on GitHub for specific usage; because the library is fairly simple, we won't discuss it in depth here.
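As a brief browser-side sketch of the pattern (the ECharts CDN URL and the element id are illustrative; see the loadjs README for the full API):

```js
// load ECharts from a CDN only when the statistics view mounts
loadjs(['https://cdn.example.com/echarts.min.js'], 'echarts')

loadjs.ready('echarts', {
  success: function () {
    // the script has executed; the global it defines is now available
    var chart = window.echarts.init(document.getElementById('chart'))
    chart.setOption({ series: [{ type: 'bar', data: [1, 2, 3] }] })
  },
  error: function (depsNotFound) {
    // CDN failed; depsNotFound lists the bundles that did not load
    console.error('failed to load', depsNotFound)
  }
})
```

The bundle id ('echarts') lets other components wait on the same download instead of fetching the script twice.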
Use output.publicPath to host code
Whether using webpack externals or loadjs with a public CDN, it is a compromise. If the company can spend money on an OSS + CDN service, the packaged resources can be hosted directly.
```js
module.exports = {
  // ...
  output: {
    // prefix for each chunk
    publicPath: 'https://xx/',
    chunkFilename: '[id].chunk.js'
  }
};
```

After packaging, the generated asset URLs are prefixed with this publicPath.
At this point, the business server only needs to serve index.html.
Use prefetch to load resources during idle time
If a script is not needed on the first screen of the page, you can use the browser's newer prefetch hint to fetch it lazily.
The following kind of code tells the browser that echarts will be used in a future navigation or feature, but that the resource's download priority is relatively low. In other words, prefetch is usually used to speed up the next navigation; resources marked prefetch are loaded by the browser during idle time.
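The snippet itself did not survive extraction; a prefetch hint of the kind described would look like this (the echarts URL is illustrative):

```html
<link rel="prefetch" href="https://cdn.example.com/echarts.min.js">
```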
This feature also applies to pre-requests for html and css resources.
Use instant.page to load resources in advance
instant.page is a relatively new library: small, beautiful, and non-invasive. As long as you add the following code to the page, you get the benefit.
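The include did not survive extraction; per the instant.page site it is a single script tag of roughly this shape (the version path is illustrative; copy the exact tag, including its integrity attribute, from instant.page itself):

```html
<script src="//instant.page/5.2.0" type="module" defer></script>
```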
This solution is not suitable for single-page applications, but the library makes clever use of prefetch: when you hover over a link for 65 ms, it sets the href of a prefetch link element in the head to the hovered link's destination.
The following code is the main code
```js
// create the prefetcher link element
const prefetcher = document.createElement('link')

// check whether <link rel="prefetch"> is supported
const isSupported = prefetcher.relList &&
    prefetcher.relList.supports &&
    prefetcher.relList.supports('prefetch')

// hover delay, 65 ms by default
let delayOnHover = 65

// read the instantIntensity data attribute set on the script, if any,
// and use it as the hover delay
const milliseconds = parseInt(document.body.dataset.instantIntensity)
if (!isNaN(milliseconds)) {
  delayOnHover = milliseconds
}

// only activate when prefetch is supported and data saver mode is off
if (isSupported && !isDataSaverEnabled) {
  prefetcher.rel = 'prefetch'
  document.head.appendChild(prefetcher)
  // ...
}

// once the mouse hovers longer than instantIntensity ms (or 65 ms),
// set the href so the page is fetched in advance
mouseoverTimer = setTimeout(() => {
  preload(linkElement.href)
  mouseoverTimer = undefined
}, delayOnHover)

function preload (url) {
  prefetcher.href = url
}
```
Delay the prefetch, or load on mouse hover? The library takes advantage of many new browser mechanisms, including using type=module to keep old browsers from executing it, and using dataset to read instantIntensity to control the delay.
optimize-js skips V8 pre-parse to optimize code performance
I learned of this library from an article on a new version of V8. It is marked UNMAINTAINED on GitHub and is no longer maintained, but understanding and learning from it still has value. The library's approach is simple, even crude: it merely rewrites functions into IIFE form (immediately invoked function expressions). Usage is as follows:
```shell
optimize-js input.js > output.js
```
Example input:
```js
!function (){}()
function runIt(fun){ fun() }
runIt(function (){})
```
Example output:
```js
!(function (){})()
function runIt(fun){ fun() }
runIt((function (){}))
```
Principle
Inside the V8 engine (not just V8, but we take V8 as the example), parsing is divided into Pre-Parse and Full-Parse. Pre-Parse scans the entire JS source and can directly detect syntax errors and abort further parsing; at this stage the parser does not generate the AST of the source code.
```js
// This is the top-level scope.
function outer() {
  // this function is preparsed
  function inner() {
    // preparsed, but not fully parsed or compiled
  }
}
outer(); // fully parses and compiles `outer`, but not `inner`
```
However, if you use an IIFE, the V8 engine skips the Pre-Parsing step entirely and immediately fully parses and compiles the function. See "Blazingly fast parsing, part 2: lazy parsing" for details.
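A minimal illustration of the transform's effect. The wrapping parentheses are a heuristic hint to the engine that the function is invoked immediately, so it is fully parsed and compiled in one pass instead of being pre-parsed and then re-parsed on the call; both forms compute the same value:

```javascript
// without parens: engines typically pre-parse the function body first,
// then parse it fully again when the call actually happens
var slow = function () { return 40 + 2; }();

// with parens (what optimize-js emits): the leading paren hints
// "immediately invoked", so the engine compiles it eagerly
var fast = (function () { return 40 + 2; })();
```

Semantically the two are identical; only the engine's parsing strategy differs.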
Advantages and disadvantages
Advantage
Speed! Even on newer V8 engines, optimize-js is still the fastest. Not to mention that many domestic browsers ship V8 versions far older than the current one. Unlike back-end Node, a front-end page's life cycle is very short: the faster, the better.
Disadvantage
Again, no technique is a silver bullet. Eager full parsing and compilation also causes memory pressure, and this library's approach is not one recommended by the JS engines. Its benefit will likely shrink over time, but for some special needs it can certainly help.
Back to code coverage
This comes back to code coverage: if code coverage is high on the first screen, executing the code immediately is the better approach. The higher the code coverage in the project, the greater the benefit of skipping Pre-Parsing and letting the code execute as soon as possible.
Polyfill.io builds different polyfills for different browsers
If you have written front-end code, you must know about polyfills. Different browsers and versions have different polyfill requirements.
Polyfill.io is a service that reduces the annoyance of Web development by selectively populating the content needed by the browser. Polyfill.io reads the User-Agent header for each request and returns the polyfill appropriate to the requesting browser.
If it is a recent browser that already has Array.prototype.filter, a request to

```
https://polyfill.io/v3/polyfill.min.js?features=Array.prototype.filter
```

returns no polyfill code, only a header comment: /* Disable minification (remove `.min` from URL path) for more info */
If not, the relevant polyfill is included in the response.
Alibaba also runs a comparable service in China that can be considered: https://polyfill.alicdn.com/polyfill.min.js
type='module' assists in packaging and deploying ES2015+ code
With new DOM APIs, polyfills can be loaded conditionally because support can be detected at run time. With new JavaScript syntax, however, this is trickier: any unknown syntax causes a parse error, and then none of the code runs.
As early as 2017, I knew that type=module enabled native module support in browsers (see the JavaScript modules documentation for details). At the time it merely seemed a powerful feature; I did not expect it could also be used to detect whether a browser supports ES2015.
Every browser that supports type= "module" supports most of the ES2015+ syntax you are familiar with!
For example, native support for features such as Promise, Map, and Set.
Therefore, this feature enables an elegant fallback: serve one JS bundle to browsers that support type=module, and a different one to those that do not. For more information, see Philip Walton's excellent blog post; there is also a translated version at https://jdc.jd.com/archives/4911.
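The pattern described in that post serves a modern bundle to module-aware browsers and a legacy bundle to the rest (the file names here are illustrative):

```html
<!-- modern browsers load this and skip the nomodule script -->
<script type="module" src="/js/app.esm.js"></script>
<!-- legacy browsers don't understand type=module and load this instead -->
<script nomodule src="/js/app.legacy.js"></script>
```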
Vue CLI modern mode
If your project has moved from the bare-webpack camp to the Vue CLI camp, then congratulations: the above solution is built into Vue CLI. The project produces two versions of the bundle simply by running the following command:
```shell
vue-cli-service build --modern
```
See the Vue CLI modern mode documentation for details.
Advantages and disadvantages
Advantage
Improved code coverage: shipping native syntax such as await directly removes a lot of transpiled code.
Improved code performance. V8 once used the Crankshaft compiler, which was eventually abandoned because it could not optimize modern language features; V8 then introduced the new TurboFan compiler to support and optimize them. The performance of try/catch, await, JSON, regular expressions, and other features often discussed in the community has improved greatly. You can browse the V8 blog from time to time to see these optimizations.
Writing ES2015 code is a win for developers, and deploying ES2015 code is a win for users. That concludes this overview of schemes for optimizing Web application performance and their advantages and disadvantages. Thank you for reading!
© 2024 shulou.com SLNews company. All rights reserved.