What are the classic interview questions derived from front-end performance optimization?


This article walks through the classic interview questions that grow out of front-end performance optimization. Many people run into exactly these situations in real projects, so let the editor take you through how to handle them. I hope you read it carefully and take something away from it!

Rendering optimization

Rendering optimization is a very important part of front-end optimization; a good first-screen time gives users a good experience. It is worth pausing on the definition of "first-screen time", because different teams define it differently. Some teams treat it as the white-screen time, i.e. the time from starting to load the page until the first content is painted. From a user-experience point of view that alone is not enough, so other front-end teams define first-screen time as the time from starting to load the page until the user can actually operate the page normally. The rest of this article uses the latter definition.
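To put a rough number on either definition, the browser's Performance APIs are a reasonable starting point. The sketch below is not from the original text and assumes a Chromium-based browser that exposes these entry types: the paint entries roughly bound the white-screen phase, while largest-contentful-paint is closer to a usable first screen.

// Rough first-screen measurements via PerformanceObserver (a sketch, not a full metrics library).
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // "first-paint" and "first-contentful-paint" mark the end of the white screen
    console.log(entry.name, Math.round(entry.startTime));
  }
}).observe({ type: 'paint', buffered: true });

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  // The last LCP entry is the best approximation of "the first screen is ready"
  console.log('largest-contentful-paint', Math.round(entries[entries.length - 1].startTime));
}).observe({ type: 'largest-contentful-paint', buffered: true });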

JS and CSS loading order

Before talking about rendering optimization we need a short detour: the classic question "what happens when you type a URL into the browser address bar". Only with that picture in mind can we really understand how the loading order of JS and CSS affects rendering.

Question 1: what happens when you enter a URL in the address bar?

This question comes up all the time. Some people answer it briefly, others in great detail. Here is the main flow.

First, the URL is parsed and the DNS system resolves the domain name to an IP address.

With the IP address the browser can reach the server, and the browser and server perform a TCP three-way handshake to establish a connection. If the site uses HTTPS, a TLS connection is also established and the encryption algorithms are negotiated. This raises another question worth noting, "the difference between HTTPS and HTTP" (described below).

After the connection is established, the browser starts sending requests to fetch files. At this point caching comes into play: whether a resource is read from the cache or fetched again depends on how the server configured the response headers, which leads to the question of the "browser caching mechanism", discussed later. For now, assume there is no cache and the file is fetched directly.

First the HTML file is fetched and the DOM tree is built. Downloading and parsing happen at the same time: the browser does not wait for the whole HTML file to finish downloading before parsing it (that would waste time); it parses it bit by bit as it arrives.

When parsing reaches the head of the HTML, another question appears: where are the CSS and JS? Different positions lead to different rendering behaviour, which raises the question "where should CSS and JS be placed, and why?" Let's first describe the flow with the conventional placement (CSS in the head, JS at the end of the body).

While parsing the head the browser finds a CSS file, downloads it and parses it at the same time, building the CSSOM tree. Once the DOM tree and CSSOM tree are both built, the browser combines them into a render tree.

Style calculation. The last sentence above, "the browser combines the DOM tree and the CSSOM tree into a render tree", is a bit general; in reality there are more detailed steps, although the general answer above is usually enough. Let's look at what else happens while the render tree is constructed. The first step is style calculation: once the DOM tree and CSSOM tree are available, the browser calculates styles, essentially finding the matching style for every node in the DOM tree.

Build the layout tree. After style calculation the browser builds the layout tree. Its main purpose is to work out the position of each DOM node on the page and to drop hidden elements such as those with "display: none".

Build the layer tree. After the layout tree is finished the browser also builds a layer tree, mainly to handle complex stacking and compositing cases such as scrollbars, z-index and fixed positioning.

Tiling and rasterization. The layer tree is split into tiles, and the rasterizer produces bitmaps for the tiles inside the viewport. A page may be several screens long, and rendering all of it at once would be wasteful, so the browser finds the tiles that cover the viewport and renders that part first.

Finally the rendering process paints the whole page, and during rendering reflow and repaint may occur. This leads to another favourite question: "why do reflow and repaint affect rendering, and how do you avoid them?"

The flow above roughly covers everything from entering the URL to the page being rendered, but it touches on several questions that deserve closer attention. Let's go through them in detail.

Question 2: the influence of JS/CSS order on rendering

We described the rendering flow above but did not discuss how CSS and JS affect it. The render tree is built from the DOM tree and the CSSOM tree, so building the CSSOM tree as early as possible is an important optimization. If the CSS file were placed at the end of the body, the whole process would become serial: parse the DOM first, then parse the CSS. That is why we normally put CSS in the head, so that the DOM tree and the CSSOM tree are built in parallel.

Now for JS. Executing JS blocks the building of the DOM tree, so if JS is placed in the head and not loaded asynchronously, the DOM tree cannot be built while the script runs and the page shows a blank screen the whole time. That is why we usually put JS files at the end of the body. Putting them at the end is not problem-free either, just less harmful: if the JS file at the end is very large and takes a long time to load and run, the heavy work can leave the page unclickable for a while. That is a separate problem, but it is still better than a blank screen, where nothing on the page works at all.

There is another reason to put JS scripts at the end: JS code often manipulates DOM nodes. If it runs in the head, the DOM tree has not been built yet, the DOM nodes cannot be found, and using them throws an error; if the error is not handled properly, the page simply breaks.
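If a script really has to live in the head, a common workaround (a sketch of my own, not from the article) is to defer the DOM access until the DOM tree is ready, either with the script tag's defer attribute or with the DOMContentLoaded event:

// Waits until the HTML has been fully parsed, so the nodes the script needs actually exist.
document.addEventListener('DOMContentLoaded', () => {
  const app = document.querySelector('#app'); // '#app' is just an assumed element id for illustration
  if (app) {
    app.textContent = 'DOM is ready, safe to manipulate nodes here';
  }
});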

Question 3: why do reflow and repaint affect rendering, and how do you avoid them?

Why reflow and repaint affect rendering, which one is more expensive, and how to avoid them is a question that comes up often. Let's start with repaint.

Repaint

Repaint refers to operations that do not affect layout, such as changing a colour. Based on the rendering flow described above, after a repaint the browser only needs to redo the style calculation and paint again, so its impact on rendering is relatively small.

Reflow

Reflow refers to operations that do change the layout, such as changing width or height, or hiding a node. A reflow is not just a style recalculation: because the layout has changed, the stages involved, according to the flow above, are style calculation, rebuilding the layout tree and rebuilding the layer tree, so the impact of reflow on rendering is much higher.

How to avoid them

Minimize style manipulation from JS; if something can be done with CSS, do it with CSS.

If you must change styles from JS, merge the changes into a single operation instead of several.

Debounce resize handlers so they fire as rarely as possible (see the sketch after this list).

Touch the DOM as little as possible, and use createDocumentFragment where batching insertions makes sense.

When adding images, write the width and height in advance.
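Here is a small sketch of the two JS-side points above: debouncing the resize handler and batching DOM insertions with createDocumentFragment (names such as #list are assumptions for illustration):

// Debounce: the handler only fires once resize events have stopped for `wait` milliseconds.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

window.addEventListener('resize', debounce(() => {
  console.log('recalculate layout once, after resizing stops');
}, 200));

// Batch insertions: one reflow for the whole fragment instead of one per appended node.
const fragment = document.createDocumentFragment();
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li');
  li.textContent = `item ${i}`;
  fragment.appendChild(li);
}
document.querySelector('#list').appendChild(fragment);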

Question 4: the browser caching mechanism

Browser caching is a common interview topic. I will explain it from three angles: the types of browser cache, how the cache is implemented, and where the cache is stored.

Cache types

We usually say there are two kinds of browser cache: the forced cache and the negotiated cache. The concrete implementation is covered below, so here we only explain the concepts.

Negotiated cache

A negotiated cache means the file has been cached, but whether it may be read from the cache has to be negotiated with the server. How the negotiation works depends on the request and response header fields described below. Note that with the negotiated cache a request is always sent, whether or not the cache ends up being used.

Forced cache

A forced cache means the file is read directly from the cache without sending a request at all.

Cache implementation

Force caching

Expires is used when forcing the cache to exist in http1.0, and a field in the response header indicates the expiration time of the file. Is an absolute time, and in some cases, the cache will be invalidated when the server's time zone is inconsistent with the browser's time zone. To solve this problem, HTPP1.1 introduces a new response header, cache-control, whose optional values are as follows

Cache-Control

max-age: how long the cache stays valid, expressed as a relative time

public: both the client and proxy servers may cache the response

private: only the client may cache the response

no-cache: the negotiated-cache marker; the file is cached but must be revalidated with the server before use

no-store: the file is not cached at all

In HTTP/1.1, Cache-Control: max-age=600 enables the forced cache; because max-age is a relative time, it does not suffer from the Expires clock problem.
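As a concrete illustration, here is a minimal sketch of how a server might emit both headers, assuming a Node/Express static server (the directory name and the 10-minute lifetime are assumptions, not from the article):

// Forced cache: Express turns maxAge into Cache-Control: max-age=600;
// Expires is added only as an HTTP/1.0 fallback and suffers from the clock problem described above.
const express = require('express');
const app = express();

app.use('/static', express.static('public', {
  maxAge: 10 * 60 * 1000, // milliseconds; becomes Cache-Control: max-age=600
  setHeaders(res) {
    res.setHeader('Expires', new Date(Date.now() + 10 * 60 * 1000).toUTCString());
  },
}));

app.listen(3000);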

Negotiated cache

The negotiated cache uses two pairs of request/response headers: Last-Modified / If-Modified-Since and ETag / If-None-Match.

Last-Modified / If-Modified-Since

The first time the browser requests a file, the server's response carries a Last-Modified header recording when the file was last changed, and the browser caches the file.

On the next request the browser sends an If-Modified-Since request header whose value is the Last-Modified time it received earlier. The server compares that value with the file's current modification time; if they match, the file has not changed, so the server returns 304 and the browser reads the file from its cache.

ETag / If-None-Match

Because the granularity of Last-Modified is one second, a file that changes several times within the same second will not be detected, so this approach can still fail in that particular case. HTTP/1.1 therefore introduced the ETag field: the server generates a tag from the file's content, such as W/"5f9583bd-10a8", and compares it with the If-None-Match request header, which tells much more precisely whether the file has changed.
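A minimal server-side sketch of the Last-Modified / If-Modified-Since handshake, assuming Express and an assumed file name, might look like this; it answers 304 when the file has not changed since the timestamp the browser sent:

// Negotiated cache with Last-Modified / If-Modified-Since (second-level granularity).
const express = require('express');
const fs = require('fs');
const path = require('path');
const app = express();

app.get('/data.json', (req, res) => {
  const file = path.resolve('./data.json');
  const mtime = fs.statSync(file).mtime;
  const since = req.headers['if-modified-since'];

  if (since && new Date(since).getTime() >= Math.floor(mtime.getTime() / 1000) * 1000) {
    return res.status(304).end(); // unchanged: the browser reuses its cached copy
  }
  res.setHeader('Last-Modified', mtime.toUTCString());
  res.sendFile(file);
});

app.listen(3000);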

Where is the cache?

Knowing the cache types and their implementation, let's talk about where the cache is stored. If you open the Nuggets site and look at the network panel, you can see two cache sources: from disk cache and from memory cache.

From memory cache

The resource is cached in memory. The advantage is speed, but it is short-lived: the cache disappears once the tab is closed.

From disk cache

The resource is cached on disk, which is slower than memory but still much faster than a network request. The advantage is that the cache persists even after the tab is closed.

When does a resource go to the memory cache, and when to the disk cache?

A standard answer to this question is hard to find online. The common belief is that the browser automatically keeps JS and image files in memory, while CSS files go to disk because they are rarely modified. If you open the Nuggets site, a large portion of the files follow this rule, but a few JS files are cached on disk as well. So what is the actual policy? I read many articles with this question in mind and never found a definitive answer, but one Zhihu answer offers a useful way of thinking about it. Here is a quote from that Zhihu respondent.

First phenomenon (using images as an example): visit → 200 → quit the browser and come back → 200 (from disk cache) → refresh → 200 (from memory cache). Conclusion: perhaps Chrome is simply smart enough to decide that, having already read the file from disk, it should take it from memory the second time. (laughing and crying)

Second phenomenon (images again): whenever the image is base64, it seems to come from memory cache. Conclusion: parsing and rendering a base64 image is hard work, so better to do it once, keep it in memory, and grab it directly the next time.

Third phenomenon (JS and CSS): in my static tests, large JS and CSS files go straight to disk cache. Conclusion: Chrome seems to be saying "you take up too much space, stay on the hard drive, even if that is a bit slower".

Fourth phenomenon: in private (incognito) mode, almost everything comes from memory cache. Conclusion: private mode must not leave traces, so everything goes to memory; close the window and it is all gone.

The points above are humorous and only partly answer the question; I agree more with another Zhihu answer:

While the browser is running, several processes work together, so to save memory the operating system may swap some in-memory resources out to the disk swap area. There is of course a policy behind this, the most common being LRU.

When something goes to disk and when it stays in memory is under the browser's control; if memory is tight, it may choose disk. Based on everything above, my summary is as follows.

Larger files tend to be cached on disk, because memory is limited and disk space is larger.

Smaller JS files and images tend to stay in memory.

CSS files generally end up in the disk cache.

In special cases, when memory is limited, the browser will move some JS files to disk according to its own built-in policy.

Question 5: the difference between HTTPS and HTTP

When discussing the difference between HTTPS and HTTP, you can talk about how the HTTPS connection between server and client differs, the negotiation of the encryption algorithms specific to HTTPS, and even symmetric encryption, asymmetric encryption and certificates. It is a long topic; please see the detailed HTTPS article I wrote separately, which covers it thoroughly.

Request optimization

Before getting into request optimization, let's summarize the JS/CSS ordering discussion above: to render faster, put JS at the end of the body and CSS in the head, minimize reflow and repaint when writing JS, and keep HTML and CSS concise and non-redundant so the DOM tree and CSSOM tree can be built more quickly. Now on to request optimization, which can be approached from two angles: the number of requests and the time each request takes.

Reduce the number of requests

Inline small images as base64 (a webpack sketch follows this list).

Use sprites to merge several small images into one.

Use caching, as discussed above.
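For the base64 point, one common way to do it is to let webpack inline small images automatically. A sketch assuming webpack 4 and url-loader (the 8 KB threshold is an arbitrary assumption):

// Images under ~8 KB become base64 data URIs (no extra request);
// larger images are still emitted as separate files.
module.exports = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif)$/,
        use: [{ loader: 'url-loader', options: { limit: 8 * 1024 } }],
      },
    ],
  },
};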

Reduce request time

Compress JS, CSS, HTML and other files as much as possible to reduce file size and speed up downloads.

With webpack, split the bundle and lazy-load by route instead of loading everything up front, otherwise the initial file becomes very large (see the sketch after this list).

Upgrade to a higher HTTP version where possible (a cliché answer, admittedly). Why a higher version is faster is covered in the HTTPS article mentioned above.

Set up an internal CDN to deliver files more quickly.
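For route-level lazy loading, a dynamic import() is usually enough: webpack splits each page into its own chunk and only downloads it when the route is visited. A sketch assuming Vue Router (the file names are assumptions):

// Each component becomes a separate chunk, loaded on demand when the route is visited.
const routes = [
  { path: '/', component: () => import(/* webpackChunkName: "home" */ './views/Home.vue') },
  { path: '/report', component: () => import(/* webpackChunkName: "report" */ './views/Report.vue') },
];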

Webpack optimization

After rendering optimization, let's look at webpack optimization. When I write demos for team training I usually hand-write the webpack configuration myself; it is only a few dozen lines, but it lets me consolidate the basic webpack configuration every time. Here are the main webpack optimization techniques.

Basic configuration optimizations

extensions belongs to resolve and is used to list the file suffixes webpack should try, as follows:

resolve: { extensions: ['.ts', '.tsx', '.js'] }

This tells webpack which suffixes to try, in order, when resolving files, so if the project is mostly written in TypeScript we can put .tsx and .ts at the front and webpack resolves files faster.

alias also belongs to resolve and is used to map paths. It mainly reduces build time because webpack can resolve the file path and find the corresponding file quickly. The configuration is as follows:

resolve: { alias: { Components: path.resolve(__dirname, './src/components') } }

noParse

noParse lists files that do not need to be parsed. Some files come from third parties and are exposed through ProvidePlugin as variables on window; such files are large and already bundled, so it makes sense to exclude them from parsing. The configuration is as follows:

module: { noParse: [/proj4\.js/] }

exclude

Some loaders have this option to narrow the loader's scope. exclude means certain files are excluded from, for example, babel-loader processing; the smaller a loader's scope, the faster the build. A simple babel-loader example:

{ test: /\.js$/, loader: "babel-loader", exclude: path.resolve(__dirname, 'node_modules') }

devtool

This option controls source maps for debugging. Different values produce different output, and the bundle size and build speed differ too; for example, cheap-source-map is definitely faster than source-map in development. As for why, I highly recommend my earlier article on devtool, webpack devtools, which goes into detail.

{ devtool: 'cheap-source-map' }

.eslintignore

Although this is not a webpack option, it is still very useful for build speed. In my experience the ESLint check has a big impact on build time, yet in many cases we cannot do without it, and running the check only in VS Code may not be safe enough.

The ESLint plugin in your editor might be switched off or unable to run for some reason, so the webpack build is the next barrier that stops broken code from being committed. Even that is not foolproof, because you might commit in a hurry without ever starting the build, just changing code blindly and pushing it; then the last barrier is CI. For example, Jenkins also runs the ESLint check during its build, and the three barriers together ensure that nothing broken ends up in our final image.

So ESLint is important and cannot simply be removed. To keep the check while spending less time on it, ignore the files that do not need it: disabling ESLint for unnecessary files and linting only what matters can greatly improve build speed (a sketch follows).
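One way to do that (a sketch; the directory names are assumptions) is ignorePatterns in .eslintrc.js, or the equivalent entries in an .eslintignore file:

// .eslintrc.js — lint only the source that matters; build output and vendored bundles are skipped.
module.exports = {
  ignorePatterns: ['dist/', 'node_modules/', 'src/vendor/', '*.min.js'],
  // ...the rest of the team's ESLint configuration stays unchanged
};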

Loader and plugin optimization

Those are a few basic configuration optimizations; there are certainly others, and I will keep adding them as I run into them. Now let's look at some loaders and plugins that improve build speed.

cache-loader

This loader caches the build output the first time it runs and reads the cached content directly on the second build, which improves efficiency. It has to be used wisely, though: keep in mind that every loader or plugin you add brings extra build time of its own, and if that extra time exceeds the time it saves, adding it is pointless. So cache-loader is best applied only to expensive loaders. The configuration is as follows:

{ rules: [{ test: /\.vue$/, use: ['cache-loader', 'vue-loader'], include: path.resolve(__dirname, './src') }] }

webpack-parallel-uglify-plugin, uglifyjs-webpack-plugin, terser-webpack-plugin

As we saw in the rendering section, the smaller the file, the faster the rendering, so we almost always enable compression in the webpack config. Compression itself takes time, though, which is why one of the three plugins above is often used to compress in parallel and cut compression time. Here is an example using terser-webpack-plugin, the one recommended for webpack 4:

optimization: { minimizer: [new TerserPlugin({ parallel: true, cache: true })] }

happypack, parallel-webpack, thread-loader

These loaders/plugins also enable parallelism, but for the build itself. Since the author of happypack has said his interest is no longer in JS and the project is unmaintained, thread-loader is recommended if you are on webpack 4. The basic configuration is as follows:

{ test: /\.js$/, use: [{ loader: "thread-loader", options: threadLoaderOptions }, "babel-loader"], exclude: /node_modules/ }

DllPlugin, DllReferencePlugin

The parallel plugins above can, in theory, speed up the build, as many articles claim, but in practice I found that they sometimes slow it down instead. The usual explanation is that the machine has few CPU cores, or the CPU is already under heavy load, so spawning extra processes only makes the build slower.

So the parallel plugins may not deliver in every situation, but in our team's experience these two plugins make an obvious, repeatable difference every time. The idea is to package the third-party dependencies once, ahead of time, into a separate JS file; when the project itself is built, the required objects are looked up in that prebuilt file through a mapping (manifest) file instead of being bundled again. The cost is that the configuration is a little more involved: you need an extra webpack.dll.js, roughly as follows.

webpack.dll.js

const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: { library: ["vue", "moment"] },
  output: {
    filename: '[name].dll.js',
    path: path.resolve(__dirname, 'json-dll'),
    library: '[name]'
  },
  plugins: [
    new webpack.DllPlugin({
      path: './json-dll/library.json',
      name: '[name].json'
    })
  ]
}

webpack.dev.js

new AddAssetHtmlWebpack({
  filepath: path.resolve(__dirname, './json-dll/library.dll.js')
}),

new webpack.DllReferencePlugin({
  manifest: require("./json-dll/library.json")
})

Other optimization plugins

The following plugins are introduced only briefly. I have used them in my own projects and they work well; for details see npm or GitHub.

webpack-bundle-analyzer

This plugin visualizes the bundle so we can analyse what makes it large and apply the right optimizations to the webpack configuration.
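Typical usage is just adding the plugin (a minimal sketch):

// Opens an interactive treemap of the bundle contents after the build finishes.
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...existing configuration
  plugins: [new BundleAnalyzerPlugin()],
};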

speed-measure-webpack-plugin

This plugin reports how much time each loader and plugin takes during the build, so the expensive ones can be targeted for optimization.
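It is usually applied by wrapping the whole config (a minimal sketch):

// The build output will then list how long each loader and plugin took.
const SpeedMeasurePlugin = require('speed-measure-webpack-plugin');
const smp = new SpeedMeasurePlugin();

module.exports = smp.wrap({
  // ...existing webpack configuration
});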

friendly-errors-webpack-plugin

This plugin tidies up the build log. The default output is a long wall of messages that we rarely read, so this plugin can be used to simplify it.

Code optimization

This last part is about code optimization. Here I only cover the performance lessons I have learned at work; smaller optimization points, such as using createDocumentFragment, can be found in other articles.

If you can avoid touching the DOM, don't touch it, even if it sometimes means pushing back on the design.

In many cases we can reproduce the design mock-up with CSS alone, but sometimes CSS is not enough, especially when you are not the one writing the components. For example, our team uses antd, and sometimes the product design cannot be achieved with CSS alone; the only option is JS, deleting and adding nodes to match the required style.

Since we are a big-data company, that is exactly where performance problems appear. Early on I would implement whatever the product asked for, by whatever means necessary; then, at a customer site under big-data load, the performance shortcomings were exposed immediately.

So one principle of code optimization, in my view, is: don't write code that shouldn't be written. From a performance standpoint, use performance analysis to give the product team concrete reasons, and ideally offer a better alternative. That is what we need to weigh.

If you use React, you must write the shouldComponentUpdate lifecycle method, otherwise you will be left wondering why the component rendered so many times when you start adding print statements.

Turn complex comparisons into simple comparisons

What does that mean? Take shouldComponentUpdate as an example. Using this method at all is fine, but we can do better. Here is something we often write at work:

shouldComponentUpdate(nextProps) { return JSON.stringify(nextProps.data) !== JSON.stringify(this.props.data) }

Suppose this is a paginated table and data holds the current page of data: when the data changes, the table re-renders, which is fine with small data. In a big-data scenario, though, it can become a problem. You may wonder how there can still be big data after pagination: our product analyses logs, 10 logs per page, and a single log can be very long, so even stringifying and comparing 10 records is expensive. So the idea is: can we find another, cheaper variable that indicates the data has changed? For example:

shouldComponentUpdate(nextProps) { return nextProps.data[0].id !== this.props.data[0].id }

A different id in the first item means the data has changed. Obviously this only holds under certain conditions, and someone will point out that two pages could start with the same id. What if we change it to the following?

shouldComponentUpdate(nextProps) { return nextProps.current !== this.props.current }

The data comparison becomes a comparison of the current page number: if the page number changes, the data has almost certainly changed. For our own log display there are basically never two pages with exactly the same data; if there are, it is probably a back-end problem. And think it through: even if two pages did happen to be identical, the worst case is that the table is not re-rendered, which is not a problem, because the data is the same anyway. If you are still worried, wouldn't the version below be better?

this.setState({ data, requestId: guid() })

shouldComponentUpdate(nextProps) { return nextProps.requestId !== this.props.requestId }

Generate a requestId that travels with data, and then compare only the requestId. Every version above may have its own problems; the main point is that when we write code we should ask whether a complex comparison can be turned into a simple one.

Learning data structures and algorithms will definitely come in handy in your work.

We often hear that learning data structures and algorithms is of little use because they are rarely needed at work. I used to think that was true; now I think it is very wrong. Every skill we learn comes in handy at some point. If you write the code, ship it and never optimize it, then algorithms do feel pointless, hard and easy to forget. But once you have to finish the feature, turn on the mock service, open Performance with a large data set and stare at the red flame charts and the visible stutters, you understand why algorithms and data structures matter, because at that moment they are the only place the optimization can come from; you dismiss them when things are easy and wish for them when you have to optimize. Let me use code I actually wrote as an example. Because company code is confidential, I have changed the variable names; the pseudo-code is as follows.

data.filter(({ id }) => { return selectedIds.includes(id); })

It is just a few lines: the logic filters out of data the records whose ids are currently checked. Most people would probably write it this way, and that is what our team wrote. At the time the product limited the table to at most 200 records, so there was no pressure and no performance impact. But following the performance-optimization principle (mainly after being scared by the customer environment), I started the mock service, raised the data to 20,000 records, and the defect showed itself: the page stuttered on load and stuttered again whenever I changed the selection. Then the optimization began; my thinking at the time went like this.

The existing code is a brute-force search with two nested loops, so the time complexity is O(n²). I wondered whether I could get it down to at least O(n log n). The only place to start was selectedIds.includes(id), so I considered binary search, but rejected it right away: binary search needs a sorted array, and mine was just an unsorted array of string ids.

After calming down for a while and recalling the algorithm courses, books and problems I had worked through, I listed the usual ways to replace a brute-force search:

1: two pointers

2: raising the array's dimension

3: a hash table

I rejected the first two because the problem is not that complicated, and went with the hash table, because JS has a natural hash-table structure: the plain object. Lookup in a hash table is O(1), so I rewrote the code as follows:

const ids = {}; selectedIds.forEach(id => ids[id] = 1); data.filter(({ id }) => { return !!ids[id]; })

The lookup moves from selectedIds to ids, so the time complexity drops from O(n²) to O(n), at the cost of this extra code:

const ids = {}; selectedIds.forEach(id => ids[id] = 1)

The extra traversal of selectedIds is itself O(n), so the total is O(2n); in terms of asymptotic complexity that is still O(n), and the only additional cost is one extra object, a typical space-for-time trade. And don't worry about that object: the garbage collector reclaims ids once it is no longer used.
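The same idea reads a little more idiomatically with a Set, which also gives O(1) membership checks (a sketch, equivalent to the object version above):

// Build the Set once (O(n)), then each lookup inside filter is O(1).
const selected = new Set(selectedIds);
const checked = data.filter(({ id }) => selected.has(id));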

That is the end of "what are the classic interview questions derived from front-end performance optimization". Thank you for reading. If you want to learn more about the industry, keep following the site; the editor will keep publishing practical articles for you!
