This article shares a sample analysis of caching in nuxt.js. It is quite practical, so it is offered here as a reference; follow along for the details.
Nuxt is a Vue-based SSR solution that lets the front end and back end share isomorphic code written in Vue syntax.
However, compared with a traditional SSR scheme based on pure string concatenation, its performance is not as good: Nuxt has to build a virtual DOM on the server and then serialize it into an HTML string. The often-cited high performance of Node.js refers to scenarios with frequent asynchronous IO, not CPU-intensive work. After all, Node.js runs on a single thread, so performance degrades under high concurrency in CPU-heavy scenarios. To mitigate this, we can adopt a reasonable caching strategy.
Nuxt caching can be divided into component-level cache, API-level cache and page-level cache.
Component-level caching
The relevant configuration in nuxt.config.js looks like this:
const LRU = require('lru-cache')

module.exports = {
  render: {
    bundleRenderer: {
      cache: LRU({
        max: 1000, // maximum number of cached entries
        maxAge: 1000 * 60 * 15 // cache for 15 minutes
      })
    }
  }
}
This setting alone does not enable component-level caching. You also need to add name and serverCacheKey fields to each Vue component that should be cached, to determine the unique cache key, for example:
export default {
  name: 'AppHeader',
  props: ['type'],
  serverCacheKey: props => props.type
}
The component above is cached according to the type value passed down by its parent, and the cache key is AppHeader::${props.type}. When a new request arrives, as long as the type value passed down by the parent has been rendered before, the previous render result can be reused, improving performance.
As this example shows, if the component depends not only on the parent's type prop but also on other props, serverCacheKey needs to change accordingly. So if the component depends on a lot of global state, or the dependent values take very many distinct values, the cache will be written constantly and entries will be evicted, and caching loses its point. In the lru-cache configuration above, the maximum number of entries is set to 1000; anything beyond that is discarded.
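For instance, if the header also depended on a hypothetical size prop (an assumption for illustration, not part of the original component), a sketch of a composite key might look like this:

export default {
  name: 'AppHeader',
  // 'size' is a hypothetical extra prop, added only to illustrate a composite key
  props: ['type', 'size'],
  // fold every prop that affects the rendered output into the cache key
  serverCacheKey: props => `${props.type}::${props.size}`
}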
Secondly, child components that may have side effects on the rendering context should not be cached. For example, the created and beforeCreate hooks also run on the server, but once the component's output is cached they are no longer executed, so be careful wherever such hooks may affect the rendering context. For more information, refer to the Vue SSR documentation on component-level caching.
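As a hypothetical illustration of such a side effect (the component and store mutation below are assumptions, not from the original article), consider a component whose created hook writes to the Vuex store; once its output is cached, that write silently stops happening on later requests:

// hypothetical component that is NOT safe to cache
export default {
  name: 'VisitCounter',
  created () {
    // runs on the server for the first request only;
    // skipped on every subsequent cache hit, so the count goes stale
    this.$store.commit('incrementVisits')
  },
  // caching this component would drop the side effect above
  serverCacheKey: () => 'VisitCounter'
}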
Generally speaking, the most suitable scenario is rendering a large amount of data with v-for, because that kind of loop consumes more CPU.
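A minimal sketch of such a case, assuming a hypothetical GameList component whose rendered rows are fully determined by a list identifier (component name, props, and key scheme are all assumptions):

<template>
  <ul>
    <li v-for="item in items" :key="item.id">{{ item.name }}</li>
  </ul>
</template>

<script>
export default {
  name: 'GameList',
  props: ['listId', 'items'],
  // one cached render per list id; assumes items are fully determined by listId
  serverCacheKey: props => props.listId
}
</script>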
API-level caching
In server-side rendering, requests are usually made on the server, the page is rendered, and the result is then returned to the browser. Some interfaces can be cached, for example interfaces that do not depend on login state or on many parameters, or interfaces that simply return configuration data. Handling an interface also takes time, so caching such interfaces speeds up the processing of each request, releases requests sooner, and improves performance.
The API requests use axios, which works both on the server and in the browser. The code looks like this:
import axios from 'axios'
import md5 from 'md5'
import LRU from 'lru-cache'

// add a 3-second cache for the api
const CACHED = LRU({
  max: 1000,
  maxAge: 1000 * 3
})

function request (config) {
  let key
  // only cache on the server, skip caching in the browser
  if (config.cache && !process.browser) {
    const { params = {}, data = {} } = config
    key = md5(config.url + JSON.stringify(params) + JSON.stringify(data))
    if (CACHED.has(key)) {
      // cache hit
      return Promise.resolve(CACHED.get(key))
    }
  }
  return axios(config).then(rsp => {
    if (config.cache && !process.browser) {
      // set the cache before returning the result
      CACHED.set(key, rsp.data)
    }
    return rsp.data
  })
}
Usage is the same as with axios normally, except for an extra cache attribute that marks whether the request should be cached on the server.
const api = {
  getGames: params => request({
    url: '/gameInfo/gatGames',
    params,
    cache: true
  })
}
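A minimal usage sketch from a Nuxt page (the page, the import path, and the platform parameter are assumptions): asyncData runs on the server for the first render, so that is where the 3-second cache takes effect.

import api from '~/api' // assumed path to the module defined above

export default {
  async asyncData () {
    // identical calls within 3 seconds reuse the cached response on the server
    const games = await api.getGames({ platform: 'pc' })
    return { games }
  }
}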
Page-level caching
For pages that do not depend on login state or on many parameters, if concurrency is high you can consider page-level caching by adding the serverMiddleware attribute to nuxt.config.js:
const nuxtPageCache = require('nuxt-page-cache')

module.exports = {
  serverMiddleware: [
    nuxtPageCache.cacheSeconds(1, req => {
      if (req.query && req.query.pageType) {
        return req.query.pageType
      }
      return false
    })
  ]
}
In the example above, caching is keyed on the pageType query parameter. If the URL carries a pageType parameter, the page is cached for 1 second, i.e. within 1 second the server performs only one full render for requests with the same pageType.
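If the same pageType value can appear under several routes, one hedged variation (this tweak is an assumption, not part of the original example) is to fold the path into the key as well so different routes do not share a cache entry:

const nuxtPageCache = require('nuxt-page-cache')

module.exports = {
  serverMiddleware: [
    nuxtPageCache.cacheSeconds(1, req => {
      if (req.query && req.query.pageType) {
        // key on path + pageType instead of pageType alone
        return `${req.url.split('?')[0]}::${req.query.pageType}`
      }
      return false
    })
  ]
}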
nuxt-page-cache borrows from route-cache and is written rather simply, so you can also re-encapsulate it yourself. Nuxt ultimately returns data with res.end(html, 'utf-8'), so the general idea of page-level caching is as follows:
const LRU = require('lru-cache')

let cacheStore = new LRU({
  max: 100, // maximum number of cached pages
  maxAge: 200
})

module.exports.cacheSeconds = function (secondsTTL, cacheKey) {
  // cache time in milliseconds
  const ttl = secondsTTL * 1000
  return function (req, res, next) {
    // resolve the cache key
    let key = req.originalUrl
    if (typeof cacheKey === 'function') {
      key = cacheKey(req, res)
      if (!key) {
        return next()
      }
    } else if (typeof cacheKey === 'string') {
      key = cacheKey
    }

    // if the cache hits, return the cached result directly
    const value = cacheStore.get(key)
    if (value) {
      return res.end(value, 'utf-8')
    }

    // keep the original res.end
    res.original_end = res.end
    // rewrite res.end, so when nuxt calls res.end it actually calls this method
    res.end = function (data) {
      if (res.statusCode === 200) {
        // set the cache
        cacheStore.set(key, data, ttl)
      }
      // return the final result
      res.original_end(data, 'utf-8')
    }

    next()
  }
}
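A sketch of wiring this re-encapsulated middleware into nuxt.config.js; the file location server/page-cache.js is an assumption:

const pageCache = require('./server/page-cache') // the module sketched above

module.exports = {
  serverMiddleware: [
    // cache full pages for 10 seconds; without a second argument
    // the key falls back to req.originalUrl
    pageCache.cacheSeconds(10)
  ]
}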
If the cache hits, the previously computed result is returned directly, which greatly improves performance.
Thank you for reading! That concludes this sample analysis of caching in nuxt.js. I hope the content above is of some help; if you found the article useful, feel free to share it so more people can see it.