
How to optimize the performance of non-refresh pagination in Ajax


This article is about how to optimize the performance of non-refresh pagination in Ajax. The editor finds it very practical and shares it here for your reference; follow along for a closer look.

Ajax-based no-refresh pagination is by now a familiar technique. Typically, the web front end has a js method that requests a server-side paging data interface via Ajax; once the data arrives, it builds an HTML structure on the page and presents it to the user, similar to the following:

Code snippet 1:

function getPage(pageIndex) {
    ajax({
        url: "RemoteInterface.cgi",
        method: "get",
        data: { pageIndex: pageIndex },
        callback: callback
    });
}

function callback(datalist) {
    // todo: build the HTML structure from the returned datalist and present it to the user
}

Here RemoteInterface.cgi is a server-side data interface. For reasons of space, the example code in this article may be incomplete; it is only meant to make the idea clear.

In the UI, the paging control may come in various styles that everyone is familiar with.

Whatever the style, clicking the control ultimately just triggers the getPage(pageIndex) method here, and yet this getPage method may not be as simple as it looks.

If you write it as in code snippet 1, consider what happens every time a user turns a page: each click requests RemoteInterface.cgi again. Leaving aside the possibility that the data may have been updated, every remote request triggered by getPage(1), getPage(2), getPage(3), and so on after the first one, along with the round-trip network traffic it incurs, is repetitive and unnecessary. Instead, when each page is requested for the first time, its data can be cached on the page in some form. If the user later looks back at a page that was already viewed, the getPage method should first check whether that page's data is in the local cache; if so, it can be shown to the user directly instead of calling the remote interface. With this in mind, we can revise code snippet 1 as follows:

Code snippet 2:

var pageDatalist = {};

function getPage(pageIndex) {
    if (pageDatalist[pageIndex]) { // the local cache already holds this page's data
        showPage(pageDatalist[pageIndex]); // display the cached data directly
    } else {
        ajax({
            url: "RemoteInterface.cgi",
            method: "get",
            data: { pageIndex: pageIndex },
            callback: function (datalist) {
                callback(pageIndex, datalist); // keep the page number alongside the response
            }
        });
    }
}

function callback(pageIndex, datalist) {
    pageDatalist[pageIndex] = datalist; // cache the data
    showPage(datalist);                 // display the data
}

function showPage(datalist) {
    // todo: build the HTML structure from the returned datalist and present it to the user
}

This saves the round-trip time of network requests and, more importantly, saves valuable network traffic and lightens the load on the interface server. In a low-bandwidth environment, or when the interface server is under heavy load, this improvement can yield a clearly visible optimization. The first of Yahoo's well-known 34 performance rules is to minimize the number of HTTP requests, and Ajax asynchronous requests are without doubt HTTP requests too. A low-traffic web application may not feel the need, but imagine a page with 10 million views a day where users turn an average of five pages each, one of which is a page they have viewed before. Under snippet 1, such a page triggers an average of 50 million data requests per day; under snippet 2, it saves at least 10 million of them. If each request carries 20 KB of data, that is 10,000,000 × 20 KB = 200,000,000 KB, or roughly 190 GB of network traffic saved per day. Seen this way, the resource savings are considerable.

Digging deeper, the data caching approach in code snippet 2 is itself worth discussing. We have been assuming that the freshness of the paged data can be ignored, but in real applications timeliness is an unavoidable concern. Caching inevitably reduces freshness, so a real caching scheme must rest on an analysis of, and trade-off against, the application's timeliness requirements.

For content that places no special emphasis on timeliness, an in-page cache should be acceptable. For one thing, users do not stay on a page forever: when navigating between pages causes a reload, the updated data is fetched anyway. For another, a user in the habit of refreshing can simply refresh the page whenever they particularly want to check the list for updates. If you want to be thorough, you can also set a time window, say 5 minutes: while the user stays on the current page, page turns within 5 minutes read the in-page cache first, and after 5 minutes they request the server's data again, as shown in the sketch below.
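As a rough illustration, that 5-minute time window might look like the following. This is a minimal sketch only: the CACHE_TTL_MS constant, the cache-entry structure with its time field, and the Date.now() bookkeeping are illustrative assumptions, not part of the original snippets.

var CACHE_TTL_MS = 5 * 60 * 1000; // the 5-minute window from the example above
var pageCache = {};

function getPage(pageIndex) {
    var entry = pageCache[pageIndex];
    if (entry && (Date.now() - entry.time) < CACHE_TTL_MS) {
        showPage(entry.datalist); // cache is still fresh: reuse it
        return;
    }
    ajax({
        url: "RemoteInterface.cgi",
        method: "get",
        data: { pageIndex: pageIndex },
        callback: function (datalist) {
            pageCache[pageIndex] = { datalist: datalist, time: Date.now() }; // stamp the cache entry
            showPage(datalist);
        }
    });
}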

In some cases, if we can predict how often the data is updated, for example that it changes only every few days, we can even consider local storage and trigger a request for server data only at those intervals, saving requests and traffic more thoroughly still. Ultimately, which caching approach fits depends on the product's timeliness requirements, but the guiding principle is to save as many requests and as much traffic as possible, especially on heavily visited pages.
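For the local-storage idea, a hypothetical sketch might look like the following; the key scheme ("pageData:" + pageIndex), the JSON wrapping, and the maxAgeMs parameter are all illustrative assumptions, not a prescribed design.

function cacheToLocalStorage(pageIndex, datalist) {
    try {
        localStorage.setItem("pageData:" + pageIndex, JSON.stringify({
            datalist: datalist,
            time: Date.now()
        }));
    } catch (e) {
        // storage may be full or disabled; fall back to the in-memory cache
    }
}

function readFromLocalStorage(pageIndex, maxAgeMs) {
    var raw = localStorage.getItem("pageData:" + pageIndex);
    if (!raw) return null;
    var entry = JSON.parse(raw);
    // return the cached list only while it is younger than maxAgeMs
    return (Date.now() - entry.time) < maxAgeMs ? entry.datalist : null;
}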

Does data with strict freshness requirements rule out caching entirely? Of course not, but the overall approach has to change again. What "changes" in such a list is usually a few items being added, removed, or edited, while the vast majority of the data stays the same, so in most cases time-limited caching as above still applies.

If the data must update in real time, it is easy to think of using a timer, for example calling getPage(pageIndex) every 20 seconds and redrawing the list. But recall the earlier assumption of 10 million page views a day and you will see that this is frightening: at that traffic and that polling frequency, the pressure on the server would be enormous.

As for how to handle this situation, look at how mailbox list pages such as Gmail, the 163 mailbox, and the Sina mailbox do it. They satisfy our earlier assumptions almost exactly: very large daily traffic, data that must update in real time, and so on. Analyzing them with a network capture tool, it is not hard to see that they do not hit the server when the user repeatedly requests the same page of data. To make sure the user is notified and the mail list refreshed when new mail arrives, they use a scheduled, repeating asynchronous request, but that request is only a status query, not a list refresh. Only when the status query reports an update do they request the updated data, or the status-query interface itself returns the updated data when it finds a change. In fact the 163 mailbox polls status at a fairly long interval, about two minutes, and the Sina mailbox at a longer one, about 5 minutes; clearly they are doing their best to reduce the number of requests. That said, this kind of handling cannot be done by the front end alone; the solution has to be designed together with the back-end interface.
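A minimal sketch of that status-poll pattern might look like this. The action: "checkUpdate" parameter, the status.updated and status.time fields, and the lastUpdateTime/currentPageIndex variables are invented for illustration; a real status-query API would be designed together with the back end, as noted above.

var POLL_INTERVAL_MS = 2 * 60 * 1000; // e.g. the ~2-minute interval observed for the 163 mailbox
var lastUpdateTime = 0;               // illustrative bookkeeping
var currentPageIndex = 1;

setInterval(function () {
    ajax({
        url: "RemoteInterface.cgi",
        method: "get",
        data: { action: "checkUpdate", since: lastUpdateTime },
        callback: function (status) {
            if (status.updated) {          // only a lightweight status flag comes back
                lastUpdateTime = status.time;
                pageDatalist = {};         // invalidate the in-page cache
                getPage(currentPageIndex); // only now fetch the fresh list data
            }
        }
    });
}, POLL_INTERVAL_MS);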

Now let's return to the data caching in snippet 2 and, setting aside request counts and traffic savings, look at the front-end implementation for anything worth digging into. As written, snippet 2 stores the raw data, so each time a cached page is shown again, showPage(datalist) must rebuild the HTML structure from the data, even though we already built it once before. Could we instead save the structure itself when it is first created? That would avoid repeated js computation, especially when the structure is complex. Going a step further: that structure was already created on the page, and destroying it on a page turn only to recreate it later also costs resources. Could we avoid destroying it on the first page turn, merely hiding it via a CSS style, and then on repeated page turns simply toggle which of these already-created structures is shown and which are hidden?
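As a sketch of that idea (the pageNodes map, the listContainer element id, and the buildListHtml helper are all hypothetical names introduced for illustration, and showPage gains a pageIndex parameter compared with snippet 2):

var pageNodes = {};

function showPage(pageIndex, datalist) {
    var container = document.getElementById("listContainer");
    for (var key in pageNodes) {
        pageNodes[key].style.display = "none"; // hide whichever page is currently shown
    }
    if (!pageNodes[pageIndex]) {
        var node = document.createElement("div");
        node.innerHTML = buildListHtml(datalist); // build the markup only once
        container.appendChild(node);
        pageNodes[pageIndex] = node;
    }
    pageNodes[pageIndex].style.display = ""; // reveal the cached structure
}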

Thank you for reading! That concludes this article on "how to optimize the performance of non-refresh pagination in Ajax". I hope the content above has been of some help; if you found the article worthwhile, feel free to share it so more people can see it!
