How squid forces caching of dynamic pages

2025-04-05 Update From: SLTechnology News&Howtos


Many newcomers are unclear about how to force squid to cache dynamic pages. The walkthrough below explains it in detail; anyone with this need is welcome to follow along, and I hope you get something out of it.

Actually, I didn't want to use this title. My original goal was just to cache the query results of the yupoo api, and along the way I found a reference (Caching Google Earth with Squid). Ha ha, so I'll play title party too.

That reference has circulated widely and even made it onto Digg, though I don't know where it originated.

But no, no, no. If you follow its instructions, it simply doesn't work!

Then again, let's talk about my needs first.

Recently yupoo has been very slow to respond, and a batch of api requests often fails to complete. I guessed that either they were limiting the number of connections per IP, or yupoo had hit a new round of traffic bottlenecks. After contacting zola at Yupoo, I confirmed it was caused by their high load, and there is no connection limit. So I decided to add some caching on my side.

Because I already use a squid proxy to solve the cross-domain problem of calling the api from Ajax, the natural place to look was squid's configuration file.

The request address of yupoo api is www.yupoo.com/api/rest/?method=xx&xxxxxxx...

Everyone knows squid automatically caches static files, but how do you cache dynamic pages like this? A Google search turned up the Google Earth caching post mentioned above.

His method is:

acl QUERY urlpath_regex cgi-bin \? intranet

acl forcecache url_regex -i kh.google keyhole.com

no_cache allow forcecache

no_cache deny QUERY

# ----

refresh_pattern -i kh.google 1440 20% 10080 override-expire override-lastmod reload-into-ims ignore-reload

refresh_pattern -i keyhole.com 1440 20% 10080 override-expire override-lastmod reload-into-ims ignore-reload

The principle is to use no_cache allow together with refresh_pattern to set rules that force the Google Earth requests into the cache.
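To make the refresh_pattern numbers above (1440 20% 10080) concrete, here is a simplified model of the freshness check they configure. This is an illustrative sketch only, with ages in minutes; real squid also honors Expires/Cache-Control headers and the override-* options.

```python
# Simplified model of squid's refresh_pattern freshness heuristic
# (min / percent / max), for objects without explicit expiry info.
# All ages are in minutes. Illustrative sketch, not squid's exact code.

def is_fresh(age, last_modified_age, min_minutes, percent, max_minutes):
    """Return True if a cached object would be considered fresh."""
    if age <= min_minutes:      # younger than MIN: always fresh
        return True
    if age > max_minutes:       # older than MAX: always stale
        return False
    # In between, the LM-factor rule: fresh while the object's age in
    # the cache is under PERCENT of its age when it was last modified.
    return age <= last_modified_age * (percent / 100.0)

# With "1440 20% 10080": one hour old is within MIN, a week-plus is past MAX.
print(is_fresh(60, 100000, 1440, 20, 10080))     # True
print(is_fresh(20000, 100000, 1440, 20, 10080))  # False
```

The "20%" is what lets rarely-changing objects (like map tiles) stay cached longer the older they already are.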

As soon as that article was published, people naturally went to verify it, but nobody succeeded, and there was no follow-up from the original author. It was even raised on squid's mailing list. (If the title brought you here, don't worry, read on; you won't leave empty-handed.)

I didn't worry about that either way; I assumed the others had simply gotten something wrong, and first tried to solve the yupoo api caching problem by adapting the configuration.

acl QUERY urlpath_regex cgi-bin \?

acl forcecache url_regex -i yupoo\.com

no_cache allow forcecache

no_cache deny QUERY

refresh_pattern -i yupoo\.com 1440 20% 10080 override-expire override-lastmod reload-into-ims ignore-reload

Hey, sure enough, it was useless: the access log was still full of TCP_MISS.

So I went through the documentation over and over, dug around for information, and found that a squid bug was the culprit, but that it had already been fixed (strictly speaking, by an extension patch).

My squid is 2.6.13; going through the source code confirmed the patch is indeed included.

Solving this problem requires extension options to refresh_pattern (ignore-no-cache and ignore-private) that are mentioned in neither the squid documentation nor the example configuration; it seems squid's docs are not up to date.

Let's talk about the problem.

First, take a look at the cache-related part of the HTTP headers returned by the yupoo api:

Cache-Control: no-cache, must-revalidate

Pragma: no-cache

These two lines control caching behavior and instruct clients not to cache. Squid follows the RFC, so normally it will not cache these pages either; override-expire, override-lastmod, reload-into-ims and ignore-reload cannot get around them.
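The decision those headers drive can be sketched as a small function; this is a simplified model of the behavior described here (not squid's actual code), showing what ignore-no-cache and ignore-private change:

```python
# Simplified sketch: may a response be cached, given its Cache-Control /
# Pragma headers? This models the rule described in the text, and the
# effect of squid's ignore-no-cache / ignore-private extension options.

def cacheable(headers, ignore_no_cache=False, ignore_private=False):
    cc = [d.strip().lower()
          for d in headers.get("Cache-Control", "").split(",") if d.strip()]
    if "no-cache" in cc and not ignore_no_cache:
        return False
    if "private" in cc and not ignore_private:
        return False
    if headers.get("Pragma", "").lower() == "no-cache" and not ignore_no_cache:
        return False
    return True

# The headers yupoo api returns, as quoted above:
yupoo = {"Cache-Control": "no-cache, must-revalidate", "Pragma": "no-cache"}
print(cacheable(yupoo))                        # False: squid obeys no-cache
print(cacheable(yupoo, ignore_no_cache=True))  # True: forced into the cache
```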

And that patch targets exactly these two headers: Cache-Control: no-cache and Pragma: no-cache.

So rewrite the refresh_pattern line as:

refresh_pattern -i yupoo\.com 1440 20% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private

And that does it. After squid -k reconfigure, access.log finally shows TCP_HIT/200 and TCP_MEM_HIT/200, which proves the cache rules are working. I was thrilled.
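A quick way to see whether the rules are working is to tally the result codes in access.log. A minimal sketch, assuming squid's native log format where the fourth field looks like "TCP_HIT/200" (the sample lines below are made up for illustration):

```python
# Tally cache result codes in squid's native access.log, where
# field 4 is the result code and status, e.g. "TCP_MISS/200".
from collections import Counter

def tally(lines):
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) >= 4:
            counts[fields[3].split("/")[0]] += 1
    return counts

# Hypothetical sample lines in squid's native log format:
sample = [
    "1183900000.123 45 10.0.0.1 TCP_MISS/200 5120 GET http://www.yupoo.com/api/rest/? - DIRECT/1.2.3.4 text/xml",
    "1183900001.456 2 10.0.0.1 TCP_HIT/200 5120 GET http://www.yupoo.com/api/rest/? - NONE/- text/xml",
    "1183900002.789 1 10.0.0.1 TCP_MEM_HIT/200 5120 GET http://www.yupoo.com/api/rest/? - NONE/- text/xml",
]
print(tally(sample))  # hits vs. misses at a glance
```

Feed it the real log (e.g. the last few thousand lines) and a rising TCP_HIT / TCP_MEM_HIT count confirms the forced caching is taking effect.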


Addendum:

Later, I took a look at the HTTP headers from the Google Earth server kh1.google.com, and they contain only:

Expires: Wed, 02 Jul 2008 20:56:20 GMT

Last-Modified: Fri, 17 Dec 2004 04:58:08 GMT

So it seems Google Earth doesn't actually need ignore-no-cache or ignore-private to work; perhaps the author slipped up somewhere else.

The pattern kh.google should be kh.\.google.
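The corrected pattern can be sanity-checked with an ordinary regex engine: kh.\.google requires one character between "kh" and a literal ".google", which is what the numbered tile servers look like.

```python
# Verify the corrected squid url_regex pattern against the tile servers:
# "kh" + any one character + a literal ".google".
import re

pattern = re.compile(r"kh.\.google", re.IGNORECASE)
print(bool(pattern.search("http://kh0.google.com/kh?v=3")))  # True
print(bool(pattern.search("http://kh.google.com/")))         # False
```

The original kh.google only matched because the unescaped "." happened to match the real dot, so it missed nothing for kh.google.com itself but was never anchored to the numbered hosts.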

Finally, to sum up, the correct configuration for caching Google Earth/Maps should be:

acl QUERY urlpath_regex cgi-bin \? intranet

acl forcecache url_regex -i kh.\.google mt.\.google mapgoogle\.mapabc keyhole.com

no_cache allow forcecache

no_cache deny QUERY

# ----

refresh_pattern -i kh.\.google 1440 20% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private

refresh_pattern -i mt.\.google 1440 20% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private

refresh_pattern -i mapgoogle\.mapabc 1440 20% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private

refresh_pattern -i keyhole.com 1440 20% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private

Note:

khX.google.com is Google Earth's tile image server.

mtX.google.com is Google Maps' tile image server.

mapgoogle.mapabc.com is Google Ditu's (the Chinese map service) tile image server.

http://nukq.malmam.com/archives/16
