2025-01-19 Update From: SLTechnology News&Howtos > Servers
A caching server serves two main purposes:
1. Reduce the load that clients place on the back-end servers.
2. Respond quickly to client requests when most of the content is static.

The best-known caching proxies on the market are Varnish and Squid. Squid is mainly used to cache large files, while Varnish is mainly used to cache static resources such as images.
Differences between Varnish versions:
Varnish 5 separates the client and backend sides, adds a new varnishlog query language, and is said to have improved security.
First, you must declare the VCL version number at the top of the file: vcl 4.0;. VMODs are now more independent, and loading the standard VMOD (std) is officially recommended.
In addition, directors has become a VMOD; to use it you must add import directors;.
The vcl_fetch function is replaced by vcl_backend_response and vcl_backend_fetch. req.* is no longer available in vcl_backend_response; only bereq.* can be used there.
As for vcl_backend_fetch, the docs give little detail on its usage.
error is changed to return (synth(http_code, message)); req.backend becomes req.backend_hint; req.request becomes req.method; obj becomes a read-only object.
vcl_synth uses resp.* instead of the old obj.*.
vcl_error is renamed vcl_backend_error, and it must use beresp.* instead of obj.*.
The keyword command "purge;" has been removed; use return (purge) in vcl_recv instead.
hit_for_pass is now specified with set beresp.uncacheable = true;.
In vcl_recv, return (lookup) changes to return (hash); in vcl_hash, the return changes to return (lookup); in vcl_pass, the return changes to return (fetch).
req.backend.healthy is replaced by std.healthy(req.backend_hint), but you can no longer set grace on it; it is now useful only as a health/keepalive check.
The variables available differ among req.*, bereq.*, resp.*, and beresp.*.
server.port and client.port become std.port(server.ip) and std.port(client.ip) respectively; like healthy above, this requires import std;.
session_linger becomes timeout_linger, sess_timeout becomes timeout_idle, and sess_workspace is discarded.
remove has been fully deprecated; use unset instead.
return (restart) becomes return (retry), which goes through vcl_backend_fetch.
Custom subroutines cannot start with vcl_ and are invoked with call my_sub;.
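The renames above can be summarized in a minimal VCL 4.0 skeleton. This is only an illustrative sketch; the backend address and status-code rule are placeholders, not part of the original configuration:

```vcl
vcl 4.0;

import std;
import directors;

backend default {
    .host = "127.0.0.1";   # placeholder backend
    .port = "8080";
}

sub vcl_recv {
    # the "purge;" keyword is gone; purging is now a return action
    if (req.method == "PURGE") {
        return (purge);
    }
    # req.request was renamed req.method
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    return (hash);          # was return (lookup) in VCL 3.x
}

sub vcl_backend_response {  # replaces vcl_fetch; use beresp.*, not req.*
    if (beresp.status >= 500) {
        set beresp.uncacheable = true;   # this is how hit_for_pass is expressed now
        set beresp.ttl = 120s;
    }
}

sub vcl_synth {             # uses resp.* instead of the old obj.*
    set resp.http.Content-Type = "text/plain";
}
```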
Deployment architecture: single node / dual node
1. If your service has long been deployed on cloud nodes, where popular cloud platforms already provide load balancers (HA/LB), deploy two Varnish instances in front of the back-end Nginx.
2. If you run your own data center, the proxy in front of Varnish can be Nginx or HAProxy; both are popular open-source packages and perform well as proxies.
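For the dual-node layouts described above, the directors VMOD can round-robin requests across two back-end web servers. A sketch, assuming a second backend at 172.16.2.28 (hypothetical; only 172.16.2.27 appears in the original configuration):

```vcl
vcl 4.0;

import directors;

backend web1 { .host = "172.16.2.27"; .port = "80"; }
backend web2 { .host = "172.16.2.28"; .port = "80"; }   # hypothetical second node

sub vcl_init {
    # build a round-robin pool over both backends
    new web_pool = directors.round_robin();
    web_pool.add_backend(web1);
    web_pool.add_backend(web2);
}

sub vcl_recv {
    set req.backend_hint = web_pool.backend();
}
```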
Below are rough Varnish logic diagrams:
The first: single nodes throughout.
The second: two Varnish nodes with a single Nginx node.
The third: dual nodes throughout.
Varnish installation was covered previously; please refer to the earlier blog post.
Varnish configuration parameters:
cat /etc/sysconfig/varnish

# Maximum number of open files (for ulimit -n); the limit can be modified
NFILES=131072

# Default log size is 82MB + header
MEMLOCK=82000

# Maximum number of threads (for ulimit -u); uses the unlimited value
NPROCS="unlimited"

# Maximum size of corefile (for ulimit -c). Default in Fedora is 0
# DAEMON_COREFILE_LIMIT="unlimited"

# Set this to 1 to make the init script reload try to switch vcl without restart.
# To make this work, you need to set the following variables explicitly:
# VARNISH_VCL_CONF, VARNISH_ADMIN_LISTEN_ADDRESS, VARNISH_ADMIN_LISTEN_PORT,
# VARNISH_SECRET_FILE; or in short, use Alternative 3, Advanced configuration, below
RELOAD_VCL=1                    # if set to 1, reloading the VCL does not restart varnish (hot reload)

# Main configuration file. You probably want to change it :)
VARNISH_VCL_CONF=/etc/varnish/default.vcl   # VCL file loaded by default

# VARNISH_LISTEN_ADDRESS=
VARNISH_LISTEN_PORT=80          # listen port, changed to 80

# Telnet admin interface listen address and port
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082

# Shared secret file for admin interface
VARNISH_SECRET_FILE=/etc/varnish/secret     # default secret file

# The minimum number of worker threads to start
VARNISH_MIN_THREADS=50

# The maximum number of worker threads to start
VARNISH_MAX_THREADS=4000        # default is 1000; anything up to about 5000 should be fine

# Idle timeout for worker threads
VARNISH_THREAD_TIMEOUT=120

# Cache file size: in bytes, optionally using a k/M/G/T suffix,
# or as a percentage of available disk space using the % suffix.
VARNISH_STORAGE_SIZE=512M       # default size

# Backend storage specification
VARNISH_STORAGE="malloc,${VARNISH_STORAGE_SIZE}"   # malloc stores in memory; file stores on disk

# Default TTL used when the backend does not specify one
VARNISH_TTL=120                 # default cache time, 120s
The Varnish configuration file below uses version 4.0 syntax; other versions differ, and the official Varnish release is now at version 5.
[root@www varnish]# cat default.vcl

#
# This is an example VCL file for Varnish.
# It does not do anything by default, delegating control to the builtin VCL.
# The builtin VCL is called when there is no explicit return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and http://varnish-cache.org/trac/wiki/VCLExamples for more examples.

# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;

# Import the directors module: load-balancing scheduling over multiple
# back-end hosts, e.g. round-robin.
import directors;

# Default backend definition. Set this to point to your content server.
backend web1 {
    .host = "172.16.2.27";
    .port = "80";
}

# Note: after a client request succeeds, varnish 5 caches with return (hash),
# while varnish 3/2 used return (lookup); there is a difference.
sub vcl_recv {
    if (req.http.host ~ "(www.)?laiwojia.la") {
        # requests for the www domain go to web1
        set req.backend_hint = web1;
    }
    # return (hash);
    # This is best written at the end of vcl_recv. If placed here, everything
    # except the www/web1 branch above is hashed immediately, and the later
    # pass tests all show hits; commenting this line out restores pass.

    if (req.url ~ "(?i)^/(login|admin)") {
        # login/admin pages are passed, never cached
        return (pass);
    }
    if (req.url ~ "(?i)\.php$") {
        # URLs ending in .php go to web1
        set req.backend_hint = web1;
    }
    if (req.url ~ "(?i)\.(jpg|jpeg|png|gif|css|js)$") {
        # images and static assets go to web1
        set req.backend_hint = web1;
    }
    # Note: varnish 5.* uses req.method; earlier varnish 4.* used req.request.
    if (req.method != "GET" && req.method != "HEAD" &&
        req.method != "PUT" && req.method != "POST" &&
        req.method != "TRACE" && req.method != "OPTIONS" &&
        req.method != "PATCH" && req.method != "DELETE") {
        return (pipe);
    }
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    # The following block is a fuller excerpt found on someone else's GitHub:
    # for static resources such as archives, images, video streams, documents,
    # etc., cookie information is not kept. It partly duplicates the rules
    # above, so keep either one of the two or both.
    if (req.url ~ "^[^?]*\.(7z|avi|bmp|bz2|csv|docx|eot|flv|gif|gz|ico|jpg|js|less|mka|mkv|mov|mp3|mp4|mpeg|odt|otf|ogg|ogm|opus|png|ppt|pptx|rar|rtf|svg)(\?.*)?$") {
        unset req.http.Cookie;
    }
    if (req.url ~ "test.html") {
        # used in the tests below: requests for test.html are passed. If the
        # earlier return (hash) is left in place above, every request is a hit
        # instead; check the configuration file.
        return (pass);
    }
    return (hash);
    # Everything else is cached. 3.0/2.0 used return (lookup); 4.0 uses
    # return (hash). Trying lookup here produces errors.
}

# Cancel the "private" status of specific resource types, such as public
# images, and force how long varnish may cache them. This subroutine handles
# the server's response to the cache server.
sub vcl_backend_response {
    if (beresp.http.cache-control !~ "s-maxage") {
        if (bereq.url ~ "(?i)\.(jpg|jpeg|png|gif|css|js)$") {
            unset beresp.http.Set-Cookie;
            set beresp.ttl = 3600s;
        }
    }
    if (bereq.http.host ~ "(www.)?laiwojia.la") {
        if (bereq.url ~ "(?i)/api/product/hotlist" ||
            bereq.url ~ "(?i)/api/dolphin/list" ||
            bereq.url ~ "(?i)/api/product/baseInfo" ||
            bereq.url ~ "(?i)/api/product/desc" ||
            bereq.url ~ "(?i)/api/search/brandRecommendProduct" ||
            bereq.url ~ "(?i)/cms/view/h6/headlinesList" ||
            bereq.url ~ "(?i)/cms/view/h6/category" ||
            bereq.url ~ "(?i)/cms/view/h6/article" ||
            bereq.url ~ "(?i)/cms/view/h6/\w+\.html" ||
            bereq.url ~ "(?i)/api/product/distributions") {
            set beresp.ttl = 300s;    # cache for 5 minutes
        } elseif (bereq.url ~ "(?i)/api/search/searchList\?sortType=volume4sale_desc\&companyId=10\&keyword=\*\&pageSize=10") {
            set beresp.ttl = 60s;     # cache for 1 minute
        } elseif (bereq.url ~ "(?i)/cms/view/.*/templateJS\.json" ||
                  bereq.url ~ "(?i)\.html") {
            set beresp.ttl = 600s;    # cache for 10 minutes
        } elseif (bereq.url ~ "(?i)/libs/") {
            set beresp.ttl = 1800s;   # cache for 30 minutes
        }
    }
    set beresp.grace = 2m;
}

sub vcl_pipe {
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
    return (pipe);
}

sub vcl_pass {
    # return (pass);
}

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    if (req.http.Cookie) {
        hash_data(req.http.Cookie);
    }
}

sub vcl_hit {
    if (obj.ttl >= 0s) {
        return (deliver);
    }
}

sub vcl_miss {
    return (fetch);
}

# Response delivery: keep one of the two subroutines below.
# sub vcl_deliver {
#     if (obj.hits > 0) {
#         set resp.http.X-Cache = "HIT " + server.ip;   # return "HIT" plus the Varnish server ip
#     } else {
#         set resp.http.X-Cache = "MISS";               # return a miss
#     }
# }

sub vcl_deliver {
    set resp.http.X-Age = resp.http.Age;   # expose the Age in the response
    unset resp.http.Age;
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT " + server.hostname;   # return HIT plus the hostname
    } else {
        set resp.http.X-Cache = "MISS";
    }
}
If you need the configuration file for reference, just reply on the blog.
Common Varnish commands:
Reload the configuration file: varnish_reload_vcl
View the Varnish log (which resources are being cached): varnishlog
Another way to view the log: varnishncsa
Varnish management-port commands: varnishadm
View the varnish hit ratio: varnishstat
Histogram view of varnish hits: varnishhist
Configuration tests:
First test: skip Varnish entirely and access nginx directly.
Access via the Linux curl command.
View the nginx log:
Second test: access by domain name, with the domain resolving to the Varnish host.
curl the domain name:
View the results:
Third test: access through a web browser and check for hits.
Fourth test:
Change the contents of the index.html file; result: the first request misses, and the second and third requests both hit.
Configure Varnish to pass whenever the request is for test.html. Test results:
The following explains the difference between return (pass) and return (pipe), excerpted from another blog:
Calling the pass function fetches the data from the back-end server.
Calling the pipe function establishes a direct connection between the client and the back-end server, which then serves the data directly.
Calling the hash function looks up the reply in the cache and returns it; if it is not found, the data is fetched from the back-end server as with pass.
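The three return actions described above can be shown in a minimal vcl_recv; the URL patterns here are purely illustrative, not from the original configuration:

```vcl
sub vcl_recv {
    if (req.url ~ "^/stream/") {
        return (pipe);   # direct client<->backend tunnel; varnish steps aside
    }
    if (req.url ~ "^/account/") {
        return (pass);   # fetch from the backend for this request only, no caching
    }
    return (hash);       # look up in the cache; a miss goes to the backend
}
```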
The process of establishing a connection in HTTP
Types of HTTP request: GET, POST, HEAD
Let's start with how an HTTP connection is established.
When a browser wants to fetch a web page, for example when you type www.google.com into the address bar, it first establishes a connection to the server, performing a three-way handshake to confirm it.
The browser then sends requests; a single page contains many resources, such as images, text, HTML code, CSS code, and JS code.
Under HTTP/1.0, each file requires its own connection, so multiple requests mean multiple connections, which is expensive.
HTTP/1.1 introduces persistent connections: the connection is kept open for the keep-alive period, during which multiple requests can be sent without re-establishing the connection.
Once the requests are complete, or the keep-alive time is up, the connection is closed.
Types of HTTP request:
There are several HTTP request types; the main ones are:
GET: requests the specified page and returns the entity body.
HEAD: requests only the headers of the page.
POST: asks the server to accept the enclosed document as a new subordinate of the identified URI.
Put plainly, requesting a static HTML page uses GET, while posting an update on Sina Weibo uses POST.
To sum up: GET requests the given URI and receives the data the server returns; it is for receiving data.
POST sends data to the server, which must process it accordingly; it is for sending data.
If you have followed all of the above, you can answer these three questions:
Both pass and pipe fetch data from the back-end server; what is the difference between them?
When should you use pass, and when should you use pipe?
What kind of data does varnish cache?

Q: Both pass and pipe fetch data from the back-end server; what is the difference?
A: When vcl_recv calls the pass function, the current request is forwarded directly to the back-end server, but subsequent requests on the same connection are still processed by varnish.
For example, after the HTTP connection is established the client requests two files in sequence, a.css and a.png. "Current request" means the first one: a.css is forwarded directly to the back-end and is not cached. The subsequent a.png is still handled by varnish, which decides how to process it.
In short: within one connection, every request other than the current one is still handled by varnish as usual.
Pipe mode is different. When vcl_recv decides to call the pipe function, varnish establishes a direct connection between the client and the server; from then on, all client requests are sent straight to the server, bypassing varnish, with no further inspection by varnish until the connection is closed.
Q: When should you use pass, and when should you use pipe?
A: pass normally handles only static pages; calling the pass function is appropriate only for GET and HEAD requests.
Also note that pass mode cannot handle POST requests. Why? Because a POST request usually sends data to the server, which must receive it, process it, and respond; it is dynamic and is not cached.
Sample code (in the pre-4.0 syntax used by the quoted blog):

if (req.request != "GET" && req.request != "HEAD") {
    return (pipe);
}
So when do you use pipe? From the statement above you can see that pipe is used when the method is POST, but that may not be the whole story. For example, when a client requests a video file or a large document such as a .zip or .tar archive, pipe mode is appropriate: varnish does not cache these large files.
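A sketch of tunneling large downloads past the cache in VCL 4.0 syntax; the extension list is illustrative, not from the original configuration:

```vcl
sub vcl_recv {
    # large archives and media files are piped straight to the backend,
    # so varnish never tries to cache them
    if (req.url ~ "(?i)\.(zip|tar|iso|mp4|mkv)$") {
        return (pipe);
    }
}
```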
Q: What kind of data does varnish cache?
A: varnish caches only static data. The varnish caching strategy, as commonly described online, answers this question:
Varnish caching strategy
By default, whether a response is cached depends on the HTTP status code returned by the backend. The cacheable status codes are:
200
203
300
301
302
410
404
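That list could be enforced explicitly in vcl_backend_response; this is only a sketch (varnish's builtin logic already applies a similar default, so this block is illustrative rather than required):

```vcl
sub vcl_backend_response {
    # only the default-cacheable status codes are kept in the cache
    if (beresp.status != 200 && beresp.status != 203 &&
        beresp.status != 300 && beresp.status != 301 &&
        beresp.status != 302 && beresp.status != 404 &&
        beresp.status != 410) {
        set beresp.uncacheable = true;   # hit-for-pass in VCL 4.0
        set beresp.ttl = 120s;
    }
}
```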
Original author's blog: http://yeelone.blog.51cto.com/1476571/772369/