This article introduces how to use Varnish to implement dynamic-static separation of a website on CentOS 6.5. Many people run into this kind of scenario in real-world work, so this walkthrough shows how to handle it. I hope you read it carefully and get something out of it!
I. A brief introduction to Varnish
Varnish is a high-performance, open-source reverse proxy and cache server. Its lead developer, Poul-Henning Kamp, is one of the core developers of FreeBSD.
Varnish mainly runs two processes: the management process and the child process (also known as the cache process).
The management process applies new configurations, compiles VCL, monitors Varnish, initializes the cache, and provides a command-line interface. It also probes the child process every few seconds; if it gets no response from the child within the specified time, the management process restarts the child.
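As a quick sanity check (not part of the original walkthrough), once Varnish is running you can see both processes from the shell; the parent varnishd is the management process and the child it forks is the cache process:
# ps -ef | grep varnishd | grep -v grep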
II. Varnish workflow
1) After Varnish receives a request from the client, it is processed by the vcl_recv state engine. Requests that Varnish does not recognize are handed to the vcl_pipe state engine via return(pipe), requests that should be looked up in the cache are handed to the vcl_hash state engine via return(lookup), and requests that must not be cached are handed to the vcl_pass state engine via return(pass).
2) The vcl_hash state engine looks the request up in the cache. There are two possible results: a cache hit or a cache miss.
3) On a hit, the vcl_hit state engine hands the cached object to the vcl_deliver state engine via return(deliver); vcl_deliver processes the response and returns it to the client.
4) On a miss, the vcl_miss state engine triggers a backend fetch via return(fetch), and the vcl_fetch state engine receives the response fetched from the backend server.
5) The vcl_fetch state engine passes the response fetched from the backend to the vcl_deliver state engine.
6) The vcl_deliver state engine returns the result to the client.
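To make the flow concrete, here is a minimal Varnish 3.x VCL sketch (a simplified illustration, not the configuration used later in this article) showing how return(pipe), return(pass) and return(lookup) in vcl_recv steer requests into the state engines described above:

sub vcl_recv {
    if (req.request != "GET" && req.request != "HEAD") {
        return (pipe);     # unrecognized/non-idempotent requests go straight to vcl_pipe
    }
    if (req.http.Authorization) {
        return (pass);     # never cache authorized content, hand it to vcl_pass
    }
    return (lookup);       # look the object up in the cache via vcl_hash
}

sub vcl_fetch {
    set beresp.ttl = 600s; # responses fetched from the backend are cached for 10 minutes
}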
III. Using Varnish to implement dynamic-static separation of a website
Lab environment: three virtual machines
Linux: CentOS 6.5
Varnish: varnish-3.0.4-1.el6.x86_64
Nginx: nginx-1.4.7
Varnish host: two network cards, external IP 172.16.36.10, internal IP 192.168.0.10
Web server 1: IP 192.168.0.20, used as the static file server
Web server 2: IP 192.168.0.30, used as the dynamic (PHP) application server
Preliminary notes:
Varnish configuration files use the .vcl suffix and live in the /etc/varnish/ directory. Caching is used to improve response time; in general, static HTML pages, images, JS scripts and CSS stylesheets can be cached, while pages generated by dynamic scripting languages must be processed by a script engine and therefore should not be cached. Nginx itself has caching and reverse-proxy features and could implement dynamic-static separation on its own, but as a cache Varnish is considerably more specialized than Nginx, so if you need a dedicated cache server, Varnish is worth trying. In this exercise, Varnish is used to separate the dynamic and static parts of the web service.
1. Install varnish
# rpm -ivh varnish-3.0.4-1.el6.x86_64.rpm varnish-docs-3.0.4-1.el6.x86_64.rpm varnish-libs-3.0.4-1.el6.x86_64.rpm
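You can verify the installation afterwards (the exact version string will vary with your build):
# rpm -q varnish varnish-libs varnish-docs
# varnishd -V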
2. Configure varnish
1) Edit Varnish's startup configuration file /etc/sysconfig/varnish and change the port Varnish listens on to 80 (see the sketch below).
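On the EL6 package the relevant variables in /etc/sysconfig/varnish look roughly like the following (a sketch based on the stock file; variable names and defaults may differ slightly on your build, and the admin address, port and secret file shown here are the assumptions used when connecting with varnishadm later):

VARNISH_LISTEN_PORT=80                          # client-facing port, changed from the default 6081
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1          # management interface used by varnishadm
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret

After saving the change, restart the service so it listens on port 80:
# service varnish restart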
2) Create a new file /etc/varnish/web.vcl and define the Varnish cache rules:
# define the backend servers
backend web1 {
    .host = "192.168.0.20";
    .port = "80";
}
backend web2 {
    .host = "192.168.0.30";
    .port = "80";
}

# only clients in the purgers ACL (the local host and the 172.16.0.0/16 network) may use the purge request method to clear the cache
acl purgers {
    "127.0.0.1";
    "172.16.0.0"/16;
}

sub vcl_recv {
    if (req.request == "purge") {
        if (!client.ip ~ purgers) {
            error 405 "method not allowed";
        }
    }
    # static resources are delivered to the web1 server
    if (req.url ~ "\.(html|htm|shtml|js|jpg|png|gif|jpeg)$") {
        set req.backend = web1;
    }
    # php pages are delivered to the web2 server and the cache is skipped
    if (req.url ~ "\.php$") {
        set req.backend = web2;
        return (pass);
    }
    return (lookup);
}

# clear a cached object that was hit
sub vcl_hit {
    if (req.request == "purge") {
        purge;
        error 200 "purged ok";
    }
}

# if the resource requested to be purged is not in the cache, return 404
sub vcl_miss {
    if (req.request == "purge") {
        purge;
        error 404 "not in cache";
    }
}

# if the resource requested to be purged is a non-cacheable (passed) object, return 502
sub vcl_pass {
    if (req.request == "purge") {
        error 502 "purged on a passed object.";
    }
}

# cache static resources fetched from the backend for 7200 seconds
sub vcl_fetch {
    if (req.url ~ "\.(html|htm|shtml|css|js|jpg|png|gif|jpeg)$") {
        set beresp.ttl = 7200s;
    }
}

# return the result to the client and add two response headers showing whether the cache was hit and which backend server was used
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.x-cache = "hit from " + server.ip;
    } else {
        set resp.http.x-cache = "miss";
    }
    set resp.http.backend-ip = req.backend;
}
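Before loading the file you can check that it compiles (varnishd prints the generated C code and exits with an error message if the VCL is invalid):
# varnishd -C -f /etc/varnish/web.vcl > /dev/null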
3) Load the configuration into Varnish (the three sub-steps are sketched below):
3.1) Connect to the Varnish management interface
3.2) Load the new configuration
3.3) Switch to (use) the new configuration
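A sketch of those three steps with varnishadm, assuming the default management address, port and secret file from /etc/sysconfig/varnish above (the configuration name web_conf is arbitrary):

# varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret
varnish> vcl.load web_conf /etc/varnish/web.vcl
varnish> vcl.use web_conf
varnish> vcl.list

vcl.load compiles and loads the file, vcl.use activates it, and vcl.list confirms which configuration is currently active.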
4) Configure the two web servers, installing Nginx and PHP on them.
On the 192.168.0.20 server, create two test pages, index.html and index.php, whose output identifies this server (see the sketch below).
On the 192.168.0.30 server, create the same two test pages, index.html and index.php, whose output identifies that server.
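For example (hypothetical page contents, assuming Nginx was built from source with its default prefix so the document root is /usr/local/nginx/html; adjust the path to your own installation):

On 192.168.0.20 (web1):
# echo "static page on web1 (192.168.0.20)" > /usr/local/nginx/html/index.html
# echo "<?php echo 'php page on web1 (192.168.0.20)'; ?>" > /usr/local/nginx/html/index.php

On 192.168.0.30 (web2):
# echo "static page on web2 (192.168.0.30)" > /usr/local/nginx/html/index.html
# echo "<?php echo 'php page on web2 (192.168.0.30)'; ?>" > /usr/local/nginx/html/index.php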
5) Test the results by opening the address http://172.16.36.10.
When we request an HTML page, no matter how many times we refresh, the cache is hit: the x-cache response header shows hit and the backend server is always web1 (192.168.0.20).
When we request a PHP page, Varnish is configured to pass rather than cache it, so the cache never hits: the x-cache header shows miss and the backend server is always web2 (192.168.0.30).
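You can also verify this from the command line (the exact header values depend on your environment; the very first request for a static page will show miss while the object is being cached, after which it should show hit):

# curl -I http://172.16.36.10/index.html
# curl -I http://172.16.36.10/index.php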
6) Clear the cache. Because vcl_recv only accepts the purge request method from addresses in the purgers ACL, cached objects can be removed with a purge request from an allowed client, as sketched below.
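A minimal sketch, run on the Varnish host itself so the client address (127.0.0.1) matches the purgers ACL; note that the method string must match the lowercase "purge" check in vcl_recv:

# curl -X purge http://127.0.0.1/index.html

According to the VCL above, the response is 200 "purged ok" if the object was cached, 404 "not in cache" if it was not, and 502 for objects that are passed rather than cached.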
The whole configuration is now complete, and Varnish implements the dynamic-static separation of the website.
This is the end of "how to use Varnish for dynamic-static separation of a website based on CentOS 6.5". Thank you for reading. If you want to learn more about the industry, you can follow the site, where the editor will keep publishing practical, high-quality articles for you!