
Installation and Configuration of the Varnish Cache Server under Linux


Many newcomers are unsure how to install and configure the Varnish cache server on Linux. The walkthrough below covers the process in detail, from compiling Varnish to writing a basic VCL configuration.

Varnish is a high-performance, open-source reverse proxy server and HTTP accelerator. Compared with the traditional Squid, Varnish offers higher performance, faster responses, and easier management. Its author, Poul-Henning Kamp, is one of the FreeBSD kernel developers. Varnish uses a software architecture designed around how modern hardware actually works. In 1975 there were only two kinds of storage: memory and disk. Today a computer has not only main memory but also L1, L2, and even L3 caches in the CPU, and hard disks have their own on-board caches. Squid's approach of managing object replacement itself cannot see or optimize for any of this, but the operating system can, so Varnish leaves that work to the operating system. That is the core idea behind Varnish's cache design.

Verdens Gang (http://www.vg.no), the largest online newspaper in Norway, replaced 12 Squid servers with 3 Varnish servers and achieved better performance than before. This remains the best-known Varnish success story.

Varnish features:

1. The cache lives in memory, so cached data is lost after a restart.

2. Uses virtual memory, so I/O performance is good.

3. Supports setting precise cache lifetimes, down to the second.

4. VCL makes configuration management flexible.

5. The maximum size of a cache file on a 32-bit machine is 2 GB.

6. Provides powerful management tools such as top, stat, admin, and list (see the sketch after this list).

7. The state machine design is clever and the structure is clear.

8. Uses a binary heap to manage cached objects, so expired objects can be purged proactively.
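
As a rough illustration of those management tools (a sketch only; it assumes Varnish is installed under /usr/local/varnish and started as described later in this article):

# One-shot dump of all runtime counters (hits, misses, backend connections, ...)

/usr/local/varnish/bin/varnishstat -1

# Continuously ranked list of the most frequently requested URLs

/usr/local/varnish/bin/varnishtop -i RxURL

# Query the management interface; the secret file and admin port are configured later in this article

varnishadm -S /usr/local/varnish/etc/varnish/secret -T 127.0.0.1:6082 status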

Comparison between Varnish and Squid

Squid is a high-performance proxy cache server. Squid and Varnish have a number of similarities and differences:

Similarities:

Both are reverse proxy servers.

Both are open-source software.

Differences, which are also Varnish's advantages:

Varnish is very stable. When handling the same workload, a Squid server is more likely to fail than Varnish, because Squid needs to be restarted frequently.

Varnish serves content faster. Varnish uses "Visual Page Cache" technology and reads all cached data directly from memory, whereas Squid reads from disk, so Varnish has the edge in access speed.

Varnish supports more concurrent connections, because Varnish releases TCP connections faster than Squid and can therefore sustain more TCP connections under high concurrency.

Varnish can purge part of the cache in bulk through its management port using regular expressions, which Squid cannot do (see the sketch after this list).

Squid runs as a single process on a single CPU core, whereas Varnish forks multiple processes to handle requests and can therefore make reasonable use of all cores.
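
A bulk purge of that kind could look roughly like the following (a sketch only; it assumes the admin port 127.0.0.1:6082 and secret file path used later in this article, and the URL pattern is arbitrary):

# Invalidate every cached object whose URL begins with /static/ (regular-expression match)

varnishadm -S /usr/local/varnish/etc/varnish/secret -T 127.0.0.1:6082 "ban.url ^/static/"

From the interactive varnishadm prompt, the newer form "ban req.url ~ <regex>" does the same thing in Varnish 3.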

Of course, compared with the traditional Squid, Varnish also has some disadvantages:

If the varnish process hangs, crashes, or is restarted, all cached data is released from memory and every request goes to the back-end servers, which puts heavy pressure on them under high concurrency.

If requests for the same URL are spread across different varnish servers by an HA/F5 load balancer, each varnish server that receives the request goes through to the back end, and the same content ends up cached on multiple servers, wasting varnish cache resources and degrading performance.

Solutions:

For high-traffic sites, it is recommended to put varnish's in-memory cache in front and a group of squid servers behind it. This mainly guards against restarts of the varnish service or server: the many cache misses that penetrate varnish in the early stage hit squid, which acts as a second-layer cache and makes up for the fact that varnish's in-memory cache is emptied on restart.

The second problem can be solved by hashing each URL to a fixed varnish server on the load balancer.

How varnish works

1. Communication between processes

When Varnish starts, it creates two processes: a master (management) process and a child (worker) process. The master reads and initializes the storage configuration and then forks and monitors the child. The child allocates threads for the caching work; it also manages those threads and spawns many worker threads.

During initialization of the child process's main thread, the entire storage file is mapped into memory. If the file exceeds the system's virtual memory, the configured mmap size is reduced and loading continues. The free storage structures are then created, initialized, and placed in the storage-management struct, waiting to be allocated.

A dedicated thread then waits on the listening socket for new HTTP connections. This thread only accepts connections; when one arrives, it wakes a worker thread from the waiting thread pool to handle the request.

After reading the URI, the worker thread looks up an existing object. On a hit, the object is returned directly; on a miss, it is fetched from the back-end server and stored in the cache. If the cache is full, old objects are evicted according to the LRU algorithm. In addition, a timeout thread checks the lifetime (TTL) of every object in the cache and deletes expired objects, freeing the corresponding storage memory.

2. Communication between the structures of the configuration file

Varnish installation

The code is as follows:

wget http://ftp.cs.stanford.edu/pub/exim/pcre/pcre-8.33.tar.gz

tar xzf pcre-8.33.tar.gz

cd pcre-8.33

./configure

make && make install

cd ../
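
An optional sanity check after the PCRE build (assuming the default /usr/local install prefix used above):

# Should print 8.33 if the build and installation succeeded

pcre-config --version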

Compiling varnish-3.0.4 failed with the following error:

varnishadm.c:48:33: error: editline/readline.h: No such file or directory
varnishadm.c: In function 'cli_write':
varnishadm.c:76: warning: implicit declaration of function 'rl_callback_handler_remove'
varnishadm.c:76: warning: nested extern declaration of 'rl_callback_handler_remove'
varnishadm.c: In function 'send_line':
varnishadm.c:179: warning: implicit declaration of function 'add_history'
varnishadm.c:179: warning: nested extern declaration of 'add_history'
varnishadm.c: In function 'varnishadm_completion':
varnishadm.c:216: warning: implicit declaration of function 'rl_completion_matches'
varnishadm.c:216: warning: nested extern declaration of 'rl_completion_matches'
varnishadm.c:216: warning: assignment makes pointer from integer without a cast
varnishadm.c: In function 'pass':
varnishadm.c:233: error: 'rl_already_prompted' undeclared (first use in this function)
varnishadm.c:233: error: (Each undeclared identifier is reported only once
varnishadm.c:233: error: for each function it appears in.)
varnishadm.c:235: warning: implicit declaration of function 'rl_callback_handler_install'
varnishadm.c:235: warning: nested extern declaration of 'rl_callback_handler_install'
varnishadm.c:239: error: 'rl_attempted_completion_function' undeclared (first use in this function)
varnishadm.c:300: warning: implicit declaration of function 'rl_forced_update_display'
varnishadm.c:300: warning: nested extern declaration of 'rl_forced_update_display'
varnishadm.c:303: warning: implicit declaration of function 'rl_callback_read_char'
varnishadm.c:303: warning: nested extern declaration of 'rl_callback_read_char'
make[3]: *** [varnishadm-varnishadm.o] Error 1
make[3]: Leaving directory `/root/lnmp/src/varnish-3.0.4/bin/varnishadm'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/lnmp/src/varnish-3.0.4/bin'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/lnmp/src/varnish-3.0.4'
make: *** [all] Error 2

No fix for this error was found, so varnish-3.0.3 is used instead.
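
The missing editline/readline.h header suggests that the libedit/readline development packages are not installed; installing them before running configure may let varnish-3.0.4 build, though this was not verified here (package names assume a RHEL/CentOS-style system):

yum install -y libedit-devel readline-devel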

The code is as follows:

wget http://repo.varnish-cache.org/source/varnish-3.0.3.tar.gz

tar xzf varnish-3.0.3.tar.gz

cd varnish-3.0.3

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

./configure --prefix=/usr/local/varnish --enable-debugging-symbols --enable-developer-warnings --enable-dependency-tracking --with-jemalloc

make && make install

/usr/bin/install -m 755 ./redhat/varnish.initrc /etc/init.d/varnish

/usr/bin/install -m 644 ./redhat/varnish.sysconfig /etc/sysconfig/varnish

/usr/bin/install -m 755 ./redhat/varnish_reload_vcl /usr/local/varnish/bin

useradd -M -s /sbin/nologin varnish

The code is as follows:

ln -s /usr/local/varnish/sbin/varnishd /usr/sbin/

ln -s /usr/local/varnish/bin/varnish_reload_vcl /usr/bin/

ln -s /usr/local/varnish/bin/varnishadm /usr/bin/
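
A quick check that the installed binary is usable (optional):

# Prints the Varnish version and build information

varnishd -V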

The code is as follows:

chkconfig --add varnish

chkconfig varnish on

Generate the varnish management key:

The code is as follows:

uuidgen > /usr/local/varnish/etc/varnish/secret

chmod 644 /usr/local/varnish/etc/varnish/secret

Modify the varnish startup configuration:

The code is as follows:

Sed-I "s @ ^ variable _ VCL_CONF=/etc/varnish/default.vcl@#VARNISH_VCL_CONF=/etc/varnish/default.vcl\ nVARNISH_VCL_CONF=/usr/local/varnish/etc/varnish/linuxeye.vcl@" / etc/sysconfig/varnish

Sed-I "s @ ^ variable _ LISTEN_PORT=6081@#VARNISH_LISTEN_PORT=6081\ nVARNISH_LISTEN_PORT=80@" / etc/sysconfig/varnish

Sed-I "s @ ^ variable _ SECRET_FILE=/etc/varnish/secret@#VARNISH_SECRET_FILE=/etc/varnish/secret\ nVARNISH_SECRET_FILE=/usr/local/varnish/etc/varnish/secret@" / etc/sysconfig/varnish

Sed-I "s @ ^ variable _ STORAGE_FILE=/var/lib/varnish/varnish_storage.bin@#VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin\ nVARNISH_STORAGE_FILE=/usr/local/varnish/var/varnish_storage.bin@" / etc/sysconfig/varnish

Sed-I "s @ ^ Varnish _ STORAGE_SIZE.*@VARNISH_STORAGE_SIZE=150M@" / etc/sysconfig/varnish

Sed-I "s @ ^ variable _ STORAGE=.*@VARNISH_STORAGE=\" malloc,\ ${VARNISH_STORAGE_SIZE}\ "@" / etc/sysconfig/varnish

If your server has multiple logical processors, you can also make the following setting.

Custom parameters can be added to DAEMON_OPTS in /etc/sysconfig/varnish with "-p parameter", for example:

DAEMON_OPTS= "- a ${VARNISH_LISTEN_ADDRESS}: ${VARNISH_LISTEN_PORT}\

-f ${VARNISH_VCL_CONF}\

-T ${VARNISH_ADMIN_LISTEN_ADDRESS}: ${VARNISH_ADMIN_LISTEN_PORT}\

-t ${VARNISH_TTL}\

-w ${VARNISH_MIN_THREADS}, ${VARNISH_MAX_THREADS}, ${VARNISH_THREAD_TIMEOUT}\

-u varnish-g varnish\

-S ${VARNISH_SECRET_FILE}\

-s ${VARNISH_STORAGE}\

-p thread_pools=2 "# here to add items

After Varnish starts it runs in the background and returns control to the command line. Note that Varnish runs two processes at the same time: a main (master) process and a child process. If the child process has a problem, the master regenerates a new child.

VCL configuration

The code is as follows:

/usr/local/varnish/etc/varnish/linuxeye.vcl

# A backend host named webserver is defined through backend. ".host" specifies the IP address or domain name of the backend host, and ".port" specifies the service port of the backend host.

backend webserver {
    .host = "127.0.0.1";
    .port = "8080";
}

# vcl_recv is called at the start of each request
sub vcl_recv {
    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For =
                req.http.X-Forwarded-For + "," + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
    # if the request method is none of GET, HEAD, PUT, POST, TRACE, OPTIONS or DELETE, enter pipe mode; note that these conditions are joined with "&&"
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
        return (pipe);
    }
    # if the request method is neither GET nor HEAD, enter pass mode
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        return (pass);
    }
    # accelerate caching for linuxeye.com as a wildcard domain, i.e. every host name ending in linuxeye.com is cached
    if (req.http.host ~ "^(.*).linuxeye.com") {
        set req.backend = webserver;
    }
    # for URLs ending in .jsp, .do or .php, or containing "?", fetch the content directly from the back-end server
    if (req.url ~ "\.(jsp|do|php)($|\?)") {
        return (pass);
    } else {
        return (lookup);
    }
}

sub vcl_pipe {
    return (pipe);
}

sub vcl_pass {
    return (pass);
}

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (hash);
}

sub vcl_hit {
    return (deliver);
}

sub vcl_miss {
    return (fetch);
}

The code is as follows:

# if the request method is GET and the URL starts with /upload, cache for 300 seconds, i.e. 5 minutes
sub vcl_fetch {
    if (req.request == "GET" && req.url ~ "^/upload(.*)$") {
        set beresp.ttl = 300s;
    }
    if (req.request == "GET" && req.url ~ "\.(png|gif|jpg|jpeg|bmp|swf|css|js|html|htm|xsl|xml|pdf|ppt|docx|rar|zip|ico|mp3|mp4|rmvb|ogg|mov|avi|wmv|txt)$") {
        unset beresp.http.set-cookie;
        set beresp.ttl = 30d;
    }
    return (deliver);
}

The code is as follows:

# add a response header that shows whether the request was a cache hit
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT from demo.linuxeye.com";
    } else {
        set resp.http.X-Cache = "MISS from demo.linuxeye.com";
    }
    return (deliver);
}
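
A simple way to check that header is to request the same URL twice with curl (the host name and path here are just examples):

# The first request warms the cache; the second should normally return X-Cache: HIT

curl -s -o /dev/null -D - http://demo.linuxeye.com/index.html | grep X-Cache

curl -s -o /dev/null -D - http://demo.linuxeye.com/index.html | grep X-Cache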

The code is as follows:

# you can customize an error page using vcl_error

sub vcl_error {
    set obj.http.Content-Type = "text/html; charset=utf-8";
    set obj.http.Retry-After = "5";
    synthetic {"
<?xml version="1.0" encoding="utf-8"?>
<html>
  <head>
    <title>"} + obj.status + " " + obj.response + {"</title>
  </head>
  <body>
    <h1>Error "} + obj.status + " " + obj.response + {"</h1>
    <p>"} + obj.response + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + req.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"};
    return (deliver);
}

sub vcl_init {
    return (ok);
}

sub vcl_fini {
    return (ok);
}

Check that the VCL is configured correctly:

The code is as follows:

service varnish configtest

Or

The code is as follows:

varnishd -C -f /usr/local/varnish/etc/varnish/linuxeye.vcl

Start varnish:

The code is as follows:

service varnish start

View varnish status:

The code is as follows:

service varnish status

Dynamically load the VCL configuration:

The code is as follows:

service varnish reload
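
The reload action normally calls the varnish_reload_vcl helper installed earlier. Roughly the same thing can be done by hand through varnishadm (the VCL name "linuxeye_01" is arbitrary):

# Compile and load the edited VCL under a new name, then switch traffic to it

varnishadm -S /usr/local/varnish/etc/varnish/secret -T 127.0.0.1:6082 vcl.load linuxeye_01 /usr/local/varnish/etc/varnish/linuxeye.vcl

varnishadm -S /usr/local/varnish/etc/varnish/secret -T 127.0.0.1:6082 vcl.use linuxeye_01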

Stop varnish:

The code is as follows:

service varnish stop

Check that varnish is now listening on port 80:

The code is as follows:

# netstat -tpln | grep :80

tcp        0      0 0.0.0.0:80          0.0.0.0:*           LISTEN      15249/varnishd

tcp        0      0 0.0.0.0:8080        0.0.0.0:*           LISTEN      19468/nginx

tcp        0      0 :::80               :::*                LISTEN      15249/varnishd

View the varnish process:

The code is as follows:

# ps -ef | grep varnishd | grep -v grep

root     15248     1  0 11:47 ?  00:00:00 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /usr/local/varnish/etc/varnish/linuxeye.vcl -T 127.0.0.1:6082 -t 120 -w 50,1400,120 -u varnish -g varnish -S /usr/local/varnish/etc/varnish/secret -s malloc,150M

varnish  15249 15248  0 11:47 ?  00:00:00 /usr/sbin/varnishd -P /var/run/varnish.pid -a :80 -f /usr/local/varnish/etc/varnish/linuxeye.vcl -T 127.0.0.1:6082 -t 120 -w 50,1400,120 -u varnish -g varnish -S /usr/local/varnish/etc/varnish/secret -s malloc,150M

Varnish access log

Varnishncsa can write HTTP requests to a log file in NCSA Common Log Format.

The code is as follows:

/usr/bin/install -m 755 ./redhat/varnishncsa.initrc /etc/init.d/varnishncsa

chmod +x /etc/init.d/varnishncsa

chkconfig varnishncsa on

mkdir -p /usr/local/varnish/logs

Edit the varnishncsa startup configuration:

The code is as follows:

ln -s /usr/local/varnish/bin/varnishncsa /usr/bin

sed -i 's@^logfile.*@logfile="/usr/local/varnish/logs/varnishncsa.log"@' /etc/init.d/varnishncsa

Start varnishncsa:

The code is as follows:

service varnishncsa start

Rotate the log file daily with logrotate:

The code is as follows:

cat > /etc/logrotate.d/varnish
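
The contents of the logrotate policy are cut off in the original; a minimal sketch of what it might look like is shown below (the rotation count, compression, and the postrotate restart are assumptions):

cat > /etc/logrotate.d/varnish <<'EOF'
/usr/local/varnish/logs/varnishncsa.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        /sbin/service varnishncsa restart > /dev/null 2>&1 || true
    endscript
}
EOF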
