High Availability of a Web Cluster with haproxy + keepalived

2025-04-04 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--


The concept of a load-balancing cluster

Load balancing is one of the factors that must be considered when designing a distributed system architecture. It means spreading the load of requests and accesses across the nodes of a cluster through scheduling and distribution, so that no node suffers access delays from excessive load while other, lightly loaded nodes sit idle and waste resources. Each node then carries a share of the request load, and requests can be redistributed dynamically among nodes, giving the enterprise a higher-performance and more stable system architecture.
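The round-robin idea above can be sketched in a few lines of shell; this is a toy model with made-up request names, not how haproxy itself is implemented:

```shell
#!/usr/bin/env bash
# toy round-robin scheduler: each request goes to the next node in turn
servers=(web1 web2)
i=0
for req in req1 req2 req3 req4; do
  echo "$req -> ${servers[i % ${#servers[@]}]}"
  i=$((i + 1))
done
# req1 -> web1, req2 -> web2, req3 -> web1, req4 -> web2
```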

The concept of a highly available cluster

High availability refers to techniques that reduce or avoid service interruption, and it is another factor that must be considered in a distributed system architecture. Heartbeat checks between the nodes monitor the health of the whole cluster; if a node fails, its standby takes over its work within a few seconds, so from the user's point of view the service remains accessible at all times.

What is haproxy?

HAProxy is free, open-source software written in C that provides high availability, load balancing, and proxying for TCP- and HTTP-based applications. It is particularly suited to heavily loaded web sites (and also to database load balancing), which usually require session persistence or layer-7 processing. HAProxy runs comfortably on current hardware and can support tens of thousands of concurrent connections, and its mode of operation makes it easy and safe to integrate into an existing architecture while keeping the web servers off the public network.

HAProxy is used by well-known websites including GitHub, Bitbucket, Stack Overflow, Reddit, Tumblr, Twitter and Tuenti, as well as Amazon Web Services.

What is keepalived?

Keepalived is a lightweight high-availability tool; by itself it only provides high availability for IP resources, mainly through the Virtual Router Redundancy Protocol (VRRP). In a non-dual-master keepalived cluster, a master node is elected by priority and the IP resource (the VIP) is preferentially bound to it, while the other nodes become backups. The nodes detect each other's liveness over the heartbeat line between master and standby. If the master goes down, a standby node preempts the IP resource; when the master recovers, the standby releases the IP resource back to it.
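The election can be modeled as "highest priority among live nodes wins the VIP". A toy shell sketch (the node names and priorities are simply the ones used later in this article):

```shell
#!/usr/bin/env bash
# toy VRRP election: among live nodes, the one with the highest priority holds the vip
elect() {                      # args: "name:priority" pairs for the nodes that are alive
  local best=-1 master="" n prio
  for n in "$@"; do
    prio=${n#*:}
    if [ "$prio" -gt "$best" ]; then best=$prio; master=${n%%:*}; fi
  done
  echo "$master"
}
elect haproxy1:100 haproxy2:80   # both alive -> haproxy1
elect haproxy2:80                # haproxy1 down -> haproxy2
```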

Below is a simple experiment showing how to achieve high availability and load balancing for a web cluster with haproxy + keepalived.

Environment settings

Hostname   Role             IP
web1       Real server 1    192.168.83.129/24
web2       Real server 2    192.168.83.130/24
haproxy1   Proxy server 1   192.168.83.131/24
haproxy2   Proxy server 2   192.168.83.132/24
client     Client           192.168.43.159/24

// All servers must keep their time synchronized; clusters are very sensitive to clock skew. In production an internal ntp server is recommended: an external ntp server gradually drifts as running time increases, and once the offset grows too large the cluster will run into very serious problems.

ntpdate time.nist.gov
crontab -l
*/10 * * * * ntpdate time.nist.gov

Deploy apache as the real server on both hosts

web1

[root@web1 ~]# yum -y install httpd
[root@web1 ~]# sed -i 's/Listen 80/Listen 8080/g' /etc/httpd/conf/httpd.conf   # for security reasons, change the default port to 8080
[root@web1 ~]# systemctl start httpd
[root@web1 ~]# systemctl enable httpd
[root@web1 ~]# echo "web1" > /var/www/html/index.html
[root@web1 ~]# curl http://192.168.83.129:8080
web1

web2

[root@web2 ~]# yum -y install httpd
[root@web2 ~]# sed -i 's/Listen 80/Listen 8080/g' /etc/httpd/conf/httpd.conf   # for security reasons, change the default port to 8080
[root@web2 ~]# systemctl start httpd
[root@web2 ~]# systemctl enable httpd
[root@web2 ~]# echo "web2" > /var/www/html/index.html
[root@web2 ~]# curl http://192.168.83.130:8080
web2

Set up proxy server 1

[root@haproxy1 ~]# yum -y install haproxy   # install haproxy
[root@haproxy1 ~]# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak`date +%F-%T`   # in real work, always be careful when modifying files; it is best to make a backup first
[root@haproxy1 haproxy]# cat haproxy.cfg.bak2017-05-28-01\:16\:53 | egrep -v "(#|^$)" > haproxy.cfg   # strip comments and blank lines
[root@haproxy1 ~]# cat /etc/haproxy/haproxy.cfg   # modify the configuration file as follows

global                                       # global configuration
    log 127.0.0.1 local3 info                # send info-level logs to the local log device local3
    chroot /var/lib/haproxy                  # working path haproxy is chrooted to
    pidfile /var/run/haproxy.pid             # pid file path
    maxconn 4000                             # maximum connections
    user haproxy                             # user the process runs as
    group haproxy                            # group the process runs as
    daemon                                   # run in the background
    stats socket /var/lib/haproxy/stats      # socket haproxy maintains dynamically; a small experiment below shows what it is for

defaults                 # unless overridden, these options apply to the sections below
    mode http                                # default mode
    log global                               # use the global log configuration
    option httplog                           # log http requests; haproxy does not log them by default
    option dontlognull                       # do not log null (health-check) connections
    option http-server-close                 # for servers that do not support http keep-alive: keep the client-to-haproxy connection persistent and use short haproxy-to-server connections
    option forwardfor except 127.0.0.0/8     # let the backend servers record the IP of the real client that initiated the request
    option redispatch                        # session persistence relies on cookies; if the real server a client sticks to goes down, redispatch its requests to another real server
    retries 3                                # failed connection attempts to a real server before it is marked unavailable
    timeout http-request 10s                 # http request timeout
    timeout queue 1m                         # timeout for requests waiting in the queue
    timeout connect 10s                      # connection timeout
    timeout client 1m                        # client connection timeout
    timeout server 1m                        # server connection timeout
    timeout http-keep-alive 10s              # http keep-alive timeout
    timeout check 10s                        # health-check timeout
    maxconn 3000                             # maximum number of connections per process

frontend www                                 # define the frontend
    bind *:80                                # IP/port where client connections are accepted
    mode http                                # mode is http
    option httplog                           # log http requests
    log global                               # use the global log configuration
    stats uri /haproxy?stats                 # haproxy's built-in monitoring page
    default_backend web                      # default backend

backend web                                  # define the backend
    mode http                                # mode is http
    option redispatch
    balance roundrobin                       # round-robin load-balancing algorithm
    option httpchk GET /index.html           # how the backend real servers are health-checked
    server web1 192.168.83.129:8080 cookie web1 weight 1 check inter 2000 rise 2 fall 3
    server web2 192.168.83.130:8080 cookie web2 weight 1 check inter 2000 rise 2 fall 3
    # real servers with weight 1, a health check every 2000 ms, 2 successes to mark up, 3 failures to mark down
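The `check inter 2000 rise 2 fall 3` semantics can be sketched as a tiny state machine: a server is marked down after 3 consecutive failed checks and up again after 2 consecutive successes. A toy shell model (the check sequence is invented for illustration):

```shell
#!/usr/bin/env bash
# toy model of haproxy's rise/fall health-check counters
state=up; ok=0; fail=0
for check in ok ok fail fail fail ok ok; do
  if [ "$check" = ok ]; then
    ok=$((ok + 1)); fail=0
    [ "$state" = down ] && [ "$ok" -ge 2 ] && state=up    # rise 2
  else
    fail=$((fail + 1)); ok=0
    [ "$state" = up ] && [ "$fail" -ge 3 ] && state=down  # fall 3
  fi
  echo "check=$check -> $state"
done
```

The third failed check flips the server to down, and it takes two good checks in a row to bring it back.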

[root@haproxy1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg   # check whether the configuration file is correct
Configuration file is valid

Enable remote logging

[root@haproxy1 ~]# cat -n /etc/rsyslog.conf
15  $ModLoad imudp                     # remove the comment
16  $UDPServerRun 514                  # remove the comment
73  local7.*    /var/log/boot.log      # add the following line below this one
74  local3.*    /var/log/haproxy.log   # logs sent to log device local3 are written to /var/log/haproxy.log

[root@haproxy1 ~]# systemctl restart rsyslog
[root@haproxy1 ~]# systemctl start haproxy
[root@haproxy1 ~]# systemctl enable haproxy

Verification

In a browser, open http://192.168.83.131/haproxy?stats to reach haproxy's monitoring page.

Test the real-server health check set in the configuration file (option httpchk GET /index.html)

[root@haproxy1 haproxy]# sed -i 's/index\.html/test.html/g' haproxy.cfg   # change the check page to test.html
[root@haproxy1 haproxy]# systemctl reload haproxy   # reload haproxy; restarting haproxy is not recommended in production, as that drops all existing connections

Message from syslogd@localhost at May 29 10:30:23...
haproxy[3305]: backend web has no server available!   # it immediately reports that no backend server is available

On the monitoring page you can see the backend real servers marked down.

The configuration file contains a line about haproxy's dynamic-maintenance socket. What is dynamic maintenance of haproxy? A small example follows:

[root@haproxy1 ~]# yum -y install socat
[root@haproxy1 ~]# echo "show info" | socat stdio /var/lib/haproxy/stats   # view the info output, which can be used to monitor haproxy's status

Name: HAProxy

Version: 1.5.14

Release_date: 2015-07-02

Nbproc: 1

Process_num: 1

Pid: 3390

Uptime: 0d 0h24m43s

Uptime_sec: 883

Memmax_MB: 0

Ulimit-n: 8033

Maxsock: 8033

Maxconn: 4000

Hard_maxconn: 4000

CurrConns: 0

CumConns: 19

CumReq: 37

MaxSslConns: 0

CurrSslConns: 0

CumSslConns: 0

Maxpipes: 0

PipesUsed: 0

PipesFree: 0

ConnRate: 0

ConnRateLimit: 0

MaxConnRate: 2

SessRate: 0

SessRateLimit: 0

MaxSessRate: 2

SslRate: 0

SslRateLimit: 0

MaxSslRate: 0

SslFrontendKeyRate: 0

SslFrontendMaxKeyRate: 0

SslFrontendSessionReuse_pct: 0

SslBackendKeyRate: 0

SslBackendMaxKeyRate: 0

SslCacheLookups: 0

SslCacheMisses: 0

CompressBpsIn: 0

CompressBpsOut: 0

CompressBpsRateLim: 0

ZlibMemUsage: 0

MaxZlibMemUsage: 0

Tasks: 8

Run_queue: 1

Idle_pct: 100

Node: haproxy1

Description:

The dynamic-maintenance socket has many other functions, such as shutting down and re-enabling backend real servers, and so on.
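Because `show info` emits plain `Key: Value` lines, it is easy to script against. A sketch that extracts one counter from a captured dump (the sample text is inlined here; in reality it would come from the socket via socat):

```shell
#!/usr/bin/env bash
# pull a single counter out of a saved `show info` dump
info='Name: HAProxy
Version: 1.5.14
CurrConns: 0
CumReq: 37'
printf '%s\n' "$info" | awk -F': ' '$1 == "CumReq" { print $2 }'   # -> 37
```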

Redirecting traffic with ACLs

Haproxy has a very practical feature: it can route requests according to ACLs. Change the frontend and backend in the configuration file as follows:

frontend www
    bind *:80
    mode http
    option httplog
    log global
    stats uri /haproxy?stats
    acl web1 hdr_reg(host) -i www.web1.com   # web1 is the acl name; hdr_reg(host) is a fixed form matching the Host header
    acl web2 hdr_reg(host) -i www.web2.com
    use_backend www1 if web1                 # use_backend picks the backend; if references the acl
    use_backend www2 if web2

backend www1
    mode http
    option redispatch
    balance roundrobin
    option httpchk GET /index.html
    server web1 192.168.83.129:8080 cookie web1 weight 1 check inter 2000 rise 2 fall 3

backend www2
    mode http
    option redispatch
    balance roundrobin
    option httpchk GET /index.html
    server web2 192.168.83.130:8080 cookie web2 weight 1 check inter 2000 rise 2 fall 3

For some reason my laptop's browsers (Chrome, Firefox, IE) kept reporting connection timeouts even though domain-name resolution was configured, so I verified from proxy 1 instead.
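What the two `hdr_reg(host)` ACLs accomplish can be modeled as a plain dispatch on the Host header. This is a toy sketch of the routing decision, not haproxy internals, and the fallback branch is only a stand-in, since that frontend defines no default_backend:

```shell
#!/usr/bin/env bash
# toy model of use_backend selection by Host header
route() {
  case "$1" in
    www.web1.com) echo www1 ;;   # acl web1 matched -> use_backend www1
    www.web2.com) echo www2 ;;   # acl web2 matched -> use_backend www2
    *)            echo web  ;;   # no acl matched (stand-in fallback)
  esac
}
route www.web1.com   # -> www1
route www.web9.com   # -> web
```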

Besides the domain name, traffic can also be routed by file suffix.

frontend www
    bind *:80
    mode http
    option httplog
    option forwardfor
    log global
    stats uri /haproxy?stats
    acl is_static_reg url_reg /*.(css|jpg|png|js)$
    use_backend static_web if is_static_reg
    default_backend web

backend web
    mode http
    option redispatch
    balance roundrobin
    option httpchk GET /index.html
    server web1 192.168.83.129:8080 cookie web1 weight 1 check inter 2000 rise 2 fall 3
    server web2 192.168.83.130:8080 cookie web2 weight 1 check inter 2000 rise 2 fall 3

backend static_web
    mode http
    option redispatch
    balance roundrobin
    option httpchk GET /index.html
    server web2 192.168.83.130:8080 cookie web2 weight 1 check inter 2000 rise 2 fall 3

[root@web2 html]# echo test_static > index.jpg
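The `url_reg` ACL is essentially a regular-expression test on the request path, so the same decision can be reproduced with grep (the example paths are hypothetical):

```shell
#!/usr/bin/env bash
# toy model of: acl is_static_reg url_reg /*.(css|jpg|png|js)$
is_static() { printf '%s' "$1" | grep -Eq '\.(css|jpg|png|js)$'; }
for url in /index.jpg /site.css /index.html; do
  if is_static "$url"; then echo "$url -> static_web"; else echo "$url -> web"; fi
done
```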

Verification

Haproxy+keepalived

To avoid a single point of failure, and because a single haproxy may not handle heavy concurrency on its own, production environments usually run two or more haproxy servers as proxies.

Prepare a second proxy, haproxy2. Its configuration is much the same as proxy 1's.

[root@haproxy2 ~]# yum -y install haproxy
[root@haproxy1 ~]# scp /etc/haproxy/haproxy.cfg haproxy2:/etc/haproxy/   # copy the haproxy configuration from haproxy1 to haproxy2
[root@haproxy1 ~]# scp /etc/rsyslog.conf haproxy2:/etc/                  # copy the rsyslog configuration from haproxy1 to haproxy2
[root@haproxy2 ~]# systemctl enable haproxy
[root@haproxy2 ~]# systemctl restart haproxy
[root@haproxy2 ~]# systemctl restart rsyslog

Verify that haproxy2 is functioning properly

Configure keepalived on haproxy1

[root@haproxy1 ~]# yum -y install keepalived   # install keepalived
[root@haproxy1 ~]# tail -2 /etc/sysconfig/keepalived   # send keepalived's logs to log device 6
KEEPALIVED_OPTIONS="-D -S 6"

Modify the configuration file as follows

[root@haproxy1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        root@localhost                     # alarm-mail recipient
    }
    notification_email_from keepalived@localhost   # sender address of the alarm mail
    smtp_server 127.0.0.1                  # smtp server address
    smtp_connect_timeout 30                # timeout for connecting to the smtp server
    router_id haproxy1                     # each keepalived node's router_id must be unique
}

vrrp_script haproxy {                      # vrrp_script keeps the vip from staying on a node whose haproxy has died
    script "killall -0 haproxy"            # check whether the haproxy process exists; only a return value of 0 counts as healthy
    interval 1                             # run the check every second
    weight -25                             # on failure, subtract 25 from this node's priority
}

vrrp_instance ha1 {
    state MASTER                           # role is MASTER
    interface eno16777736                  # interface keepalived uses for the heartbeat network
    virtual_router_id 51                   # virtual router ID; must match on all keepalived nodes
    priority 100                           # priority
    advert_int 1                           # heartbeat-check interval
    authentication {
        auth_type PASS                     # password authentication between keepalived nodes
        auth_pass linux                    # the authentication password
    }
    virtual_ipaddress {
        192.168.83.111/24 dev eno16777736  # vip address
    }
    track_script {
        haproxy                            # call the haproxy check script
    }
}
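The `killall -0 haproxy` check works because signal 0 probes for a process's existence without delivering anything to it; `kill -0` on a pid behaves the same way, demonstrated here with a throwaway `sleep` standing in for haproxy:

```shell
#!/usr/bin/env bash
# signal 0 probes a process without affecting it
sleep 30 &                     # stand-in for the haproxy process
pid=$!
if kill -0 "$pid" 2>/dev/null; then echo "haproxy check: up"; fi
kill "$pid"                    # clean up the stand-in
wait "$pid" 2>/dev/null
if kill -0 "$pid" 2>/dev/null; then echo "haproxy check: up"; else echo "haproxy check: down"; fi
```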

Configure keepalived on haproxy2

[root@haproxy1 ~]# scp /etc/keepalived/keepalived.conf haproxy2:/etc/keepalived/; scp /etc/rsyslog.conf haproxy2:/etc/; scp /etc/sysconfig/keepalived haproxy2:/etc/sysconfig/   # copy proxy 1's configuration files to proxy 2

[root@haproxy2 ~]# cat /etc/keepalived/keepalived.conf   # fields to modify
    router_id haproxy2   # change the router_id
    state BACKUP         # change the role
    priority 80          # change the priority
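These numbers are chosen deliberately: when the check fails on the master, weight -25 drops its effective priority below the backup's, so the vip moves. The arithmetic, sketched:

```shell
#!/usr/bin/env bash
# effective VRRP priority on the master when the vrrp_script check fails
master_prio=100; backup_prio=80; weight=-25
effective=$((master_prio + weight))   # 100 - 25 = 75
if [ "$effective" -lt "$backup_prio" ]; then
  echo "vip moves to haproxy2 ($effective < $backup_prio)"
else
  echo "vip stays on haproxy1"
fi
# -> vip moves to haproxy2 (75 < 80)
```

Had the weight been only -10, the master would keep priority 90 and the failed check would never trigger a failover.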

Restart the services on haproxy1 and haproxy2

[root@haproxy1 ~]# systemctl restart keepalived
[root@haproxy1 ~]# systemctl restart haproxy
[root@haproxy1 ~]# systemctl restart rsyslog
[root@haproxy2 ~]# systemctl restart keepalived
[root@haproxy2 ~]# systemctl restart haproxy
[root@haproxy2 ~]# systemctl restart rsyslog

Verification

[root@haproxy1 ~]# ip a | grep 111   # node haproxy1 holds the vip
inet 192.168.83.111/24 scope global secondary eno16777736

[root@haproxy2 ~]# ip a | grep 111   # the backup node does not show the vip

Access the real servers through the vip

[root@haproxy1 ~]# curl http://192.168.83.111
web1
[root@haproxy1 ~]# curl http://192.168.83.111
web2

[root@haproxy1 ~]# systemctl stop keepalived   # simulate proxy 1 going down (an actual outage works too): once heartbeat checks to node 1 fail, node 2 assumes it is down and preempts the vip
[root@haproxy2 ~]# ip a | grep 111   # the vip drifts to node 2
inet 192.168.83.111/24 scope global secondary eno16777736

The real servers are still accessible.

This was a simple experiment in building a highly available web cluster with haproxy + keepalived. Real-world setups involve far more configuration, some of which I do not yet fully understand; I will write up the parts I do understand for discussion rather than risk misleading anyone with the rest.

If there are any mistakes, please point them out.
