What is a highly available load balancing architecture


Load balancing (Cloud Load Balancer) is a service that distributes traffic across multiple cloud servers (CVMs). By distributing traffic, load balancing expands an application's capacity to serve external requests, and by eliminating single points of failure it improves the application's availability. The cloud load balancer service sets a virtual service address (VIP) and virtualizes several CVM resources in the same region into a high-performance, highly available pool of application servers; network requests from clients are distributed across the pool in the way the application specifies. The service also checks the health of the instances in the pool and automatically isolates abnormal ones, removing the CVM single point of failure and raising the overall service capacity of the application. Today I will introduce the high-availability load balancing architecture.

High availability load balancing architecture

1 Introduction

1.1 LVS

LVS is the abbreviation of Linux Virtual Server, a virtual server cluster system. The project was founded by Dr. Zhang Wensong in May 1998 and is one of the earliest free software projects in China. It currently provides three IP load balancing techniques (VS/NAT, VS/TUN and VS/DR) and ten scheduling algorithms (rr | wrr | lc | wlc | lblc | lblcr | dh | sh | sed | nq):

1.1.1 static scheduling

① rr (Round Robin): round-robin scheduling

The round-robin algorithm assigns each incoming request to the internal servers in turn, from 1 to N (the number of internal servers), and then restarts the cycle. Its advantage is simplicity: it keeps no record of current connection state, so it is stateless scheduling. [Note: it does not consider the differing processing power of the servers.]
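
As a toy illustration of the idea (a sketch only, not IPVS code; the server names are hypothetical), round-robin fits in a few lines of Python:

# Round-robin sketch: requests go to servers 1..N in turn, then the cycle restarts.
# No connection state is recorded, so the scheduler is stateless.
class RoundRobin:
    def __init__(self, servers):
        self.servers = servers            # e.g. ["rs1", "rs2", "rs3"]
        self.i = -1
    def pick(self):
        self.i = (self.i + 1) % len(self.servers)
        return self.servers[self.i]

rr = RoundRobin(["rs1", "rs2", "rs3"])
print([rr.pick() for _ in range(5)])      # rs1, rs2, rs3, rs1, rs2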

② wrr (Weighted Round Robin): weighted round-robin (hosts are scheduled in proportion to their weights)

Because servers differ in configuration, in the business applications installed, and so on, their processing capacity differs. We therefore assign each server a weight according to its capacity, so that it receives a share of the requests proportional to that weight.
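
One simple way to realize this (a sketch only; the real IPVS algorithm spreads the slots more evenly across the cycle, and the weights below are hypothetical) is to repeat each server in the cycle in proportion to its weight:

# Weighted round-robin sketch: a server with weight w gets w slots per cycle.
import itertools

def weighted_cycle(weights):               # e.g. {"rs1": 3, "rs2": 1}
    ring = [s for s, w in weights.items() for _ in range(w)]
    return itertools.cycle(ring)

picker = weighted_cycle({"rs1": 3, "rs2": 1})
print([next(picker) for _ in range(8)])    # rs1 appears three times as often as rs2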

③ sh (Source Hashing): source address hashing. Mainly used for session binding: it keeps previously established session state on the same server.

The source-address hashing algorithm is the mirror image of the destination-address hashing algorithm. Using the request's source IP address as the hash key, it looks up the corresponding server in a statically assigned hash table. If that server is available and not overloaded, the request is sent to it; otherwise the scheduler returns empty. It uses the same hash function as the destination-address algorithm, and the flow is essentially the same with the destination IP replaced by the source IP, so it is not repeated here.

④ dh (Destination Hashing): destination address hashing. Requests for the same destination IP address are sent to the same server.

The destination-address hashing algorithm also balances load by the target IP address: it is a static mapping algorithm that maps a target IP address to a server through a hash function. Using the request's destination IP address as the hash key, it looks up the corresponding server in a statically assigned hash table; if the server is available and not overloaded, the request is sent to it, otherwise the scheduler returns empty.
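
Both hash schedulers reduce to the same sketch; only the key differs (client IP for sh, destination IP for dh). The hash function and the availability check below are simplified placeholders, not the IPVS implementation:

# Hash scheduling sketch: the same key always lands on the same server,
# so sh pins a client to a server and dh pins a destination to a server.
import hashlib

def server_ok(server):
    # placeholder: IPVS checks availability and overload here
    return True

def hash_pick(key_ip, servers):
    h = int(hashlib.md5(key_ip.encode()).hexdigest(), 16)
    server = servers[h % len(servers)]
    return server if server_ok(server) else None   # "returns empty" if unusable

print(hash_pick("203.0.113.7", ["rs1", "rs2", "rs3"]))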

1.1.2 dynamic scheduling

① lc (Least-Connection): least connections

The least-connection algorithm assigns new connection requests to the server with the fewest current connections. It is a dynamic scheduling algorithm that estimates server load by the server's current number of active connections. The scheduler records each server's connection count: when a request is dispatched to a server, the count is incremented by 1; when a connection is aborted or times out, the count is decremented by 1. In the implementation, a server whose weight is 0 is treated as unavailable and is not scheduled.

Simplified formula: overhead = active*256 + inactive (the server with the smallest value is chosen)

② wlc (Weighted Least-Connection Scheduling): weighted least connections.

Weighted least-connection scheduling is a superset of least-connection scheduling, in which each server's weight represents its processing capacity. The default weight is 1, and the system administrator can set server weights dynamically. When scheduling new connections, the algorithm keeps each server's number of established connections proportional to its weight as far as possible.

Simplified formula: overhead = (active*256 + inactive) / weight (the server with the smallest value is chosen)

③ sed (Shortest Expected Delay): shortest expected delay

Based on the wlc algorithm.

Simplified formula: overhead = (active + 1) * 256 / weight

④ nq (Never Queue): never queue (an improvement on sed)

No queuing is required: if some realserver currently has 0 connections, it is chosen directly, without running the sed computation.
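
The four dynamic algorithms above differ only in the overhead value they minimize. The sketch below (toy code based on the simplified formulas above, not IPVS source) makes the comparison concrete; the server dicts are hypothetical:

# Dynamic scheduling sketch: pick the server with the smallest overhead.
def pick(servers, algo):
    # servers: list of dicts like {"name": "rs1", "active": 2, "inactive": 10, "weight": 1}
    if algo == "nq":
        for s in servers:
            if s["active"] == 0:        # never queue: an idle server wins at once
                return s
        algo = "sed"                    # otherwise behave like sed
    def overhead(s):
        if algo == "lc":
            return s["active"] * 256 + s["inactive"]
        if algo == "wlc":
            return (s["active"] * 256 + s["inactive"]) / s["weight"]
        return (s["active"] + 1) * 256 / s["weight"]    # sed
    return min(servers, key=overhead)

servers = [{"name": "rs1", "active": 2, "inactive": 10, "weight": 1},
           {"name": "rs2", "active": 1, "inactive": 50, "weight": 2}]
print(pick(servers, "wlc")["name"])     # rs2: (256+50)/2 beats (512+10)/1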

⑤ lblc (Locality-Based Least Connection): locality-based least connections

The locality-based least-connection algorithm balances load by the destination IP address of the request and is mainly used in Cache cluster systems, where the destination IP of client requests varies; it assumes any back-end server can handle any request. Its design goal is, while keeping the servers roughly balanced, to dispatch requests for the same destination IP to the same server, improving locality of access and the server's main-memory cache hit rate, and thereby the processing capacity of the whole cluster.

The algorithm first finds the server most recently used for the request's destination IP address; if that server is available and not overloaded, the request is sent to it. If the server does not exist, or it is overloaded while some server sits at half its workload, a server is chosen on the "least connections" principle and the request is sent there.

⑥ lblcr (Locality-Based Least Connections with Replication): locality-based least connections with replication

The locality-based least-connection algorithm with replication also balances load by destination IP address, but maintains a mapping from a destination IP address to a group of servers. It selects one server from the group by the "least connections" principle and, if that server is not overloaded, sends the request there. If it is overloaded, it picks a server from the whole cluster by "least connections", adds it to the group, and sends the request to it. Meanwhile, when the group has not been modified for some time, the busiest server is removed from the group to reduce the degree of replication.
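
A sketch of the lblc idea (toy code; the overload test is a stand-in, since IPVS compares a server's load against its weight):

# LBLC sketch: remember the server most recently used for each destination IP;
# reuse it while healthy, otherwise fall back to "least connections".
def overloaded(server):
    return False                        # placeholder overload check

def lblc_pick(dest_ip, table, servers):
    s = table.get(dest_ip)
    if s is None or overloaded(s):
        s = min(servers, key=lambda x: x["active"])
        table[dest_ip] = s              # cache the mapping for locality
    return s

table = {}
servers = [{"name": "rs1", "active": 3}, {"name": "rs2", "active": 1}]
print(lblc_pick("198.51.100.9", table, servers)["name"])   # rs2, then cached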

1.1.3 IPVS load balancing modes

• NAT: address translation (similar to DNAT)

1. Cluster nodes and the director must be in the same IP network.
2. RIP is usually a private address, used only for communication between cluster nodes.
3. The director sits between clients and realservers and handles all traffic.
4. The realserver's gateway must point to the DIP.
5. The director supports port mapping.
6. The realserver can run any type of operating system (OS).
7. In large-scale scenarios, the director easily becomes the system bottleneck.

• DR: direct routing (the VIP is used as the source address of responses)

1. Each cluster node and the director must be in the same physical network.
2. RIP can be a public address, making remote management and monitoring convenient.
3. The director handles only inbound requests; the realserver sends response packets directly to the client.
4. The realserver's gateway must not point to the DIP; it points directly to the front-end gateway.
5. The director does not support port mapping.
6. Most operating systems can be used on the realserver.
7. The director can drive more realservers than in NAT mode.

• TUN: tunneling

1. Cluster nodes can span the Internet.
2. RIP must be a public address.
3. The director handles only inbound requests; the realserver sends response packets directly to the client.
4. The realserver's gateway must not point to the director.
5. Only an OS with tunneling support can be used for the realserver.
6. Port mapping is not supported.

1.2 Keepalived

Here, Keepalived is mainly used for health checks of the realservers and for failover between the Master and Backup hosts, building a highly available load balancing cluster on LVS+Keepalived. LVS provides the load balancing, but plain LVS cannot monitor the health of back-end nodes; it only dispatches to them according to the scheduling algorithm. A single LVS node is also itself a single point of failure. Introducing Keepalived adds the following:

1. Check whether the back-end nodes are healthy.
2. Provide high availability for LVS itself.

1.3 Haproxy

HAProxy is free, open-source software written in C that provides high availability, load balancing, and proxying for TCP- and HTTP-based applications.

HAProxy is particularly suited to heavily loaded web sites, which usually require session persistence or layer-7 processing. It runs on commodity hardware and can support tens of thousands of concurrent connections, and its mode of operation makes it easy and safe to integrate into an existing architecture while shielding the web servers from direct exposure to the network.

HAProxy implements an event-driven, single-process model that supports very large numbers of concurrent connections. Multi-process or multi-threaded models rarely handle thousands of concurrent connections well, due to memory constraints, system scheduler limits, and ubiquitous locks. The event-driven model avoids these problems by doing all of this work in user space (User-Space), with finer-grained resource and time management. Its drawback is that, on multi-core systems, such programs usually scale poorly, which is why they must be optimized so that each CPU cycle does more work.
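
The event-driven, single-process model can be sketched with Python's standard selectors module. The toy echo server below only illustrates the model (one process multiplexing many non-blocking sockets in one loop); it is not HAProxy code, and the address/port are arbitrary:

# Event-driven sketch: one process, one loop, many non-blocking sockets.
import selectors, socket

sel = selectors.DefaultSelector()

def accept(lsock):
    conn, _ = lsock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, serve)

def serve(conn):
    data = conn.recv(4096)              # readiness was signalled, so this won't block
    if data:
        conn.send(data)                 # echo back (best effort for the sketch)
    else:
        sel.unregister(conn)
        conn.close()

lsock = socket.socket()
lsock.bind(("127.0.0.1", 8000))
lsock.listen()
lsock.setblocking(False)
sel.register(lsock, selectors.EVENT_READ, accept)
while True:
    for key, _ in sel.select():         # wait for any socket to become ready
        key.data(key.fileobj)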

2 Architecture Diagram

The architecture diagram shows:

1. LVS: two servers; their virtual IP sits in front of the Haproxy servers to ensure high availability.
2. Haproxy: its main role is to support cross-subnet IPs, which LVS does not.
3. Nginx Proxy: mainly provides caching, which increases performance.
4. At the backend database layer, LVS high availability is used again; the virtual IP is used on the mycat servers.
5. Mycat provides read-write separation, increasing performance and the high availability of Mysql. Normally, writes go to Mysql Master1 and reads go to Mysql Master2 and Mysql Slave; if Mysql Master1 fails, reads and writes switch to Mysql Master2.

PS: high-performance physical machines are recommended for the two LVS servers at the top.

3 Application installation configuration

3.1 Experimental Environment

System version: CentOS 7 64-bit

Server role          Server IP
Nginx Proxy1         192.168.8.14
Nginx Proxy2         192.168.8.15
Nginx Web1           192.168.8.16
Nginx Web2           192.168.8.17
Tomcat1              192.168.8.18
Tomcat2              192.168.8.19
Tcpserver (ftp) 1    192.168.8.20
Tcpserver (ftp) 2    192.168.8.21

3.2 Nginx Proxy installation configuration

Perform the following on both servers (192.168.8.14 and 192.168.8.15).

3.2.1 yum source for nginx

vi /etc/yum.repos.d/nginx.repo

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

3.2.2 install nginx via yum

yum install nginx

3.2.3 configuring nginx

vi /etc/nginx/nginx.conf

user nginx;
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
error_log logs/error.log notice;

events {
    use epoll;
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$http_x_forwarded_for - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" $remote_addr';
    #access_log logs/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_names_hash_bucket_size 128;
    large_client_header_buffers 4 128k;
    client_max_body_size 128m;
    client_header_buffer_size 32k;
    keepalive_timeout 120;

    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 4 128k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_temp_file_write_size 256k;

    gzip on;
    gzip_vary on;
    gzip_min_length 1k;
    gzip_buffers 4 64k;
    gzip_http_version 1.0;
    gzip_comp_level 9;
    gzip_types text/plain application/x-javascript text/css text/javascript application/javascript application/xml image/jpeg image/gif image/png;

    proxy_cache_path /etc/nginx/cache levels=1:2 keys_zone=cache_one:500m inactive=1d max_size=30g;

    upstream testweb {
        server 192.168.8.16:80;
        server 192.168.8.17:80;
    }

    upstream testtomcat {
        server 192.168.8.18:8080;
        server 192.168.8.19:8080;
    }

    server {
        listen 80;
        server_name testweb.com.cn;
        access_log logs/testweb.access.log main;

        location ~* .*\.(css|js|gif|jpg|png)$ {
            proxy_pass http://testweb;
            proxy_cache cache_one;
            proxy_cache_valid 200 301 302 5m;
            proxy_cache_valid any 1m;
            proxy_cache_key $host$uri$is_args$args;
            expires 30d;
        }

        location / {
            proxy_pass http://testweb;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
            proxy_read_timeout 300;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    server {
        listen 80;
        server_name testtomcat.com.cn;
        access_log logs/testtomcat.access.log main;

        location / {
            proxy_pass http://testtomcat;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 300;
            proxy_send_timeout 300;
            proxy_read_timeout 300;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

mkdir /etc/nginx/logs/

Start nginx:

/etc/init.d/nginx start
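
Once the web backends from section 3.3 are also up, a quick sanity check (a sketch that assumes the test.html files created there) shows the proxy routing by Host header and alternating between the two backends:

# Send a few requests to one proxy with the testweb.com.cn Host header;
# the bodies should alternate between web16 and web17.
import http.client

for _ in range(4):
    conn = http.client.HTTPConnection("192.168.8.14", 80)
    conn.request("GET", "/test.html", headers={"Host": "testweb.com.cn"})
    print(conn.getresponse().read().decode().strip())
    conn.close()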

3.3 Nginx Web installation configuration

Perform the following on both servers (192.168.8.16 and 192.168.8.17).

3.3.1 yum source for nginx

vi /etc/yum.repos.d/nginx.repo

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

3.3.2 install nginx via yum

yum install nginx

3.3.3 configure nginx

Configure according to your production needs.

Place a test file in the web root of each server, for example:

On the 192.168.8.16 server:

cat test.html
web16

On the 192.168.8.17 server:

cat test.html
web17

Start nginx:

/etc/init.d/nginx start

3.4 tomcat installation configuration

Perform the following on both servers (192.168.8.18 and 192.168.8.19).

3.4.1 install Java

yum install -y java-1.8.0-openjdk

3.4.2 download tomcat

Download it from the official site: http://tomcat.apache.org/

3.4.3 install tomcat

tar zxvf apache-tomcat-8.5.20.tar.gz
mv apache-tomcat-8.5.20 /usr/local/

3.4.4 Test Page

cd /usr/local/apache-tomcat-8.5.20/webapps/ROOT/

On the 192.168.8.18 server:

vi test.html
tomcat18

On the 192.168.8.19 server:

vi test.html
tomcat19

Start tomcat:

/usr/local/apache-tomcat-8.5.20/bin/startup.sh

3.5 ftp installation configuration

Perform the following on both servers (192.168.8.20 and 192.168.8.21).

3.5.1 install ftp

yum install vsftpd

3.5.2 configure ftp

vi /etc/vsftpd/vsftpd.conf

Add configuration:

pasv_enable=YES
pasv_promiscuous=YES
port_enable=YES
port_promiscuous=NO
pasv_min_port=10001
pasv_max_port=10010    # enlarge the range if you have more users

Start ftp:

/etc/init.d/vsftpd start

4 haproxy installation configuration

4.1 Lab Environment

System version: CentOS 7 64-bit

Server role    Server IP
Haproxy        192.168.8.12
Haproxy        192.168.8.13

Perform the following on both servers (192.168.8.12 and 192.168.8.13).

4.2 install haproxy

tar zxvf haproxy-1.7.7.tar.gz
make TARGET=linux2628 PREFIX=/usr/local/haproxy
# "TARGET" specifies the kernel version the build is compiled for; query it with "uname -r" and look up the mapping in the README file.
make install PREFIX=/usr/local/haproxy
groupadd haproxy
useradd -g haproxy haproxy -s /sbin/nologin

4.3 configure haproxy

There are no configuration files under the default installation directory, only the "doc", "sbin" and "share" directories, so create the directory and configuration file manually.

The haproxy configuration file consists of the following five parts: global configuration, defaults configuration, the monitoring page, frontend configuration, and backend configuration.

mkdir -p /usr/local/haproxy/etc
cd /usr/local/haproxy/etc/
vim haproxy.cfg

4.3.1 sample configuration

global
    # Global log definition: logs go to the local rsyslog via facility local0; the default level is info, and up to two log entries can be configured.
    log 127.0.0.1 local0 warning
    # Log levels: [error warning info debug]
    #log 127.0.0.1 local1 info
    # run path
    chroot /usr/local/haproxy
    # path of the PID file
    pidfile /var/run/haproxy.pid
    # maximum concurrent connections per haproxy process, equivalent to the command-line option "-n"; the automatic "ulimit -n" calculation refers to this setting
    maxconn 4096
    # user to run haproxy as (the keyword uid can also be used)
    user haproxy
    # group to run haproxy as (the keyword gid can also be used)
    group haproxy
    # run haproxy in the background
    daemon
    # number of haproxy processes to start; only usable when haproxy runs in daemon mode
    # one process is started by default; because multi-process mode is hard to debug, it is generally used only when a single process cannot open enough file descriptors
    nbproc 1
    # maximum number of file descriptors each process may open; calculated automatically by default, so modifying this option is not recommended
    #ulimit-n 819200
    # debug level; generally only used when starting a single process, disabled in production
    #debug
    # display nothing after startup, same as the command-line parameter "-q"
    #quiet
    # where statistics are saved
    stats socket /usr/local/haproxy/stats

# default configuration
defaults
    # default mode [tcp: layer 4; http: layer 7; health: only returns OK]
    mode http
    # inherit the global log definition
    log global
    # log category: httplog
    #option httplog
    # If the backend servers need to log the client's real IP, add the "X-Forwarded-For" field to HTTP requests.
    # Requests made by haproxy's own health checks, however, should not be logged; "except" excludes 127.0.0.0/8, i.e. haproxy itself.
    #option forwardfor except 127.0.0.0/8
    option forwardfor
    # enable server-side close in HTTP: actively close the HTTP channel after each request even on long connections, so sessions can be reused and every request gets logged
    option httpclose
    # do not log null connections
    option dontlognull
    # when a session with a backend server fails (server failure or other reasons), redispatch it to another healthy server; when the failed server recovers, sessions are directed to it again
    # the "retries" keyword sets how many connection attempts are made before a session is declared failed
    option redispatch
    retries 3
    # when the haproxy load is high, automatically terminate connections that have been queued for a long time
    option abortonclose
    # default HTTP request timeout
    timeout http-request 10s
    # default queue timeout; under high load, backend servers queue the requests coming from haproxy
    timeout queue 1m
    # connection timeout between haproxy and a backend server
    timeout connect 5s
    # inactivity timeout on the client side, once the client and haproxy have finished transferring data
    timeout client 1m
    # inactivity timeout between haproxy and a backend server
    timeout server 1m
    # timeout for a new HTTP request on a kept-alive connection; a short value releases connections quickly and saves resources
    timeout http-keep-alive 10s
    # health-check timeout
    timeout check 10s
    # maximum number of concurrent connections
    maxconn 2000
    # default load balancing algorithm
    #balance source
    #balance leastconn

# statistics page: a combination of frontend and backend; the name of the monitoring group can be customized as needed
listen admin_status
    # statistics page mode
    mode http
    # statistics page port
    bind 0.0.0.0:1080
    # maximum connections for the statistics page
    maxconn 10
    # http log format
    option httplog
    # enable statistics
    stats enable
    # hide the haproxy version on the statistics page
    stats hide-version
    # refresh interval of the statistics page
    stats refresh 30s
    # URL of the statistics page
    stats uri /stats
    # prompt text of the statistics page password box
    stats realm mCloud\ Haproxy
    # user and password of the statistics page; multiple users can be set
    stats auth admin:admin
    # enable/disable backend servers manually; manage nodes through the web UI
    stats admin if TRUE
    # haproxy error pages
    errorfile 400 /usr/local/haproxy/errorfiles/400.http
    errorfile 403 /usr/local/haproxy/errorfiles/403.http
    errorfile 408 /usr/local/haproxy/errorfiles/408.http
    errorfile 500 /usr/local/haproxy/errorfiles/500.http
    errorfile 502 /usr/local/haproxy/errorfiles/502.http
    errorfile 503 /usr/local/haproxy/errorfiles/503.http
    errorfile 504 /usr/local/haproxy/errorfiles/504.http

# monitoring of the haproxy backend servers
listen site_status
    bind 0.0.0.0:1081               # listening port
    mode http                       # layer-7 mode
    log 127.0.0.1 local2 err        # [err warning info debug]
    monitor-uri /site_status        # health-check URL used to test whether the site behind HAProxy is available; returns 200 normally, 503 on failure
    # site-down policy: true when the backend named in nbsrv() has fewer than 1 healthy server
    acl site_dead nbsrv(php_server) lt 1
    acl site_dead nbsrv(html_server) lt 1
    acl site_dead nbsrv(backend_default) lt 1
    monitor fail if site_dead       # return 503 when the policy matches (the online docs say 500; actual tests return 503)
    monitor-net 192.168.4.171/32    # traffic from these addresses is neither logged nor forwarded
    monitor-net 192.168.4.172/32

# frontend with a custom name
frontend HAproxy_Cluster
    # frontend listening port; the form "bind *:80" is recommended, otherwise after an HA failover the vip may not be reachable from other machines
    bind 0.0.0.0:80
    # acl is followed by a rule name; a request whose url ends in .php matches and triggers the php_web rule,
    # and one ending in .css, .jpg, .png, .jpeg, .js or .gif matches and triggers the static_web rule
    #acl static_web path_end .gif .png .jpg .css .js .jpeg
    #acl static_web url_reg /*.(css|jpg|png|jpeg|js|gif)$
    # -i ignores case; a request for a host beginning with www.test.com matches and triggers the dns_name rule
    acl html_web hdr_beg(host) -i www.haproxytest.com
    #acl html_web hdr_beg(host) 10.11.4.152
    # a request whose client IP is x.x.x.x matches and triggers the src_ip rule
    #acl src_ip src x.x.x.x
    # requests matching acl rule php_web are handed to the php_server group; requests matching html_web go to the html_server group
    use_backend php_server if php_web
    use_backend html_server if html_web
    # requests matching none of the rules go to the default backend
    default_backend backend_default

# backend configuration: the php_server and html_server groups
backend php_server
    # roundrobin: weighted round-robin scheduling, recommended when server performance is evenly distributed
    # other load balancing algorithms:
    # - static-rr: also weighted round-robin, but static: weights adjusted at run time are ignored
    # - source: choose the backend server by a hash of the request's source IP
    # - leastconn: not suitable for short-session environments such as HTTP applications
    # - uri: hash the entire URI
    # - url_param: dispatch by a parameter in the URL
    # - hdr(<name>): dispatch by an HTTP header; falls back to roundrobin if the header is absent
    balance roundrobin
    mode http
    # allow inserting the serverid into a cookie; the serverid is defined on the server lines
    cookie SERVERID
    # health checks fetch index.html from the backend servers; other methods exist
    option httpchk GET /index.html
    # backend server definition: maxconn 1024 is the per-server connection limit; cookie 1 sets its serverid to 1; weight is the weight (default 1, maximum 256; 0 removes the server from load balancing)
    # check inter 1500 is the health-check interval; rise 2 means two successes mark the server available; fall 3 means three failures mark it unavailable
    server php1 192.168.4.171:80 maxconn 1024 cookie 1 weight 3 check inter 1500 rise 2 fall 3

backend html_server
    balance source
    mode http
    server html1 192.168.4.172:80 maxconn 1024 cookie 1 weight 3 check inter 1500 rise 2 fall 3

backend backend_default
    balance source
    mode http
    server default1 192.168.4.171:80 maxconn 1024 cookie 1 weight 3 check inter 1500 rise 2 fall 3

4.3.2 production configuration

global
    log 127.0.0.1 local0 warning
    chroot /usr/local/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4096
    user haproxy
    group haproxy
    daemon
    nbproc 1
    stats socket /usr/local/haproxy/stats

defaults
    mode http
    log global
    option forwardfor except 127.0.0.0/8
    option httpclose
    option dontlognull
    option redispatch
    retries 3
    option abortonclose
    timeout http-request 10s
    timeout queue 1m
    timeout connect 5s
    #timeout client 1m
    #timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 200000

listen admin_status
    mode http
    bind 0.0.0.0:1080
    maxconn 10
    option httplog
    stats enable
    stats hide-version
    stats refresh 30s
    stats uri /stats
    stats realm mCloud\ Haproxy
    stats auth admin:123456
    stats admin if TRUE
    errorfile 400 /usr/local/haproxy/errorfiles/400.http
    errorfile 403 /usr/local/haproxy/errorfiles/403.http
    errorfile 408 /usr/local/haproxy/errorfiles/408.http
    errorfile 500 /usr/local/haproxy/errorfiles/500.http
    errorfile 502 /usr/local/haproxy/errorfiles/502.http
    errorfile 503 /usr/local/haproxy/errorfiles/503.http
    errorfile 504 /usr/local/haproxy/errorfiles/504.http

listen site_status
    bind 0.0.0.0:1081
    mode http
    log 127.0.0.1 local2 err
    monitor-uri /site_status
    acl site_dead nbsrv(nginx_proxy1) lt 1
    monitor fail if site_dead
    monitor-net 192.168.8.14/32
    monitor-net 192.168.8.15/32
    monitor-net 192.168.8.20/32
    monitor-net 192.168.8.21/32

frontend nginx_cluster
    mode http
    bind 0.0.0.0:80
    acl nginx_1 hdr_beg(host) -i testweb.com.cn
    acl nginx_2 hdr_beg(host) -i testtomcat.com.cn
    use_backend nginx_proxy1 if nginx_1
    use_backend nginx_proxy1 if nginx_2

backend nginx_proxy1
    mode http
    balance roundrobin
    server nginx1 192.168.8.14:80 maxconn 10240 cookie 1 weight 3 check inter 1500 rise 2 fall 3
    server nginx2 192.168.8.15:80 maxconn 10240 cookie 2 weight 3 check inter 1500 rise 2 fall 3

listen ftp_service
    bind 0.0.0.0:21
    bind 0.0.0.0:10001-10010
    mode tcp
    option tcplog
    balance roundrobin
    server ftp_20 192.168.8.20 weight 2 check port 21 inter 10s rise 1 fall 2
    server ftp_21 192.168.8.21 weight 2 check port 21 inter 10s rise 1 fall 2

Error files:

cp -r /usr/local/src/haproxy-1.7.7/examples/errorfiles/ /usr/local/haproxy/

Log file:

mkdir -p /usr/local/haproxy/log
touch /usr/local/haproxy/log/haproxy.log
ln -s /usr/local/haproxy/log/haproxy.log /var/log/
chown haproxy:haproxy /var/log/haproxy.log

vim /etc/sysconfig/rsyslog

SYSLOGD_OPTIONS="-c 2 -r -m 0"

cd /etc/rsyslog.d/
touch haproxy.conf
chown haproxy:haproxy haproxy.conf
vim haproxy.conf

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# haproxy.log
local0.* /usr/local/haproxy/log/haproxy.log
#local1.* /usr/local/haproxy/log/haproxy.log
local2.* /usr/local/haproxy/log/haproxy.log
& ~

systemctl restart rsyslog.service

# haproxy writes no log itself by default; it relies on rsyslog to collect its logs.
# Note the "& ~" at the end of the file: without it, log entries are written to the messages file as well as the specified file.

chown -R haproxy:haproxy /usr/local/haproxy/
mkdir -p /etc/haproxy
ln -s /usr/local/haproxy/etc/haproxy.cfg /etc/haproxy/
chown -R haproxy:haproxy /etc/haproxy
cp /usr/local/src/haproxy-1.7.7/examples/haproxy.init /etc/rc.d/init.d/haproxy
chown haproxy:haproxy /etc/rc.d/init.d/haproxy

# /etc/rc.d/init.d/haproxy will report an error:
# /etc/init.d/haproxy: line 26: [: =: unary operator expected

Fix line 26 of the init script:

vi /etc/rc.d/init.d/haproxy

[[ ${NETWORKING} = "no" ]] && exit 0

chmod +x /etc/rc.d/init.d/haproxy
ln -s /usr/local/haproxy/sbin/haproxy /usr/sbin/
chown haproxy:haproxy /usr/sbin/haproxy

Start haproxy:

/etc/init.d/haproxy start

5 keepalived installation configuration

5.1 Lab Environment

Server role    Server IP
VIP1           192.168.8.100
LVS1           192.168.8.10
LVS2           192.168.8.11

Perform the following on both servers (192.168.8.10 and 192.168.8.11).

5.2 install keepalived

yum install -y keepalived ipvsadm

5.3 configure keepalived

5.3.1 sample configuration

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc    # alarm mail addresses, one per line; sending requires the local sendmail service
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc   # sender address of alarm mails
    smtp_server 127.0.0.1       # smtp server address
    smtp_connect_timeout 30     # timeout for connecting to the smtp server
    router_id LVS_DEVEL         # identity of this keepalived server, shown in the subject of alarm mails
}

vrrp_instance VI_1 {
    state BACKUP            # role of keepalived: MASTER marks the primary, BACKUP the standby; a recovered master would normally grab the VIP back, so BACKUP (with nopreempt) is used here to prevent resource grabbing
    interface eth0          # interface monitored by HA
    virtual_router_id 51    # virtual router id (a number); MASTER and BACKUP in the same vrrp_instance must use the same id
    priority 100            # priority: the larger the number, the higher the priority; in one vrrp_instance the MASTER's priority must exceed the BACKUP's
    advert_int 1            # interval in seconds between synchronization checks of the MASTER and BACKUP load balancers
    nopreempt               # no preemption: a node that comes back to life does not take the VIP back
    authentication {        # authentication type and password; MASTER and BACKUP must use the same password to communicate
        auth_type PASS      # verification type: PASS or AH
        auth_pass 1111      # authentication password
    }
    virtual_ipaddress {     # virtual IP addresses, one per line; several may be set
        192.168.8.97
    }
}

virtual_server 192.168.8.97 3306 {   # virtual server: virtual IP and service port, separated by a space
    delay_loop 6             # health-check interval in seconds
    lb_algo rr               # scheduling algorithm; rr is round-robin
    lb_kind DR               # LVS forwarding mode: NAT, TUN or DR
    nat_mask 255.255.255.0
    persistence_timeout 50   # session persistence in seconds; very useful for dynamic pages and a good solution for session sharing in a cluster. A user's requests keep going to one node until the timeout expires. Note that it is an inactivity timeout: if the user performs no action for 50 seconds, the next request may be dispatched to another node, but a user who keeps operating the page is not subject to the limit.
    protocol TCP             # forwarding protocol: TCP or UDP
    real_server 192.168.8.90 3306 {   # service node 1: real IP and port, separated by a space
        weight 1             # node weight: the larger the number, the more load the node takes; giving high-performance servers higher weights spreads the load and uses resources sensibly
        TCP_CHECK {          # realserver health-check settings (in seconds)
            connect_timeout 3        # no-response timeout
            nb_get_retry 3           # retry count
            delay_before_retry 3     # retry interval
            connect_port 3306        # health-check port
        }
    }
    real_server 192.168.8.91 3306 {   # service node 2
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}

5.3.2 production configuration

On LVS-DR-Master, configure as follows (operate on 192.168.8.10):

! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.8.100
    }
}

virtual_server 192.168.8.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 192.168.8.12 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.8.13 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

virtual_server 192.168.8.100 21 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 192.168.8.12 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 21
        }
    }
    real_server 192.168.8.13 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 21
        }
    }
}

On LVS-DR-Backup, configure as follows (operate on 192.168.8.11):

! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.8.100
    }
}

virtual_server 192.168.8.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 192.168.8.12 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.8.13 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

virtual_server 192.168.8.100 21 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 192.168.8.12 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 21
        }
    }
    real_server 192.168.8.13 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 21
        }
    }
}

5.4 Virtual IP script

Operate on the Haproxy servers (192.168.8.12 and 192.168.8.13):

chmod +x /etc/rc.d/init.d/functions
vi /usr/local/bin/realserver.sh

#!/bin/bash
# description: Config realserver
VIP=192.168.8.100
. /etc/rc.d/init.d/functions

case "$1" in
start)
    # Bind the VIP to a loopback alias and tighten ARP behaviour, so this host
    # holds the VIP silently while the director keeps answering ARP for it.
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    /sbin/route add -host $VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP > /dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0

Run the script:

/usr/local/bin/realserver.sh start

5.5 start keepalived

Operate on LVS-DR-Master (192.168.8.10) and LVS-DR-Backup (192.168.8.11):

/etc/init.d/keepalived start

Use the ipvsadm -L command to check whether the VIP has been successfully mapped to the back-end services. If not, use the /var/log/messages log to locate the reason keepalived failed to start.

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  test1:ftp rr persistent 50
  -> 192.168.8.12:ftp             Route   1      0          0
  -> 192.168.8.13:ftp             Route   1      0          0
TCP  test1:http rr persistent 50
  -> 192.168.8.12:http            Route   1      0          0
  -> 192.168.8.13:http            Route   1      0          0
