How to configure a KVM dual-node high availability cluster based on dual-host DRBD

2025-04-04 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report --

This article mainly shows how to configure a KVM dual-node high-availability cluster based on dual-primary DRBD. The content is easy to understand and clearly organized; I hope it helps you resolve your doubts. Follow along below to study and learn.

Experimental purpose: a KVM high-availability platform based on local storage, with smooth (live) migration of virtual machines.
High-availability architecture: pacemaker + corosync, managed by pcs.
Required components: DRBD, DLM, gfs2, clvm, pcs, pacemaker, corosync, libvirtd, qemu, qemu-img.
System environment: two KVM nodes running the latest CentOS 7.4; each node mounts a 40G sdb disk.

Experimental environment: the KVM nodes run as guests on an ESXi 6.5 host machine (see figure).

Software installation (two-node operation)

# Install the DRBD management software (first add the key and the elrepo source):
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum install kmod-drbd84 drbd84-utils -y

# Virtualization software installation:
yum groups install -y "Virtualization Platform"
yum groups install -y "Virtualization Hypervisor"
yum groups install -y "Virtualization Tools"
yum groups install -y "Virtualization Client"

# Cluster and supporting software installation:
yum install bash-completion ntpdate tigervnc-server -y
yum install pacemaker corosync pcs psmisc policycoreutils-python fence-agents-all -y

# gfs2, dlm and clvm software:
yum install dlm lvm2-cluster gfs2-utils -y

# Upgrade the standard kvm components to the ev version (optional):
yum install centos-release-qemu-ev -y
yum install qemu-kvm-ev -y
# In testing, creating a virtual machine got stuck until these were installed.

# Or run everything with one command:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm && yum install kmod-drbd84 drbd84-utils -y && yum groups install -y "Virtualization Platform" && yum groups install -y "Virtualization Hypervisor" && yum groups install -y "Virtualization Tools" && yum groups install -y "Virtualization Client" && yum install bash-completion ntpdate tigervnc-server centos-release-qemu-ev -y && yum install pacemaker corosync pcs psmisc policycoreutils-python fence-agents-all -y && yum install dlm lvm2-cluster gfs2-utils -y && reboot

Preparation stage

1: Hostname and hosts resolution:
10.0.0.31 node1
10.0.0.32 node2
2: SSH key mutual trust:
ssh-keygen -t rsa -P ''
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1    # password-free login to itself
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2    # password-free login to node2 (do this in both directions)
3: Each node mounts a 40G local disk, sdb.
4: Configure the time zone and clock:
cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
crontab -e    # add: */30 * * * * /usr/sbin/ntpdate time.windows.com &>/dev/null
5: Create a directory on all nodes:
mkdir /kvm-hosts
6: Configure the firewalld firewall; fully open the private corosync and drbd network segments:
firewall-cmd --zone=trusted --add-source=10.0.0.0/24 --permanent
firewall-cmd --zone=trusted --add-source=172.168.1.0/24 --permanent
firewall-cmd --reload
7: Configure selinux:
yum install -y policycoreutils-python    # installing this package provides the semanage command
semanage permissive -a drbd_t
8: Disk preparation. Create an lv on the local 40G disk (note that the disk sizes should match; do this on both nodes, and it is recommended to give the lv the same name on both):
fdisk /dev/sdb
partprobe
pvcreate /dev/sdb1
vgcreate vgdrbd0 /dev/sdb1
lvcreate -n lvdrbd0 -L 40G vgdrbd0
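Steps 1 and 4 above can be sketched as a small script. This is a minimal, hedged example: it writes to files under /tmp so it can run anywhere; on the real nodes the targets would be /etc/hosts and the root crontab, and the /tmp paths are illustrative only.

```shell
#!/bin/sh
# Sketch of preparation steps 1 and 4. Writes to /tmp so it is safe to
# run anywhere; on the real nodes the targets would be /etc/hosts and
# the root crontab.

# Step 1: name resolution for both cluster nodes
cat > /tmp/hosts.demo <<'EOF'
10.0.0.31 node1
10.0.0.32 node2
EOF

# Step 4: sync the clock every 30 minutes via ntpdate
echo '*/30 * * * * /usr/sbin/ntpdate time.windows.com &>/dev/null' > /tmp/cron.demo

# Show what was written
cat /tmp/hosts.demo /tmp/cron.demo
```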

First: configure DRBD (two-node operation)

# Modify the global configuration file:
vi /etc/drbd.d/global_common.conf
# change "usage-count yes;" to "usage-count no;" -- this is the usage counter the
# drbd team uses to collect, over the Internet, how many installations are created.

# Create the resource configuration:
vi /etc/drbd.d/r0.res
resource r0 {
    protocol C;
    meta-disk internal;
    device /dev/drbd0;
    disk /dev/vgdrbd0/lvdrbd0;
    syncer { verify-alg sha1; }
    on node1 { address 172.168.1.41:7789; }
    on node2 { address 172.168.1.42:7789; }
    # A single-primary drbd does not need the parameters below; for two
    # primaries they must be configured here:
    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    disk { fencing resource-and-stonith; }
    handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
}

# Initialize:
drbdadm create-md r0
cat /proc/drbd    # status
modprobe drbd     # load the drbd module
drbdadm up r0
cat /proc/drbd    # now you can see the status

# Synchronize (on one of the nodes, make it primary and check whether it
# synchronizes over the intended network card):
drbdadm primary r0 --force
ifstat            # view network card traffic
cat /proc/drbd    # check synchronization progress

# Set drbd to start at boot:
echo "drbdadm up r0" >> /etc/rc.local
chmod +x /etc/rc.d/rc.local
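Before forcing a node primary it is worth confirming the state in /proc/drbd. Below is a minimal sketch of parsing out the connection state and roles; it uses a saved sample of the /proc/drbd format (the /tmp path is an assumption for illustration; on a real node you would read /proc/drbd directly).

```shell
#!/bin/sh
# Extract the connection state (cs:) and roles (ro:) from drbd status
# output. Uses a saved sample so the script can run anywhere; on a real
# node set DRBD_STATUS=/proc/drbd.
DRBD_STATUS=/tmp/proc-drbd.sample
cat > "$DRBD_STATUS" <<'EOF'
version: 8.4.10-1 (api:1/proto:86-101)
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
EOF

# Take the token right after "cs:" and "ro:" on the resource line
CS=$(awk -F'cs:' '/cs:/ {split($2, a, " "); print a[1]}' "$DRBD_STATUS")
RO=$(awk -F'ro:' '/ro:/ {split($2, a, " "); print a[1]}' "$DRBD_STATUS")
echo "connection=$CS roles=$RO"
```

With the sample above this prints `connection=Connected roles=Secondary/Secondary`; before the forced promotion you want Connected and Secondary/Secondary with both data states UpToDate.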

Second: create a cluster

systemctl start pcsd
systemctl enable pcsd
echo "7845" | passwd --stdin hacluster
# The three steps above are run on both nodes; from here only one node needs to operate:
pcs cluster auth node1 node2 -u hacluster -p 7845
pcs cluster setup --name kvm-ha-cluster node1 node2    # create a cluster named kvm-ha-cluster (gfs2 will need this name later)
pcs cluster start --all
pcs cluster enable --all    # start the cluster on boot on all nodes (do not make the cluster self-start at boot in a production environment)

Third: configure STONITH (since the nodes' hosting platform is ESXi, fence_vmware_soap is used here)

# Check whether fence_vmware_soap is installed on both nodes:
pcs stonith list | grep fence_vmware_soap

# On all nodes, check whether you can communicate with the ESXi host:
[root@node1 ~]# fence_vmware_soap -a 192.168.5.1 -z --ssl-insecure --action list --username="root" --password="tianyu@esxi"
node1,564d59df-c34e-78e9-87d2-6551bdf96b14
node2,564d585f-f120-4be2-0ad4-0e1964d4b7b9

# Check whether fence_vmware_soap can actually control the ESXi host;
# operate on a virtual machine (e.g. restart virtual machine node2):
[root@node1 ~]# fence_vmware_soap -a 192.168.5.1 -z --ssl-insecure -l root -p tianyu@esxi --plug="node2" --action=reboot
Success: Rebooted

# Explanation: -a is the management address of ESXi; -z connects over SSL on
# port 443; -l is the administrative user name of ESXi; -p is the administrative
# password; --plug is the name of the virtual machine (the UUID can be used if
# the name is not unique); --action is the action to execute (reboot | off | on).

# Configure STONITH:
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith create MyVMwareFence fence_vmware_soap ipaddr=192.168.5.1 ipport=443 ssl_insecure=1 inet4_only=1 login="root" passwd="tianyu@esxi" action=reboot pcmk_host_map="node1:564d59df-c34e-78e9-87d2-6551bdf96b14;node2:564d585f-f120-4be2-0ad4-0e1964d4b7b9" pcmk_host_check=static-list pcmk_host_list="node1,node2" power_wait=3 op monitor interval=60s
pcs -f stonith_cfg property set stonith-enabled=true
pcs cluster cib-push stonith_cfg    # apply the update

# Note 1: the names in pcmk_host_map are the virtual machine names displayed on
# the ESXi side, not the system-level hostnames of the kvm nodes.
# Note 2: pcmk_host_map uses the format "virtual machine name:UUID;virtual machine name:UUID".

# This is how to view pcs's stonith settings for fence_vmware_soap:
pcs stonith describe fence_vmware_soap

# View the stonith resource just configured:
[root@node1 ~]# pcs stonith show --full
 Resource: MyVMwareFence (class=stonith type=fence_vmware_soap)
  Attributes: action=reboot inet4_only=1 ipaddr=192.168.5.1 ipport=443 login=root passwd=tianyu@esxi pcmk_host_check=static-list pcmk_host_list=node1,node2 pcmk_host_map=node1:564df454-4553-2940-fac6-085387383a62;node2:564def17-cb33-c0fc-3e3f-1ad408818d62 power_wait=3 ssl_insecure=1
  Operations: monitor interval=60s (MyVMwareFence-monitor-interval-60s)

# View the action the just-configured stonith will perform when a split-brain occurs:
[root@node1 ~]# pcs property --all | grep stonith-action
 stonith-action: reboot

# Test whether the STONITH setting is correct and takes effect:
pcs status                      # first check that the stonith resource MyVMwareFence has started on a node, then verify:
stonith_admin --reboot node2    # restart the node2 node; if it reboots, verification succeeded
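The pcmk_host_map format from the notes above is easy to get wrong, so here is a small sketch of how a name-to-UUID lookup against such a map works. The UUIDs are the ones listed by fence_vmware_soap earlier in this section; the parsing itself is just an illustration, not something pcs requires you to do.

```shell
#!/bin/sh
# pcmk_host_map pairs each cluster node name with the ESXi VM UUID,
# joined by ':' and separated by ';'.
MAP="node1:564d59df-c34e-78e9-87d2-6551bdf96b14;node2:564d585f-f120-4be2-0ad4-0e1964d4b7b9"

# Look up the UUID that would be used to fence node2:
UUID=$(echo "$MAP" | tr ';' '\n' | awk -F: '$1=="node2" {print $2}')
echo "$UUID"
```

With the map above this prints `564d585f-f120-4be2-0ad4-0e1964d4b7b9`, i.e. the plug the fence agent would reboot for node2.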

Fourth: configure DLM

pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# Check whether dlm started:
pcs status
systemctl status pacemaker

Fifth: add DRBD resources to the cluster

# First of all, make sure the states of both nodes are Secondary and the data state is UpToDate:
[root@node1 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

# If the drbd status is currently Primary/Secondary, on the Primary side do:
drbdadm down r0
drbdadm up r0
cat /proc/drbd    # then check again

# Add the resource (this will change the drbd status of both nodes to Primary/Primary):
pcs cluster cib drbd_cfg
pcs -f drbd_cfg resource create VMdata ocf:linbit:drbd drbd_resource=r0 op monitor interval=60s
pcs -f drbd_cfg resource master VMdataclone VMdata master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs -f drbd_cfg resource show    # check the resources
pcs cluster cib-push drbd_cfg    # commit

# View the status on both sides of the drbd; the result should now be Primary/Primary:
[root@node1 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by mockbuild@, 2017-09-15 14:23:22
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:0 dw:0 dr:912 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
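Because the switch to dual primary happens asynchronously after the cib push, a short poll loop is a handy way to wait for it. A minimal sketch, using a saved sample of the status line; on a real node you would point STATUS_FILE at /proc/drbd and allow more retries (the /tmp path is illustrative only).

```shell
#!/bin/sh
# Poll until both sides report Primary/Primary. Reads a saved sample so
# it can run anywhere; on a real node use STATUS_FILE=/proc/drbd.
STATUS_FILE=/tmp/proc-drbd.primary
cat > "$STATUS_FILE" <<'EOF'
 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
EOF

for i in 1 2 3; do
    if grep -q 'ro:Primary/Primary' "$STATUS_FILE"; then
        echo "dual primary reached"
        break
    fi
    sleep 1    # on a real node, keep polling /proc/drbd
done
```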

Sixth: create a CLVM and configure constraints

# Set lvm's working mode to cluster mode (two-node operation):
lvmconf --enable-cluster
reboot

# Add the CLVM resource to the cluster:
pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
systemctl status pacemaker    # viewing this, you will find that clvmd has started

# Configure constraints:
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone
pcs constraint order promote VMdataclone then start clvmd-clone
pcs constraint colocation add clvmd-clone with VMdataclone

# Verify by viewing the constraints:
pcs constraint

Seventh: create a LV for the cluster

# Depending on the scenario, you may need to set the filter attribute of lvm so
# that lvm does not see duplicate data (two-node operation).
# On one of the nodes:
[root@node1 ~]# lvscan
  ACTIVE   '/dev/vgdrbd0/lvdrbd0' [5.00 GiB] inherit
  ACTIVE   '/dev/cl/swap' [2.00 GiB] inherit
  ACTIVE   '/dev/cl/root' [28.99 GiB] inherit
pvcreate /dev/drbd0
pvscan    # an error is reported

# (two-node operation) edit /etc/lvm/lvm.conf, find "filter" and modify it to:
filter = [ "a|/dev/sd*|", "a|/dev/drbd*|", "r|.*|" ]
# a: accept, r: reject; sd* is the local disk and drbd* is the device DRBD created.
# Modify it according to your own environment -- yours may be vd*.

pvscan     # check again: no error this time
vgscan -v  # refresh lvm on all nodes

# Create the lvm on one of the nodes:
pvcreate /dev/drbd0
partprobe ; multipath -r
vgcreate vgvm0 /dev/drbd0
lvcreate -n lvvm0 -l 100%FREE vgvm0
lvscan
[root@node1 ~]# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  cl    1   2   0 wz--n-
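lvm evaluates the filter patterns in order and the first match wins ("a" accepts, "r" rejects everything else). The sketch below mimics that first-match logic with shell globs as a rough approximation of lvm's regex patterns, to show which devices the filter above would accept.

```shell
#!/bin/sh
# Approximate the filter = [ "a|/dev/sd*|", "a|/dev/drbd*|", "r|.*|" ]
# evaluation: patterns are tried in order, first match wins.
classify() {
    case "$1" in
        /dev/sd*)   echo accept ;;   # "a|/dev/sd*|"   local disks
        /dev/drbd*) echo accept ;;   # "a|/dev/drbd*|" DRBD devices
        *)          echo reject ;;   # "r|.*|" catches everything else
    esac
}

classify /dev/sdb1     # local disk partition
classify /dev/drbd0    # the DRBD device
classify /dev/vda      # rejected: add a vd* rule if you use virtio disks
```

Running this prints `accept`, `accept`, `reject`: /dev/vda would be hidden from lvm unless the filter is extended with a vd* pattern, which is exactly the "yours may be vd*" caveat above.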
