This article explains how to deploy a KVM dual-node high-availability cluster backed by NFS shared storage. The editor finds the procedure very practical and shares it here as a reference; follow along to have a look.
The Network File System, also known as NFS, is one of the file systems supported by FreeBSD. NFS allows a system to share directories and files with others on the network; by using NFS, users and programs can access files on remote systems as if they were local.
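To see this transparency in practice, here is a minimal sketch of mounting a remote export by hand; it assumes a server named kvm-nfs exporting /kvm-hosts, as configured below.

showmount -e kvm-nfs                  # list the exports offered by the server
mount -t nfs kvm-nfs:/kvm-hosts /mnt  # mount the export temporarily
df -h /mnt                            # confirm the remote filesystem is visible
umount /mnt                           # detach when done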
Purpose: running virtual machines can be migrated online smoothly, without shutting the guests down.
High-availability architecture: pacemaker + corosync, managed by pcs.
System environment: three machines, all running the latest CentOS 7.4.
Required components: nfs, pcs, pacemaker, corosync, libvirtd, qemu, qemu-img.
Constraint relationship: NFS >> VirtualDomain.

[Figure: KVM dual-node high-availability cluster architecture based on NFS shared storage]

kvm host node software installation:

# Virtualization software installation
yum groups install -y "Virtualization Platform"
yum groups install -y "Virtualization Hypervisor"
yum groups install -y "Virtualization Tools"
yum groups install -y "Virtualization Client"
# Cluster and accompanying software installation
yum install bash-completion ntpdate tigervnc-server nfs-utils -y
yum install pacemaker corosync pcs psmisc policycoreutils-python fence-agents-all -y
# Upgrade the standard kvm components to the ev version (optional)
yum install centos-release-qemu-ev -y
yum install qemu-kvm-ev -y
# Or run everything as a single command:
yum groups install -y "Virtualization Platform" && yum groups install -y "Virtualization Hypervisor" && yum groups install -y "Virtualization Tools" && yum groups install -y "Virtualization Client" && yum install bash-completion ntpdate tigervnc-server centos-release-qemu-ev nfs-utils -y && yum install pacemaker corosync pcs psmisc policycoreutils-python fence-agents-all qemu-kvm-ev -y && yum update -y && reboot

Preparation phase (perform on every kvm node):

1: Set each hostname and the hosts file (entries below); leave selinux and the firewalld firewall enabled.
vim /etc/hosts
192.168.1.31 kvm-pt1
192.168.1.32 kvm-pt2
172.168.1.33 kvm-nfs
10.0.0.31 node1
10.0.0.32 node2

2: ssh key mutual trust
ssh-keygen -t rsa -P ''
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1   # passwordless login to yourself
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2   # passwordless login to node2 (set up in both directions)

3: Time zone
cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

4: Scheduled time synchronization
yum install ntpdate -y
crontab -e
30 * * * * /usr/sbin/ntpdate time.windows.com &> /dev/null

5: Firewall
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --zone=trusted --add-source=10.0.0.0/24 --permanent
firewall-cmd --zone=trusted --add-source=192.168.1.0/24 --permanent
firewall-cmd --zone=trusted --add-source=172.168.1.0/24 --permanent
firewall-cmd --reload

6: Create the /kvm-hosts directory on all three machines (kvm-pt1, kvm-pt2 and kvm-nfs)
mkdir /kvm-hosts
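Before continuing, it is worth confirming that the preparation steps took effect; a minimal check from kvm-pt1, using only the hostnames and services configured above:

ssh node2 hostname         # should log in without a password and print kvm-pt2
getent hosts kvm-nfs       # the hosts-file entry should resolve
firewall-cmd --list-all    # the high-availability service should be listed
date                       # compare with the other node; the clocks should roughly agree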
First: Configure the NFS node
Share the /kvm-hosts directory and have kvm-pt1 and kvm-pt2 mount it at boot.
yum -y install nfs-utils rpcbind
# Set the shared directory
vim /etc/exports
/kvm-hosts *(rw,async,no_root_squash)
# Start the nfs service and enable it at boot
systemctl start nfs
systemctl start rpcbind
systemctl enable nfs
systemctl enable rpcbind
# On the kvm-nfs host, open the nfs service ports in firewalld
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --reload
firewall-cmd --list-all    # check the rules now active in firewalld
# On kvm-pt1 and kvm-pt2 respectively, test that the export on kvm-nfs can be mounted
# (on kvm-pt1 and kvm-pt2) edit this file so the export is mounted automatically at boot
vim /etc/fstab
kvm-nfs:/kvm-hosts /kvm-hosts nfs _netdev 0 0

Second: Configure the pcs daemon on the kvm host nodes
systemctl start pcsd
systemctl enable pcsd
systemctl status pcsd.service    # view status
echo "7845" | passwd --stdin hacluster
# Set the hacluster account password (the user is created by default when the cluster software
# is installed, but its password is disabled; the password must be identical on all nodes)
pcs cluster auth node1 node2 -u hacluster -p 7845
pcs cluster setup --name kvm-ha-cluster node1 node2
pcs cluster start --all
pcs cluster enable --all

Third: Create a virtual machine on kvm-pt1, placing its disk file in the nfs-backed /kvm-hosts directory
# Create the virtual machine disk file first (kvm-pt1 has already mounted the nfs /kvm-hosts export at /kvm-hosts)
qemu-img create -f qcow2 /kvm-hosts/web01.qcow2 10G
# Lift the selinux restriction on nfs-backed kvm guests (perform on both kvm-pt1 and kvm-pt2)
setsebool -P virt_use_nfs 1
# Open the vnc service on the firewall, then install the guest with virt-install, including at least:
# --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux --os-variant=rhel7

Fourth: Virtual machine migration test (for online migration on shared storage, it is best to keep both the xml files and the disk files on the shared storage)
# Open the firewall ports (rules below)
# TCP: ports 2224, 3121, 21064
# UDP: port 5405
# DLM (if using a DLM lock manager such as clvm/GFS2): port 21064
firewall-cmd --permanent --add-port=16509/tcp
firewall-cmd --permanent --add-port=49152-49215/tcp
firewall-cmd --reload
firewall-cmd --list-all
# Use the virsh migrate command on kvm-pt1 to migrate online (and migrate back from kvm-pt2)
virsh migrate web01 qemu+ssh://root@node2/system --live --unsafe --persistent --undefinesource
# On kvm-pt1, export a copy of the virtual machine's xml file to shared storage, named correctly
virsh dumpxml web01 > /kvm-hosts/web01.xml
# On kvm-pt1, undefine the virtual machine you just created
virsh undefine web01
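To confirm that the manual migration and the xml export worked, a quick check from kvm-pt1 (using the ssh trust set up in the preparation phase; all names come from the steps above):

virsh list --all              # after the round trip, web01 should be running locally again
ssh node2 virsh list --all    # and be absent from node2
ls -l /kvm-hosts/web01.xml    # the exported definition should now sit on shared storage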
Fifth: Configure STONITH
This experimental environment is virtual and has no fencing device, so STONITH has to be turned off here before the experiment can continue.
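For reference, on physical hardware you would create fence devices rather than disable STONITH; a hypothetical sketch using the fence_ipmilan agent (the IPMI addresses and credentials below are placeholders, not values from this article):

# One IPMI fence device per node; ipaddr/login/passwd are placeholders
pcs stonith create fence-node1 fence_ipmilan pcmk_host_list="node1" ipaddr="10.0.0.101" login="admin" passwd="secret" lanplus=1
pcs stonith create fence-node2 fence_ipmilan pcmk_host_list="node2" ipaddr="10.0.0.102" login="admin" passwd="secret" lanplus=1
pcs property set stonith-enabled=true

Since this lab has no such device, STONITH is disabled instead: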
pcs property set stonith-enabled=false

Sixth: Add virtual machine resources to the cluster
Each virtual machine needs its own resource, which hands control of the virtual machine over to pcs.
pcs resource create web01_res VirtualDomain \
  hypervisor="qemu:///system" \
  config="/kvm-hosts/web01.xml" \
  migration_transport="ssh" \
  meta allow-migrate="true"
# meta allow-migrate="true" determines the migration mode
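Once the resource is created, the cluster should report the guest as started on one of the nodes; a quick check (the exact output layout varies with the pcs version):

pcs status                     # web01_res should show as Started on node1 or node2
pcs resource show web01_res    # print the parameters configured above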
Seventh: Migration testing
Testing shows that with the standard kvm packages the guest is shut down on one node and started again on the other, while after upgrading with yum install qemu-kvm-ev the migration is smooth, with no interruption of service.
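A simple way to see the difference is to ping the guest while the cluster moves it; a minimal sketch, assuming the guest answers at 192.168.1.100 (a placeholder address, not from this article):

ping 192.168.1.100                    # in one terminal: watch for dropped packets
pcs resource move web01_res node2     # in another terminal: move the guest
# With qemu-kvm-ev the ping continues uninterrupted; with the stock packages it pauses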
# Move a resource
pcs resource move web01_res    # warns that the move has a cost: it creates an id, a location
                               # constraint with a negative score, so the resource can never migrate back
pcs resource move web01_res node2
# Put a node in standby
pcs cluster standby node2
# Or cancel standby
pcs cluster unstandby node2
# Stop the cluster on a node
pcs cluster stop
pcs resource describe ocf:heartbeat:VirtualDomain    # see how the VirtualDomain resource options are written

Eighth: If a negative-score location constraint appears during migration, the resource may not be able to start again
pcs constraint --full    # view all the location constraints that have been created
# Delete the location constraint; after deletion the virtual machine can be started
pcs constraint remove cli-ban-web01_res-on-node2

Thank you for reading! This concludes the article on "how to deploy a KVM dual-node high-availability cluster through NFS shared storage". I hope the content above is of some help and lets you learn more. If you found the article good, share it so more people can see it!