This article explains in detail how to deploy a KVM high-availability cluster based on iSCSI shared storage. It is quite practical, so it is shared here for reference; hopefully you will get something out of it.
iSCSI (Internet SCSI) is an IETF standard for mapping SCSI block commands onto Ethernet packets. Fundamentally, it is a storage technology built on the idea of IP storage: it combines the SCSI interface technology widely used in the storage industry with IP networking, making it possible to build a SAN over an ordinary IP network.
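For a concrete sense of what this means on the client (initiator) side, the short sketch below discovers and logs in to a target and then checks that the exported LUNs show up as ordinary local block devices. The portal address 192.0.2.10 is a placeholder; the same iscsiadm commands reappear later in the walkthrough with the real addresses.
# discover the targets exported by a portal (placeholder address)
iscsiadm --mode discovery --type sendtargets --portal 192.0.2.10
# log in to all discovered targets
iscsiadm -m node -L all
# the remote LUNs now appear as normal SCSI block devices
iscsiadm -m session -P 3 | grep "Attached scsi disk"
lsblk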
Purpose: running virtual machines can be migrated online smoothly, without interrupting the virtual machines.
High-availability architecture: pacemaker + corosync, managed with pcs.
System environment: three machines, all running the latest CentOS 7.4.
Required components: DLM, gfs2, clvm, pcs, pacemaker, corosync, libvirtd, qemu, qemu-img.
Constraint relationships: DLM >> CLVM >> GFS2 file system >> VirtualDomain.

KVM high-availability cluster configuration based on iSCSI shared storage

kvm host node software installation
# virtualization software installation
yum groups install -y "Virtualization Platform"
yum groups install -y "Virtualization Hypervisor"
yum groups install -y "Virtualization Tools"
yum groups install -y "Virtualization Client"
# cluster and accompanying software installation
yum install bash-completion ntpdate tigervnc-server iscsi-initiator-utils -y
yum install pacemaker corosync pcs psmisc policycoreutils-python fence-agents-all -y
yum install dlm lvm2-cluster gfs2-utils -y
# upgrade the stock kvm components to the ev version (optional)
yum install centos-release-qemu-ev -y
yum install qemu-kvm-ev -y    # in testing, virtual machines got stuck after this was installed
# or run everything in one go:
yum groups install -y "Virtualization Platform" && yum groups install -y "Virtualization Hypervisor" && yum groups install -y "Virtualization Tools" && yum groups install -y "Virtualization Client" && yum install centos-release-qemu-ev tigervnc-server iscsi-initiator-utils vim pacemaker corosync pcs psmisc policycoreutils-python fence-agents-all dlm lvm2-cluster gfs2-utils bash-completion -y && yum update -y && reboot

Preparation stage (on every kvm node):
1: hosts file (vi /etc/hosts)
192.168.1.31 kvm-pt1
192.168.1.32 kvm-pt2
172.168.1.33 san
10.0.0.31 node1
10.0.0.32 node2
2: ssh key mutual trust
ssh-keygen -t rsa -P ''
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1    # passwordless login to itself
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2    # passwordless login to node2 (set up in both directions)
3: time zone
cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
4: scheduled time synchronization
yum install ntpdate -y
# crontab entry:
30 * * * * /usr/sbin/ntpdate time.windows.com &> /dev/null
5: firewall
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --zone=trusted --add-source=10.0.0.0/24 --permanent
firewall-cmd --zone=trusted --add-source=192.168.1.0/24 --permanent
firewall-cmd --zone=trusted --add-source=172.168.1.0/24 --permanent
firewall-cmd --reload
6: create the shared directory on all kvm host nodes
mkdir /kvm-hosts

First: configure the pcs daemon
systemctl start pcsd
systemctl enable pcsd
systemctl status pcsd.service    # check status
echo "7845" | passwd --stdin hacluster
pcs cluster auth node1 node2 -u hacluster -p 7845
pcs cluster setup --name kvm-ha-cluster node1 node2    # create a cluster named kvm-ha-cluster; gfs2 needs this name later
pcs cluster start --all
pcs cluster enable --all    # start all cluster nodes at boot (do not enable cluster autostart in a production environment)
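At this point it is worth confirming that corosync and pacemaker really formed a two-node cluster before layering storage on top. A minimal verification sketch using standard pcs/corosync commands (not part of the original walkthrough):
pcs status              # both nodes should be listed as Online, with no resources configured yet
pcs status corosync     # corosync membership as seen by pcs
corosync-cfgtool -s     # ring status of the local corosync node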
Second: configure the storage node (san) and mount the iSCSI devices
The target is built with Linux-IO (LIO) and shares two disks, sdb and sdc (see https://boke.wsfnk.com/archives/345.html for the configuration process).
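The referenced post is not reproduced here, so below is only a rough, hedged targetcli sketch of what the LIO side might look like. The target IQN, backstore names and device path are placeholder assumptions; only the initiator IQNs (iqn.1994-05.com.redhat:node1/node2) come from this article.
# run on the storage node (san)
targetcli /backstores/block create name=vmstore dev=/dev/sdb     # the 42G disk exported for VM storage (device path is an assumption)
targetcli /backstores/ramdisk create name=fencelun size=1M       # small memory-backed LUN used only for fence_scsi
targetcli /iscsi create iqn.2017-11.com.example:san              # placeholder target IQN
targetcli /iscsi/iqn.2017-11.com.example:san/tpg1/luns create /backstores/block/vmstore
targetcli /iscsi/iqn.2017-11.com.example:san/tpg1/luns create /backstores/ramdisk/fencelun
targetcli /iscsi/iqn.2017-11.com.example:san/tpg1/acls create iqn.1994-05.com.redhat:node1
targetcli /iscsi/iqn.2017-11.com.example:san/tpg1/acls create iqn.1994-05.com.redhat:node2
targetcli saveconfig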
sdb (42G): a standalone disk, used to create a cluster-wide LVM volume for virtual machine storage.
sdc (1M): a memory-backed device, used as the STONITH device.
# a shared disk is used as the STONITH device here; alternatives such as iLO3 or IPMI can of course be used instead
# change the iSCSI initiator name on each kvm host to something recognizable and meaningful (on every kvm host node)
[root@kvm-pt1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:node1
# discover and log in
iscsiadm --mode discovery --type sendtargets --portal 172.168.1.33
iscsiadm -m node -L all

Third: configure the STONITH fencing device, the DLM distributed lock manager and the GFS2 file system
# configure the STONITH fencing device (disk), on any one node (sdb is used for storage, sdc is used as the fencing device)
[root@kvm-pt1 ~]# ll /dev/disk/by-id/ | grep sd
lrwxrwxrwx. 1 root root 10 November 28 15:24 lvm-pv-uuid-wOhqpz-ze94-64Rc-U2ME-STdU-4NUz-AOJ5B3 -> ../../sda2
lrwxrwxrwx. 1 root root  9 November 28 15:25 scsi-360014053b477d3fba5a4039a52358f0f -> ../../sdb
lrwxrwxrwx. 1 root root  9 November 28 15:25 scsi-36001405419b8568d022462c9c17adca4 -> ../../sdc
lrwxrwxrwx. 1 root root  9 November 28 15:25 wwn-0x60014053b477d3fba5a4039a52358f0f -> ../../sdb
lrwxrwxrwx. 1 root root  9 November 28 15:25 wwn-0x6001405419b8568d022462c9c17adca4 -> ../../sdc
# note: use the wwn-... name here, not the id beginning with scsi-
pcs stonith create scsi-shooter fence_scsi pcmk_host_list="node1 node2" devices="/dev/disk/by-id/wwn-0x6001405419b8568d022462c9c17adca4" meta provides=unfencing
# configure the distributed lock manager DLM (any one node)
Method 1:
pcs cluster cib dlm_cfg
pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op monitor interval=60s
pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1
pcs cluster cib-push dlm_cfg
Method 2 (one step):
pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
# configure clvm (all kvm nodes)
lvmconf --enable-cluster
reboot

Fourth: add the clvm resource to the cluster
# add a cloned resource, i.e. run clvmd on every node
pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs status
# configure the constraints (clvmd must start after dlm, and must run on the same node)
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone
# check whether clvmd has started (whether there is a clvmd process): ok
systemctl status pacemaker
# view the constraints
pcs constraint

Fifth: create the LVM volume in the cluster and mount the GFS2 file system
# partition the iscsi sdb that was discovered and logged in to: create one partition and set its type to 8e
fdisk /dev/sdb
partprobe
multipath -r    # reload the multipath software (no multipath is configured here)
pvcreate /dev/sdb1
vgcreate vmvg0 /dev/sdb1
vgs
lvcreate -n lvvm0 -l 100%FREE vmvg0    # 100%FREE could not be executed here, so it was set to:
lvcreate -n lvvm0 -L 38G vmvg0
# create the GFS2 file system
mkfs.gfs2 -p lock_dlm -j 2 -t kvm-ha-cluster:kvm /dev/vmvg0/lvvm0    # -t takes the cluster name followed by a custom lock table name
# add the GFS2 file system to the cluster
# add a cloned resource, i.e. mount the file system on every node
pcs resource create VMFS Filesystem device="/dev/vmvg0/lvvm0" directory="/kvm-hosts" fstype="gfs2" clone
# check whether every node has mounted lvvm0 on /kvm-hosts; result: ok
# also test whether both nodes can read and write at the same time and whether the data stays in sync between them: ok
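Before moving on, it helps to check that fencing, the lock manager and the shared file system are really active on both nodes. A hedged verification sketch with standard commands (not from the original article; the file name testfile is only for illustration):
pcs status            # scsi-shooter plus the dlm-clone, clvmd-clone and VMFS-clone resources should all be Started on both nodes
dlm_tool ls           # the DLM lockspaces used by clvmd and the gfs2 mount should be listed
mount | grep gfs2     # /kvm-hosts should be mounted as gfs2 on every node
# concurrent-access check:
echo "written on node1" > /kvm-hosts/testfile    # run on node1
cat /kvm-hosts/testfile                          # run on node2; the line should appear immediately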
Sixth: configure the constraints and SELinux
pcs constraint order clvmd-clone then VMFS-clone
pcs constraint colocation add VMFS-clone with clvmd-clone
# check the constraints again
pcs constraint
# configure SELinux (otherwise the virtual machines cannot access the storage files) (on all nodes)
semanage fcontext -a -t virt_image_t "/kvm-hosts(/.*)?"    # if semanage is missing, install it with yum install policycoreutils-python
restorecon -R -v /kvm-hosts

Seventh: create a virtual machine
# create a virtual machine
qemu-img create -f qcow2 /kvm-hosts/web01.qcow2 10G
virt-install --name web01 --virt-type kvm --ram 1024 --cdrom=/kvm-hosts/CentOS-7-x86_64-DVD-1611.iso --disk path=/kvm-hosts/web01.qcow2 --network network=default --graphics vnc,listen=0.0.0.0 --noautoconsole --os-type=linux --os-variant=rhel7
# open vnc so that third-party management tools can connect to and display the virtual machines on kvm-pt (all nodes need this)
firewall-cmd --permanent --add-service=vnc-server
firewall-cmd --reload
# configure the firewall (all kvm nodes)
firewall-cmd --permanent --add-port=16509/tcp    # for the virsh -c qemu+tcp://node2/system mode; it is not used here, but open it anyway
firewall-cmd --permanent --add-port=49152-49215/tcp    # migration ports
firewall-cmd --reload
# before creating the cluster resource: migration test (virt-manager and command line); result: all ok, migration works
virsh migrate web01 qemu+ssh://root@node2/system --live --unsafe --persistent --undefinesource
# export the xml file
virsh dumpxml web01 > /kvm-hosts/web01.xml
virsh undefine web01
# create the virtual machine resource (the disk file and xml configuration file of the virtual machine are placed on shared storage; the virtual machine is controlled by the cluster software, not by the local libvirt)
pcs resource create web01_res VirtualDomain \
  hypervisor="qemu:///system" \
  config="/kvm-hosts/web01.xml" \
  migration_transport="ssh" \
  meta allow-migrate="true"
# the form above is wrong, do not use it; I was careless and as a result smooth migration did not work for a long time (the placement of the migration_transport=ssh option matters)
pcs resource create web01_res VirtualDomain \
  hypervisor="qemu:///system" \
  config="/kvm-hosts/web01.xml" \
  meta allow-migrate="true" priority="100" \
  migration_transport=ssh
# configure the constraints (similar constraints need to be configured for every virtual machine)
pcs constraint order start VMFS-clone then web01_res    # start the file system first, then the virtual machine resource
pcs constraint colocation add web01_res with VMFS-clone    # keep the resource on the same node as the file system
# view the constraints; --full can be appended
pcs constraint
# after this configuration the virtual machine starts normally

Eighth: migration test
#pcs cluster standby node2       # smooth migration: ok
#pcs resource move web01_res node2    # smooth migration: ok
#pcs cluster stop                # smooth migration: ok
#init 6                          # smooth migration: no
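As a hedged sketch of how a cluster-driven migration can be checked, using the web01_res resource defined above (the pcs resource clear step is a standard pcs command for removing the location constraint that move leaves behind; it is not mentioned in the original article):
crm_mon -1 | grep web01_res          # see which node the VM resource is currently running on
pcs resource move web01_res node2    # ask the cluster to live-migrate the VM to node2
virsh list --all                     # on node2: web01 should now be listed as running
pcs resource clear web01_res         # drop the temporary location constraint created by the move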
Ninth: migration scheme
For node maintenance, it is recommended to first use the move command to migrate the virtual machines to another available node, then stop the cluster on the node to be maintained and carry out the maintenance.
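Put together as commands, the recommended maintenance flow might look like the following hedged sketch; it only strings together the pcs commands already used above, with node2 taken as the node under maintenance for the sake of the example:
pcs resource move web01_res node1    # move the VM(s) off the node that needs maintenance
pcs cluster stop node2               # stop the cluster stack on node2 and do the maintenance work
pcs cluster start node2              # bring node2 back into the cluster afterwards
pcs resource clear web01_res         # remove the location constraint so the cluster can place resources freely again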
This is the end of the article on deploying a KVM high-availability cluster based on iSCSI shared storage. Hopefully the content above is helpful and you have learned something from it; if you found the article useful, please share it so more people can see it.