KVM+GFS distributed file system highly available cluster

Overview of GlusterFS

GlusterFS (abbreviated GFS in this article) is a scalable, open-source distributed file system designed for large distributed applications that access large amounts of data. It runs on inexpensive commodity hardware, provides fault tolerance, and can deliver high-performance service to a large number of users.

A GlusterFS deployment is composed of storage servers, clients, and NFS/Samba storage gateways.

GlusterFS features: scalability and high performance; high availability; a global unified namespace; elastic volume management; compliance with standard protocols; and a modular, stackable architecture in which complex functions are built by combining modules.

How GlusterFS works

Elastic HASH algorithm: a 32-bit integer is computed by the hash algorithm; the hash space is divided into N contiguous subspaces, each corresponding to one Brick. Advantages of the elastic HASH algorithm: data is distributed evenly across the Bricks, and there is no dependence on a metadata server, which removes both the single point of failure and the access bottleneck.

GlusterFS volume types (the gluster CLI pattern for creating each type is sketched below):

(1) Distributed volume: files are not split into chunks; the HASH value is stored in extended file attributes; ext3, ext4, ZFS, XFS and other underlying file systems are supported. Characteristics: files are spread across different servers with no redundancy; expanding the volume is simple and cheap; a single point of failure causes data loss; data protection relies on the underlying layer.

(2) Stripe volume: a file is split into N chunks (N = number of stripe nodes) by offset and stored round-robin across the Brick server nodes; performance is especially good when storing large files; there is no redundancy, similar to RAID 0. Characteristics: data is divided into smaller chunks and distributed to different stripe areas in the Brick server pool; the distribution reduces load and the smaller chunks speed up access; no data redundancy.

(3) Replicated volume: one or more copies of the same file are kept; disk utilization is low because the copies must be stored; if the storage space on the nodes differs, the bucket effect applies and the capacity of the smallest node becomes the total capacity of the volume. Characteristics: every server in the volume keeps a complete copy; the number of replicas is chosen by the client when the volume is created; at least two Brick servers are required; provides disaster tolerance.

(4) Distributed stripe volume: combines the functions of the distributed and stripe volumes; mainly used for large-file access; requires at least 4 servers.

(5) Distributed replicated volume: combines the functions of the distributed and replicated volumes; used where redundancy is required.
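For reference, all five volume types are created with the same general gluster CLI form; the placeholders below are only a sketch, and the concrete commands used in this walkthrough appear in step 3 further down:

gluster volume create <volume-name> [stripe N] [replica N] <server1>:<brick1> <server2>:<brick2> ... force
## no stripe/replica keyword              -> distributed volume
## stripe N only                          -> stripe volume
## replica N only                         -> replicated volume
## stripe N with 2xN bricks               -> distributed stripe volume
## replica N with a multiple of N bricks  -> distributed replicated volume
gluster volume start <volume-name>        ## a volume must be started before clients can mount it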

Experimental environment:
node1   192.168.13.128
node2   192.168.13.129
node3   192.168.13.130
node4   192.168.13.131
kvm     192.168.13.133
Each node server has an additional hard disk attached.
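All five machines need to resolve each other by name. A minimal /etc/hosts fragment for this environment (reconstructed from the hostnames and addresses above, so treat it as a sketch) would look like:

192.168.13.128  node1
192.168.13.129  node2
192.168.13.130  node3
192.168.13.131  node4
192.168.13.133  kvm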

Enable virtualization on the kvm virtual machine
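A quick way to confirm that hardware virtualization is actually exposed to the kvm machine (the same check is repeated in step 5) is:

[root@kvm ~]# egrep '(vmx|svm)' /proc/cpuinfo    ## non-empty output means Intel VT-x or AMD-V is available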

1. Configure the local hosts file on the node servers and the kvm server, and install the necessary gluster software.

[root@localhost ~]# hostnamectl set-hostname node1        ## set the hostname (node2-node4 are set the same way)
[root@localhost ~]# su
[root@localhost ~]# hostnamectl set-hostname kvm          ## on the kvm server
[root@node1 ~]# vim /etc/hosts                            ## add the name resolution entries for all five hosts (192.168.13.128 node1, and so on)

Turn off the firewall and SELinux on all servers:

[root@node1 ~]# systemctl stop firewalld.service          ## turn off the firewall
[root@node1 ~]# setenforce 0

Copy the hosts file to all the other servers:

[root@node1 ~]# scp /etc/hosts root@192.168.13.129:/etc/hosts
[root@node1 ~]# scp /etc/hosts root@192.168.13.130:/etc/hosts
[root@node1 ~]# scp /etc/hosts root@192.168.13.131:/etc/hosts
[root@node1 ~]# scp /etc/hosts root@192.168.13.133:/etc/hosts

On all node servers, mount the software share and modify the yum configuration:

[root@node1 ~]# mkdir /gfs
[root@node1 ~]# mount.cifs //192.168.100.3/LNMP-C7 /gfs/  ## mount the share that contains the gfsrepo packages
[root@node1 ~]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# mkdir bak                       ## create a backup directory
[root@node1 yum.repos.d]# mv CentOS-* bak/
[root@node1 yum.repos.d]# vim abc.repo                    ## every node needs a yum source pointing at the gfsrepo path
[abc]
name=abc
baseurl=file:///gfs/gfsrepo                               ## the mount directory
gpgcheck=0
enabled=1
[root@node1 yum.repos.d]# yum clean all && yum makecache  ## rebuild the metadata cache
[root@node1 yum.repos.d]# yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma   ## install the necessary software
[root@node1 yum.repos.d]# systemctl start glusterd        ## start the gluster service
[root@node1 yum.repos.d]# systemctl enable glusterd       ## enable it at boot
[root@node1 yum.repos.d]# ntpdate ntp1.aliyun.com         ## synchronize the time
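As a quick checkpoint (these are ordinary systemd/gluster commands, not part of the original steps), confirm on each node that the daemon came up and the packages are in place:

[root@node1 ~]# systemctl status glusterd      ## should report active (running)
[root@node1 ~]# gluster --version              ## confirms the glusterfs packages installed from the gfsrepo source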
2. Partition and mount the new disks with a disk partition script (on all node servers).

[root@node1 yum.repos.d]# cd /opt/
[root@node1 opt]# vim disk.sh                             ## one-click disk partition and mount script

#!/bin/bash
# list the disks that exist
echo "the disks exist list:"
fdisk -l | grep 'Disk /dev/sd[a-z]'
echo "=================================================="
PS3="chose which disk you want to create:"
select VAR in `ls /dev/sd* | grep -o 'sd[b-z]' | uniq` quit
do
    case $VAR in
    sda)
        fdisk -l /dev/sda
        break ;;
    sd[b-z])
        # create a single primary partition spanning the disk
        echo -e "n\np\n\n\n\nw" | fdisk /dev/$VAR
        # make the filesystem
        mkfs.xfs -i size=512 /dev/${VAR}1 &> /dev/null
        # mount the filesystem
        mkdir -p /data/${VAR}1 &> /dev/null
        echo "/dev/${VAR}1 /data/${VAR}1 xfs defaults 0 0" >> /etc/fstab
        mount -a &> /dev/null
        break ;;
    quit)
        break ;;
    *)
        echo "wrong disk,please check again"
    esac
done

[root@node1 opt]# chmod +x disk.sh                        ## add execute permission
[root@node1 opt]# ./disk.sh                               ## run the script
[root@node1 opt]# df -hT                                  ## check the mount information

3. Create the distributed replicated volume.

Add the other three nodes to the storage trust pool; this only needs to be done from one host:

[root@node1 opt]# gluster peer probe node2
[root@node1 opt]# gluster peer probe node3
[root@node1 opt]# gluster peer probe node4

Create the distributed replicated volume used for the KVM data (the other volume types are created in the same way):

[root@node1 opt]# gluster volume create models replica 2 node1:/data/sdb1 node2:/data/sdb1 node3:/data/sdb1 node4:/data/sdb1 force   ## distributed replicated volume

gluster volume create dis-vol node1:/data/sdb1 node2:/data/sdb1 force                    ## create a distributed volume
gluster volume create stripe-vol stripe 2 node1:/data/sdc1 node2:/data/sdc1 force        ## create a stripe volume
gluster volume create rep-vol replica 2 node3:/data/sdb1 node4:/data/sdb1 force          ## create a replicated volume
gluster volume create dis-stripe stripe 2 node1:/data/sdd1 node2:/data/sdd1 node3:/data/sdd1 node4:/data/sdd1 force   ## create a distributed stripe volume

[root@node1 opt]# gluster volume start models             ## start the distributed replicated volume
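Before moving to the kvm host, the trusted pool and the new volume can be verified with the standard gluster status commands (not shown in the original steps, but a useful checkpoint):

[root@node1 opt]# gluster peer status          ## should list node2, node3 and node4 as connected peers
[root@node1 opt]# gluster volume info models   ## should show type Distributed-Replicate with 4 bricks and replica count 2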
4. Mount the newly created distributed replicated volume on the kvm server.

[root@kvm ~]# mkdir /abc
[root@kvm ~]# mount.cifs //192.168.100.3/iOS /abc/
[root@kvm ~]# cp /abc/CentOS-7-x86_64-DVD-1708.iso /opt/ &   ## copy the CentOS 7 image to /opt/ in the background
[root@kvm ~]# cd /etc/yum.repos.d/
[root@kvm yum.repos.d]# mkdir bak                            ## create a backup directory
[root@kvm yum.repos.d]# mv CentOS-* bak/
[root@kvm yum.repos.d]# scp -r root@192.168.13.128:/gfs/gfsrepo /   ## copy the gfs repository to the root directory
[root@kvm yum.repos.d]# vim abc.repo                         ## yum source pointing at the copied repository
[abc]
name=abc
baseurl=file:///gfsrepo                                      ## the path to the repository files
gpgcheck=0
enabled=1
[root@kvm yum.repos.d]# umount /abc/                         ## the CentOS image has been copied, so unmount the share
[root@kvm yum.repos.d]# yum install -y glusterfs glusterfs-fuse   ## install the necessary client software
[root@kvm yum.repos.d]# mv bak/* .                           ## restore the original yum sources
[root@kvm yum.repos.d]# rm -rf bak/
[root@kvm yum.repos.d]# yum list                             ## refresh the yum package list
[root@kvm yum.repos.d]# mkdir /kvmdata                       ## create the mount point
[root@kvm yum.repos.d]# mount.glusterfs node1:models /kvmdata    ## mount the distributed replicated volume on the mount point
[root@kvm yum.repos.d]# df -hT                               ## the bricks total 160G, but with replica 2 the mounted volume shows 80G

5. Deploy the virtualization platform.

[root@kvm ~]# yum groupinstall "GNOME Desktop" -y            ## desktop environment
[root@kvm ~]# yum install qemu-kvm -y                        ## KVM kernel support
[root@kvm ~]# yum install qemu-kvm-tools -y                  ## debugging tools
[root@kvm ~]# yum install virt-install -y                    ## command-line tool for creating virtual machines
[root@kvm ~]# yum install qemu-img -y                        ## component for creating disks and launching virtual machines
[root@kvm ~]# yum install bridge-utils -y                    ## network bridging support
[root@kvm ~]# yum install libvirt -y                         ## virtual machine management daemon
[root@kvm ~]# yum install virt-manager -y                    ## graphical virtual machine manager
[root@kvm ~]# egrep '(vmx|svm)' /proc/cpuinfo                ## check whether the CPU supports virtualization
[root@kvm ~]# lsmod | grep kvm                               ## check whether the kvm module is loaded
[root@kvm ~]# systemctl start libvirtd                       ## start the service
[root@kvm ~]# systemctl status libvirtd
[root@kvm ~]# systemctl enable libvirtd                      ## enable it at boot
[root@kvm ~]# cd /etc/sysconfig/network-scripts/
[root@kvm network-scripts]# vim ifcfg-ens33                  ## add BRIDGE=br0
[root@kvm network-scripts]# cp -p ifcfg-ens33 ifcfg-br0      ## copy the configuration file for the bridge
[root@kvm network-scripts]# vim ifcfg-br0
TYPE=Bridge                                                  ## bridge mode
BOOTPROTO=static                                             ## static address
NAME=br0
DEVICE=br0
IPADDR=192.168.13.133
NETMASK=255.255.255.0                                        ## subnet mask
GATEWAY=192.168.13.1                                         ## gateway
[root@kvm network-scripts]# service network restart          ## restart the network
[root@kvm network-scripts]# cd /kvmdata/                     ## switch to the GFS mount point
[root@kvm kvmdata]# mkdir kgc_disk kgc_iso                   ## create a virtual disk directory and an image directory
[root@kvm kvmdata]# cp /opt/CentOS-7-x86_64-DVD-1708.iso kgc_iso/ &   ## copy the image file to the image directory

6. Create the virtual machine with the KVM graphical interface.

[root@kvm ~]# virt-manager
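The walkthrough uses the virt-manager GUI for this step. As a rough command-line alternative (an illustrative sketch only: the VM name test01, memory, vCPU and disk size are hypothetical and not from the original), the same machine could be created with virt-install so that its disk image lands in the GlusterFS-backed directory:

## illustrative sketch, not part of the original procedure
[root@kvm ~]# virt-install --name test01 --memory 1024 --vcpus 1 \
    --disk path=/kvmdata/kgc_disk/test01.qcow2,size=10 \
    --cdrom /kvmdata/kgc_iso/CentOS-7-x86_64-DVD-1708.iso \
    --network bridge=br0 --graphics vnc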

7. View the information on the disks of the node servers.

[root@node1 ~]# cd /data/sdb1/                               ## the VM data is distributed across the node servers
[root@node1 sdb1]# ls
kgc_disk  kgc_iso

Thank you for reading!
