2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
Main points of content:
I. Case overview
II. Deployment preparation
III. Deployment examples
IV. Check the storage mode
I. Case overview:
This architecture combines the GlusterFS distributed file system covered in the previous blog with KVM virtualization to achieve high availability.
(1) Principle: the KVM virtual machine files are stored, with redundancy, on a GlusterFS distributed replicated volume. A distributed replicated volume stores each file on two or more nodes, so if one node's copy is lost or damaged, KVM can still read the virtual machine files from another node in the same replica set and the virtual machine keeps running. Once the failed node is repaired, GlusterFS automatically resynchronizes its data from the other member of its replica set.
(2) Characteristics of the GlusterFS architecture:
Compute, storage, and I/O resources are aggregated into a global namespace, with each server treated as a node. Capacity is expanded by adding nodes or by adding storage to existing nodes, and performance is improved by spreading storage across more nodes.
Supports file-based mirroring and replication, striping, load balancing, failover, scheduling, disk caching, storage quotas, volume snapshots, and so on.
GlusterFS clients are independent of one another; GlusterFS itself relies on an elastic hashing algorithm to locate data rather than a centralized or distributed metadata model.
GlusterFS provides data reliability and availability through several volume types, such as replicated volumes and distributed volumes.
(3) Schematic diagram:
II. Deployment preparation:
1. Environment deployment:
Role / hostname    IP address
node1              192.168.220.179
node2              192.168.220.131
node3              192.168.220.140
node4              192.168.220.136
kvm                192.168.220.137
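The hostname-to-IP mapping above is what every machine's name resolution must provide (configured via /etc/hosts in the steps that follow). A minimal sketch, writing the entries to a sample file; on the real machines they are appended to /etc/hosts:

```shell
# Sketch: the name-resolution entries for this case, written to a sample file;
# in the real deployment they go into /etc/hosts on every machine.
cat > /tmp/hosts.sample <<'EOF'
192.168.220.179 node1
192.168.220.131 node2
192.168.220.140 node3
192.168.220.136 node4
192.168.220.137 kvm
EOF
grep -c '^192\.168\.220\.' /tmp/hosts.sample    # 5 entries
```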
2. Case requirements:
KVM + GlusterFS is adopted to give virtual machine storage distributed deployment and distributed redundancy, so that if a virtual machine file is corrupted or lost, a real-time replica exists and the business keeps running.
3. Deployment ideas:
Install KVM -> deploy GlusterFS on all nodes -> mount the GlusterFS volume on the client -> have KVM create virtual machines in the mounted GlusterFS directory.
III. Deployment examples:
Step 1: Install and deploy the KVM virtualization platform
The virtual machine is configured as follows: add a new hard disk and enable all virtualization engine options, otherwise virtual machines cannot be created.
(1) Mount the image file:
[root@kvm ~]# mkdir /abc
[root@kvm ~]# mount.cifs //192.168.41.104/ISO /abc/
[root@kvm ~]# cp /abc/CentOS-7-x86_64-DVD-1708.iso /opt/    # copy the image file to a local directory
(2) Install the software required for KVM:
yum groupinstall "GNOME Desktop" -y    # desktop environment
yum install qemu-kvm -y                # KVM module
yum install qemu-kvm-tools -y          # KVM debugging tools
yum install virt-install -y            # command-line tool for building virtual machines
yum install qemu-img -y                # qemu component: create disks, launch virtual machines
yum install bridge-utils -y            # network bridging support
yum install libvirt -y                 # virtual machine management tool
yum install virt-manager -y            # graphical tool to manage virtual machines
(3) Check whether virtualization is supported and installed successfully:
cat /proc/cpuinfo | grep vmx    # check whether the CPU supports virtualization
lsmod | grep kvm                # check whether the KVM module is loaded
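A minimal sketch of the support check: count the vmx/svm CPU flags (svm is the AMD equivalent of Intel's vmx). Shown here against a sample cpuinfo line so the pattern is clear; on a real host point the grep at /proc/cpuinfo:

```shell
# Illustrative sample cpuinfo flags line; on a real host use /proc/cpuinfo instead
printf 'flags\t\t: fpu vme de pse msr vmx ept\n' > /tmp/cpuinfo.sample
grep -Ec '(vmx|svm)' /tmp/cpuinfo.sample    # a count > 0 means hardware virtualization is available
```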
(4) Configure the bridged network card:
1. vim /etc/sysconfig/network-scripts/ifcfg-ens33, add this line at the end:
BRIDGE=br0    # br0 is the name of the bridge interface
2. cd /etc/sysconfig/network-scripts/
cp -p ifcfg-ens33 ifcfg-br0
vim ifcfg-br0, modify the bridge interface information as follows:
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.220.137
NETMASK=255.255.255.0
GATEWAY=192.168.220.1
3. Restart the network service: systemctl restart network
(5) Enable virtualization:
systemctl start libvirtd
systemctl enable libvirtd
Step 2: GlusterFS deployment
Add a new disk to each of the four node virtual machines.
(1) Modify the hostname on each node and turn off the firewall.
(2) Modify the /etc/hosts file; the operation is the same on all four nodes:
vim /etc/hosts, add the following:
192.168.220.179 node1
192.168.220.131 node2
192.168.220.140 node3
192.168.220.136 node4
192.168.220.137 kvm
(3) Install GlusterFS:
cd /opt/
mkdir /abc
mount.cifs //192.168.10.157/MHA /abc    # mount the package share locally
cd /etc/yum.repos.d/
mkdir bak
mv Cent* bak/    # move the original repo files into the new folder
vim GLFS.repo    # create a new repo file with the following content:
[GLFS]
name=glfs
baseurl=file:///abc/gfsrepo
gpgcheck=0
enabled=1
(4) Time synchronization settings:
ntpdate ntp1.aliyun.com    # time synchronization (run on every node)
Add the storage trust pool; on node1, probe all the other nodes:
[root@localhost yum.repos.d]# gluster peer probe node2
peer probe: success.
[root@localhost yum.repos.d]# gluster peer probe node3
peer probe: success.
[root@localhost yum.repos.d]# gluster peer probe node4
peer probe: success.
[root@localhost yum.repos.d]# gluster peer status    # view the status of all nodes
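After the probes, gluster peer status on node1 should report three connected peers. A hedged sketch using sample output (reproduced to a file here; the real command must run on node1):

```shell
# Illustrative sample of `gluster peer status` output after probing node2-node4
cat > /tmp/peer_status.sample <<'EOF'
Number of Peers: 3

Hostname: node2
State: Peer in Cluster (Connected)

Hostname: node3
State: Peer in Cluster (Connected)

Hostname: node4
State: Peer in Cluster (Connected)
EOF
grep -c 'Peer in Cluster (Connected)' /tmp/peer_status.sample    # expect 3 connected peers
```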
(5) Disk configuration:
fdisk /dev/sdb               # partition the new disk
mkfs.xfs /dev/sdb1           # format
mkdir -p /data/sdb1          # create the mount point
mount /dev/sdb1 /data/sdb1/  # mount
(6) Create a distributed replicated volume:
[root@node1 ~]# gluster volume create models replica 2 node1:/data/sdb1 node2:/data/sdb1 node3:/data/sdb1 node4:/data/sdb1 force
[root@node1 ~]# gluster volume start models    # start the volume
volume start: models: success
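With replica 2, bricks pair up in the order they are listed: node1/node2 form one replica set and node3/node4 the other, and files are distributed across the two sets. A sketch of what gluster volume info models should report afterwards (sample output written to a file here so the brick layout can be checked):

```shell
# Illustrative sample of `gluster volume info models` output; with replica 2 the
# bricks pair in listed order: (node1, node2) and (node3, node4).
cat > /tmp/volinfo.sample <<'EOF'
Volume Name: models
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Brick1: node1:/data/sdb1
Brick2: node2:/data/sdb1
Brick3: node3:/data/sdb1
Brick4: node4:/data/sdb1
EOF
grep -c '^Brick' /tmp/volinfo.sample    # 4 bricks in total
```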
Step 3: Mount the GlusterFS volume on the client
(1) Modify the hosts file:
vim /etc/hosts, add the following hostnames and corresponding IP addresses:
192.168.220.179 node1
192.168.220.131 node2
192.168.220.140 node3
192.168.220.136 node4
192.168.220.137 kvm
(2) GlusterFS client deployment:
[root@kvm ~]# cd /etc/yum.repos.d/
[root@kvm yum.repos.d]# mkdir bak
[root@kvm yum.repos.d]# mv Cent* bak/
[root@kvm yum.repos.d]# mkdir /aaa
[root@kvm yum.repos.d]# mount.cifs //192.168.41.104/MHA /aaa
[root@kvm yum.repos.d]# vim GLFS.repo    # add the following:
[GLFS]
name=glfs
baseurl=file:///aaa/gfsrepo
gpgcheck=0
enabled=1
yum install -y glusterfs glusterfs-fuse    # install the client packages
Then move the original CentOS repo files back:
[root@kvm yum.repos.d]# mv bak/* ./
(3) Mount the volume:
mkdir /kvmdata/    # create the mount point
mount.glusterfs node1:models /kvmdata/    # mount the distributed replicated volume
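The mount above does not survive a reboot. A hedged sketch of the matching /etc/fstab entry (the _netdev option delays the mount until the network, and therefore the Gluster nodes, are reachable):

```
node1:models  /kvmdata  glusterfs  defaults,_netdev  0 0
```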
Create two more directories, one as the KVM storage-disk location and one for the installation image:
cd /kvmdata/
mkdir kgc_disk kgc_iso    # kgc_disk: disk storage location; kgc_iso: image storage location
cd /opt/
mv CentOS-7-x86_64-DVD-1708.iso /kvmdata/kgc_iso/    # move the image into the directory just created
virt-manager    # open the graphical virtual machine manager
(4) Virtual system manager:
1. Create two storage pools: store and iso.
2. Select the paths: the kgc_disk and kgc_iso directories just created.
3. Add a storage volume named centos7.
4. Create a new virtual machine: for its paths, select the newly created image and disk storage.
Optionally set the virtual machine to start when the host boots, then choose to begin the installation; the installation interface then appears:
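The GUI steps above can also be expressed on the command line. A hedged sketch with virt-install (the VM name, memory, and disk size are illustrative assumptions; the paths are the directories created earlier):

```shell
# Hypothetical command-line equivalent of the virt-manager steps; --name/--ram/size
# are assumptions, the paths match the kgc_disk/kgc_iso directories created above.
virt_cmd="virt-install --name centos7 --ram 1024 --vcpus 1 \
 --disk path=/kvmdata/kgc_disk/centos7.qcow2,size=10,format=qcow2 \
 --cdrom /kvmdata/kgc_iso/CentOS-7-x86_64-DVD-1708.iso \
 --network bridge=br0 --graphics vnc"
echo "$virt_cmd"    # print the command; run it on the kvm host to create the VM
```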
IV. Check the storage mode:
Since KVM + GFS has just been set up, the image and disk-storage files can be seen on node1:
Due to the nature of distributed replicated volumes, the same data can also be found on the other nodes, replicated within each replica pair: