GlusterFS overview
GlusterFS is an open source distributed file system. A deployment consists of storage servers, clients, and optionally an NFS/Samba storage gateway. There is no metadata server: the architecture is metadata-free, and RDMA can be used as the data transport component.
GlusterFS features
● Scalability, high performance, and high availability
● A global unified namespace
● Elastic volume management
● Cloud-style elasticity: horizontal scaling (number of instances) and vertical scaling (hardware performance), e.g. ECS (virtual) versus OSS and RDS (bare metal)
● Based on standard protocols

GlusterFS terminology
● Brick (storage node)
● Volume (logical volume)
● FUSE (client interaction module)
● VFS (virtual file system)
● Glusterd (the management service)

Modular, stacked architecture
Complex functions are implemented by combining simple modules in a stacked architecture.
How GlusterFS works
The elastic HASH algorithm
The elastic HASH algorithm computes a 32-bit integer from a file and divides the 32-bit hash space into N contiguous subspaces, one per Brick.
Advantages of the elastic HASH algorithm:
● Data is distributed evenly across the Bricks
● There is no dependence on a metadata server, which removes both the single point of failure and the metadata access bottleneck
A minimal sketch of this subspace mapping is shown below.
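To make the subspace idea concrete, here is a toy illustration of my own. It is not the hash GlusterFS actually uses internally (its DHT translator uses a Davies-Meyer hash); `cksum` is only a stand-in 32-bit hash, and the brick count N is assumed to be 4.

```bash
#!/bin/bash
# Toy illustration of the elastic HASH idea: compute a 32-bit value for a
# file name, split the 32-bit space into N equal subspaces, and report which
# brick the file would land on. cksum is a stand-in, not GlusterFS's hash.
N=4                                                     # number of bricks
FILE="$1"                                               # file name to place
HASH=$(printf '%s' "$FILE" | cksum | awk '{print $1}')  # stand-in 32-bit hash
RANGE=$(( 4294967296 / N ))                             # size of each subspace (2^32 / N)
BRICK=$(( HASH / RANGE + 1 ))                           # subspace index -> brick number
echo "file '$FILE' -> hash $HASH -> brick$BRICK"
```

For example, `./hash-demo.sh demo1.log` prints the brick that this toy mapping would pick; the point is only that equal hash ranges spread files across bricks without any metadata server.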
GlusterFS volume types
1. Distributed volume
Files are not split into blocks; the hash value is stored in extended file attributes. Supported underlying file systems include ext3, ext4, ZFS, XFS, and others.
Characteristics of distributed volumes
● Files are distributed across different servers, with no redundancy
● The volume can be expanded easily and cheaply
● A single point of failure causes data loss
● Data protection depends on the underlying layer
(A way to inspect the hash information stored in extended attributes is sketched after this list.)
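If you want to see the layout information that GlusterFS keeps in extended attributes, you can read a brick directory directly on a storage node. This is an optional check, assuming a brick is exported at /data/sdb1 (as in the setup later in this article), that the brick already belongs to a started volume, and that the attr package is available; exact attribute names can vary between GlusterFS versions.

```bash
# On a storage node, inspect the GlusterFS extended attributes of a brick
# directory. trusted.glusterfs.dht (on directories) records the hash range
# this brick is responsible for.
yum -y install attr                        # provides getfattr if it is missing
getfattr -d -m . -e hex /data/sdb1
getfattr -n trusted.glusterfs.dht -e hex /data/sdb1
```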
2. Striped volume
● Files are divided into N blocks by offset (N = number of stripe nodes) and stored round-robin across the Brick Server nodes
● Performance is particularly good when storing large files
● There is no redundancy, similar to RAID 0
Characteristics of striped volumes
● Data is divided into smaller chunks and distributed across the stripes in the block server group
● Distribution reduces the load, and the smaller chunks speed up access
● There is no data redundancy
3. Replicated volume
● One or more replicas of the same file are kept
● Disk utilization is low because of the replication
● If the storage space on the nodes differs, the capacity of the smallest node is taken as the total capacity of the volume (the bucket effect)
Characteristics of replicated volumes
● All servers in the volume keep a complete copy of the data
● The number of replicas is decided by the customer when the volume is created
● At least two block servers, or more, are required
● Provides redundancy
4. Distributed striped volume
● Combines the characteristics of distributed and striped volumes
● Mainly used for access to large files
● Requires at least four servers
5. Distributed replicated volume
● Combines the characteristics of distributed and replicated volumes
● Used in cluster environments that require redundancy
The volumes used in the experiment:

| Volume name | Volume type | Size | Bricks |
| --- | --- | --- | --- |
| dis-volume | distributed volume | 40G | node1(/b1), node2(/b1) |
| stripe-volume | striped volume | 40G | node1(/c1), node2(/c1) |
| rep-volume | replicated volume | 20G | node3(/b1), node4(/b1) |
| dis-stripe | distributed striped volume | 40G | node1(/d1), node2(/d1), node3(/d1), node4(/d1) |
| dis-rep | distributed replicated volume | 20G | node1(/e1), node2(/e1), node3(/e1), node4(/e1) |

Experiment preparation
1. Add four disks to each of the four servers
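Before changing the hostnames, it is worth confirming on each machine that the newly added disks are actually visible. A quick check, assuming the new disks show up as /dev/sdb through /dev/sde as they do in the formatting script later in this article:

```bash
# After adding the disks (and rescanning or rebooting), confirm that the four
# new devices are visible on every node before formatting them.
lsblk /dev/sd[b-e]
fdisk -l | grep 'Disk /dev/sd'
```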
2. Modify the name of the server
Modified to node1, node2, node3, node4 respectively
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# su

3. Format and mount the disks on the four servers
Here we use scripts to perform mounts
# enter the /opt directory
[root@node1 ~]# cd /opt
# disk formatting and mounting script
[root@node1 opt]# vim a.sh
#!/bin/bash
echo "the disks exist list:"
fdisk -l | grep 'Disk /dev/sd[a-z]'
echo "=================================================="
PS3="chose which disk you want to create:"
select VAR in `ls /dev/sd* | grep -o 'sd[b-z]' | uniq` quit
do
    case $VAR in
    sda)
        fdisk -l /dev/sda
        break ;;
    sd[b-z])
        # create the partition
        echo -e "n\np\n\n\n\nw" | fdisk /dev/$VAR
        # make the filesystem
        mkfs.xfs -i size=512 /dev/${VAR}"1" &> /dev/null
        # mount the filesystem
        mkdir -p /data/${VAR}"1" &> /dev/null
        echo -e "/dev/${VAR}"1" /data/${VAR}"1" xfs defaults 0 0\n" >> /etc/fstab
        mount -a &> /dev/null
        break ;;
    quit)
        break ;;
    *)
        echo "wrong disk, please check again" ;;
    esac
done

# give the script execute permission
[root@node1 opt]# chmod +x a.sh
Push the script to the other three servers via scp
scp a.sh root@192.168.45.134:/opt
scp a.sh root@192.168.45.130:/opt
scp a.sh root@192.168.45.136:/opt

Execute the script on all four servers to complete the formatting and mounting.
This is just a sample.
[root@node1 opt]# ./a.sh
the disks exist list:
==================================================
1) sdb
2) sdc
3) sdd
4) sde
5) quit
chose which disk you want to create:1    // select the disk
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x37029e96.
Command (m for help): Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): Partition number (1-4, default 1): First sector (2048-41943039, default 2048): Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set
Command (m for help): The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Check the mounts on the four servers.
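A quick way to confirm the result on each node, assuming the mount points created by the script above (/data/sdb1 through /data/sde1):

```bash
# Confirm that each node formatted the partitions as xfs and mounted them
df -hT | grep /data
lsblk -f /dev/sd[b-e]
grep /data /etc/fstab
```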
4. Set up the hosts file
Modify it on node1 first, adding the entries at the end of the file.

[root@node1 ~]# vim /etc/hosts
192.168.45.133 node1
192.168.45.130 node2
192.168.45.134 node3
192.168.45.136 node4
Push hosts files to other servers and clients through scp
# push the /etc/hosts file to the other hosts
[root@node1 opt]# scp /etc/hosts root@192.168.45.130:/etc/hosts
root@192.168.45.130's password:
hosts                          100%  242    23.6KB/s   00:00
[root@node1 opt]# scp /etc/hosts root@192.168.45.134:/etc/hosts
root@192.168.45.134's password:
hosts                          100%  242   146.0KB/s   00:00
[root@node1 opt]# scp /etc/hosts root@192.168.45.136:/etc/hosts
root@192.168.45.136's password:
hosts                          100%  242   146.0KB/s   00:00
Check the push status on other servers
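A simple way to do that check, assuming the node names defined above, is to test name resolution and reachability from any node or from the client:

```bash
# Check that all four node names resolve and respond
for n in node1 node2 node3 node4; do
    ping -c 1 -W 1 "$n" > /dev/null && echo "$n reachable" || echo "$n NOT reachable"
done
tail -n 4 /etc/hosts
```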
Turn off the firewall on all servers and the client

[root@node1 ~]# systemctl stop firewalld.service
[root@node1 ~]# setenforce 0

Set up the yum repository on the client and the servers

# enter the yum configuration path
[root@node1 ~]# cd /etc/yum.repos.d/
# create an empty folder
[root@node1 yum.repos.d]# mkdir abc
# move all CentOS-* files into abc
[root@node1 yum.repos.d]# mv CentOS-* abc
# create the private yum source
[root@node1 yum.repos.d]# vim GLFS.repo
[demo]
name=demo
baseurl=http://123.56.134.27/demo
gpgcheck=0
enabled=1

[gfsrepo]
name=gfsrepo
baseurl=http://123.56.134.27/gfsrepo
gpgcheck=0
enabled=1

# reload the yum source
[root@node1 yum.repos.d]# yum list

Install the required packages

[root@node1 yum.repos.d]# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
Do the same on the other three servers.
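Before starting the daemon, you can confirm on each machine that the GlusterFS packages really came from the private repository; a quick check:

```bash
# Confirm the GlusterFS packages and report the installed version
rpm -qa | grep glusterfs
glusterfs --version | head -n 1
```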
Start glusterd on the four servers and enable it at boot

[root@node1 yum.repos.d]# systemctl start glusterd.service
[root@node1 yum.repos.d]# systemctl enable glusterd.service

Add the node information

[root@node1 yum.repos.d]# gluster peer probe node2
peer probe: success.
[root@node1 yum.repos.d]# gluster peer probe node3
peer probe: success.
[root@node1 yum.repos.d]# gluster peer probe node4
peer probe: success.
View node information on other servers
[root@node1 yum.repos.d]# gluster peer status
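Besides gluster peer status, the pool list subcommand gives a compact view of the trusted storage pool; every peer should show as Connected before any volumes are created:

```bash
# Compact view of the trusted storage pool; every peer should be "Connected"
gluster pool list
# Detailed per-peer state (each node lists the other three peers)
gluster peer status
```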
Create the distributed volume

# create the distributed volume
[root@node1 yum.repos.d]# gluster volume create dis-vol node1:/data/sdb1 node2:/data/sdb1 force
# check its information
[root@node1 yum.repos.d]# gluster volume info dis-vol
# view the existing volumes
[root@node1 yum.repos.d]# gluster volume list
# start the volume
[root@node1 yum.repos.d]# gluster volume start dis-vol

Mount it on the client

# recursively create the mount point
[root@manager yum.repos.d]# mkdir -p /text/dis
# mount the volume just created on the mount point just created
[root@manager yum.repos.d]# mount.glusterfs node1:dis-vol /text/dis

Create the striped volume

# create the volume
[root@node1 yum.repos.d]# gluster volume create stripe-vol stripe 2 node1:/data/sdc1 node2:/data/sdc1 force
# view the existing volumes
[root@node1 yum.repos.d]# gluster volume list
dis-vol
stripe-vol
# start the striped volume
[root@node1 yum.repos.d]# gluster volume start stripe-vol
volume start: stripe-vol: success

Mount it on the client

# create the mount point
[root@manager yum.repos.d]# mkdir /text/strip
# mount the striped volume
[root@manager yum.repos.d]# mount.glusterfs node1:stripe-vol /text/strip/

Create the replicated volume

# create the replicated volume
[root@node1 yum.repos.d]# gluster volume create rep-vol replica 2 node3:/data/sdb1 node4:/data/sdb1 force
volume create: rep-vol: success: please start the volume to access data
# start the replicated volume
[root@node1 yum.repos.d]# gluster volume start rep-vol
volume start: rep-vol: success
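Before mounting the replicated volume, you can confirm its configuration and that its brick processes actually came online:

```bash
# Confirm the replicated volume's configuration and that its bricks are online
gluster volume info rep-vol
gluster volume status rep-vol
```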
Mount the replicated volume on the client
[root@manager yum.repos.d]# mkdir /text/rep
[root@manager yum.repos.d]# mount.glusterfs node3:rep-vol /text/rep

Create the distributed striped volume

# create the distributed striped volume
[root@node1 yum.repos.d]# gluster volume create dis-stripe stripe 2 node1:/data/sdd1 node2:/data/sdd1 node3:/data/sdd1 node4:/data/sdd1 force
volume create: dis-stripe: success: please start the volume to access data
# start the distributed striped volume
[root@node1 yum.repos.d]# gluster volume start dis-stripe
volume start: dis-stripe: success
Mount on the client
[root@manager yum.repos.d]# mkdir /text/dis-strip
[root@manager yum.repos.d]# mount.glusterfs node4:dis-stripe /text/dis-strip/

Create the distributed replicated volume

# create the distributed replicated volume
[root@node2 yum.repos.d]# gluster volume create dis-rep replica 2 node1:/data/sde1 node2:/data/sde1 node3:/data/sde1 node4:/data/sde1 force
volume create: dis-rep: success: please start the volume to access data
# start the distributed replicated volume
[root@node2 yum.repos.d]# gluster volume start dis-rep
volume start: dis-rep: success
# view the existing volumes
[root@node2 yum.repos.d]# gluster volume list
dis-rep
dis-stripe
dis-vol
rep-vol
stripe-vol

Mount it on the client

[root@manager yum.repos.d]# mkdir /text/dis-rep
[root@manager yum.repos.d]# mount.glusterfs node3:dis-rep /text/dis-rep/

Testing the volumes
Now that the volumes are created and mounted, let's test them. First create five 40 MB files on the client:

[root@manager yum.repos.d]# dd if=/dev/zero of=/demo1.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.0175819 s, 2.4 GB/s
[root@manager yum.repos.d]# dd if=/dev/zero of=/demo2.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.269746 s, 155 MB/s
[root@manager yum.repos.d]# dd if=/dev/zero of=/demo3.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.34134 s, 123 MB/s
[root@manager yum.repos.d]# dd if=/dev/zero of=/demo4.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 1.55335 s, 27.0 MB/s
[root@manager yum.repos.d]# dd if=/dev/zero of=/demo5.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 1.47974 s, 28.3 MB/s

Then copy the five files to the different volumes:

[root@manager yum.repos.d]# cp /demo* /text/dis
[root@manager yum.repos.d]# cp /demo* /text/strip
[root@manager yum.repos.d]# cp /demo* /text/rep
[root@manager yum.repos.d]# cp /demo* /text/dis-strip
[root@manager yum.repos.d]# cp /demo* /text/dis-rep

View the volume contents
View the distributed volume
View the striped volume
View the replicated volume
View the distributed striped volume
View the distributed replicated volume
(A sketch of how to check the brick directories directly on each node follows.)
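To see where the copied demo files actually landed, list the brick directories on the storage nodes directly. The paths below follow the bricks used when the volumes were created above (distributed volume on node1/node2 sdb1, replicated volume on node3/node4 sdb1, distributed replicated volume on sde1 of all four nodes):

```bash
# On node1 / node2: the distributed volume's bricks split the five files
ls -lh /data/sdb1
# On node3 / node4: the replicated volume keeps a full copy on each brick
ls -lh /data/sdb1
# On all four nodes: the distributed replicated volume's bricks
ls -lh /data/sde1
```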
Failure testing
Shut down the node2 server and observe the results on the client:

[root@manager yum.repos.d]# ls /text/
dis  dis-rep  dis-strip  rep  strip
[root@manager yum.repos.d]# ls /text/dis
demo1.log  demo2.log  demo3.log  demo4.log
[root@manager yum.repos.d]# ls /text/dis-rep
demo1.log  demo2.log  demo3.log  demo4.log  demo5.log
[root@manager yum.repos.d]# ls /text/dis-strip/
demo5.log
[root@manager yum.repos.d]# ls /text/rep/
demo1.log  demo2.log  demo3.log  demo4.log  demo5.log
[root@manager yum.repos.d]# ls /text/strip/
[root@manager yum.repos.d]#

Conclusions:
- The distributed volume is missing the demo5.log file
- The striped volume is inaccessible
- The replicated volume is accessed normally
- The distributed striped volume is missing files
- The distributed replicated volume is accessed normally
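Once node2 is powered back on, the volumes that carry replicas (dis-rep in this layout) will self-heal; you can watch that process with the heal subcommands:

```bash
# After the failed node rejoins, list files that still need healing
gluster volume heal dis-rep info
# Optionally trigger a full heal instead of waiting for the self-heal daemon
gluster volume heal dis-rep full
```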
Delete a volume
To delete a volume, stop it first; when deleting a volume, every host in the trusted storage pool must be online.
# stop the volume
[root@manager yum.repos.d]# gluster volume stop dis-vol
# delete the volume
[root@manager yum.repos.d]# gluster volume delete dis-vol

Access control

# reject a specific client
[root@manager yum.repos.d]# gluster volume set dis-vol auth.reject 192.168.45.13
# allow only a specific client
[root@manager yum.repos.d]# gluster volume set dis-vol auth.allow 192.168.45.133
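You can confirm that the access-control options were applied by looking at the volume's reconfigured options; the single-option query is available in recent GlusterFS releases:

```bash
# "Options Reconfigured" in the volume info lists auth.allow / auth.reject
gluster volume info dis-vol
# Query a single option directly
gluster volume get dis-vol auth.allow
```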