GlusterFS is an open source distributed file system consisting of storage servers, clients, and an NFS/Samba storage gateway. It features high scalability and high performance, a global unified namespace, elastic volume management, and standards-based protocols.

GlusterFS overview of key terms:

- Brick: a storage node
- Volume: a logical volume made up of Bricks
- FUSE: the kernel module and user-space interaction module
- VFS: the virtual file system interface
- glusterd: the management service daemon
Take a look at this picture of the architecture: the top layer is virtualization management and the applications; below it sit modules such as cache, read-ahead, stripe, and proxy, exposed through the API layer in the middle; RDMA handles transport, much like a driver; and the bottom layer is the real devices, i.e. the hardware.

GlusterFS workflow and the elastic HASH algorithm: a 32-bit integer space is divided into N contiguous subspaces, and each subspace corresponds to one Brick. The advantages of the elastic HASH algorithm are that data is distributed evenly across the Bricks and that there is no dependence on a metadata server, which removes both the single point of failure and the access bottleneck. Because each node stores only part of the data, the HASH calculation is what identifies which node holds which file. For example, a GlusterFS volume with four Brick nodes divides the 2^32 range evenly into four subspaces, one per Brick.
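To make the idea concrete, here is a minimal bash sketch (an illustration only, not GlusterFS's actual DHT code; the Brick count and hash value are assumed) of how a 32-bit space split into equal ranges maps a hash to a Brick:

```bash
#!/bin/bash
# Illustration only: split the 32-bit hash space into N equal ranges,
# one per Brick, and find which Brick an example hash value lands in.
N=4                                   # number of Bricks (assumed for the example)
RANGE=$(( (1 << 32) / N ))            # size of each contiguous subspace
HASH=$(( 0x9e3779b1 ))                # example 32-bit hash of a file name (assumed value)
BRICK=$(( HASH / RANGE ))             # 0-based index of the Brick that owns this hash
echo "hash $HASH -> Brick$(( BRICK + 1 )) (range $(( BRICK * RANGE ))-$(( (BRICK + 1) * RANGE - 1 )))"
```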
The HASH algorithm locates the Brick node whose subspace contains a file's hash and stores the data there; this is how data placement is decided for each node.

GlusterFS volume types: distributed volume, stripe volume, replication volume, distributed stripe volume, and distributed replication volume.

A distributed volume does not split files into blocks. It supports underlying file systems such as ext3, ext4, ZFS, and XFS. **Characteristics of distributed volumes**: files are distributed across different servers with no redundancy; expanding the volume is easier and cheaper; a single point of failure causes data loss, so protection depends on the underlying layer. Because the files it stores remain intact, one way to work around this is to build a mirror (replicated) volume as a backup.

A stripe volume divides a file into N blocks (N = number of stripe nodes) by offset and stores them round-robin across the Brick servers. It performs well for large files but has no redundancy, similar to RAID 0. **Characteristics**: data is split into smaller chunks and distributed to different stripes in the Brick server group, which reduces load and speeds up access; there is no data redundancy.

A replication volume keeps one or more copies of the same file. Disk utilization is lower, and if the storage space of the nodes is inconsistent, the capacity of the smallest node becomes the total capacity of the volume (the bucket effect). **Characteristics**: every server in the volume keeps a complete copy; the number of copies can be chosen when the volume is created; at least two Brick servers are required; it provides redundancy.

A distributed stripe volume combines distributed and stripe volumes. It is mainly used for large file access and requires at least 4 servers.

A distributed replication volume combines distributed and replication volumes and is used where redundancy is needed.

## GFS distributed file system cluster project

## Cluster environment

![insert picture description here](https://img-blog.csdnimg.cn/20191218153045975.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L1BhcmhvaWE=,size_16,color_FFFFFF,)

| Volume name | Volume type | Size | Brick |
|-|-|-|-|
| dis-volume | distributed volume | 40G | node1 (/b1), node2 (/b1) |
| stripe-volume | striped volume | 40G | node1 (/c1), node2 (/c1) |
| rep-volume | replicated volume | 20G | node3 (/b1), node4 (/b1) |
| dis-stripe | distributed striped volume | 40G | node1 (/d1), node2 (/d1), node3 (/d1), node4 (/d1) |
| dis-rep | distributed replication volume | 20G | node1 (/e1), node2 (/e1), node3 (/e1), node4 (/e1) |

## Experimental preparation

#### 1. Add 4 disks to each of the four servers

![insert picture description here](https://img-blog.csdnimg.cn/2019121816001274.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L1BhcmhvaWE=,size_16,color_FFFFFF,t_70)

#### 2. Rename the servers to node1, node2, node3, node4

```bash
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# su
```

#### 3. Format the disks on the four servers and mount them
Here we use a script to perform the formatting and mounting.
```bash
# enter the /opt directory
[root@node1 ~]# cd /opt
# disk formatting and mounting script
[root@node1 opt]# vim a.sh
#!/bin/bash
echo "the disks exist list:"
fdisk -l | grep 'disk /dev/sd[a-z]'
echo "=================================================="
PS3="chose which disk you want to create:"
select VAR in `ls /dev/sd* | grep -o 'sd[b-z]' | uniq` quit
do
    case $VAR in
    sda)
        fdisk -l /dev/sda
        break ;;
    sd[b-z])
        # create the partition non-interactively
        echo -e "n\np\n\n\n\nw\n" | fdisk /dev/$VAR
        # make the filesystem
        mkfs.xfs -i size=512 /dev/${VAR}1 &> /dev/null
        # mount the filesystem
        mkdir -p /data/${VAR}1 &> /dev/null
        echo -e "/dev/${VAR}1 /data/${VAR}1 xfs defaults 0 0\n" >> /etc/fstab
        mount -a &> /dev/null
        break ;;
    quit)
        break ;;
    *)
        echo "wrong disk,please check again" ;;
    esac
done
# give the script execute permission
[root@node1 opt]# chmod +x a.sh
```
Push the script to the other three servers via scp
```bash
scp a.sh root@192.168.45.134:/opt
scp a.sh root@192.168.45.130:/opt
scp a.sh root@192.168.45.136:/opt
```

Execute the script on all four servers to complete the formatting and mounting.
The following is just a sample run.
```bash
[root@node1 opt]# ./a.sh
the disks exist list:
==================================================
1) sdb
2) sdc
3) sdd
4) sde
5) quit
chose which disk you want to create:1    // select the disk
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x37029e96.
Command (m for help): Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): Partition number (1-4, default 1): First sector (2048-41943039, default 2048): Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set
Command (m for help): The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
```

Check the mount on the four servers.
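One quick way to confirm the result on each node (assuming the /data/sd?1 mount points created by the script above) is:

```bash
# run on each of the four nodes
df -hT | grep /data
```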
#### 4. Set up the hosts file
Modify it on node1 first.
```bash
# add the following at the end of the file
vim /etc/hosts
192.168.45.133 node1
192.168.45.130 node2
192.168.45.134 node3
192.168.45.136 node4
```
Push the hosts file to the other servers and the client via scp.
```bash
# push the /etc/hosts file to the other hosts
[root@node1 opt]# scp /etc/hosts root@192.168.45.130:/etc/hosts
root@192.168.45.130's password:
hosts                              100%  242    23.6KB/s   00:00
[root@node1 opt]# scp /etc/hosts root@192.168.45.134:/etc/hosts
root@192.168.45.134's password:
hosts                              100%  242   146.0KB/s   00:00
[root@node1 opt]# scp /etc/hosts root@192.168.45.136:/etc/hosts
root@192.168.45.136's password:
hosts
```
Check the push status on other servers
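A simple check (using the hostnames defined above) is to confirm that the file arrived and that the names resolve, for example:

```bash
# run on node2, node3, node4 and the client
cat /etc/hosts
ping -c 1 node1
```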
Turn off the firewall on all servers and the client:

```bash
[root@node1 ~]# systemctl stop firewalld.service
[root@node1 ~]# setenforce 0
```

Set up yum repositories on the client and the servers:

```bash
# enter the yum configuration path
[root@node1 ~]# cd /etc/yum.repos.d/
# create an empty folder
[root@node1 yum.repos.d]# mkdir abc
# move all CentOS-* files into abc
[root@node1 yum.repos.d]# mv CentOS-* abc
# create a private yum source
[root@node1 yum.repos.d]# vim GLFS.repo
[demo]
name=demo
baseurl=http://123.56.134.27/demo
gpgcheck=0
enable=1
[gfsrepo]
name=gfsrepo
baseurl=http://123.56.134.27/gfsrepo
gpgcheck=0
enable=1
# reload the yum source
[root@node1 yum.repos.d]# yum list
```

Install the necessary software packages:

```bash
[root@node1 yum.repos.d]# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
```
Do the same on the other three servers.
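To confirm the packages landed on every machine, a quick check such as the following can be used (exact versions depend on the repo):

```bash
rpm -qa | grep glusterfs
glusterfs --version
```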
Start glusterd on the four servers and enable it at boot:

```bash
[root@node1 yum.repos.d]# systemctl start glusterd.service
[root@node1 yum.repos.d]# systemctl enable glusterd.service
```

Add the node information:

```bash
[root@node1 yum.repos.d]# gluster peer probe node2
peer probe: success.
[root@node1 yum.repos.d]# gluster peer probe node3
peer probe: success.
[root@node1 yum.repos.d]# gluster peer probe node4
peer probe: success.
```
View node information on other servers
```bash
[root@node1 yum.repos.d]# gluster peer status
```
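Each probed node should show as connected. A more compact view of the same trusted pool (optional) is:

```bash
[root@node1 yum.repos.d]# gluster pool list
```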
Create the distributed volume:

```bash
# create the distributed volume
[root@node1 yum.repos.d]# gluster volume create dis-vol node1:/data/sdb1 node2:/data/sdb1 force
# check its information
[root@node1 yum.repos.d]# gluster volume info dis-vol
# view the existing volumes
[root@node1 yum.repos.d]# gluster volume list
# start the volume
[root@node1 yum.repos.d]# gluster volume start dis-vol
```
Mount it on the client:

```bash
# recursively create the mount point
[root@manager yum.repos.d]# mkdir -p /text/dis
# mount the volume just created onto that mount point
[root@manager yum.repos.d]# mount.glusterfs node1:dis-vol /text/dis
```
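The mount above does not persist across reboots. If persistence is wanted, one common approach (a sketch, using the mount point created above) is an fstab entry:

```bash
# optional: make the mount permanent
echo "node1:dis-vol /text/dis glusterfs defaults,_netdev 0 0" >> /etc/fstab
mount -a
```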
Create the stripe volume:

```bash
# create the volume
[root@node1 yum.repos.d]# gluster volume create stripe-vol stripe 2 node1:/data/sdc1 node2:/data/sdc1 force
# view the existing volumes
[root@node1 yum.repos.d]# gluster volume list
dis-vol
stripe-vol
# start the stripe volume
[root@node1 yum.repos.d]# gluster volume start stripe-vol
volume start: stripe-vol: success
```

Mount it on the client:

```bash
# create the mount point
[root@manager yum.repos.d]# mkdir /text/strip
# mount the stripe volume
[root@manager yum.repos.d]# mount.glusterfs node1:/stripe-vol /text/strip/
```
Check the mount status
Create the replication volume:

```bash
# create the replication volume
[root@node1 yum.repos.d]# gluster volume create rep-vol replica 2 node3:/data/sdb1 node4:/data/sdb1 force
volume create: rep-vol: success: please start the volume to access data
# start the replication volume
[root@node1 yum.repos.d]# gluster volume start rep-vol
volume start: rep-vol: success
```
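Before mounting, it can be worth confirming that the volume really came up as a 2-way replica, for example:

```bash
[root@node1 yum.repos.d]# gluster volume info rep-vol
```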
Mount the replication volume on the client:
```bash
[root@manager yum.repos.d]# mkdir /text/rep
[root@manager yum.repos.d]# mount.glusterfs node3:rep-vol /text/rep
```
View mount
Create the distributed stripe volume:

```bash
# create the distributed stripe volume
[root@node1 yum.repos.d]# gluster volume create dis-stripe stripe 2 node1:/data/sdd1 node2:/data/sdd1 node3:/data/sdd1 node4:/data/sdd1 force
volume create: dis-stripe: success: please start the volume to access data
# start the distributed stripe volume
[root@node1 yum.repos.d]# gluster volume start dis-stripe
volume start: dis-stripe: success
```
Mount on the client
```bash
[root@manager yum.repos.d]# mkdir /text/dis-strip
[root@manager yum.repos.d]# mount.glusterfs node4:dis-stripe /text/dis-strip/
```

Create the distributed replication volume:

```bash
# create the distributed replication volume
[root@node2 yum.repos.d]# gluster volume create dis-rep replica 2 node1:/data/sde1 node2:/data/sde1 node3:/data/sde1 node4:/data/sde1 force
volume create: dis-rep: success: please start the volume to access data
# start the distributed replication volume
[root@node2 yum.repos.d]# gluster volume start dis-rep
volume start: dis-rep: success
# view the existing volumes
[root@node2 yum.repos.d]# gluster volume list
dis-rep
dis-stripe
dis-vol
rep-vol
stripe-vol
```

Mount it on the client:

```bash
[root@manager yum.repos.d]# mkdir /text/dis-rep
[root@manager yum.repos.d]# mount.glusterfs node3:dis-rep /text/dis-rep/
```
View mount
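On the client, all five mounts can be checked at once, for example:

```bash
[root@manager yum.repos.d]# df -hT | grep glusterfs
```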
----- Above, we have finished creating and mounting the volumes -----
Now let's test the volumes. First create five 40M files on the client:

```bash
[root@manager yum.repos.d]# dd if=/dev/zero of=/demo1.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.0175819 s, 2.4 GB/s
[root@manager yum.repos.d]# dd if=/dev/zero of=/demo2.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.269746 s, 155 MB/s
[root@manager yum.repos.d]# dd if=/dev/zero of=/demo3.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.34134 s, 123 MB/s
[root@manager yum.repos.d]# dd if=/dev/zero of=/demo4.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 1.55335 s, 27.0 MB/s
[root@manager yum.repos.d]# dd if=/dev/zero of=/demo5.log bs=1M count=40
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 1.47974 s, 28.3 MB/s
```

Then copy the five files to the different volumes:

```bash
[root@manager yum.repos.d]# cp /demo* /text/dis
[root@manager yum.repos.d]# cp /demo* /text/strip
[root@manager yum.repos.d]# cp /demo* /text/rep
[root@manager yum.repos.d]# cp /demo* /text/dis-strip
[root@manager yum.repos.d]# cp /demo* /text/dis-rep
```

View the volume contents.

View the distributed volume
View the stripe volume

View the replication volume

View the distributed stripe volume

View the distributed replication volume
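To see how the files were actually placed, the brick directories from the table above can also be listed directly on the nodes, for example (paths are the ones used when the volumes were created):

```bash
# on node1 and node2: each brick holds roughly half of every 40M file on the stripe volume
ls -lh /data/sdc1
# on node3 and node4: full copies of every file on the replication volume
ls -lh /data/sdb1
```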
Failure testing: shut down the node2 server and observe the results on the client:

```bash
[root@manager yum.repos.d]# ls /text/
dis  dis-rep  dis-strip  rep  strip
[root@manager yum.repos.d]# ls /text/dis
demo1.log  demo2.log  demo3.log  demo4.log
[root@manager yum.repos.d]# ls /text/dis-rep
demo1.log  demo2.log  demo3.log  demo4.log  demo5.log
[root@manager yum.repos.d]# ls /text/dis-strip/
demo5.log
[root@manager yum.repos.d]# ls /text/rep/
demo1.log  demo2.log  demo3.log  demo4.log  demo5.log
[root@manager yum.repos.d]# ls /text/strip/
[root@manager yum.repos.d]#
```
The results show that:
- The distributed volume is missing the demo5.log file
- The stripe volume cannot be accessed
- The replication volume is accessed normally
- The distributed stripe volume is missing files
- The distributed replication volume is accessed normally

Deleting a volume
To delete a volume, you must stop it first, and the nodes in the storage pool must be online when you delete it.
```bash
# stop the volume
[root@manager yum.repos.d]# gluster volume stop dis-vol
# delete the volume
[root@manager yum.repos.d]# gluster volume delete dis-vol
```

Access control:

```bash
# reject: deny this host
[root@manager yum.repos.d]# gluster volume set dis-vol auth.reject 192.168.45.13
# allow: permit only this host
[root@manager yum.repos.d]# gluster volume set dis-vol auth.allow 192.168.45.133
```
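To open the volume back up to all clients afterwards, the allow list can be reset, for example:

```bash
[root@manager yum.repos.d]# gluster volume set dis-vol auth.allow "*"
```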