
GFS Distributed File System Cluster and Its Construction


Main points of content

1. Brief introduction to GlusterFS:

2. Volume type of GlusterFS:

3. GlusterFS deployment:

Step 1: first mount the disks of each virtual machine to facilitate operation. You can use the following script

Step 2: operation on four node nodes

Step 3: time synchronization

Step 4: add a storage trust pool

Step 5: create GlusterFS Volume

Step 6: client configuration

Step 7: test

1. Brief introduction to GlusterFS:

GFS is a scalable distributed file system for large, distributed applications that access large amounts of data. It runs on inexpensive commodity hardware, provides fault tolerance, and can deliver high-performance service to a large number of users.

Open source distributed file system

It is composed of storage servers, clients, and an NFS/Samba storage gateway.

(1) characteristics of GlusterFS:

Scalability and high performance

High availability

Global uniform namespace

Elastic volume management

Based on standard protocol

(2) Modular stack architecture:

1. Modular, stackable structure

2. Complex functions are realized by combining modules.

3. GlusterFS workflow:

4. Elastic HASH algorithm:

A 32-bit integer is computed from the file name with the HASH algorithm

The 32-bit integer space is divided into N contiguous subspaces, each corresponding to a Brick

Advantages of the elastic HASH algorithm:

It ensures that data is evenly distributed across the Bricks

It removes the dependence on a metadata server, eliminating the single point of failure and the server access bottleneck.
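The brick-selection idea can be illustrated with a minimal shell sketch (an illustration only: the real GlusterFS DHT uses a Davies-Meyer hash over the file name, and the brick list below is hypothetical):

# simplified illustration of hash-based brick selection (not GlusterFS's real hash)
BRICKS=(node1:/data/sdb1 node2:/data/sdb1 node3:/data/sdb1 node4:/data/sdb1)
FILE_NAME="demo1.log"
HASH=$(echo -n "$FILE_NAME" | cksum | awk '{print $1}')   # 32-bit checksum of the file name
INDEX=$(( HASH % ${#BRICKS[@]} ))                         # map the hash into one of the N brick subspaces
echo "$FILE_NAME -> ${BRICKS[$INDEX]}"

Because only the file name needs to be hashed, any client can locate a file without asking a metadata server.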

2. Volume types of GlusterFS:

(1) Distributed volumes:

The file is not divided into blocks.

HASH values are saved in extended file attributes

The underlying file systems supported are ext3, ext4, ZFS, XFS, etc.

Features:

Files are distributed on different servers and do not have redundancy

The volume can be expanded easily and cheaply

A single point of failure can result in data loss

Relies on the underlying data protection mechanisms.

(2) Stripe volumes:

The file is divided into N blocks by offset (N = number of stripe nodes), which are stored round-robin across the Brick Server nodes.

Performance is particularly outstanding when storing large files

No redundancy, similar to RAID 0

Features:

Data is divided into smaller chunks and distributed across the stripes in the brick server pool

Distribution reduces the load, and the smaller chunks speed up access

No data redundancy

(3) Replication volumes:

One or more copies of the same file are kept

Replication has lower disk utilization because the copies also consume space

If the storage space on the nodes differs, the capacity of the smallest node is taken as the total capacity of the volume.

Features:

All servers in the volume keep a complete copy

The number of copies is specified by the client when the volume is created

At least two brick servers are required

It is disaster-tolerant.

(4) Distributed stripe volumes:

Combines the functions of distributed and stripe volumes

Mainly used for large file access processing

At least 4 servers are required.

(5) Distributed replication volumes:

Combines the functions of distributed and replication volumes

For situations where redundancy is required

3. GlusterFS deployment:

Environment preparation:

Five virtual machines: one as the client and four as nodes, with 4 disks (20 GB each is enough) added to each virtual machine.

Role (IP address): space size
node1 (192.168.220.172): 80 GB (20 GB × 4)
node2 (192.168.220.131): 80 GB (20 GB × 4)
node3 (192.168.220.140): 80 GB (20 GB × 4)
node4 (192.168.220.136): 80 GB (20 GB × 4)
client (192.168.220.137): 80 GB (20 GB × 4)

Step 1: first mount the disks of each virtual machine to facilitate operation. You can use the following script

vim disk.sh    # one-click disk-mounting script

#!/bin/bash
echo "the disks exist list:"
fdisk -l | grep -i 'disk /dev/sd[a-z]'
echo "=================================================="
PS3="choose which disk you want to create:"
select VAR in `ls /dev/sd* | grep -o 'sd[b-z]' | uniq` quit
do
    case $VAR in
    sda)
        fdisk -l /dev/sda
        break ;;
    sd[b-z])
        # create a partition
        echo -e "n\np\n\n\n\nw" | fdisk /dev/$VAR
        # make the filesystem
        mkfs.xfs -i size=512 /dev/${VAR}1 &> /dev/null
        # mount the filesystem
        mkdir -p /data/${VAR}1 &> /dev/null
        echo "/dev/${VAR}1 /data/${VAR}1 xfs defaults 0 0" >> /etc/fstab
        mount -a &> /dev/null
        break ;;
    quit)
        break ;;
    *)
        echo "wrong disk, please check again" ;;
    esac
done
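A simple way to run it on each node (assuming the script was saved as disk.sh in the current directory) is:

chmod +x disk.sh
./disk.sh    # pick sdb, sdc, sdd or sde from the menu; run it once per disk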

Step 2: operation on four node nodes

(1) modify the hostname (node1, node2, node3, node4) and turn off the firewall.

hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname node3
hostnamectl set-hostname node4
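The firewall commands are not shown in the snippet above; following the client step later in this article, they would be run on each node as well:

systemctl stop firewalld    # stop the firewall
setenforce 0                # put SELinux into permissive mode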

(2) Edit the hosts file and add the hostname and IP address:

vim /etc/hosts    # insert at the last line
192.168.220.172 node1
192.168.220.131 node2
192.168.220.140 node3
192.168.220.136 node4

(3) Set up the yum repository and install GlusterFS:

cd /opt/
mkdir /abc
mount.cifs //192.168.10.157/MHA /abc    # mount the remote share locally
cd /etc/yum.repos.d/
mkdir bak
mv Cent* bak/    # move the original repo files into the newly created folder
vim GLFS.repo    # create a new repo file
[GLFS]
name=glfs
baseurl=file:///abc/gfsrepo
gpgcheck=0
enabled=1

(4) install the software package:

yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

(5) start the service:

systemctl start glusterd
systemctl status glusterd

Step 3: time synchronization

ntpdate ntp1.aliyun.com    # time synchronization (run on every node)
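Optionally (an assumption, not part of the original steps), a root cron entry can keep the clocks in sync on CentOS:

echo "*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com" >> /var/spool/cron/root    # resync every 30 minutes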

Step 4: add a storage trust pool by probing the other three nodes from one host

This is done on node1:

gluster peer probe node2
gluster peer probe node3
gluster peer probe node4
gluster peer status    # view the status of all nodes
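As an extra check (assuming a GlusterFS version that provides it), the pool membership can also be listed:

gluster pool list    # node2, node3 and node4 should all show up as Connected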

Step 5: create GlusterFS Volume

(1) create a distributed volume:

gluster volume create dis-vol node1:/data/sdb1 node2:/data/sdb1 force    # created from one disk each on node1 and node2; dis-vol is the volume name; force forces the creation
gluster volume start dis-vol    # start the volume
gluster volume info dis-vol     # view the status

(2) create a stripe volume:

gluster volume create stripe-vol stripe 2 node1:/data/sdc1 node2:/data/sdc1 force
gluster volume start stripe-vol
gluster volume info stripe-vol

(3) create a replication volume:

gluster volume create rep-vol replica 2 node3:/data/sdb1 node4:/data/sdb1 force
gluster volume start rep-vol
gluster volume info rep-vol

(4) create a distributed stripe volume (at least 4 nodes):

gluster volume create dis-stripe stripe 2 node1:/data/sdd1 node2:/data/sdd1 node3:/data/sdd1 node4:/data/sdd1 force
gluster volume start dis-stripe
gluster volume info dis-stripe

(5) create a distributed replication volume (at least 4 nodes):

gluster volume create dis-rep replica 2 node1:/data/sde1 node2:/data/sde1 node3:/data/sde1 node4:/data/sde1 force
gluster volume start dis-rep
gluster volume info dis-rep

Step 6: client configuration

(1) turn off the firewall

systemctl stop firewalld
setenforce 0

(2) Configure the GFS repository and install the packages:

cd /opt/
mkdir /abc
mount.cifs //192.168.10.157/MHA /abc    # mount the remote share locally
cd /etc/yum.repos.d/
vim GLFS.repo    # create a new repo file
[GLFS]
name=glfs
baseurl=file:///abc/gfsrepo
gpgcheck=0
enabled=1
yum -y install glusterfs glusterfs-fuse    # install the packages

(3) modify hosts file:

vim /etc/hosts
192.168.220.172 node1
192.168.220.131 node2
192.168.220.140 node3
192.168.220.136 node4

(4) Create temporary mount points and mount the volumes:

mkdir -p /text/dis    # recursively create a mount point
mount.glusterfs node1:dis-vol /text/dis/    # mount the distributed volume
mkdir /text/strip
mount.glusterfs node1:stripe-vol /text/strip/    # mount the stripe volume
mkdir /text/rep
mount.glusterfs node3:rep-vol /text/rep/    # mount the replication volume
mkdir /text/dis-str
mount.glusterfs node2:dis-stripe /text/dis-str/    # mount the distributed stripe volume
mkdir /text/dis-rep
mount.glusterfs node4:dis-rep /text/dis-rep/    # mount the distributed replication volume
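To make the mounts survive a reboot, fstab entries of the following form could be added (a sketch, not part of the original steps; one line per volume):

node1:dis-vol  /text/dis  glusterfs  defaults,_netdev  0 0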

Use df -hT to view the mount information:

Step 7: test

(1) create 5 40m files:

dd if=/dev/zero of=/demo1.log bs=1M count=40
dd if=/dev/zero of=/demo2.log bs=1M count=40
dd if=/dev/zero of=/demo3.log bs=1M count=40
dd if=/dev/zero of=/demo4.log bs=1M count=40
dd if=/dev/zero of=/demo5.log bs=1M count=40

(2) copy the 5 files you just created to different volumes:

cp /demo* /text/dis
cp /demo* /text/strip
cp /demo* /text/rep/
cp /demo* /text/dis-str
cp /demo* /text/dis-rep

(3) Check how the files are distributed across the volumes (run on the nodes): ll -h /data/sdb1

1. Distributed volumes:

It can be seen that every file is complete.

2. Stripe volume:

Each file is split into two halves and stored across the bricks.

3. Replication volume:

All files are copied completely and stored.

4. Distributed stripe volume:

5. Distributed replication volumes:

(4) failure test:

Now shut down the second node server to simulate downtime, and then check each volume on the client:
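For example (the exact method is not specified in the original), you could power node2 off, or just stop its gluster daemon:

poweroff                       # run on node2 to simulate downtime
# or: systemctl stop glusterd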

You will find:

Distributed volume: all files are still there

Replication volume: all files are still there

Distributed stripe volume: only demo5.log remains; the other 4 files are missing

Distributed replication volume: all files are still there

Stripe volume: all files are lost.

(5) other operations:

1. Delete the volume (stop first, then delete):

gluster volume stop <volume-name>
gluster volume delete <volume-name>

2. Blacklist and whitelist settings:

gluster volume set <volume-name> auth.reject 192.168.220.100    # refuse mounting from that host
gluster volume set <volume-name> auth.allow 192.168.220.100     # allow that host to mount
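To lift the restriction afterwards, the allow list can be reset to everyone (a sketch; "*" is the default value):

gluster volume set <volume-name> auth.allow "*"    # allow all hosts again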
