

GlusterFs distributed file system cluster




GlusterFS is an open-source distributed file system.

GlusterFS is mainly composed of storage servers, clients, and an NFS/Samba storage gateway.

GlusterFS features: scalability and high performance, high availability, a global unified namespace, elastic volume management, and standards-based protocols.

GlusterFS terms: Brick (storage block), Volume (logical volume), FUSE (kernel module), VFS, and Glusterd (background management process).

How GlusterFS works:

The advantages of the elastic HASH algorithm are as follows:

(1) It ensures that data is evenly distributed across the Bricks.

(2) It removes the dependence on a metadata server, which eliminates the single point of failure and the access bottleneck.
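
To make the idea concrete, here is a minimal sketch (illustration only; GlusterFS actually uses a Davies-Meyer hash with per-brick hash ranges, not md5 modulo the brick count): hashing the file name and mapping the result onto one of the Bricks is enough to place files deterministically and evenly without consulting any metadata server.

FILE=demo1.log; BRICKS=4                            # hypothetical file name and number of Bricks
HASH=$(printf '%s' "$FILE" | md5sum | cut -c1-8)    # hash the file name (md5 used only for illustration)
echo "$FILE -> brick $(( 0x$HASH % BRICKS ))"       # map the hash value onto one of the Bricks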

The seven GlusterFS volume types: distributed volume, stripe volume, replication volume, distributed stripe volume, distributed replication volume, stripe replication volume, and distributed stripe replication volume.

Among the seven GlusterFS volume types, the ones that provide redundancy are: replication volume, distributed replication volume, stripe replication volume, and distributed stripe replication volume.

The experiment uses four servers and one client. The steps are as follows.

The information for the four servers is as follows (each node adds four disks, partitioned and mounted identically):

node1  192.168.1.1  /dev/sdb1 on /b3, /dev/sdc1 on /c4, /dev/sdd1 on /d5, /dev/sde1 on /e6
node2  192.168.1.2  same disk layout as node1
node3  192.168.1.3  same disk layout as node1
node4  192.168.1.4  same disk layout as node1

Do the following on all nodes

Open four virtual machines and add disks according to the table above. Partition the disks with fdisk, format them with mkfs, create the corresponding mount directories, mount the formatted disks to those directories, and finally add entries to the /etc/fstab configuration file so the mounts persist after a reboot.

The commands are listed below, taking node1 as an example.

(1) Create the mount directories: mkdir -p /b3 /c4 /d5 /e6

(2) Partition all of the hard drives. Take sdb as an example: fdisk /dev/sdb

(3) Format: # mkfs.ext4 /dev/sdb1

(4) Mount: # mount /dev/sdb1 /b3

(5) Permanent mounting:

# vim /etc/fstab
/dev/sdb1 /b3 ext4 defaults 0 0
/dev/sdc1 /c4 ext4 defaults 0 0
/dev/sdd1 /d5 ext4 defaults 0 0
/dev/sde1 /e6 ext4 defaults 0 0
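
After editing /etc/fstab, the entries can be applied and checked without a reboot (an optional verification step using standard commands):

# mount -a
# df -h /b3 /c4 /d5 /e6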

Second, the server configuration is as follows:

1. Turn off the firewall and SELinux on all nodes

# systemctl stop firewalld

# systemctl disable firewalld

# setenforce 0
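
Note that setenforce 0 only puts SELinux into permissive mode until the next reboot. If you also want the setting to persist across reboots, you can additionally edit /etc/selinux/config, for example:

# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config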

2. Configure the hosts file (on all hosts)

# vim /etc/hosts

192.168.1.1 node1

192.168.1.2 node2

192.168.1.3 node3

192.168.1.4 node4

3. Install the software on all servers (GlusterFS)

# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma

4. Start the glusterd service (on all nodes)

# systemctl start glusterd

# systemctl enable glusterd
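
Optionally, confirm on each node that the daemon is actually running before continuing:

# systemctl status glusterd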

5. Add the nodes to the trusted storage pool (executed on node1): add node1 through node4

[root@node1 ~] # gluster peer probe node1

Peer probe: success. Probe on localhost not needed

[root@node1 ~] # gluster peer probe node2

Peer probe: success.

[root@node1 ~] # gluster peer probe node3

Peer probe: success.

[root@node1 ~] # gluster peer probe node4

Peer probe: success.
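
You can then verify the trusted storage pool from node1; the other three nodes should all show up as connected peers:

[root@node1 ~] # gluster peer status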

6. Create the volumes (all volumes are created on node1)

(1) Create a distributed volume:

[root@node1 ~] # gluster volume create dis-volume node1:/e6 node2:/e6 force

Volume create: dis-volume: success: please start the volume to access data

Start the volume:

[root@node1 ~] # gluster volume start dis-volume

Volume start: dis-volume: success

No volume type was specified, so a distributed volume is created by default.
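
To confirm the volume type and the bricks it uses, the gluster volume info command listed later under the maintenance commands can also be run against a single volume, for example:

[root@node1 ~] # gluster volume info dis-volume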

(2) Create a stripe volume:

[root@node1 ~] # gluster volume create stripe-volume stripe 2 node1:/d5 node2:/d5 force

Volume create: stripe-volume: success: please start the volume to access data

Start the volume:

[root@node1 ~] # gluster volume start stripe-volume

Volume start: stripe-volume: success

(3) Create a replication volume:

[root@node1 ~] # gluster volume create rep-volume replica 2 node3:/d5 node4:/d5 force

Volume create: rep-volume: success: please start the volume to access data

Start the volume:

[root@node1 ~] # gluster volume start rep-volume

Volume start: rep-volume: success

(4) Create a distributed stripe volume:

[root@node1 ~] # gluster volume create dis-stripe stripe 2 node1:/b3 node2:/b3 node3:/b3 node4:/b3 force

Volume create: dis-stripe: success: please start the volume to access data

Start the volume:

[root@node1 ~] # gluster volume start dis-stripe

Volume start: dis-stripe: success

(5) Create a distributed replication volume:

[root@node1 ~] # gluster volume create dis-rep replica 2 node1:/c4 node2:/c4 node3:/c4 node4:/c4 force

Volume create: dis-rep: success: please start the volume to access data

Start the volume:

[root@node1 ~] # gluster volume start dis-rep

Volume start: dis-rep: success

Third, the client configuration is as follows:

1. Install the client software

[root@node5 yum.repos.d] # yum -y install glusterfs glusterfs-fuse

2. Create the mount directories

[root@node5 ~] # mkdir -p /test/{dis,stripe,rep,dis_and_stripe,dis_and_rep}

3. Modify the hosts file

[root@node5 ~] # vim /etc/hosts

192.168.1.1 node1

192.168.1.2 node2

192.168.1.3 node3

192.168.1.6 node6

192.168.1.4 node4

192.168.1.5 node5

4. Mount the gluster file system

[root@node5 test] # mount -t glusterfs node1:dis-volume /test/dis

[root@node5 test] # mount -t glusterfs node1:stripe-volume /test/stripe

[root@node5 test] # mount -t glusterfs node1:rep-volume /test/rep

[root@node5 test] # mount -t glusterfs node1:dis-stripe /test/dis_and_stripe

[root@node5 test] # mount -t glusterfs node1:dis-rep /test/dis_and_rep
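
As an optional check, df -hT on the client should now show all five volumes mounted with the fuse.glusterfs file system type:

[root@node5 test] # df -hT | grep glusterfs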

Fourth, test the GlusterFS file system

1. Write files to the volumes (the first five commands each generate a 43 MB file; the last five commands copy the files to the volumes)

[root@node5 ~] # dd if=/dev/zero of=demo1.log bs=43M count=1

[root@node5 ~] # dd if=/dev/zero of=demo2.log bs=43M count=1

[root@node5 ~] # dd if=/dev/zero of=demo3.log bs=43M count=1

[root@node5 ~] # dd if=/dev/zero of=demo4.log bs=43M count=1

[root@node5 ~] # dd if=/dev/zero of=demo5.log bs=43M count=1

[root@node5 ~] # cp demo*.log /test/dis

[root@node5 ~] # cp demo*.log /test/stripe

[root@node5 ~] # cp demo*.log /test/rep

[root@node5 ~] # cp demo*.log /test/dis_and_stripe

[root@node5 ~] # cp demo*.log /test/dis_and_rep
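
Seen from the client, every mounted volume should now contain all five 43 MB files, no matter how they are split or replicated on the underlying bricks; a quick optional check:

[root@node5 ~] # ls -lh /test/dis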

2. View the file distribution

(1) View the distribution of distributed volume files

[root@node1 ~] # ll -h /e6
total 130M
-rw-r--r-- 2 root root 43M Oct 25 19:35 demo1.log
-rw-r--r-- 2 root root 43M Oct 25 19:35 demo2.log
-rw-r--r-- 2 root root 43M Oct 25 19:35 demo3.log
drwx------ 2 root root 16K Sep 23 09:07 lost+found

[root@node2 ~] # ll -h /e6
total 86M
-rw-r--r-- 2 root root 43M Oct 25 19:35 demo4.log
-rw-r--r-- 2 root root 43M Oct 25 19:36 demo5.log
[root@node2 ~] #

(2) View the file distribution of stripe volume

[root@node1 ~] # ll -h /d5
total 108M
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo1.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo2.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo3.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo4.log
-rw-r--r-- 2 root root 22M Oct 25 19:36 demo5.log
drwx------ 2 root root 16K Sep 23 09:06 lost+found

[root@node2 ~] # ll -h /d5
total 108M
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo1.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo2.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo3.log
-rw-r--r-- 2 root root 22M Oct 25 19:34 demo4.log
-rw-r--r-- 2 root root 22M Oct 25 19:36 demo5.log
drwx------ 2 root root 16K Sep 23 09:14 lost+found

(3) View the file distribution of replication volumes

[root@node3 ~] # ll -h /d5
total 216M
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo1.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo2.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo3.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo4.log
-rw-r--r-- 2 root root 43M Oct 25 19:36 demo5.log
drwx------ 2 root root 16K Sep 23 09:59 lost+found

[root@node4 ~] # ll -h /d5
total 216M
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo1.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo2.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo3.log
-rw-r--r-- 2 root root 43M Oct 25 19:34 demo4.log
-rw-r--r-- 2 root root 43M Oct 25 19:36 demo5.log
drwx------ 2 root root 16K Sep 23 10:08 lost+found

The experiment is over.

Fifth, other maintenance commands

(1) View the GlusterFS volumes:

[root@node1 ~] # gluster volume list
dis-rep
dis-stripe
dis-volume
rep-volume
stripe-volume

(2) View the status of all volumes

[root@node1 ~] # gluster volume status

(3) View the information of all volumes

[root@node1 ~] # gluster volume info

(4) Set access control for the volume

[root@node1 ~] # gluster volume set dis-rep auth.allow 192.168.1.*

Volume set: success
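
Two more maintenance commands that are often needed, although they were not run in this experiment: stopping a volume and deleting it (a volume must be stopped before it can be deleted).

[root@node1 ~] # gluster volume stop dis-stripe

[root@node1 ~] # gluster volume delete dis-stripe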
