How to install and configure GlusterFS

2025-04-05 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

In this article the editor shares how to install and configure GlusterFS. Most readers may not know much about it, so this walkthrough is shared for reference; I hope you learn a lot from reading it. Let's get started!

GlusterFS is an open-source distributed file system, acquired by Red Hat in 2011. It offers high scalability, high performance, high availability, and elastic scaling, and its design has no metadata server, so GlusterFS has no single-point-of-failure risk. For more information, see the official website: www.gluster.org.

Deployment environment:

OS: CentOS release 6.5 (Final) x64

Server:

C1:192.168.242.132

C2:192.168.242.133

C3:192.168.242.134

C4:192.168.242.135

Hosts:

192.168.242.132 c1

192.168.242.133 c2

192.168.242.134 c3

192.168.242.135 c4

Specific operations:

Execute the following on c1/c2/c3/c4:

[root@c1 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

[root@c1 yum.repos.d]# yum install -y glusterfs glusterfs-server glusterfs-fuse

[root@c1 yum.repos.d]# /etc/init.d/glusterd start

Starting glusterd: [ OK ]

[root@c1 yum.repos.d]# chkconfig glusterd on
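Since the same four steps run on every node, they can also be driven from one host. A minimal sketch, assuming passwordless root ssh to c1..c4 (an assumption, not part of the article's setup); set DRY_RUN=1 to preview the remote commands without executing them:

```shell
#!/bin/sh
# Sketch: run the install/enable steps on all four nodes from one shell.
# Assumes passwordless root ssh to c1..c4 (an assumption, not shown above).
# DRY_RUN=1 prints each remote command instead of executing it.
provision_all() {
  for h in c1 c2 c3 c4; do
    for cmd in \
      "wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo" \
      "yum install -y glusterfs glusterfs-server glusterfs-fuse" \
      "/etc/init.d/glusterd start" \
      "chkconfig glusterd on"
    do
      if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "ssh $h \"$cmd\""
      else
        ssh "$h" "$cmd"
      fi
    done
  done
}
```

Running `DRY_RUN=1 provision_all` prints the sixteen commands that would be issued, which is a cheap way to sanity-check the loop before letting it touch the nodes.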

Configure the cluster on c1

[root@c1 ~]# gluster peer probe c1

Peer probe: success. Probe on localhost not needed

[root@c1 ~]# gluster peer probe c2

Peer probe: success.

[root@c1 ~]# gluster peer probe c3

Peer probe: success.

[root@c1 ~]# gluster peer probe c4

Peer probe: success.

If c1 ends up identified by its IP address in the peer list, communication problems may occur later in cluster operation. We can fix it by detaching the IP-based entry and re-probing by hostname:

[root@c3 ~]# gluster peer status

Number of Peers: 3

Hostname: 192.168.242.132

Uuid: 6e8d6880-ec36-4331-a806-2e8fb4fda7be

State: Peer in Cluster (Connected)

Hostname: c2

Uuid: 9a722f50-911e-4181-823d-572296640486

State: Peer in Cluster (Connected)

Hostname: c4

Uuid: 1ee3588a-8a16-47ff-ba59-c0285a2a95bd

State: Peer in Cluster (Connected)

[root@c3 ~]# gluster peer detach 192.168.242.132

Peer detach: success

[root@c3 ~]# gluster peer probe c1

Peer probe: success.

[root@c3 ~]# gluster peer status

Number of Peers: 3

Hostname: c2

Uuid: 9a722f50-911e-4181-823d-572296640486

State: Peer in Cluster (Connected)

Hostname: c4

Uuid: 1ee3588a-8a16-47ff-ba59-c0285a2a95bd

State: Peer in Cluster (Connected)

Hostname: c1

Uuid: 6e8d6880-ec36-4331-a806-2e8fb4fda7be

State: Peer in Cluster (Connected)
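The by-IP check above can be automated. A minimal sketch (`flag_ip_peers` is a hypothetical helper, not a gluster command): it filters `gluster peer status` output for peers whose Hostname field is an IPv4 address, which are the candidates for detach and re-probe:

```shell
#!/bin/sh
# Sketch: print peers that `gluster peer status` lists by IPv4 address
# rather than by hostname. Pipe the status output into this function.
flag_ip_peers() {
  grep '^Hostname:' | awk '{print $2}' \
    | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}
```

Usage on a node: `gluster peer status | flag_ip_peers` — any address it prints should be detached and probed again by name, as shown above.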

Create the cluster volumes on c1

[root@c1 ~]# gluster volume create datavolume1 replica 2 transport tcp c1:/usr/local/share/datavolume1 c2:/usr/local/share/datavolume1 c3:/usr/local/share/datavolume1 c4:/usr/local/share/datavolume1 force

Volume create: datavolume1: success: please start the volume to access data

[root@c1 ~]# gluster volume create datavolume2 replica 2 transport tcp c1:/usr/local/share/datavolume2 c2:/usr/local/share/datavolume2 c3:/usr/local/share/datavolume2 c4:/usr/local/share/datavolume2 force

Volume create: datavolume2: success: please start the volume to access data

[root@c1 ~]# gluster volume create datavolume3 replica 2 transport tcp c1:/usr/local/share/datavolume3 c2:/usr/local/share/datavolume3 c3:/usr/local/share/datavolume3 c4:/usr/local/share/datavolume3 force

Volume create: datavolume3: success: please start the volume to access data

[root@c1 ~]# gluster volume start datavolume1

Volume start: datavolume1: success

[root@c1 ~]# gluster volume start datavolume2

Volume start: datavolume2: success

[root@c1 ~]# gluster volume start datavolume3

Volume start: datavolume3: success

[root@c1 ~]# gluster volume info

Volume Name: datavolume1

Type: Distributed-Replicate

Volume ID: 819d3dc4-2a3a-4342-b49b-3b7961ef624f

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: c1:/usr/local/share/datavolume1

Brick2: c2:/usr/local/share/datavolume1

Brick3: c3:/usr/local/share/datavolume1

Brick4: c4:/usr/local/share/datavolume1

Volume Name: datavolume2

Type: Distributed-Replicate

Volume ID: d9ebaee7-ef91-4467-9e44-217a63635bfc

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: c1:/usr/local/share/datavolume2

Brick2: c2:/usr/local/share/datavolume2

Brick3: c3:/usr/local/share/datavolume2

Brick4: c4:/usr/local/share/datavolume2

Volume Name: datavolume3

Type: Distributed-Replicate

Volume ID: 1e8b21db-f377-468b-b76e-868edde93f15

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: c1:/usr/local/share/datavolume3

Brick2: c2:/usr/local/share/datavolume3

Brick3: c3:/usr/local/share/datavolume3

Brick4: c4:/usr/local/share/datavolume3
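With `replica 2`, bricks pair up in the order they are listed: c1+c2 form one replica pair and c3+c4 the other, so each file lands on exactly one pair. The three near-identical create commands above can be generated by a small helper (a sketch; `make_create_cmd` is hypothetical):

```shell
#!/bin/sh
# Sketch: build the `gluster volume create` line for a replica-2 volume
# across c1..c4. With replica 2, consecutive bricks form replica pairs:
# (c1,c2) and (c3,c4).
make_create_cmd() {
  vol="$1"
  bricks=""
  for h in c1 c2 c3 c4; do
    bricks="$bricks $h:/usr/local/share/$vol"
  done
  echo "gluster volume create $vol replica 2 transport tcp$bricks force"
}
```

On c1 one could then review the output of `make_create_cmd datavolume1` and run it by hand, which keeps the brick ordering (and therefore the replica pairing) consistent across all three volumes.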

Deployment of client environment

Client OS: CentOS 6.5 x64, with the same /etc/hosts entries added.

[root@c5 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

[root@c5 ~]# yum install -y glusterfs glusterfs-fuse

[root@c5 ~]# mkdir -p /mnt/{datavolume1,datavolume2,datavolume3}

[root@c5 ~]# mount -t glusterfs -o ro c1:datavolume1 /mnt/datavolume1/

[root@c5 ~]# mount -t glusterfs -o ro c1:datavolume2 /mnt/datavolume2/

[root@c5 ~]# mount -t glusterfs -o ro c1:datavolume3 /mnt/datavolume3/
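These mounts do not survive a reboot. Entries along these lines could be added to the client's /etc/fstab (a sketch; `_netdev` defers mounting until the network is up):

```
c1:datavolume1  /mnt/datavolume1  glusterfs  defaults,_netdev,ro  0 0
c1:datavolume2  /mnt/datavolume2  glusterfs  defaults,_netdev,ro  0 0
c1:datavolume3  /mnt/datavolume3  glusterfs  defaults,_netdev,ro  0 0
```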


[root@c5 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root

                       38G  840M   36G   3% /

tmpfs                 242M     0  242M   0% /dev/shm

/dev/sda1             485M   32M  429M   7% /boot

c1:datavolume1         57G  2.4G   52G   5% /mnt/datavolume1

c1:datavolume2         57G  2.4G   52G   5% /mnt/datavolume2

c1:datavolume3         57G  2.4G   52G   5% /mnt/datavolume3

Client test

[root@c5 ~]# umount /mnt/datavolume1/

[root@c5 ~]# mount -t glusterfs c1:datavolume1 /mnt/datavolume1/

[root@c5 ~]# touch /mnt/datavolume1/test.txt

[root@c5 ~]# ls /mnt/datavolume1/test.txt

/mnt/datavolume1/test.txt

[root@c2 ~]# ls -al /usr/local/share/datavolume1/

total 16

drwxr-xr-x. 3 root root 4096 May 15 03:50 .

drwxr-xr-x. 8 root root 4096 May 15 02:28 ..

drw-------. 6 root root 4096 May 15 03:50 .glusterfs

-rw-r--r--. 2 root root 0 May 20 2014 test.txt

[root@c1 ~]# ls -al /usr/local/share/datavolume1/

total 16

drwxr-xr-x. 3 root root 4096 May 15 03:50 .

drwxr-xr-x. 8 root root 4096 May 15 02:28 ..

drw-------. 6 root root 4096 May 15 03:50 .glusterfs

-rw-r--r--. 2 root root 0 May 20 2014 test.txt
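Note that test.txt shows up on c1 and c2 but not on c3 or c4: its replica pair is (c1,c2). A quick way to see which bricks hold a copy of a given file is sketched below (assumes passwordless ssh from the client to the nodes, an assumption not made by the article; DRY_RUN=1 previews the probe commands):

```shell
#!/bin/sh
# Sketch: list the bricks holding a copy of a file in datavolume1.
# Assumes passwordless ssh to c1..c4 (an assumption, not shown above).
# DRY_RUN=1 prints the probe commands instead of running them.
find_file_bricks() {
  f="$1"
  for h in c1 c2 c3 c4; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "ssh $h test -e /usr/local/share/datavolume1/$f"
    else
      ssh "$h" test -e "/usr/local/share/datavolume1/$f" && echo "$h"
    fi
  done
}
```

For the example above, `find_file_bricks test.txt` would be expected to print the two nodes of the file's replica pair.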

Delete a GlusterFS volume:

gluster volume stop datavolume1

gluster volume delete datavolume1

Remove a node from the GlusterFS cluster:

gluster peer detach c4

Access Control:

gluster volume set datavolume1 auth.allow 192.168.242.*,192.168.241.*

Add a GlusterFS node:

gluster peer probe c6

gluster peer probe c7

gluster volume add-brick datavolume1 c6:/usr/local/share/datavolume1 c7:/usr/local/share/datavolume1

Migrate GlusterFS volume data:

gluster volume remove-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 start

gluster volume remove-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 status

gluster volume remove-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 commit

Data redistribution:

gluster volume rebalance datavolume1 start

gluster volume rebalance datavolume1 status

gluster volume rebalance datavolume1 stop

Repair GlusterFS volume data (for example, if c1 goes down):

gluster volume replace-brick datavolume1 c1:/usr/local/share/datavolume1 c6:/usr/local/share/datavolume1 commit force

gluster volume heal datavolume1 full

That is all the content of the article "How to install and configure GlusterFS". Thank you for reading! I hope this sharing has helped you; if you want to learn more, you are welcome to follow the industry information channel!
