
How to install and set up GlusterFS on CentOS 6.4


This article explains how to install and set up GlusterFS on CentOS 6.4. It walks through a practical deployment step by step, covering the situations you are most likely to run into along the way. I hope you read it carefully and get something out of it!

Because of its good scalability, GlusterFS is used by many teams. It can solve problems such as network storage and redundant backup. So how do you install GlusterFS under Linux? This article uses CentOS 6.4 as an example to show how to install and configure it.

Environment introduction:

OS: CentOS 6.4 x86_64 Minimal

Servers: sc2-log1, sc2-log2, sc2-log3, sc2-log4

Client: sc2-ads15

Specific steps:

1. Install the GlusterFS packages on sc2-log{1-4}:

The code is as follows

# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

# yum install -y glusterfs-3.4.2-1.el6 glusterfs-server-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6

# /etc/init.d/glusterd start

# chkconfig glusterd on
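Before moving on, it is worth confirming that the packages installed and that the management daemon is actually running. A quick sanity check (the version string will match whatever packages you pulled in):

The code is as follows

# glusterfs --version

# /etc/init.d/glusterd status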

2. Configure the entire GlusterFS cluster on sc2-log1:

The code is as follows

[root@sc2-log1 ~]# gluster peer probe sc2-log1

peer probe: success: on localhost not needed

[root@sc2-log1 ~]# gluster peer probe sc2-log2

peer probe: success

[root@sc2-log1 ~]# gluster peer probe sc2-log3

peer probe: success

[root@sc2-log1 ~]# gluster peer probe sc2-log4

peer probe: success

[root@sc2-log1 ~]# gluster peer status

Number of Peers: 3

Hostname: sc2-log2
Port: 24007
Uuid: 399973af-bae9-4326-9cbd-b5b05e5d2927
State: Peer in Cluster (Connected)

Hostname: sc2-log3
Port: 24007
Uuid: 833a7b8d-e3b3-4099-baf9-416ee7213337
State: Peer in Cluster (Connected)

Hostname: sc2-log4
Port: 24007
Uuid: 54bf115a-0119-4021-af80-7a6bca137fd9
State: Peer in Cluster (Connected)
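If a probe hangs or fails, the usual culprit is a firewall between the nodes: glusterd listens on TCP port 24007, and in the 3.4 series each brick process listens on a port from 49152 upward. A minimal iptables sketch for CentOS 6, assuming the stock iptables service and only a handful of bricks per node:

The code is as follows

# iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT

# iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT

# service iptables save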

3. Create the data storage directories on sc2-log{1-4}:

The code is as follows

# mkdir -p /usr/local/share/{models,geoip,wurfl}

# ls -l /usr/local/share/

total 24
drwxr-xr-x 2 root root 4096 Apr  1 12:19 geoip
drwxr-xr-x 2 root root 4096 Apr  1 12:19 models
drwxr-xr-x 2 root root 4096 Apr  1 12:19 wurfl
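These brick directories sit on the root filesystem, which is why the volume create commands in the next step need the trailing force. In a production deployment you would normally give each brick a dedicated partition; a sketch, assuming xfsprogs is installed and a spare disk sits at /dev/sdb1 (the device name and mount point are illustrative):

The code is as follows

# mkfs.xfs -i size=512 /dev/sdb1

# mkdir -p /data/gluster

# mount /dev/sdb1 /data/gluster

# echo "/dev/sdb1 /data/gluster xfs defaults 0 0" >> /etc/fstab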

4. Create the GlusterFS volumes on sc2-log1:

The code is as follows

[root@sc2-log1 ~]# gluster volume create models replica 4 sc2-log1:/usr/local/share/models sc2-log2:/usr/local/share/models sc2-log3:/usr/local/share/models sc2-log4:/usr/local/share/models force

volume create: models: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume create geoip replica 4 sc2-log1:/usr/local/share/geoip sc2-log2:/usr/local/share/geoip sc2-log3:/usr/local/share/geoip sc2-log4:/usr/local/share/geoip force

volume create: geoip: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume create wurfl replica 4 sc2-log1:/usr/local/share/wurfl sc2-log2:/usr/local/share/wurfl sc2-log3:/usr/local/share/wurfl sc2-log4:/usr/local/share/wurfl force

volume create: wurfl: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume start models

volume start: models: success

[root@sc2-log1 ~]# gluster volume start geoip

volume start: geoip: success

[root@sc2-log1 ~]# gluster volume start wurfl

volume start: wurfl: success

[root@sc2-log1 ~]# gluster volume info

Volume Name: models
Type: Replicate
Volume ID: b29b22bd-6d8c-45c0-b199-91fa5a76801f
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/models
Brick2: sc2-log2:/usr/local/share/models
Brick3: sc2-log3:/usr/local/share/models
Brick4: sc2-log4:/usr/local/share/models

Volume Name: geoip
Type: Replicate
Volume ID: 69b0caa8-7c23-4712-beae-6f536b1cffa3
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/geoip
Brick2: sc2-log2:/usr/local/share/geoip
Brick3: sc2-log3:/usr/local/share/geoip
Brick4: sc2-log4:/usr/local/share/geoip

Volume Name: wurfl
Type: Replicate
Volume ID: c723a99d-eeab-4865-819a-c0926cf7b88a
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/wurfl
Brick2: sc2-log2:/usr/local/share/wurfl
Brick3: sc2-log3:/usr/local/share/wurfl
Brick4: sc2-log4:/usr/local/share/wurfl
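In the output above, replica 4 means every brick holds a complete copy of the volume: usable capacity equals a single brick, and each write goes to all four nodes. If capacity matters more than keeping four copies, the same bricks could instead be laid out as a distributed-replicated volume with two copies of each file; shown here only as an alternative sketch, not part of this setup:

The code is as follows

# gluster volume create models replica 2 sc2-log1:/usr/local/share/models sc2-log2:/usr/local/share/models sc2-log3:/usr/local/share/models sc2-log4:/usr/local/share/models force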

5. Deploy the client on sc2-ads15 and mount the GlusterFS file systems:

The code is as follows

[sc2-ads15][root@sc2-ads15 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

[sc2-ads15][root@sc2-ads15 ~]# yum install -y glusterfs-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6

[sc2-ads15][root@sc2-ads15 ~]# mkdir -p /mnt/{models,geoip,wurfl}

[sc2-ads15][root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:models /mnt/models/

[sc2-ads15][root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:geoip /mnt/geoip/

[sc2-ads15][root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:wurfl /mnt/wurfl/

[sc2-ads15][root@sc2-ads15 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_t-lv_root
                       59G  7.7G   48G  14% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/xvda1            485M   33M  428M   8% /boot
sc2-log3:models        98G  8.6G   85G  10% /mnt/models
sc2-log3:geoip         98G  8.6G   85G  10% /mnt/geoip
sc2-log3:wurfl         98G  8.6G   85G  10% /mnt/wurfl
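These mounts will not survive a reboot. To make them persistent, fstab entries along the following lines can be added on the client (a sketch; _netdev delays mounting until the network is up, and the glusterfs mount helper also accepts backupvolfile-server=<host> so the client can fetch the volume layout from another node if sc2-log3 is down):

The code is as follows

sc2-log3:models /mnt/models glusterfs defaults,ro,_netdev 0 0

sc2-log3:geoip /mnt/geoip glusterfs defaults,ro,_netdev 0 0

sc2-log3:wurfl /mnt/wurfl glusterfs defaults,ro,_netdev 0 0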

6. Test read and write availability:

Write data via the sc2-ads15 mount point (remounting read-write first, since the volume was mounted read-only above):

The code is as follows

[sc2-ads15][root@sc2-ads15 ~]# umount /mnt/models

[sc2-ads15][root@sc2-ads15 ~]# mount -t glusterfs sc2-log3:models /mnt/models/

[sc2-ads15][root@sc2-ads15 ~]# echo "This is sc2-ads15" > /mnt/models/hello.txt

[sc2-ads15][root@sc2-ads15 ~]# mkdir /mnt/models/testdir

View it in the sc2-log1 data directory:

[root@sc2-log1 ~]# ls /usr/local/share/models/

hello.txt testdir

Result: the data was written successfully.

Write data directly to the sc2-log1 data directory:

The code is as follows

[root@sc2-log1 ~]# echo "This is sc2-log1" > /usr/local/share/models/hello.2.txt

[root@sc2-log1 ~]# mkdir /usr/local/share/models/test2

View on the sc2-ads15 mount point:

[sc2-ads15][root@sc2-ads15 ~]# ls /mnt/models

hello.txt testdir

Result: the write did not propagate; files created directly in the brick directory are not visible on the client.

Write data to the sc2-log1 mount point:

The code is as follows

[root@sc2-log1 ~]# mount -t glusterfs sc2-log1:models /mnt/models/

[root@sc2-log1 ~]# echo "This is sc2-log1" > /mnt/models/hello.3.txt

[root@sc2-log1 ~]# mkdir /mnt/models/test3

View on the sc2-ads15 mount point:

[sc2-ads15][root@sc2-ads15 models]# ls /mnt/models

hello.2.txt hello.3.txt hello.txt test2 test3 testdir

Result: the data was written successfully, and the files that had previously failed to propagate (hello.2.txt and test2) now appear as well, because accessing the volume through a mount point triggered GlusterFS's self-heal.

Final conclusion:

Writing data directly into a brick's data directory bypasses GlusterFS, so the other nodes are never notified and the data is not synchronized.

The correct approach is to perform all reads and writes through a GlusterFS mount point.
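If files have already been written behind GlusterFS's back, the replication layer can be asked to reconcile the bricks and to report what is still out of sync. A sketch using the heal commands available in the 3.4 series:

The code is as follows

# gluster volume heal models

# gluster volume heal models info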

7. Notes on other common operations:

Delete a GlusterFS volume:

The code is as follows

# gluster volume stop models

# gluster volume delete models
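Deleting a volume does not remove the data on the bricks, and GlusterFS leaves extended attributes behind that will block reusing the same directory for a new volume. To recycle a brick path, something like the following is typically needed on each node (a sketch using this tutorial's brick path):

The code is as follows

# setfattr -x trusted.glusterfs.volume-id /usr/local/share/models

# setfattr -x trusted.gfid /usr/local/share/models

# rm -rf /usr/local/share/models/.glusterfs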

Detach a GlusterFS node from the trusted pool (here, sc2-log4):

The code is as follows

# gluster peer detach sc2-log4

ACL access control (restrict which client networks may mount the volume):

The code is as follows

# gluster volume set models auth.allow 10.60.1.*,10.70.1.*
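auth.allow takes a comma-separated list of addresses or wildcard patterns. To confirm the option took effect, it appears under "Options Reconfigured" in the volume info output, and it can be cleared again with volume reset; a quick sketch:

The code is as follows

# gluster volume info models

# gluster volume reset models auth.allow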

Add a GlusterFS node:

The code is as follows

# gluster peer probe sc2-log5

# gluster peer probe sc2-log6

# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster
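Note that for a replicated volume, bricks must be added in multiples of the replica count. Also, expanding a distributed volume does not move existing files onto the new bricks by itself; a rebalance does that (a sketch):

The code is as follows

# gluster volume rebalance models start

# gluster volume rebalance models status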

Migrate GlusterFS volume data (move the models brick from sc2-log1 to sc2-log5):

The code is as follows

# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start

# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status

# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit

Repair GlusterFS volume data (for example, when sc2-log1 has gone down):

The code is as follows

# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force

# gluster volume heal models full
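The full heal runs in the background; gluster volume heal models info (shown earlier) lists the entries still waiting to be healed, and the volume status output confirms the replacement brick is online:

The code is as follows

# gluster volume status models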

This is the end of "How to install and set up GlusterFS on CentOS 6.4". Thank you for reading!
