
Distributed File System: GlusterFS Best Practices

2025-04-09 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/02 Report

1. Background

GlusterFS is an open-source distributed file system with powerful scale-out capabilities: it can support several petabytes of storage and thousands of clients. GlusterFS aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA interconnects and manages the data under a single global namespace. It is built on a stackable user-space design and delivers excellent performance for a wide variety of workloads.

GlusterFS supports standard clients running standard applications over any IP network.

2. Advantages

* Linear scale-out and high performance

* High availability

* Global unified namespace

* Elastic hashing algorithm and elastic volume management

* Based on standard protocols

* Full software implementation (Software Only)

* User-space implementation (User Space)

* Modular stackable architecture (Modular Stackable Architecture)

* Data stored in native formats (Data Stored in Native Formats)

* Metadata-free service design (No Metadata with the Elastic Hash Algorithm)
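The last point deserves a note: GlusterFS locates a file by hashing its path rather than by consulting a metadata server, which removes a common bottleneck and single point of failure. A toy sketch of the idea in plain shell (this is not GlusterFS's actual algorithm, which assigns hash ranges per directory; it only illustrates that placement is computed, not looked up):

```shell
#!/bin/sh
# Toy illustration of hash-based placement: each file name is hashed
# and mapped to one of two bricks. No metadata service is consulted;
# any client can compute the same placement independently.
for f in a.log b.log c.log d.log; do
    h=$(printf '%s' "$f" | cksum | cut -d ' ' -f 1)
    echo "$f -> brick$(( h % 2 + 1 ))"
done
```

Because every client derives the same brick from the same name, lookups stay O(1) no matter how many files or bricks the volume holds.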

3. Environment

Server_1 CentOS 7.2.1511 (Core) 192.168.60.201

Server_2 CentOS 7.2.1511 (Core) 192.168.60.202

4. Installation

* server_1: install centos-release-gluster

[root@server_1 ~]# yum install centos-release-gluster -y

* server_1: install glusterfs-server

[root@server_1 ~]# yum install glusterfs-server -y

* server_1: start the glusterd service

[root@server_1 ~]# systemctl start glusterd

* server_2: install centos-release-gluster

[root@server_2 ~]# yum install centos-release-gluster -y

* server_2: install glusterfs-server

[root@server_2 ~]# yum install glusterfs-server -y

* server_2: start the glusterd service

[root@server_2 ~]# systemctl start glusterd
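With glusterd started on both nodes, it is also worth enabling the service at boot so the cluster comes back after a reboot. A minimal sketch using standard systemd commands (run on each server):

```shell
# Enable glusterd at boot, then confirm it is currently active.
systemctl enable glusterd
systemctl is-active glusterd
```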

5. Establish a trusted storage pool [probing from one side is sufficient]

* server_1 probes server_2 to form the pool

[root@server_1 ~]# gluster peer probe 192.168.60.202
peer probe: success.

* Check that the trusted pool was established

[root@server_1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.60.202
Uuid: 84d98fd8-4500-46d3-9d67-8bafacb5898b
State: Peer in Cluster (Connected)

[root@server_2 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.60.201
Uuid: 20722daf-35c4-422c-99ff-6b0a41d07eb4
State: Peer in Cluster (Connected)

6. Create a distributed volume

* Create a data storage directory on server_1 and server_2

[root@server_1 ~]# mkdir -p /data/exp1
[root@server_2 ~]# mkdir -p /data/exp2

* Create a distributed volume named test-volume

[root@server_1 ~]# gluster volume create test-volume 192.168.60.201:/data/exp1 192.168.60.202:/data/exp2 force
volume create: test-volume: success: please start the volume to access data

* View volume information

[root@server_1 ~]# gluster volume info test-volume

Volume Name: test-volume
Type: Distribute
Volume ID: 457ca1ff-ac55-4d59-b827-fb80fc0f4184
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.60.201:/data/exp1
Brick2: 192.168.60.202:/data/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

[root@server_2 ~]# gluster volume info test-volume

Volume Name: test-volume
Type: Distribute
Volume ID: 457ca1ff-ac55-4d59-b827-fb80fc0f4184
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.60.201:/data/exp1
Brick2: 192.168.60.202:/data/exp2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

* Start the volume

[root@server_1 ~]# gluster volume start test-volume
volume start: test-volume: success
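To see the distribution behaviour, one can create a few files through a client mount and then inspect each brick directory; the mount point and file names below are illustrative:

```shell
# On a client with test-volume mounted at /mnt/g1:
touch /mnt/g1/file{1..10}

# On each server, list the brick directory. Each file lands on exactly
# one brick, so the two listings together should cover all ten files.
ls /data/exp1    # run on server_1
ls /data/exp2    # run on server_2
```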

7. Create a replicated volume [comparable to RAID 1]

* Create a data storage directory on server_1 and server_2

[root@server_1 ~]# mkdir -p /data/exp3
[root@server_2 ~]# mkdir -p /data/exp4

* Create a replicated volume named repl-volume

[root@server_1 ~]# gluster volume create repl-volume replica 2 transport tcp 192.168.60.201:/data/exp3 192.168.60.202:/data/exp4 force
volume create: repl-volume: success: please start the volume to access data

* View volume information

[root@server_1 ~]# gluster volume info repl-volume

Volume Name: repl-volume
Type: Replicate
Volume ID: 1924ed7b-73d4-45a9-af6d-fd19abb384cd
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.60.201:/data/exp3
Brick2: 192.168.60.202:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

[root@server_2 ~]# gluster volume info repl-volume

Volume Name: repl-volume
Type: Replicate
Volume ID: 1924ed7b-73d4-45a9-af6d-fd19abb384cd
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.60.201:/data/exp3
Brick2: 192.168.60.202:/data/exp4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

* Start the volume

[root@server_1 ~]# gluster volume start repl-volume
volume start: repl-volume: success
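A quick way to confirm the replica 2 behaviour is to write one file through a client mount and check that it appears on both bricks; the paths below are illustrative:

```shell
# On a client with repl-volume mounted at /mnt/g2:
echo hello > /mnt/g2/hello.txt

# Unlike the distributed volume, the file should now exist on BOTH
# bricks, since replica 2 keeps a full copy on each server.
ls /data/exp3    # run on server_1: hello.txt
ls /data/exp4    # run on server_2: hello.txt
```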

8. Create a striped volume [comparable to RAID 0; note that striped volumes are deprecated in recent GlusterFS releases]

* Create a data storage directory on server_1 and server_2

[root@server_1 ~]# mkdir -p /data/exp5
[root@server_2 ~]# mkdir -p /data/exp6

* Create a striped volume named raid0-volume

[root@server_1 ~]# gluster volume create raid0-volume stripe 2 transport tcp 192.168.60.201:/data/exp5 192.168.60.202:/data/exp6 force
volume create: raid0-volume: success: please start the volume to access data

* View volume information

[root@server_1 ~]# gluster volume info raid0-volume

Volume Name: raid0-volume
Type: Stripe
Volume ID: 13b36adb-7e8b-46e2-8949-f54eab5356f6
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.60.201:/data/exp5
Brick2: 192.168.60.202:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

[root@server_2 ~]# gluster volume info raid0-volume

Volume Name: raid0-volume
Type: Stripe
Volume ID: 13b36adb-7e8b-46e2-8949-f54eab5356f6
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.60.201:/data/exp5
Brick2: 192.168.60.202:/data/exp6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

* Start the volume

[root@server_1 ~]# gluster volume start raid0-volume
volume start: raid0-volume: success

9. Client usage

* Install the GlusterFS client packages (the mount.glusterfs helper is provided by glusterfs-fuse, not glusterfs-cli)

[root@client ~]# yum install glusterfs-fuse -y

* Create the mount directories

[root@client ~]# mkdir /mnt/g1 /mnt/g2 /mnt/g3

* Mount the volume

[root@client ~]# mount.glusterfs 192.168.60.201:/test-volume /mnt/g1
[root@client ~]# mount.glusterfs 192.168.60.202:/repl-volume /mnt/g2
[root@client ~]# mount.glusterfs 192.168.60.201:/raid0-volume /mnt/g3
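These manual mounts do not survive a reboot. For persistent mounts, /etc/fstab entries can be used instead of mount.glusterfs; a sketch (the _netdev option defers mounting until the network is up):

```shell
# /etc/fstab entries, one per volume:
192.168.60.201:/test-volume   /mnt/g1   glusterfs   defaults,_netdev   0 0
192.168.60.202:/repl-volume   /mnt/g2   glusterfs   defaults,_netdev   0 0
192.168.60.201:/raid0-volume  /mnt/g3   glusterfs   defaults,_netdev   0 0
```

Note that the server address in each entry is only the node used for the initial volume fetch; after that, the client talks to all bricks directly.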

10. Expand a volume

* Create a storage directory

[root@server_1 ~]# mkdir -p /data/exp9

* Expand the volume

[root@server_1 ~]# gluster volume add-brick test-volume 192.168.60.201:/data/exp9 force
volume add-brick: success

* Rebalance

[root@server_1 ~]# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 008c3f28-d8a1-4f05-b63c-4543c51050ec
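Rebalancing runs in the background and moves existing files onto the new brick according to the updated hash layout; it can be polled until it finishes:

```shell
# Check rebalance progress; repeat until the status column reads
# "completed" for every node in the pool.
gluster volume rebalance test-volume status
```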

11. Summary

Technology is driven by demand; the difference lies not in the technology itself, but in the business it serves.
