
GFS basic configuration installation


Basic Overview

GFS is a scalable distributed file system designed for large, distributed applications that access large amounts of data. It runs on inexpensive commodity hardware, provides fault tolerance, and can deliver high-performance service to a large number of users.

Basic deployment

1. Experiment preparation

Name          Role      IP address
centos7-1     node1     192.168.142.66
centos7-2     node2     192.168.142.77
centos7-3     node3     192.168.142.132
centos7-4     node4     192.168.142.136
centos7-min   client    192.168.142.172

Start installation

(1) Add hard drives for experimental purposes
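Each storage node gets four extra 20 GB disks. The listings below show the result after those disks have been partitioned, formatted, and mounted; a minimal sketch of that preparation for one disk, to be repeated for sdc/sdd/sde (the xfs file system type is an assumption; the partition and mount-point names follow the listings below):

// partition, format, and mount one of the added disks (example: /dev/sdb)
[root@node1 ~]# fdisk /dev/sdb            // create a single primary partition, /dev/sdb1
[root@node1 ~]# mkfs.xfs /dev/sdb1        // format the partition (xfs assumed here)
[root@node1 ~]# mkdir -p /mnt/sdb1
[root@node1 ~]# mount /dev/sdb1 /mnt/sdb1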

Centos7-1

/dev/sdb1  20G  33M  20G  1%  /mnt/sdb1
/dev/sdc1  20G  33M  20G  1%  /mnt/sdc1
/dev/sdd1  20G  33M  20G  1%  /mnt/sdd1
/dev/sde1  20G  33M  20G  1%  /mnt/sde1

Centos7-2

/dev/sdb1  20G  33M  20G  1%  /mnt/sdb1
/dev/sdc1  20G  33M  20G  1%  /mnt/sdc1
/dev/sdd1  20G  33M  20G  1%  /mnt/sdd1
/dev/sde1  20G  33M  20G  1%  /mnt/sde1

Centos7-3

/dev/sdb1  20G  33M  20G  1%  /mnt/sdb1
/dev/sdc1  20G  33M  20G  1%  /mnt/sdc1
/dev/sdd1  20G  33M  20G  1%  /mnt/sdd1
/dev/sde1  20G  33M  20G  1%  /mnt/sde1

Centos7-4

/dev/sdb1  20G  33M  20G  1%  /mnt/sdb1
/dev/sdc1  20G  33M  20G  1%  /mnt/sdc1
/dev/sdd1  20G  33M  20G  1%  /mnt/sdd1
/dev/sde1  20G  33M  20G  1%  /mnt/sde1

(2) Install GFS (needs to be installed on all storage nodes)

Modify the local hosts file so the nodes can be resolved by name

[root@node1 yum.repos.d]# vim /etc/hosts
192.168.142.66 node1
192.168.142.77 node2
192.168.142.132 node3
192.168.142.136 node4

Configure a local YUM source

(the default YUM repositories do not provide the required packages)

[root@node1 mnt]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# mkdir bak
[root@node1 yum.repos.d]# mv CentOS-* bak/
[root@node3 zhy]# cp -r gfsrepo/ /mnt/
[root@node3 yum.repos.d]# vim GFSrep.repo
// add manually:
[GFSrep]
name=GFS
baseurl=file:///mnt/gfsrepo
gpgcheck=0
enabled=1
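After the repo file is in place, it may be worth rebuilding the YUM cache so the new GFSrep repository is picked up (an extra step not shown in the original walkthrough):

[root@node1 yum.repos.d]# yum clean all
[root@node1 yum.repos.d]# yum makecache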

Install the GFS components and start the service

[root@node1 yum.repos.d]# yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
// start the service
[root@node1 yum.repos.d]# systemctl start glusterd
[root@node1 yum.repos.d]# systemctl enable glusterd
[root@node1 yum.repos.d]# systemctl stop firewalld.service
[root@node1 yum.repos.d]# setenforce 0
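An optional sanity check before continuing, to confirm the daemon is actually running and the packages installed correctly (not part of the original steps):

[root@node1 yum.repos.d]# systemctl status glusterd    // should report "active (running)"
[root@node1 yum.repos.d]# glusterfs --version          // confirm the installed version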

Synchronize time with the Aliyun NTP server

[root@node1 yum.repos.d]# ntpdate ntp1.aliyun.com
18 Dec 19:55:56 ntpdate[2843]: adjust time server 120.25.115.20 offset 0.010820 sec
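If you want the node clocks to stay in sync for the rest of the lab, one option (an assumption, not part of the original steps) is a cron entry that re-runs ntpdate periodically:

[root@node1 yum.repos.d]# crontab -e
// re-sync every 30 minutes
*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com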

Add a storage trust pool (this only needs to be done on one storage node)

[root@node1 yum.repos.d]# gluster peer probe node2
peer probe: success.
[root@node1 yum.repos.d]# gluster peer probe node3
peer probe: success.
[root@node1 yum.repos.d]# gluster peer probe node4
peer probe: success.
[root@node1 yum.repos.d]# gluster peer status    // view the status of each node

2. Create distributed volumes

Features (files are distributed across bricks by a hash algorithm):

No redundancy

A single point of failure leads to data loss

[root@node1 yum.repos.d]# gluster volume create fenbu node1:/mnt/sdb1 node2:/mnt/sdb1 force
// uses node1's sdb1 and node2's sdb1 as bricks for the distributed volume; "force" forces creation
volume create: fenbu: success: please start the volume to access data
// view distributed volume information
[root@node1 yum.repos.d]# gluster volume info fenbu
Volume Name: fenbu
Type: Distribute
Volume ID: e7833052-a4c7-4c9f-9660-dc60db737543
...
Bricks:
Brick1: node1:/mnt/sdb1
Brick2: node2:/mnt/sdb1
// view the volume list
[root@node1 yum.repos.d]# gluster volume list
// start the distributed volume
[root@node1 yum.repos.d]# gluster volume start fenbu
volume start: fenbu: success

3. Create stripe volumes

Features:

A file is divided into N blocks (N = the number of stripe bricks) by offset and stored round-robin across the Brick Server nodes.

Performance is particularly outstanding when storing large files

No redundancy, similar to RAID 0

[root@node1 mnt]# gluster volume create tiaodai stripe 2 node1:/mnt/sdc1 node2:/mnt/sdc1 force
// "stripe 2" splits each file into two stripes
// start the stripe volume
[root@node1 mnt]# gluster volume start tiaodai
volume start: tiaodai: success
// view stripe volume information
[root@node1 yum.repos.d]# gluster volume info tiaodai
Volume Name: tiaodai
...
Status: Started
...
Bricks:
Brick1: node1:/mnt/sdc1
Brick2: node2:/mnt/sdc1
Options Reconfigured:
...

4. Create replication volumes

Features:

Keeps one or more complete copies of the same file

Read and write speeds are slower

Provides redundancy

Requires at least two servers

[root@node1 mnt]# gluster volume create fuzhi replica 2 node3:/mnt/sdb1 node4:/mnt/sdb1 force
// "replica 2" sets the number of copies to create
[root@node1 mnt]# gluster volume start fuzhi
volume start: fuzhi: success
[root@node1 mnt]# gluster volume info fuzhi
Volume Name: fuzhi
...
Status: Started
...
Bricks:
Brick1: node3:/mnt/sdb1
Brick2: node4:/mnt/sdb1

5. Create distributed stripe volumes

Features:

Combines the functions of distributed volumes and stripe volumes

For large file processing

At least four servers are required

[root@node1 mnt]# gluster volume create fenbu-tiao stripe 2 node1:/mnt/sdd1 node2:/mnt/sdd1 node3:/mnt/sdd1 node4:/mnt/sdd1 force
// "stripe 2" gives it stripe-volume behavior, so files are split into stripes
[root@node1 mnt]# gluster volume start fenbu-tiao
volume start: fenbu-tiao: success
[root@node1 mnt]# gluster volume info fenbu-tiao
Volume Name: fenbu-tiao
...
Status: Started
...
Bricks:
Brick1: node1:/mnt/sdd1
Brick2: node2:/mnt/sdd1
Brick3: node3:/mnt/sdd1
Brick4: node4:/mnt/sdd1

6. Create distributed replication volumes

Features:

Combines the functions of distributed and replicated volumes

Provides redundancy

[root@node1 mnt]# gluster volume create fenbu-copy replica 2 node1:/mnt/sde1 node2:/mnt/sde1 node3:/mnt/sde1 node4:/mnt/sde1 force
[root@node1 mnt]# gluster volume start fenbu-copy
volume start: fenbu-copy: success
[root@node1 mnt]# gluster volume info fenbu-copy
Volume Name: fenbu-copy
...
Status: Started
...
Bricks:
Brick1: node1:/mnt/sde1
Brick2: node2:/mnt/sde1
Brick3: node3:/mnt/sde1
Brick4: node4:/mnt/sde1

7. Client configuration

Modify the local Hosts file

[root@node1 yum.repos.d]# vim /etc/hosts
192.168.142.66 node1
192.168.142.77 node2
192.168.142.132 node3
192.168.142.136 node4

Configure a local YUM source

[root@node1 mnt]# cd /etc/yum.repos.d/
[root@node1 yum.repos.d]# mkdir bak
[root@node1 yum.repos.d]# mv CentOS-* bak/
[root@node3 zhy]# cp -r gfsrepo/ /mnt/
[root@node3 yum.repos.d]# vim GFSrep.repo
// add manually:
[GFSrep]
name=GFS
baseurl=file:///mnt/gfsrepo
gpgcheck=0
enabled=1

Install gfs components

[root@node1 yum.repos.d]# yum -y install glusterfs glusterfs-fuse

Mount the newly created GFS volumes

[root@client yum.repos.d]# mkdir -p /data/fenbu          // mount point for the distributed volume
[root@client yum.repos.d]# mkdir -p /data/tiaodai        // mount point for the stripe volume
[root@client yum.repos.d]# mkdir -p /data/fuzhi          // mount point for the replication volume
[root@client yum.repos.d]# mkdir -p /data/fenbu-tiao     // mount point for the distributed stripe volume
[root@client yum.repos.d]# mkdir -p /data/fenbu-copy     // mount point for the distributed replication volume

// mount the distributed volume
[root@client yum.repos.d]# mount.glusterfs node1:fenbu /data/fenbu/
[root@client yum.repos.d]# df -hT
Filesystem        Type            Size  Used  Avail  Use%  Mounted on
node1:fenbu       fuse.glusterfs  40G   65M   40G    1%    /data/fenbu

// mount the stripe volume
[root@client yum.repos.d]# mount.glusterfs node1:tiaodai /data/tiaodai/
[root@client yum.repos.d]# df -hT
node1:tiaodai     fuse.glusterfs  40G   65M   40G    1%    /data/tiaodai

// mount the replication volume
[root@client yum.repos.d]# mount.glusterfs node3:fuzhi /data/fuzhi
[root@client yum.repos.d]# df -hT
node3:fuzhi       fuse.glusterfs  20G   33M   20G    1%    /data/fuzhi

// mount the distributed stripe volume
[root@client yum.repos.d]# mount.glusterfs node1:fenbu-tiao /data/fenbu-tiao/
[root@client yum.repos.d]# df -hT
node1:fenbu-tiao  fuse.glusterfs  80G   130M  80G    1%    /data/fenbu-tiao

// mount the distributed replication volume
[root@client yum.repos.d]# mount.glusterfs node4:fenbu-copy /data/fenbu-copy/
[root@client yum.repos.d]# df -hT
node4:fenbu-copy  fuse.glusterfs  40G   65M   40G    1%    /data/fenbu-copy
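mount.glusterfs mounts do not survive a reboot. If persistent mounts are wanted, entries along these lines could be added to /etc/fstab on the client (a sketch assuming the same volumes and mount points as above):

node1:fenbu        /data/fenbu        glusterfs  defaults,_netdev  0 0
node1:tiaodai      /data/tiaodai      glusterfs  defaults,_netdev  0 0
node3:fuzhi        /data/fuzhi        glusterfs  defaults,_netdev  0 0
node1:fenbu-tiao   /data/fenbu-tiao   glusterfs  defaults,_netdev  0 0
node4:fenbu-copy   /data/fenbu-copy   glusterfs  defaults,_netdev  0 0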

8. Test all kinds of volumes

Create six test files

[root@client data]# dd if=/dev/zero of=test1.log bs=10M count=10
[root@client data]# dd if=/dev/zero of=test2.log bs=10M count=10
[root@client data]# dd if=/dev/zero of=test3.log bs=10M count=10
[root@client data]# dd if=/dev/zero of=test4.log bs=10M count=10
[root@client data]# dd if=/dev/zero of=test5.log bs=10M count=10
[root@client data]# dd if=/dev/zero of=test6.log bs=10M count=10
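Each file is bs=10M × count=10 = 100 MB, which matches the sizes seen on the bricks later. An optional check before copying:

[root@client data]# ls -lh test*.log    // each file should show as 100M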

Copy files to individual volumes

[root@client data]# cp test* fenbu/
[root@client data]# cp test* fenbu-copy/
[root@client data]# cp test* fenbu-tiao/
[root@client data]# cp test* fuzhi/
[root@client data]# cp test* tiaodai/

View distributed volumes (node1:sdb1, node2:sdb1)

// node1
[root@node1 mnt]# ll -h sdb1/
total 400M
-rw-r--r--. 2 root root 100M Dec 18 23:55 test1.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test2.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test4.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test6.log
// node2
[root@node2 mnt]# ll -h sdb1/
total 200M
-rw-r--r--. 2 root root 100M Dec 18 23:56 test3.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test5.log

View stripe volumes (node1:sdc1, node2:sdc1)

// each file is split into two pieces (the number of pieces is determined by the stripe count set at creation)
// node1
[root@node1 mnt]# ll -h sdc1/
total 300M
-rw-r--r--. 2 root root 50M Dec 18 23:57 test1.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test2.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test3.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test4.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test5.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test6.log
// node2
[root@node2 mnt]# ll -h sdc1/
total 300M
-rw-r--r--. 2 root root 50M Dec 18 23:57 test1.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test2.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test3.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test4.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test5.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test6.log

View replication volumes (node3:sdb1, node4:sdb1)

// every file is stored in full on each replica
// node3
[root@node3 mnt]# ll -h sdb1/
total 600M
-rw-r--r--. 2 root root 100M Dec 18 23:57 test1.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test2.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test3.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test4.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test5.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test6.log
// node4
[root@node4 mnt]# ll -h sdb1/
total 600M
-rw-r--r--. 2 root root 100M Dec 18 23:57 test1.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test2.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test3.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test4.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test5.log
-rw-r--r--. 2 root root 100M Dec 18 23:57 test6.log

View distributed stripe volumes (node1:sdd1, node2:sdd1, node3:sdd1, node4:sdd1)

// node1 & node2
[root@node1 mnt]# ll -h sdd1/
total 200M
-rw-r--r--. 2 root root 50M Dec 18 23:57 test1.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test2.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test4.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test6.log
[root@node2 mnt]# ll -h sdd1/
total 200M
-rw-r--r--. 2 root root 50M Dec 18 23:57 test1.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test2.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test4.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test6.log
// node3 & node4
[root@node3 mnt]# ll -h sdd1/
total 100M
-rw-r--r--. 2 root root 50M Dec 18 23:57 test3.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test5.log
[root@node4 mnt]# ll -h sdd1/
total 100M
-rw-r--r--. 2 root root 50M Dec 18 23:57 test3.log
-rw-r--r--. 2 root root 50M Dec 18 23:57 test5.log

View distributed replication volumes (node1~4:sde1)

// node1 & node2
[root@node1 mnt]# ll -h sde1/
total 400M
-rw-r--r--. 2 root root 100M Dec 18 23:56 test1.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test2.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test4.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test6.log
[root@node2 mnt]# ll -h sde1/
total 400M
-rw-r--r--. 2 root root 100M Dec 18 23:56 test1.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test2.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test4.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test6.log
// node3 & node4
[root@node3 mnt]# ll -h sde1/
total 200M
-rw-r--r--. 2 root root 100M Dec 18 23:56 test3.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test5.log
[root@node4 mnt]# ll -h sde1/
total 200M
-rw-r--r--. 2 root root 100M Dec 18 23:56 test3.log
-rw-r--r--. 2 root root 100M Dec 18 23:56 test5.log
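The listings above confirm how each volume type places data. As a final, optional check of the redundancy claims (not part of the original walkthrough), one storage node could be taken offline to see which volumes remain fully readable from the client:

// on node2: simulate a node failure
[root@node2 ~]# poweroff
// on the client: the replicated volumes should still show all six test files
[root@client ~]# ls /data/fuzhi/ /data/fenbu-copy/
// the distributed volume loses the files that lived on node2, and the stripe volume loses half of every file
[root@client ~]# ls /data/fenbu/ /data/tiaodai/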
