2025-04-06 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/02 Report--
Distributed file system Glusterfs
GlusterFS (Gluster File System) is free software, originally developed by Z RESEARCH. It has more than a dozen developers and an active community. It is designed for cluster systems and scales well. The software is cleanly structured and easy to extend and configure: its modules can be combined flexibly to build a solution tailored to a given workload. It can address the following problems: network storage, federated storage (merging the storage space of multiple nodes), redundant backup (replication), and load balancing of large files (striping).
1.1 System environment
Prepare three machines:
[root@node1 data]# cat /etc/redhat-release
CentOS release 6.8 (Final)
Node1: 192.168.70.71 (server)
Node2: 192.168.70.72 (server)
Node3: 192.168.70.73 (client)
1.2 Set up the firewall
vi /etc/sysconfig/iptables:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24008 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 49152:49162 -j ACCEPT
service iptables restart
1.3 GlusterFS installation
Do the following on both server nodes (node1 and node2):
Method 1:
wget -l 1 -nd -nc -r -A .rpm http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/epel-6Server/x86_64/
wget -l 1 -nd -nc -r -A .rpm http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/CentOS/epel-6Server/noarch/
wget -nc http://download.gluster.org/pub/gluster/nfs-ganesha/2.3.0/EPEL.repo/epel-6Server/x86_64/nfs-ganesha-gluster-2.3.0-1.el6.x86_64.rpm
yum install * -y
Method 2 (a different version):
mkdir /tools
cd /tools
wget -l 1 -nd -nc -r -A .rpm http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.5/RHEL/epel-6/x86_64/
yum install *.rpm
Start the glusterd service:
/etc/init.d/glusterd start
Set glusterd to start on boot:
chkconfig glusterd on
1.4 Gluster server settings
1.4.1 Configure the storage pool
1.4.2 Add a trusted storage pool
[root@node1 /]# gluster peer probe 192.168.70.72
peer probe: success.
1.4.3 View peer status
[root@node1 /]# gluster peer status
Number of Peers: 1
Hostname: 192.168.70.72
Uuid: fdc6c52d-8393-458a-bf02-c1ff60a0ac1b
State: Accepted peer request (Connected)
1.4.4 Remove a node
[root@node1 /]# gluster peer detach 192.168.70.72
peer detach: success
[root@node1 /]# gluster peer status
Number of Peers: 0
1.5 Create a GlusterFS volume
Create the brick directory on both node1 and node2:
mkdir -p /data/gfs/
Create a replicated volume:
gluster volume create vg0 replica 2 192.168.70.71:/data/gfs 192.168.70.72:/data/gfs force
volume create: vg0: success: please start the volume to access data
View volume information:
[root@node1 /]# gluster volume info
Volume Name: vg0
Type: Replicate
Volume ID: 6aff1f4f-8efe-4ed0-879e-95df483a86a2
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.70.71:/data/gfs
Brick2: 192.168.70.72:/data/gfs
[root@node1 /]# gluster volume status
Volume vg0 is not started
Start the volume:
[root@node1 /]# gluster volume start vg0
volume start: vg0: success
1.6 Client installation
On node3, install the client and mount the volume (note that the volume created above is named vg0):
yum -y install glusterfs glusterfs-fuse
mkdir -p /mnt/gfs
mount -t glusterfs 192.168.70.71:/vg0 /mnt/gfs
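To make the client mount survive a reboot, an entry can also be added to /etc/fstab on node3. A minimal sketch, assuming the vg0 volume and /mnt/gfs mount point from above: the _netdev option defers mounting until the network is up, and backupvolfile-server lets the client fetch the volume file from the second server if node1 is unreachable at mount time.

```shell
# /etc/fstab entry on the client (node3)
192.168.70.71:/vg0  /mnt/gfs  glusterfs  defaults,_netdev,backupvolfile-server=192.168.70.72  0 0
```

After editing, `mount -a` applies the entry without a reboot.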
# Volume expansion (because the replica count is 2, bricks must be added in multiples of 2: 2, 4, 6, ...)
gluster peer probe 192.168.70.74 # add a node
gluster peer probe 192.168.70.75 # add a node
gluster volume add-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs # add the bricks to the volume
gluster volume rebalance vg0 start # redistribute existing data across the new bricks
# Shrink a volume (gluster migrates the data off the bricks before removing them)
gluster volume remove-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs start # start the migration
gluster volume remove-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs status # view migration status
gluster volume remove-brick vg0 192.168.70.74:/data/glusterfs 192.168.70.75:/data/glusterfs commit # commit after the migration completes
# Brick migration (to migrate the data on 192.168.70.76 to 192.168.70.75, first add 192.168.70.75 to the cluster)
gluster peer probe 192.168.70.75 # add the new node to the cluster
gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs start # start the migration
gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs status # view migration status
gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs commit # commit after the data migration completes
gluster volume replace-brick vg0 192.168.70.76:/data/glusterfs 192.168.70.75:/data/glusterfs commit force # if 192.168.70.76 has failed, force the commit
gluster volume heal vg0 full # synchronize the entire volume