
Example Analysis of MooseFS Distributed File System Cluster Configuration


This article presents a worked example of building and configuring a MooseFS (MFS) distributed file system cluster. The content is quite detailed; interested readers may find it a useful reference.

1. Management server (master-server): manages the data storage servers, schedules file reads and writes, and handles reclamation and recovery of file space; file copies are kept on multiple nodes.

2. Metadata log server (changelog-server): backs up the change logs of the master server (it can usually run on the same host as the master). Its files are of the form changelog_ml.*.mfs, so that it can take over when the master fails.

3. Data storage server (chunk-server): connects to the management server, follows the management server's scheduling, provides storage space, and transfers data to and from clients.

4. Client (clients): mounts the data storage servers managed by the remote master through the FUSE kernel interface; to the user, the shared file system looks and behaves just like a local UNIX file system.

The principle of reading and writing of MFS file system:
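(The original illustration is missing here. In outline, following the standard MooseFS design: for both reads and writes the client first contacts the master server, which looks up or allocates the relevant chunks and replies with the addresses of the chunk servers holding them; the file data itself then travels directly between the client and the chunk servers, so the master handles metadata only.)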

MFS distributed file system building:

System environment:

RHEL6.4

SELinux is disabled

iptables rules are flushed

First, define the yum repositories, which are used to resolve package dependencies.

# cat yum.repo
[base]
name=yum
baseurl=ftp://192.168.2.234/pub/RHEL6.4
gpgcheck=0

[HA]
name=ha
baseurl=ftp://192.168.2.234/pub/RHEL6.4/HighAvailability
gpgcheck=0

[lb]
name=LB
baseurl=ftp://192.168.2.234/pub/RHEL6.4/LoadBalancer
gpgcheck=0

[Storage]
name=St
baseurl=ftp://192.168.2.234/pub/RHEL6.4/ResilientStorage
gpgcheck=0

[SFS]
name=FS
baseurl=ftp://192.168.2.234/pub/RHEL6.4/ScalableFileSystem
gpgcheck=0
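Before installing anything, an optional sanity check (not part of the original walkthrough) confirms the repositories are usable:

# yum clean all
# yum repolist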

Second, prepare host name resolution

# cat /etc/hosts
192.168.2.88 node1 mfsmaster
192.168.2.89 node2
192.168.2.90 node3
192.168.2.82 node4
192.168.2.85 node5

Node1 will be used as master-server in the experiment.

Node3 and node4 will act as chunk-servers.

Node5 will act as the client.

The preparation above must be completed on every node.

Third, installation preparation

# yum install rpm-build gcc make fuse-devel zlib-devel -y    the dependencies used by the build and compilation environment (missing ones are also reported during the build)

# rpmbuild -tb mfs-1.6.27.tar.gz    build rpm packages directly from the source tarball. Note: the tarball's name format matters (only the major-version naming is accepted)

# ls /root/rpmbuild/RPMS/x86_64/    the generated rpm packages

mfs-cgi-1.6.27-2.x86_64.rpm mfs-client-1.6.27-2.x86_64.rpm

mfs-cgiserv-1.6.27-2.x86_64.rpm mfs-master-1.6.27-2.x86_64.rpm

mfs-chunkserver-1.6.27-2.x86_64.rpm mfs-metalogger-1.6.27-2.x86_64.rpm

1. master-server installation:

# yum localinstall mfs-cgi-1.6.27-2.x86_64.rpm mfs-master-1.6.27-2.x86_64.rpm mfs-cgiserv-1.6.27-2.x86_64.rpm -y

The cgi packages make it possible to monitor the cluster through a web page:

Master-server: main files and directories

/var/lib/mfs    the mfs data directory

metadata.mfs    the mfs startup (metadata) file

/etc/mfs    the configuration home directory (holds the configuration files)

mfsmaster.cfg    the main mfs configuration file (defines related parameters: user, group, and so on)

mfsexports.cfg    controls the exported directories and their access permissions

mfstopology.cfg    defines the topology of the MFS network

By default, the configuration files can be used without modification.
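For illustration, the stock export rules in mfsexports.cfg look roughly like this (a sketch of the 1.6.x defaults; the comments and spacing in the real file differ):

# cat /etc/mfs/mfsexports.cfg
*    .    rw
*    /    rw,alldirs,maproot=0

The first rule covers the meta filesystem (used later for trash recovery); the second exports the whole tree read-write to all hosts, with remote root mapped to root (maproot=0).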

# chown -R nobody /var/lib/mfs    remember to give the mfs user (nobody) ownership of the data directory

# mfsmaster    start mfs

# mfsmaster stop    shut down mfs

# netstat -antlpe    (mfsmaster opens three ports: 9421 for client connections, 9420 for the data nodes / chunk-servers, and 9419 for the metaloggers)

# cd /usr/share/mfscgi/

# chmod +x *.cgi    give all cgi pages execute permission (so the status can be viewed in a browser)

# mfscgiserv    start the cgi monitoring service

http://192.168.2.88:9425/

View mfs monitoring information

2. chunk-server installation and configuration (node3 and node4)

# rpm -ivh mfs-chunkserver-1.6.27-2.x86_64.rpm

# cd /etc/mfs/

# cp mfschunkserver.cfg.dist mfschunkserver.cfg

# cp mfshdd.cfg.dist mfshdd.cfg

# vim mfshdd.cfg    define the storage location

/mnt/chunk    the directory where the data is actually stored (the files the client writes under /mnt/mfs end up here)
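mfshdd.cfg simply lists one storage path per line, so for this setup the whole file can be a single line:

# cat /etc/mfs/mfshdd.cfg
/mnt/chunk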

# mkdir /mnt/chunk

# mkdir /var/lib/mfs

# chown nobody /var/lib/mfs/

# chown nobody /mnt/chunk

# mfschunkserver    start the chunk-server (note: the mfsmaster name must resolve correctly)
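Before starting the daemon, a quick optional check (not part of the original steps) confirms that the mfsmaster name really resolves:

# getent hosts mfsmaster
192.168.2.88    node1 mfsmaster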

# l.    list hidden files; a hidden lock file has been generated

.mfschunkserver.lock

3. Installation and configuration of clients

# yum localinstall mfs-client-1.6.27-2.x86_64.rpm

# cp mfsmount.cfg.dist mfsmount.cfg

# vim mfsmount.cfg

Set the master host name and the mount directory /mnt/mfs
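After editing, the file might contain just these two entries (a sketch; in the stock mfsmount.cfg they exist as commented-out samples to uncomment and adjust):

mfsmaster=mfsmaster
/mnt/mfs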

# mkdir /mnt/mfs

# mfsmount    perform the client mount

mfsmaster accepted connection with parameters: read-write,restricted_ip; root mapped to root:root (mounted successfully)

# df    view the mounted devices

mfsmaster:9421 6714624 0 6714624 /mnt/mfs

# ll -d /mnt/mfs/    after mounting, the directory is immediately readable and writable

drwxrwxrwx 2 root root 0 Jun 8 10:29 /mnt/mfs/

MFS test:

# mkdir hello{1,2}

# ls

hello1 hello2

# mfsdirinfo hello1/

hello1/:

inodes: 1

directories: 1

files: 0

chunks: 0

length: 0

size: 0

realsize: 0

# mfssetgoal -r 3 hello1/    set the number of copies (goal) recursively

hello1/:

inodes with goal changed: 1

inodes with goal not changed: 0

inodes with permission denied: 0

# mfsgetgoal hello1/    view the number of copies set for a file or directory

hello1/: 3

# mfsgetgoal hello2

hello2: 1
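One point worth keeping in mind (a general MooseFS property, not spelled out in the original): the goal is only a target. A chunk never has more valid copies than there are running chunk-servers, which is why the listings below show at most two copies even where the goal is 3.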

# cp /etc/fstab hello1/

# cp /etc/passwd hello2/

# mfsfileinfo hello1/fstab    view the file's chunk details

fstab:

chunk 0: 000000000000000B_00000001 / (id:11 ver:1)

copy 1: 192.168.2.82:9422

copy 2: 192.168.2.90:9422

# mfscheckfile passwd

Test the storage relationship:

# mfsfileinfo fstab

fstab:

chunk 0: 000000000000000B_00000001 / (id:11 ver:1)

copy 1: 192.168.2.90:9422

[root@node5 hello1]# mfsfileinfo ../hello2/passwd

../hello2/passwd:

chunk 0: 000000000000000C_00000001 / (id:12 ver:1)

no valid copies !

(passwd has goal 1, and the chunk-server holding its only copy is down, so no valid copy is left; fstab, with goal 3, still has one surviving copy.)

Client: recovering accidentally deleted files (suppose /mnt/mfs/hello*/passwd is deleted by mistake)

# mfsmount -m /mnt/test/ -H mfsmaster    mount the metadata (recovery) directory from mfsmaster

mfsmaster accepted connection with parameters: read-write,restricted_ip

# mount

# cd /mnt/test/trash/

# mfscheckfile passwd

# mv 00000005\|hello2\|passwd undel/

The file is restored directly to its previous location in the mfs directory.

# umount /mnt/test/
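How long a deleted file stays recoverable is governed by its trash time, which can be inspected and changed per file or directory (an aside not in the original; the default is 86400 seconds, one day):

# mfsgettrashtime /mnt/mfs/hello2
# mfssettrashtime 604800 /mnt/mfs/hello2    keep deleted files recoverable for 7 days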

The client automatically detects changes in the chunk-servers:

# mfschunkserver stop

Re-copy files on the client

# cp /etc/inittab /mnt/mfs/hello1

# mfsgetgoal hello1/fstab    view the file's goal (number of copies)

# mfsgetgoal hello1/inittab

# mfsfileinfo inittab    with only one chunkserver running at the time of the copy, only one copy can be kept

Start the chunkserver again:

# mfschunkserver

# mfsfileinfo inittab    check the number of copies; the copy on the restarted chunkserver has been restored

inittab:

chunk 0: 0000000000000006_00000001 / (id:6 ver:1)

copy 1: 192.168.2.184:9422

copy 2: 192.168.2.185:9422

Note:

In mfsmaster, during normal operation the metadata file is metadata.mfs.back.

When the master is stopped normally, the data file is saved as metadata.mfs.

After an abnormal shutdown (kill -9 pid), the data file is not written back.

# mfsmetarestore -a    after an abnormal shutdown metadata.mfs is missing and must be restored this way

Then restart mfsmaster (mfsmaster needs a metadata.mfs file in order to start).
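Putting the crash recovery together, the sequence looks like this (a sketch assuming the default data directory /var/lib/mfs):

# mfsmaster            fails to start: metadata.mfs is missing after the crash
# mfsmetarestore -a    rebuild metadata.mfs from metadata.mfs.back plus the changelog files
# mfsmaster            now starts normally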

So much for this sample analysis of MooseFS distributed file system cluster configuration. I hope the content above has been helpful; if you found the article good, feel free to share it so more people can see it.
