RHEL 6.5 system environment:
server1: master
server2: chunkserver
server3: chunkserver
server4: backup master (for high availability)
First, download the software packages on server1:
libpcap-1.4.0-4.20130826git2dbcaa1.el6.x86_64.rpm (dependency)
libpcap-devel-1.4.0-4.20130826git2dbcaa1.el6.x86_64.rpm (dependency)
moosefs-3.0.80-1.tar.gz
yum install -y rpm-build    # install the RPM build environment
rpmbuild -tb moosefs-3.0.80-1.tar.gz    # build the RPM packages from the source tarball
If the build complains about a missing compiler, install gcc:
yum install -y gcc
Create a soft link so rpmbuild finds the expected tarball name: ln -s moosefs-3.0.80-1.tar.gz moosefs-3.0.80.tar.gz
Enter the output directory: cd /root/rpmbuild/RPMS/x86_64
[root@server1 x86_64] # yum install -y moosefs-master-3.0.80-1.x86_64.rpm moosefs-cgi-3.0.80-1.x86_64.rpm moosefs-cgiserv-3.0.80-1.x86_64.rpm
Send the chunkserver packages to server2 and server3:
scp moosefs-chunkserver-3.0.80-1.x86_64.rpm server2:/root/
scp moosefs-chunkserver-3.0.80-1.x86_64.rpm server3:/root/
[root@server1 x86_64] # cd /etc/mfs/
[root@server1 mfs] # vim mfsmaster.cfg    # the defaults work as-is; nothing is changed here
[root@server1 mfs] # vim /etc/hosts    # add name resolution for mfsmaster
172.25.35.1 server1 mfsmaster
172.25.35.2 server2
172.25.35.3 server3
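A quick sanity check, not in the original steps, to confirm that the mfsmaster name resolves on every node:
getent hosts mfsmaster    # should print 172.25.35.1 server1 mfsmaster
ping -c 1 mfsmaster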
[root@server1 mfs] # cd /var/lib/mfs/
[root@server1 mfs] # ls    # view the metadata files
changelog.2.mfs  changelog.5.mfs  metadata.mfs.empty
changelog.3.mfs  metadata.mfs     stats.mfs
changelog.4.mfs  metadata.mfs.back.1
[root@server1 mfs] # ll
total 3620
-rw-r----- 1 mfs mfs      33 June 16 15:09 changelog.0.mfs
-rw-r----- 1 mfs mfs      67 June 10 10:08 changelog.2.mfs
-rw-r----- 1 mfs mfs      10 June 19 09:58 changelog.3.mfs
-rw-r----- 1 mfs mfs      10 June 17 08:58 changelog.4.mfs
-rw-r----- 1 mfs mfs     213 June  9 17:52 changelog.5.mfs
-rw-r----- 1 mfs mfs    3799 June 10 11:08 metadata.mfs.back
-rw-r----- 1 mfs mfs    3799 June 10 11:00 metadata.mfs.back.1
-rw-r--r-- 1 mfs mfs       8 June  9 17:28 metadata.mfs.empty
-rw-r----- 1 mfs mfs 3672832 June 10 11:08 stats.mfs
[root@server1 mfs] # mfsmaster    # start the mfsmaster daemon
Open files limit has been set to: 16384
Working directory: /var/lib/mfs
Lockfile created and locked
Initializing mfsmaster modules...
Exports file has been loaded
Topology file has been loaded
Loading metadata...
Loading sessions data... Ok (0.0000)
Loading storage classes data... Ok (0.0000)
Loading objects (files,directories,etc.) Ok (0.1752)
Loading names... Ok (0.3000)
Loading deletion timestamps... Ok (0.0000)
Loading quota definitions... Ok (0.0000)
Loading xattr data... Ok (0.0000)
Loading posix_acl data... Ok (0.0000)
Loading open files data... Ok (0.0000)
Loading flock_locks data... Ok (0.0000)
Loading posix_locks data... Ok (0.0000)
Loading chunkservers data... Ok (0.0000)
Loading chunks data... Ok (0.4275)
Checking filesystem consistency... Ok
Connecting files and chunks... Ok
All inodes: 6
Directory inodes: 3
File inodes: 3
Chunks: 6
Metadata file has been loaded
Stats file has been loaded
Master metaloggers module: listen on: 9419
Master chunkservers module: listen on: 9420
Main master server module: listen on: 9421
Mfsmaster daemon initialized properly
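Optionally (an assumed check, not part of the original walkthrough), confirm the three listening ports shown in the log above:
netstat -tlnp | grep mfsmaster    # expect 9419 (metaloggers), 9420 (chunkservers), 9421 (clients)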
[root@server1 mfs] # mfscgiserv    # start the monitoring CGI service and open its port
Lockfile created and locked
Starting simple cgi server (host: any, port: 9425, rootpath: /usr/share/mfscgi)
Browser access: http://172.25.35.1:9425/mfs.cgi
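If no browser is handy, a hedged alternative is to check the CGI service from the shell:
curl -s -o /dev/null -w '%{http_code}\n' http://172.25.35.1:9425/mfs.cgi    # 200 means the monitor page is being served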
[root@server1 x86_64] # pwd
/root/rpmbuild/RPMS/x86_64
[root@server1 x86_64] # scp moosefs-client-3.0.80-1.x86_64.rpm root@172.25.35.250:/root/desktop    # send the client package to the tester (the physical machine, used for testing later)
[root@server2 ~] # rpm -ivh moosefs-chunkserver-3.0.80-1.x86_64.rpm
[root@server3 ~] # rpm -ivh moosefs-chunkserver-3.0.80-1.x86_64.rpm
server2 and server3 install the software this way; note that both need the same /etc/hosts resolution as server1.
[root@server2 ~] # cd /etc/mfs/
[root@server2 mfs] # vim mfshdd.cfg
Append the storage path /mnt/chunk1 to the end of the file
[root@server2 mfs] # mkdir /mnt/chunk1/
[root@server2 mfs] # chown mfs.mfs /mnt/chunk1/
[root@server2 mfs] # mfschunkserver    # start the chunkserver; server3 is set up the same way, the only difference being its storage directory, /mnt/chunk2 (see the sketch below)
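For completeness, a minimal sketch of the equivalent steps on server3 (mirroring server2; appending with echo instead of vim):
[root@server3 ~] # echo '/mnt/chunk2' >> /etc/mfs/mfshdd.cfg
[root@server3 ~] # mkdir /mnt/chunk2/
[root@server3 ~] # chown mfs.mfs /mnt/chunk2/
[root@server3 ~] # mfschunkserver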
Add a hard drive to server2 and export it over iSCSI: install the SCSI target service
[root@server2 mfs] # yum install -y scsi-*    # install all the scsi target packages
[root@server2 mfs] # vim /etc/tgt/targets.conf
<target iqn.2018-06.com.example:server.target1>
    backing-store /dev/vdb
</target>
[root@server2 mfs] # /etc/init.d/tgtd start
Starting SCSI target daemon: [ OK ]
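An assumed verification step (not in the original) to confirm the LUN is exported:
[root@server2 mfs] # tgt-admin --show    # should list target1 with /dev/vdb as its backing store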
On server1: attach this new hard drive
[root@server1 x86_64] # iscsiadm -m discovery -t st -p 172.25.35.2
172.25.35.2:3260,1 iqn.2018-06.com.example:server.target1
[root@server1 x86_64] # iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.35.2,3260] (multiple)
Login to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.35.2,3260] successful.
[root@server1 x86_64] # fdisk -l    # view the new disk
Device Boot Start End Blocks Id System
/dev/sda1               2        8192     8387584   83  Linux
[root@server1 x86_64] # fdisk -cu /dev/sda    # create a partition on the new disk
[root@server1 x86_64] # mkfs.ext4 /dev/sda1    # format the partition
[root@server1 x86_64] # mount /dev/sda1 /mnt/
[root@server1 x86_64] # df    # mount it temporarily and check
[root@server1 x86_64] # cd /var/lib/mfs/
[root@server1 mfs] # mfsmaster stop    # stop the service before moving the metadata
[root@server1 mfs] # cp -p * /mnt/    # copy all the metadata files to the new disk, preserving ownership
[root@server1 mfs] # chown mfs.mfs /mnt/
[root@server1 mfs] # umount /mnt/
[root@server1 mfs] # mount /dev/sda1 /var/lib/mfs
[root@server1 mfs] # mfsmaster
[root@server1 mfs] # df
/dev/sda1 8255928 153132 7683420 2% /var/lib/mfs
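One optional hardening step, not in the original article: persist the mount so the metadata disk survives a reboot.
[root@server1 mfs] # echo '/dev/sda1  /var/lib/mfs  ext4  defaults  0 0' >> /etc/fstab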
To achieve high availability, install the master on server4 as well; its /etc/hosts resolution is the same as server1's.
[root@server4 ~] # yum install -y moosefs-master-3.0.80-1.x86_64.rpm
[root@server4 ~] # yum install -y iscsi-*    # install the iSCSI initiator packages
[root@server4 ~] # iscsiadm -m discovery -t st -p 172.25.35.2
172.25.35.2:3260,1 iqn.2018-06.com.example:server.target1
[root@server4 ~] # iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.35.2,3260] (multiple)
Login to [iface: default, target: iqn.2018-06.com.example:server.target1, portal: 172.25.35.2,3260] successful.
[root@server4 ~] # fdisk -l
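With the shared iSCSI disk visible on server4, a minimal manual-failover sketch (an assumption based on this setup, not a step the article performs; the mfsmaster entry in /etc/hosts would also have to be repointed to server4's IP):
[root@server4 ~] # mount /dev/sda1 /var/lib/mfs    # attach the shared metadata disk (it must be unmounted on server1 first)
[root@server4 ~] # chown -R mfs.mfs /var/lib/mfs
[root@server4 ~] # mfsmaster    # start the backup master from the shared metadata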
Back to the physical machine mentioned earlier: it also needs the mfsmaster entry in /etc/hosts.
[root@localhost ~] # rpm -ivh moosefs-client-3.0.80-1.x86_64.rpm
[root@localhost ~] # rpm -qa | grep moosefs    # verify the installation
moosefs-client-3.0.80-1.x86_64
[root@localhost ~] # cd /etc/mfs/
[root@localhost mfs] # vim mfsmount.cfg
/mnt/mfs    # set the default mount point
[root@localhost ~] # mfsmount
[root@localhost mfs] # df
mfsmaster:9421 34365120 4873600 29491520 15% /mnt/mfs
File deletion recovery
[root@localhost mfs] # mkdir dir{1..2}
[root@localhost mfs] # mfsgetgoal dir1/
dir1/: 2
[root@localhost mfs] # mfsgetgoal dir2/
dir2/: 2
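The goal is the number of chunk replicas kept for a file or directory; 2 matches the two chunkservers here. For illustration only (not performed in the original), it can be changed per directory with mfssetgoal:
[root@localhost mfs] # mfssetgoal 1 dir1/    # keep a single copy of chunks under dir1
[root@localhost mfs] # mfsgetgoal dir1/
[root@localhost mfs] # mfssetgoal 2 dir1/    # restore the original goal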
[root@localhost mfs] # cd dir1
[root@localhost dir1] # cp /etc/passwd .    # copy some files in for testing
[root@localhost dir1] # cd ../dir2
[root@localhost dir2] # cp /etc/fstab .
[root@localhost dir2] # cd ../dir1
[root@localhost dir1] # dd if=/dev/zero of=bigfile bs=1M count=200    # write a large file
[root@localhost dir1] # mfsfileinfo bigfile
[root@localhost dir1] # rm -f passwd
[root@localhost dir1] # mfsgettrashtime .
.: 86400    # deleted files remain recoverable for 86400 seconds (24 hours)
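The trash time controls how long a deleted file can still be recovered. For illustration (not done in the original), it can be adjusted with mfssettrashtime:
[root@localhost dir1] # mfssettrashtime 600 .    # keep deleted files for 600 seconds instead
[root@localhost dir1] # mfsgettrashtime .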
[root@localhost dir1] # cd /etc/mfs/
[root@localhost mfs] # cat /etc/mfs/mfsmount.cfg
[root@localhost mnt] # mkdir mfsmeta
[root@localhost mnt] # mfsmount -m /mnt/mfsmeta/    # mount the metadata filesystem to reach the trash
[root@localhost mnt] # cd mfsmeta/
[root@localhost mfsmeta] # ls
sustained  trash
[root@localhost mfsmeta] # cd trash/
[root@localhost trash] # find . -name '*passwd*'    # look for the deleted file
./004|00000004|dir1|passwd
[root@localhost trash] # mv './004|00000004|dir1|passwd' undel/    # move it into undel/ to restore it
[root@localhost dir1] # pwd
/ mnt/mfs/dir1
[root@localhost dir1] # ls
bigfile  passwd    # the deleted passwd has been restored
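As a final assumed check (not in the original), confirm the recovered file matches the system copy:
[root@localhost dir1] # diff passwd /etc/passwd && echo 'restore verified'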