

How to mount HDFS using FUSE


This article explains how to mount HDFS using FUSE. The content is quite detailed; interested readers can refer to it, and I hope it is helpful to you.

Because I wanted to run simple iozone and dd tests against HDFS, I needed to mount HDFS locally. The mounting process itself is not complicated, but all kinds of errors came up along the way, and it took half a week before the tests were finally done. This post records the whole mounting process and how each error was resolved, so we can study and discuss it together.

I. FUSE installation

The installation steps are simple.

1. Unpack the source: $ tar zxvf fuse-2.9.3.tar.gz

2. Enter the fuse directory: $ cd /usr/local/fuse-2.9.3

3. $ ./configure

4. $ make

5. $ make install

6. $ modprobe fuse (load the fuse kernel module)
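To confirm that the module really loaded, a quick sanity check (not from the original write-up) is:

$ lsmod | grep fuse        # the fuse module should be listed
$ ls -l /dev/fuse          # the device node the module creates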

II. HDFS mount

1. Add system configuration

$ sudo vi /etc/profile

Add the following content:
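The configuration block itself did not survive extraction here. As a rough sketch only, the environment variables fuse-dfs typically needs under Hadoop 1.x look like the following; all paths are assumptions and must be adjusted to your own installation:

export JAVA_HOME=/usr/lib/jvm/java-6-sun           # assumed JDK location
export HADOOP_HOME=/usr/local/hadoop-1.2.1         # assumed Hadoop location
export FUSE_HOME=/usr/local/fuse-2.9.3
export PATH=$PATH:$HADOOP_HOME/bin
# libhdfs and the JVM shared library must be visible to fuse_dfs at run time
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JAVA_HOME/jre/lib/amd64/server:$HADOOP_HOME/c++/Linux-amd64-64/lib

After sourcing the profile, fuse-dfs is built and the file system mounted roughly as follows (the namenode address dfs://namenode:9000 and the mount point /tmp/hdfs are assumptions, the latter taken from the dd examples later in this article):

$ source /etc/profile
$ cd $HADOOP_HOME
$ ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1      # builds fuse-dfs under build/contrib/fuse-dfs
$ mkdir -p /tmp/hdfs
$ $HADOOP_HOME/build/contrib/fuse-dfs/fuse_dfs_wrapper.sh dfs://namenode:9000 /tmp/hdfs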

After restarting the cluster, all of the datanodes died within a few seconds of starting up. Checking the logs revealed the error:

FATAL ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Shutting down. Incompatible version or revision. DataNode version '1.2.1' and revision '1503152' and NameNode version '1.2.2-SNAPSHOT' and revision '' and hadoop.relaxed.worker.version.check is not enabled and hadoop.skip.worker.version.check is not enabled

I suspected the version mismatch came from the jar packages used during the ant build, so I ran ant clean to wipe out everything compiled previously. I expected things to return to normal, but an even worse problem appeared: the cluster started normally, and hadoop dfsadmin -report showed every node in a healthy state, yet the cluster status could no longer be viewed through the browser.

III. iozone testing

iozone testing is relatively simple. iozone is a tool for benchmarking operating system file system performance; it covers Write, Re-write, Read, Re-read, Random Read, Random Write, Random Mix, Backwards Read, Record Rewrite, Strided Read, Fwrite, Frewrite, Fread, Freread, Mmap and Async I/O. With iozone you can measure file-operation performance with multiple threads, multiple CPUs, a specified CPU cache size, and synchronous or asynchronous I/O read/write modes.

The command is: iozone -s 128k -i 0 -i 1 -i 2 -i 3 -i 4 -i 5 -i 8 -t 8 -r 1m -B > test.txt

Parameter interpretation: -i selects the tests to run (0=write/rewrite, 1=read/re-read, 2=random-read/write, 3=read-backwards, 4=re-write-record, 5=stride-read, 6=fwrite/re-fwrite, 7=fread/re-fread, 8=random mix, 9=pwrite/re-pwrite, 10=pread/re-pread, 11=pwritev/re-pwritev, 12=preadv/re-preadv). -t indicates the number of threads, -r specifies the record (block) size for a single read/write, -s specifies the size of the test file, -f specifies the name of the test file, which is deleted automatically on completion (this file must live on the drive you want to test), -F file1 file2 ... specifies the test file names under multithreading, -B tells iozone to use mmap'd files, and -b writes the results to an Excel-compatible output file (in the command above the plain-text report is simply redirected to test.txt). iozone has many more parameters; learn the ones your use case requires.
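For reference, a run against the FUSE mount point might look like this (the /tmp/hdfs mount point is an assumption, taken from the dd examples below; note that test 0 must be included so the later tests have files to read):

$ cd /tmp/hdfs                                # run inside the mount so test files land on HDFS
$ iozone -s 128k -r 1m -i 0 -i 1 -i 2 -t 8 > /root/iozone-hdfs.txt    # keep the report off the mount being tested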

IV. dd testing

dd is not a standard benchmarking tool for disks and file systems; it is just a Linux disk command. But the disk-copy function dd implements can indirectly reflect the read/write capability of a disk, so dd is often used when testing disk and file system read/write performance. The read and write tests are separate commands:

Write operation: dd if=/dev/zero of=/tmp/hdfs/zerofiles bs=4M count=1240 conv=fdatasync

Read operation: dd if=/tmp/hdfs/zerofiles of=/dev/null bs=4M count=1240

Here if stands for the input file, of for the output file, bs for the block size read or written at a time, and count for the number of blocks to write or read; conv=fdatasync forces the data to be physically written to the output file before dd finishes, so the result is not just a measurement of the page cache. In actual use you can adjust these to your own needs.
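One more caveat, added here as a practical note rather than from the original text: for the read test the page cache can inflate the numbers, so it is worth dropping the cache first (the path is the assumed mount point from above):

$ sync                                    # flush dirty pages to disk first
$ echo 3 > /proc/sys/vm/drop_caches       # drop page cache, dentries and inodes (run as root)
$ dd if=/tmp/hdfs/zerofiles of=/dev/null bs=4M count=1240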

V. Summary

Finally, a brief summary of several problems encountered and their solutions

1. FUSE compilation error: errors appear after ./configure

Solution: under Ubuntu 12.04 LTS, fuse-2.7.4 and fuse-2.8.5 both produced all kinds of problems, but after switching to fuse-2.9.3 the errors stopped; other versions were not tried.

2. Version mismatch: after compiling the hadoop module fuse-dfs with ant, datanode cannot start and a version mismatch error is reported

Solution: add the following configuration to the hadoop/conf/core-site.xml configuration file (every node needs to be modified):

<property>
  <name>hadoop.relaxed.worker.version.check</name>
  <value>true</value>
</property>

<property>
  <name>hadoop.skip.worker.version.check</name>
  <value>true</value>
</property>

Both of these properties default to false, so the version is checked when the cluster starts and the datanodes refuse to connect to the namenode; the cluster then shows 0 nodes, i.e. no datanode is started.

3. You cannot view the cluster status through the browser:

Reason: ant clean wiped out all the previously compiled modules, and presumably some of the cleared modules were involved in the browser status page.

Solution: recompile the FUSE module, and do not casually clear out compiled modules.

4. Test files cannot be written to HDFS

Reason: ordinary Ubuntu users do not have write access to HDFS

Solution: open up the permissions on the hadoop directory with: hadoop fs -chmod 777 /user/hadoop
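A quick way to verify the fix, assuming HDFS is mounted at /tmp/hdfs and the mount exposes the HDFS root:

$ touch /tmp/hdfs/user/hadoop/write_test      # should now succeed as an ordinary user
$ hadoop fs -ls /user/hadoop                  # the new file should appear in HDFS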

That is all on how to use FUSE to mount HDFS. I hope the content above is of some help and lets you learn a bit more. If you think the article is good, feel free to share it so more people can see it.
