
What are the file operations of HDFS?


This article explains the file operations of HDFS in detail. The editor finds it quite practical and shares it here for your reference; I hope you get something out of it after reading.

File operations on HDFS

Format HDFS

Command: user@namenode:hadoop$ bin/hadoop namenode -format

Start HDFS

Command: user@namenode:hadoop$ bin/start-dfs.sh

List files on HDFS

Command: user@namenode:hadoop$ bin/hadoop dfs -ls
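The bare -ls lists your HDFS home directory; a path argument can also be supplied, for example (the path here is illustrative):

Command: user@namenode:hadoop$ bin/hadoop dfs -ls /user/yourUserName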

Using the Hadoop API (note that this snippet does not list a directory; it returns, for each block of a file, the hosts that store that block):

// Return, for each block of the given file, the list of hosts storing it
public List<String[]> getFileBlockHosts(Configuration conf, String fileName) {
    try {
        List<String[]> list = new ArrayList<String[]>();
        FileSystem hdfs = FileSystem.get(conf);
        Path path = new Path(fileName);
        FileStatus fileStatus = hdfs.getFileStatus(path);
        BlockLocation[] blkLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
        int blkCount = blkLocations.length;
        for (int i = 0; i < blkCount; i++) {
            String[] hosts = blkLocations[i].getHosts();
            list.add(hosts);
        }
        return list;
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
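A minimal usage sketch for the method above (the file path is an assumption; point conf at your cluster first):

Configuration conf = new Configuration();
List<String[]> blockHosts = getFileBlockHosts(conf, "/usr/yujing/test.txt");
if (blockHosts != null) {
    for (String[] hosts : blockHosts) {
        // hosts holding one block of the file
        System.out.println(java.util.Arrays.toString(hosts));
    }
}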

Create a directory on HDFS

Command: user@namenode:hadoop$ bin/hadoop dfs -mkdir /directoryName

Using the Hadoop API (note that this snippet creates a file rather than a directory; a directory sketch follows it):

// Create a new file in HDFS and return an output stream for writing to it
public FSDataOutputStream createFile(Configuration conf, String fileName) {
    try {
        FileSystem hdfs = FileSystem.get(conf);
        Path path = new Path(fileName);
        FSDataOutputStream outputStream = hdfs.create(path);
        return outputStream;
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
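To create the directory itself through the API, FileSystem.mkdirs can be used. A minimal sketch (the method name createDirectory is mine, not from the original):

// Create a directory in HDFS; parent directories are created as needed
public boolean createDirectory(Configuration conf, String dirName) {
    try {
        FileSystem hdfs = FileSystem.get(conf);
        return hdfs.mkdirs(new Path(dirName));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return false;
}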

Upload a file to HDFS

Command: user@namenode:hadoop$ bin/hadoop dfs -put fileName /user/yourUserName/

Using the Hadoop API:

// Upload a local file to HDFS
public void putFile(Configuration conf, String srcFile, String dstFile) {
    try {
        FileSystem hdfs = FileSystem.get(conf);
        Path srcPath = new Path(srcFile);
        Path dstPath = new Path(dstFile);
        hdfs.copyFromLocalFile(srcPath, dstPath);
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Export data from HDFS

Command: user@namenode:hadoop$ bin/hadoop dfs -cat foo

Using the Hadoop API:

// Read a file from HDFS and print its contents to standard output
public void readFile(Configuration conf, String fileName) {
    try {
        FileSystem hdfs = FileSystem.get(conf);
        FSDataInputStream dis = hdfs.open(new Path(fileName));
        IOUtils.copyBytes(dis, System.out, 4096, false);
        dis.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Shutdown of HDFS

Command: user@namenode:hadoop$ bin/stop-dfs.sh

HDFS global status information

Command: bin/hadoop dfsadmin -report

This produces a global status report containing basic information about the HDFS cluster as a whole, as well as some information about each machine in it.
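A similar capacity summary is also available programmatically in more recent Hadoop releases through FileSystem.getStatus. A minimal sketch (the method name printClusterStatus is mine):

// Print a basic capacity summary, similar in spirit to dfsadmin -report
public void printClusterStatus(Configuration conf) throws IOException {
    FileSystem hdfs = FileSystem.get(conf);
    FsStatus status = hdfs.getStatus();
    System.out.println("Capacity : " + status.getCapacity());
    System.out.println("Used     : " + status.getUsed());
    System.out.println("Remaining: " + status.getRemaining());
}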

All of the operations above are local operations on HDFS, carried out under Ubuntu on a machine with a configured Hadoop environment. You can also operate on HDFS remotely, for example from a client running under Windows. The principle is essentially the same: as long as the cluster's NameNode IP address and port are reachable from outside, external clients can access HDFS.
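A minimal sketch of such a remote client connection (the NameNode address matches the examples below; fs.default.name is the classic configuration key, later renamed fs.defaultFS):

// Connect to a remote HDFS cluster by pointing the client at the NameNode
public FileSystem connectToRemoteHdfs() throws IOException {
    Configuration conf = new Configuration();
    conf.set("fs.default.name", "hdfs://192.168.1.11:9000");
    return FileSystem.get(conf);
}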

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

/**
 * Operations on HDFS
 * @author yujing
 */

public class Write {

    public static void main(String[] args) {
        try {
            uploadTohdfs();
            readHdfs();
            getDirectoryFromHdfs();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /** Upload a local file to HDFS */
    public static void uploadTohdfs() throws FileNotFoundException, IOException {
        String localSrc = "D://qq.txt";
        String dst = "hdfs://192.168.1.11:9000/usr/yujing/test.txt";
        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        OutputStream out = fs.create(new Path(dst), new Progressable() {
            public void progress() {
                System.out.print(".");   // progress marker printed as data is written
            }
        });
        IOUtils.copyBytes(in, out, 4096, true);
        System.out.println("File uploaded successfully");
    }

    /** Read a file from HDFS and copy it to a local file */
    private static void readHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.1.11:9000/usr/yujing/test.txt";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FSDataInputStream hdfsInStream = fs.open(new Path(dst));
        OutputStream out = new FileOutputStream("d:/qq-hdfs.txt");
        byte[] ioBuffer = new byte[1024];
        int readLen = hdfsInStream.read(ioBuffer);
        while (-1 != readLen) {
            out.write(ioBuffer, 0, readLen);
            readLen = hdfsInStream.read(ioBuffer);
        }
        System.out.println("File read successfully");
        out.close();
        hdfsInStream.close();
        fs.close();
    }

    /**
     * Append content to the end of a file on HDFS. Note: for appends to work,
     * dfs.support.append must be set to true in hdfs-site.xml.
     */
    private static void appendToHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.1.11:9000/usr/yujing/test.txt";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FSDataOutputStream out = fs.append(new Path(dst));
        byte[] bytes = "zhangzk add by hdfs java api".getBytes();
        out.write(bytes, 0, bytes.length);   // write the whole payload once
        out.close();
        fs.close();
    }

    /** Delete a file or directory from HDFS */
    private static void deleteFromHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.1.11:9000/usr/yujing";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        fs.deleteOnExit(new Path(dst));   // the path is removed when the FileSystem is closed
        fs.close();
    }

    /** Traverse the files and directories under a path on HDFS */
    private static void getDirectoryFromHdfs() throws FileNotFoundException, IOException {
        String dst = "hdfs://192.168.1.11:9000/usr/yujing";
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);
        FileStatus[] fileList = fs.listStatus(new Path(dst));
        int size = fileList.length;
        for (int i = 0; i < size; i++) {
            System.out.println("name: " + fileList[i].getPath().getName()
                    + "  size: " + fileList[i].getLen());
        }
        fs.close();
    }
}
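Note that appendToHdfs and deleteFromHdfs are defined but never invoked from main; call them explicitly to test appends and deletion, and keep in mind that deleteOnExit only removes the path once the FileSystem instance is closed.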
