
5. Operating HDFS through the API

2025-01-18 Update From: SLTechnology News&Howtos shulou


Shulou(Shulou.com)06/03 Report--

1. Basic API operations

1. There are two ways to obtain an HDFS FileSystem object:

Method 1:

```java
public static FileSystem initFileSystem1() throws IOException {
    // get a configuration object
    Configuration conf = new Configuration();
    // specify the namenode address
    conf.set("fs.defaultFS", "hdfs://bigdata121:9000");
    // get the hdfs file system access object
    FileSystem client = FileSystem.get(conf);
    return client;
}
```

Method 2:

```java
public static FileSystem initFileSystem2() throws IOException, URISyntaxException {
    Configuration conf = new Configuration();
    // get the hdfs file system access object directly through a URI
    FileSystem client = FileSystem.get(new URI("hdfs://bigdata121:9000"), conf);
    return client;
}
```
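As an aside, the URI string passed to FileSystem.get can be decomposed with the stdlib java.net.URI class. This small sketch (the class name HdfsUriParts is made up for illustration) shows the scheme, host, and port that the `hdfs://bigdata121:9000` form encodes:

```java
import java.net.URI;

public class HdfsUriParts {
    // Split an hdfs URI into the pieces FileSystem.get(uri, conf) cares about.
    public static String describe(String uriString) {
        URI uri = URI.create(uriString);
        return uri.getScheme() + " " + uri.getHost() + " " + uri.getPort();
    }

    public static void main(String[] args) {
        System.out.println(describe("hdfs://bigdata121:9000")); // prints "hdfs bigdata121 9000"
    }
}
```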

From then on, HDFS is manipulated by calling methods on client, the file system object.

2. Setting parameter values on the Configuration object

```java
// parameter values are set with conf.set(key, value), for example:
conf.set("fs.defaultFS", "hdfs://bigdata121:9000");
```

3. Create a directory

```java
Path p = new Path(hdfsPath); // hdfsPath is the target HDFS path string
client.mkdirs(p);
```
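client.mkdirs creates any missing parent directories and returns a boolean, much like the local java.io.File.mkdirs(). A local-filesystem sketch of that behavior (the class name MkdirsSketch is illustrative only, not part of the HDFS API):

```java
import java.io.File;

public class MkdirsSketch {
    // Local analogue of client.mkdirs(path): creates all missing
    // parent directories and reports whether creation succeeded.
    public static boolean makeDirs(String path) {
        return new File(path).mkdirs();
    }

    public static void main(String[] args) {
        String base = System.getProperty("java.io.tmpdir");
        // returns false if the directory already exists, true if it was created
        System.out.println(makeDirs(base + File.separator + "demo" + File.separator + "nested"));
    }
}
```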

4. Upload files

```java
Path dest = new Path(hdfsPath);   // destination path in HDFS
Path src = new Path(localPath);   // source path on the local filesystem
client.copyFromLocalFile(src, dest);
// overloads also let you choose whether to delete the local source file
// and whether to overwrite an existing file of the same name in hdfs
```

5. Download the file

```java
// usage:
client.copyToLocalFile(srcPath, destPath);

// example:
Path downloadFile = new Path("/king/edit_new.xml");
Path destPath = new Path("G:\\edits.xml");
client.copyToLocalFile(downloadFile, destPath);
client.close();
// an overload also lets you choose whether to delete the source file in hdfs
```

6. Delete a file or directory

```java
/*
 * Method 1: client.delete(Path hdfsPath, boolean recursive)
 * If recursive is false and the directory to delete is not empty,
 * the call reports an error.
 */
Path deletePath = new Path("/linux2.txt");
client.delete(deletePath, true);
client.close();

/*
 * Method 2: client.deleteOnExit(Path hdfsPath)
 * The path is deleted, if it exists, when the FileSystem is closed.
 */
Path deletePath = new Path("/linux2.txt");
client.deleteOnExit(deletePath);
client.close();
```

7. View file attributes (files only; directories are not listed)

```java
// listFiles returns an iterator over LocatedFileStatus objects.
// usage: RemoteIterator<LocatedFileStatus> pathList = client.listFiles(hdfsPath, recursive);
// recursive indicates whether to recurse into subdirectories

// example:
public void listFileMetaData() throws Exception {
    FileSystem client = initFileSystem2();
    Path listPath = new Path("/");
    // get the listing under the specified path, non-recursively; returns an iterator
    RemoteIterator<LocatedFileStatus> pathList = client.listFiles(listPath, false);
    while (pathList.hasNext()) {
        LocatedFileStatus i = pathList.next();
        // file name
        System.out.println(i.getPath().getName());
        // file permissions
        System.out.println(i.getPermission());
        // file owner
        System.out.println(i.getOwner());
        // file group
        System.out.println(i.getGroup());
        // file size in bytes
        System.out.println(i.getLen());
        // block size of the file
        System.out.println("blocksize:" + i.getBlockSize());
        // block locations of the file
        BlockLocation[] bl = i.getBlockLocations();
        for (BlockLocation b : bl) {
            // offset of each block
            System.out.println("offset:" + b.getOffset());
            // hostnames of all datanodes holding a replica of this block
            String[] hosts = b.getHosts();
            for (String h : hosts) {
                System.out.println(h);
            }
        }
        System.out.println("=====");
    }
}
```

8. View the properties of files and directories

```java
// listStatus returns a FileStatus array; it does not recurse into
// subdirectories, but the subdirectories themselves are included,
// so their attributes can be inspected.
// usage: FileStatus[] f = client.listStatus(path);
public void judgeFile() throws Exception {
    FileSystem client = initFileSystem2();
    Path path = new Path("/");
    // get the FileStatus array
    FileStatus[] fileStatuses = client.listStatus(path);
    // read the attributes of each file or directory through its FileStatus object
    for (FileStatus f : fileStatuses) {
        System.out.println("File name:" + f.getPath().getName());
        System.out.println("permission:" + f.getPermission());
        System.out.println("size:" + f.getLen());
        System.out.println("=====");
    }
    client.close();
}
```

9. File type judgment

```java
// Both FileStatus and LocatedFileStatus provide methods to test
// whether the current entry is a file or a directory:
fileStatus.isFile();
locatedFileStatus.isFile();
fileStatus.isDirectory();
locatedFileStatus.isDirectory();
```

2. Manipulating HDFS with IO streams

1. Upload files by IO stream

The following two methods are mainly used:

```java
// FSDataOutputStream is the output stream dedicated to HDFS
FSDataOutputStream fos = client.create(path);
// create() also has overloads with the following parameters:
//   boolean overwrite        - whether to overwrite if the file already exists (default true)
//   short replication        - number of replicas; if unspecified, the hdfs configuration decides
//   int bufferSize           - buffer size
//   long blockSize           - block size to use
//   FsPermission permission  - permissions to set
//   ChecksumOpt checksumOpt  - checksum options

// IOUtils.copyBytes is the utility that connects an input stream to an output stream:
//   IOUtils.copyBytes(inputStream, outputStream, buffSize, close);
//   inputStream  - the input stream
//   outputStream - the output stream
//   buffSize     - buffer size
//   close        - whether to close the streams when done
```

Example:

```java
@Test
public void putFileFromIO() throws Exception {
    FileSystem client = initFileSystem2();
    // create a byte input stream for the local file
    InputStream fis = new FileInputStream("E:\\file\\big data\\java se\\Chapter 18 Java File and IO Stream.md");
    // create the hdfs output stream; note that the target path must
    // include the file name, otherwise an error is thrown
    Path uploadPath = new Path("/Chapter 18 Java File and IO Stream.md");
    FSDataOutputStream fos = client.create(uploadPath);
    // connect the input stream to the output stream
    try {
        // copy from input to output; false means do not close the streams
        IOUtils.copyBytes(fis, fos, 1024, false);
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        // close both streams
        IOUtils.closeStream(fis);
        IOUtils.closeStream(fos);
    }
}
```
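Internally, IOUtils.copyBytes is essentially a read/write loop over a fixed-size buffer. A self-contained stdlib sketch of that pattern (not Hadoop's actual implementation; the class name CopyBytesSketch is made up):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyBytesSketch {
    // Copy everything from in to out using a fixed-size buffer,
    // mirroring the shape of IOUtils.copyBytes(in, out, buffSize).
    public static long copyBytes(InputStream in, OutputStream out, int buffSize) throws IOException {
        byte[] buf = new byte[buffSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello hdfs".getBytes());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long copied = copyBytes(in, out, 1024);
        System.out.println(copied + " bytes: " + out.toString()); // prints "10 bytes: hello hdfs"
    }
}
```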

2. Download files by IO stream

```java
// FSDataInputStream is the input stream dedicated to HDFS
Path getPath = new Path("/dog.txt");
FSDataInputStream fis = client.open(getPath);
```

Example:

```java
/*
 * There are two ways to write the download:
 * 1. use the read/write methods of the streams directly
 * 2. use the utility IOUtils.copyBytes(inStream, outStream, buffSize)
 */
@Test
public void getFileFromIO() throws Exception {
    FileSystem client = initFileSystem2();
    // create a local output stream to save the content
    OutputStream fos = new FileOutputStream("F:\\edit_new.xml");
    // create the hdfs input stream
    Path getPath = new Path("/dog.txt");
    FSDataInputStream fis = client.open(getPath);
    try {
        // copy the hdfs file into the local output stream
        IOUtils.copyBytes(fis, fos, 1024);
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        IOUtils.closeStream(fis);
        IOUtils.closeStream(fos);
    }
}
```
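In modern Java the close-in-finally pattern above is usually written with try-with-resources, which closes both streams automatically even when the copy throws. A stdlib sketch with in-memory streams standing in for the HDFS ones (class and method names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class TryWithResourcesCopy {
    // Copy in to out; both streams are closed automatically by
    // try-with-resources, replacing the IOUtils.closeStream calls in finally.
    public static void copy(InputStream rawIn, OutputStream rawOut) throws IOException {
        try (InputStream in = rawIn; OutputStream out = rawOut) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream("dog.txt contents".getBytes()), out);
        System.out.println(out.toString()); // prints "dog.txt contents"
    }
}
```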
