2025-04-07 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/02 Report--
This article explains how to operate HDFS through the Java API. It walks through each step in detail; interested readers are encouraged to follow along.
Step 1: Hadoop environment configuration on Windows
The Windows operating system needs a local Hadoop environment configured.
Mac is essentially a Unix system and does not require this configuration.
Reference document: "Windows & Mac native development environment configuration"
Link: https://pan.baidu.com/s/1tFJSlRxn18YELUUAUkXXQA
Extraction code: g9ka
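On Windows, the Hadoop client libraries resolve winutils.exe from HADOOP_HOME (or the "hadoop.home.dir" system property), and a missing setting produces a confusing startup error. A minimal, dependency-free check is sketched below; the C:\hadoop path is a placeholder for wherever you unpacked the Windows binaries, not a required location:

```java
public class HadoopEnvCheck {
    public static void main(String[] args) {
        // The Hadoop client looks for winutils.exe under HADOOP_HOME,
        // falling back to the "hadoop.home.dir" system property.
        String hadoopHome = System.getenv("HADOOP_HOME");
        if (hadoopHome == null) {
            // Placeholder path: point this at your unpacked Windows binaries.
            System.setProperty("hadoop.home.dir", "C:\\hadoop");
            hadoopHome = System.getProperty("hadoop.home.dir");
        }
        System.out.println("hadoop.home.dir = " + hadoopHome);
    }
}
```

Run this before touching any FileSystem API; if it prints a path that does not contain bin\winutils.exe, fix the environment first.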
Step 2: Create a Java Maven project and add the dependencies:
<properties>
    <hadoop.version>3.1.4</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>RELEASE</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.0</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <encoding>UTF-8</encoding>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <minimizeJar>true</minimizeJar>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
Step 3: Develop the Java API operations on HDFS
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;
import org.junit.Test;

// simplified version
@Test
public void mkDirOnHDFS() throws IOException {
    // configuration item
    Configuration configuration = new Configuration();
    // set the hdfs cluster to connect to
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    // get the file system
    FileSystem fileSystem = FileSystem.get(configuration);
    // call the method to create a directory; if the directory already exists,
    // the creation fails and false is returned
    boolean mkdirs = fileSystem.mkdirs(new Path("/kaikeba/dir1"));
    // release resources
    fileSystem.close();
}

// specify the user the operation runs as
@Test
public void mkDirOnHDFS2() throws IOException, URISyntaxException, InterruptedException {
    // configuration item
    Configuration configuration = new Configuration();
    // get the file system, connecting as user "test"
    FileSystem fileSystem = FileSystem.get(new URI("hdfs://node01:8020"), configuration, "test");
    // call the method to create a directory
    boolean mkdirs = fileSystem.mkdirs(new Path("/kaikeba/dir2"));
    // release resources
    fileSystem.close();
}

// specify directory permissions when creating a directory
@Test
public void mkDirOnHDFS3() throws IOException {
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://node01:8020");
    FileSystem fileSystem = FileSystem.get(configuration);
    // rwx for the owner, r-- for group and others (octal 744)
    FsPermission fsPermission = new FsPermission(FsAction.ALL, FsAction.READ, FsAction.READ);
    boolean mkdirs = fileSystem.mkdirs(new Path("hdfs://node01:8020/kaikeba/dir3"), fsPermission);
    if (mkdirs) {
        System.out.println("Directory created successfully");
    }
    fileSystem.close();
}
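In mkDirOnHDFS3, FsPermission(FsAction.ALL, FsAction.READ, FsAction.READ) corresponds to octal mode 744 (rwxr--r--). The mapping can be sketched without any Hadoop dependency, using the same bit values FsAction encodes; the class below is illustrative, not part of Hadoop:

```java
public class PermissionDemo {
    // Same bit values Hadoop's FsAction uses: READ=4, WRITE=2, EXECUTE=1, ALL=7.
    static final int ALL = 7, READ = 4;

    public static void main(String[] args) {
        // FsPermission(user, group, other) packs three octal digits.
        int user = ALL, group = READ, other = READ;
        System.out.printf("octal %d%d%d%n", user, group, other); // prints "octal 744"
    }
}
```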
Note: the Hadoop cluster and the corresponding host mappings must be configured as described in the earlier environment setup, and the programs above will only run normally once Hadoop has started successfully.
The above is the full content of the article "How to use the Java API to operate HDFS". Thank you for reading! I hope the shared content helps you; for more related knowledge, welcome to follow the industry information channel!