2025-01-20 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 05/31 Report--
This article explains the principle of file upload over RPC communication in native Hadoop. The editor thinks it is very practical and shares it here for your reference; I hope you get something out of it. Let's take a look.
// Code called in APP2
// Imports needed (the surrounding class declaration is omitted, as in the original):
import java.io.FileInputStream;
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public static final String HDFS_PATH = "hdfs://hadoop:9000/hello";
public static final String DIR_PATH = "/d1000";
public static final String FILE_PATH = "/d1000/f10000";

public static void main(String[] args) throws Exception {
    FileSystem fileSystem = FileSystem.get(new URI(HDFS_PATH), new Configuration());
    // Create a directory
    // fileSystem.mkdirs(new Path(DIR_PATH));
    // Upload a file
    // FSDataOutputStream out = fileSystem.create(new Path(FILE_PATH));
    // FileInputStream in = new FileInputStream("c:/hello.txt");
    // IOUtils.copyBytes(in, out, 1024, true);
    // Download data
    // FSDataInputStream in1 = fileSystem.open(new Path(FILE_PATH));
    // IOUtils.copyBytes(in1, System.out, 1024, true);

    // Delete the file
    deleteFile(fileSystem);
}

private static void deleteFile(FileSystem fileSystem) throws IOException {
    fileSystem.delete(new Path(FILE_PATH), true);
}
Note: RPC (remote procedure call) means calling object methods between different Java processes. One side is called the server and the other the client. The server provides the object, and the execution of the called object's methods happens on the server side. RPC is the foundation on which the Hadoop framework runs.
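The idea above can be illustrated with a minimal pure-Java sketch (this is a toy, not Hadoop's actual RPC engine; the interface name `NameNodeProtocol` and the class names are invented for illustration). A dynamic proxy plays the role of the client-side stub: the client calls an ordinary method on the proxy, and the method body actually executes on the "server" object, which is exactly the server/client split described above. In real RPC the `invoke` step would serialize the method name and arguments and send them over the network.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RpcSketch {
    // The protocol both sides agree on (in Hadoop, protocol interfaces
    // historically extended VersionedProtocol).
    interface NameNodeProtocol {
        String allocateBlock(String path);
    }

    // Server side: the real object; method bodies execute here.
    static class NameNodeImpl implements NameNodeProtocol {
        public String allocateBlock(String path) {
            return "blk_0001 for " + path; // pretend block allocation
        }
    }

    // Client side: a stub that forwards each call. A real RPC stub would
    // serialize the call and send it over a socket instead of invoking directly.
    static NameNodeProtocol getProxy(final NameNodeProtocol server) {
        return (NameNodeProtocol) Proxy.newProxyInstance(
                NameNodeProtocol.class.getClassLoader(),
                new Class<?>[]{NameNodeProtocol.class},
                new InvocationHandler() {
                    public Object invoke(Object p, Method m, Object[] args) throws Exception {
                        return m.invoke(server, args); // the "network hop" stands in here
                    }
                });
    }

    public static void main(String[] args) {
        NameNodeProtocol client = getProxy(new NameNodeImpl());
        // The client sees a plain method call; the work happens in NameNodeImpl.
        System.out.println(client.allocateBlock("/d1000/f10000"));
    }
}
```

The point of the sketch: the caller holds only the interface, never the implementation, which is why Hadoop clients can call NameNode methods as if they were local.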
RPC communication
The figure above shows the series of methods invoked over RPC communication that ultimately write the file into the Linux file system. Because Hadoop's HDFS API is so well encapsulated, the caller never perceives this complex process: for the user or program, the file is actually being accessed over the network, yet it feels as transparent as accessing a local disk.
To sum up: to operate on HDFS, the application only needs to master FileSystem, without caring which DataNode block the data is stored in (that bookkeeping is handled by the NameNode).
Note: although the client (via DataStreamer) asks the NameNode for blocks and block ids, the data itself is not transferred through the NameNode; the client connects directly to the DataNodes to stream the bytes.
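This control-path/data-path split can be sketched in plain Java (again a toy model, not HDFS internals; the classes, the block-id format, and the `datanode-1:50010` address are all invented for illustration). The "NameNode" hands out only metadata; the bytes go straight to the "DataNode":

```java
import java.util.HashMap;
import java.util.Map;

public class WritePathSketch {
    // Toy NameNode: hands out block ids and a DataNode address (metadata only).
    static class NameNode {
        private int nextId = 1;
        String[] allocate(String path) {
            return new String[]{"blk_" + (nextId++), "datanode-1:50010"};
        }
    }

    // Toy DataNode: the only place the actual bytes are stored.
    static class DataNode {
        final Map<String, byte[]> blocks = new HashMap<>();
        void write(String blockId, byte[] data) {
            blocks.put(blockId, data);
        }
    }

    public static void main(String[] args) {
        NameNode nn = new NameNode();
        DataNode dn = new DataNode();
        // 1. The client asks the NameNode for a block id and a target DataNode.
        String[] grant = nn.allocate("/d1000/f10000");
        // 2. The client streams the bytes straight to the DataNode;
        //    the NameNode never sees the file contents.
        dn.write(grant[0], "hello".getBytes());
        System.out.println(grant[0] + " stored on datanode, "
                + dn.blocks.get(grant[0]).length + " bytes");
    }
}
```

Notice that `NameNode.allocate` returns only strings (metadata), while the payload flows through `DataNode.write`; this mirrors why a heavily loaded cluster does not bottleneck on the NameNode for data transfer.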
The above is the analysis of the principle of file upload over RPC communication in native Hadoop. The editor believes it covers knowledge points we may see or use in daily work, and hopes you can learn more from this article.