2025-04-03 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)05/31 Report--
Many readers are unfamiliar with how Hadoop's RPC communication underpins file upload in HDFS. This article analyzes the principle step by step; hopefully it answers the question for you.
// Code called in APP2
public static final String HDFS_PATH = "hdfs://hadoop:9000/hello";
public static final String DIR_PATH = "/d1000";
public static final String FILE_PATH = "/d1000/f10000";

public static void main(String[] args) throws Exception {
    FileSystem fileSystem = FileSystem.get(new URI(HDFS_PATH), new Configuration());
    // create a directory
    // fileSystem.mkdirs(new Path(DIR_PATH));
    // upload a file
    // FSDataOutputStream out = fileSystem.create(new Path(FILE_PATH));
    // FileInputStream in = new FileInputStream("c:/hello.txt");
    // IOUtils.copyBytes(in, out, 1024, true);
    // download data
    // FSDataInputStream in1 = fileSystem.open(new Path(FILE_PATH));
    // IOUtils.copyBytes(in1, System.out, 1024, true);
    // delete the folder
    deleteFile(fileSystem);
}

private static void deleteFile(FileSystem fileSystem) throws IOException {
    fileSystem.delete(new Path(FILE_PATH), true);
}
Note: RPC (remote procedure call) is the invocation of an object's methods across different Java processes. One side is called the server and the other the client. The server provides the object, and the execution of the called method happens on the server side; the client merely triggers it. RPC is the foundation on which the Hadoop framework runs.
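To make the idea concrete, here is a minimal, self-contained sketch of the client-stub pattern that RPC frameworks (including Hadoop's) are built on, using `java.lang.reflect.Proxy`. The interface name and classes are invented for illustration; a real framework would serialize the call and send it over a socket to another JVM instead of invoking the object directly.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RpcSketch {
    // The "protocol" interface shared by client and server (hypothetical).
    interface GreetingProtocol {
        String greet(String name);
    }

    // Server-side implementation; in real RPC this lives in another JVM.
    static class GreetingServer implements GreetingProtocol {
        public String greet(String name) { return "hello, " + name; }
    }

    // Client-side stub: intercepts method calls and forwards them.
    // A real framework would marshal the arguments, send them over the
    // network, and unmarshal the result here.
    static GreetingProtocol getProxy(final GreetingProtocol remote) {
        return (GreetingProtocol) Proxy.newProxyInstance(
            GreetingProtocol.class.getClassLoader(),
            new Class<?>[] { GreetingProtocol.class },
            new InvocationHandler() {
                public Object invoke(Object p, Method m, Object[] args) throws Exception {
                    // Stand-in for the network round trip.
                    return m.invoke(remote, args);
                }
            });
    }

    public static void main(String[] args) {
        GreetingProtocol client = getProxy(new GreetingServer());
        System.out.println(client.greet("hadoop")); // prints "hello, hadoop"
    }
}
```

The key point is that the caller only sees the interface; whether the method runs locally or on a remote server is hidden behind the stub, which is exactly the transparency the article describes.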
The figure above shows the series of RPC calls that ultimately write the file into the Linux file system of the DataNodes. Because Hadoop's HDFS API is so well encapsulated, the caller never sees this complex process: although the program is really accessing the file over the network, to the user it is as transparent as accessing a local disk.
To operate on HDFS, an application only needs to work with the FileSystem API; it does not need to know which DataNode block the data is stored in, because that bookkeeping is handled by the NameNode.
Note: although the client's DataStreamer requests block locations and block IDs from the NameNode when uploading data, the data itself is not relayed through the NameNode; the client connects directly to the DataNodes to transfer it.
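The split between the metadata path (NameNode) and the data path (DataNodes) can be sketched with two mock classes. These classes and names are invented for illustration and deliberately ignore replication pipelines, heartbeats, and failure handling; the point is only that the NameNode hands out a list of targets and the bytes then flow straight to the DataNodes.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WritePathSketch {
    // Mock NameNode: serves metadata only, never touches file bytes.
    static class MockNameNode {
        List<String> allocateBlock(String path) {
            // In real HDFS the NameNode chooses DataNodes; hard-coded here.
            return Arrays.asList("datanode-1", "datanode-2", "datanode-3");
        }
    }

    // Mock DataNode: the component that actually stores the bytes.
    static class MockDataNode {
        final Map<String, byte[]> blocks = new HashMap<>();
        void writeBlock(String blockId, byte[] data) { blocks.put(blockId, data); }
    }

    // Client: asks the NameNode for targets, then streams data
    // directly to each DataNode, bypassing the NameNode entirely.
    static void upload(MockNameNode nn, Map<String, MockDataNode> cluster,
                       String path, byte[] data) {
        for (String node : nn.allocateBlock(path)) {
            cluster.get(node).writeBlock(path + "_blk_0", data);
        }
    }

    public static void main(String[] args) {
        MockNameNode nn = new MockNameNode();
        Map<String, MockDataNode> cluster = new HashMap<>();
        cluster.put("datanode-1", new MockDataNode());
        cluster.put("datanode-2", new MockDataNode());
        cluster.put("datanode-3", new MockDataNode());
        upload(nn, cluster, "/d1000/f10000", "hello".getBytes());
    }
}
```

Notice that `MockNameNode` never sees the payload, mirroring the note above: the NameNode answers "where", and the client does the "writing" itself.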
After reading the above, have you grasped how RPC communication drives file upload in Hadoop? Thank you for reading!