
How to create a folder in HDFS in a virtual machine through the Java API


This article explains in detail how to use the Java API to read and write the HDFS running in a virtual machine and create a folder there. The editor shares it as a reference; I hope you gain some understanding of the relevant knowledge after reading it.

In this lecture, we explain how to read and write HDFS from another computer. In the real world, HDFS storage and the applications that use it are likely to run on different machines.

In the last lecture, we set up network connectivity between the two machines; in this lecture, we write a Java program that reads and writes HDFS.

Steps:

1. Ensure network connectivity between the two computers (previous lecture)

2. Make sure HDFS in CentOS is running normally (previous lecture)

3. Open port 9000 in the CentOS firewall (the port HDFS is configured to use in CentOS)

4. Configure the JDK in Windows

5. Configure Hadoop in Windows

6. Configure the Eclipse Hadoop plug-in and library packages

7. Create the Java program.

1. Open port 9000

Open port 9000 in the CentOS firewall, as follows:
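The original screenshot of the firewall commands is missing. As a sketch, assuming CentOS 7 with firewalld (the commands differ on older releases that use iptables), opening and verifying the port looks like this:

    firewall-cmd --zone=public --add-port=9000/tcp --permanent
    firewall-cmd --reload
    firewall-cmd --query-port=9000/tcp --zone=public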

Once the port is open in CentOS, test it from a browser in Windows by visiting http://<CentOS-IP>:9000 to confirm the port is reachable. (Note that a browser's "speed mode", Google Chrome, IE, or compatibility mode may fail to reach it.)

In addition, all of the cmd commands below should be run in a newly opened cmd window. If you reuse a previously opened cmd window, tests may fail, because a cmd window only sees the environment variables that were in effect when it was opened.

2. Configure JDK in Windows

The JDK version used in this demo:

Link: https://pan.baidu.com/s/1X3hqp8DhdF-JEcK4rE6TyQ

Extraction code: kvgj

The location where my JDK files are stored:

Configure JAVA_HOME

Configure CLASSPATH

Configure Path
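The original screenshots of these settings are missing. As a sketch, assuming the JDK path shown later in this article (C:\Program Files\Java\jdk-13.0.2), typical values are:

    JAVA_HOME = C:\Program Files\Java\jdk-13.0.2
    CLASSPATH = .;%JAVA_HOME%\lib
    Path: add %JAVA_HOME%\bin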

Finally, test that the java and javac commands work normally.
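For example, in a newly opened cmd window:

    java -version
    javac -version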

3.1 Configure Hadoop environment variables in Windows

Decompress the Hadoop package; it is the same package used in CentOS, except that the .tar.gz archive has to be unpacked repeatedly until you reach the innermost folder. I put it in the root directory of the C drive.

Link: https://pan.baidu.com/s/1AJLenl05gs75XOQJisOyFg

Extraction code: 4t4d

The Hadoop version used above is 2.9.2.

Then configure HADOOP_HOME and add it to Path:
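The original screenshot is missing. As a sketch, assuming the package was unpacked to the C drive root as described (the folder name depends on your download):

    HADOOP_HOME = C:\hadoop-2.9.2
    Path: add %HADOOP_HOME%\bin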

After saving, open a new cmd window and check the version with "hadoop version", or simply enter "hadoop". If the following error is prompted:

Following the prompt, open hadoop-env.cmd under the Hadoop directory. Note that the .cmd suffix is for Windows; the file of the same name with the .sh suffix is for Linux systems.

Right-click the hadoop-env.cmd file and choose Edit (double-clicking would run it instead). After opening it, change the "set JAVA_HOME=" line to the following:

set JAVA_HOME=C:\PROGRA~1\Java\jdk-13.0.2

Note: my Java is installed under C:\Program Files, and if you write C:\Program Files\Java\jdk-13.0.2 directly, the system will not recognize the path in DOS mode because of the space. In DOS short (8.3) filename notation, "C:\Program Files" is abbreviated "PROGRA~1". Alternatively, write it as "C:\Program Files"\Java\jdk-13.0.2, i.e. with the part containing the space enclosed in double quotation marks; otherwise the system cannot recognize it as a valid path.

After setting this up, run the hadoop command again in a new cmd window and you can see that it works normally.

3.2 Download winutils.exe

winutils.exe is a tool Hadoop needs when running on Windows; it provides the basic utilities required to run and debug Hadoop and Spark on Windows systems.

Single winutils.exe link: https://pan.baidu.com/s/1tsnA4dKOaaI-kdtjqZ5gTQ

Extraction code: ip7h

After downloading, put it in the bin folder under the Hadoop root directory.

Otherwise, an error will be prompted.

3.3 Download hadoop.dll

Copy hadoop.dll to both the bin folder under the Hadoop root directory and C:\Windows\System32.

Single hadoop.dll link: https://pan.baidu.com/s/1kJBEDPXqOKmV1ZvhEbnB_Q

Extraction code: 02hs

4. Configure the Hadoop plug-in for Eclipse

The version of Eclipse used for the demo is 4.14:

To install the Hadoop plug-in for Eclipse, download the Eclipse plug-in version that matches your Hadoop. The Hadoop version demonstrated here is 2.9.2, so the plug-in is the matching version.

Single hadoop plug-in link: https://pan.baidu.com/s/1cNrQS3tTb3ZsDCb5C3ivlg

Extraction code: 5y5s

Here is the Eclipse installation path:

After shutting down Eclipse, put the downloaded hadoop-eclipse-plugin-2.9.2.jar in the dropins directory.

A quick note on the common ways to install Eclipse plug-ins. Many guides on the Internet say to put the jar above under the plugins folder, but Eclipse changed this starting with version 3.5. There are generally three ways to install plug-ins:

1. Install directly from within Eclipse via "Help" -> "Install New Software".

2. The links method: create a links directory containing a .link file that points to the path of the plug-in.

3. The dropins method: copy (or drag and drop) the required plug-ins into the eclipse\dropins folder (unzip them first, of course) and the plug-ins are installed. If you want to use your plug-ins on another machine, just copy your dropins folder over the original one.

After Eclipse starts, you can see that a "DFS Locations" folder has been added to Project Explorer.

Open the Hadoop configuration view via Window -> Show View -> Other.

Select Map/Reduce Locations and click OK; the view is displayed at the bottom of the IDE.

At the bottom you can see the following. Click the blue elephant icon on the right to add a new location:

In the location dialog, enter the connection parameters and click "Finish" when done. (The original screenshot is missing; the DFS Master should point at the CentOS machine's IP with port 9000, matching the HDFS fs.defaultFS configuration.)

Then open Window -> Preferences.

Find Hadoop Map/Reduce, click Browse on the right to set the installation path of the local Hadoop, and click "Apply and Close" to confirm.

5. Create the Java program

5.1 Create a project

Open Eclipse and create a Java project with a name of your choice.

Select Map/Reduce Project under the Map/Reduce category.

Pick a name and click Next.

When finished, you can see that a library has been added to the new project containing the jars used by Hadoop.

5.2 Add the Hadoop jar packages

If the Hadoop jars are not on the build path, there will be many problems, such as red wavy underlines:

First, select the project you created, as follows:

Then click File -> Properties.

Then select "Java Buile Path- > Libraries- > ClassPath- > Add Library- > User Library" to create a custom jar package storage location, and then click Next

Then click "User Libraries-- > New..." Enter the name of the Libraries you want to create in the input box, where the name is customized, and you also need to see the name. This is hadoopJar.

Finally, click OK, then click "Apply and Close" to enter the interface below.

Click "Add External JARs" to add the paths of the jar packages. Just select all the jars under the common directory.

The jars are in the common directory shown below, in its lib subdirectory, and in the sources directory as well.

After everything is selected, expand hadoopJar and you can see the many jar packages it now contains.

Finally, click "apply and close" below

At this point, none of the import statements have wavy underlines anymore.

5.3 Write the code

Note that I am going to create a java folder under the root of HDFS. You can see that there is currently no java folder in the CentOS HDFS.
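You can confirm this from the CentOS side, for example:

    hdfs dfs -ls /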

Create a package in the src folder of the project.

Enter a name in the Name field; the name is up to you.

Then create the main class in the package; its name is also up to you. The main purpose of the package is to organize classes later.

Write the code in the main class as follows. The source code is deliberately not pasted here, so you get a feel for typing it yourself:

Note that the program is set to access HDFS as the root user.
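Since the original shows the code only as a screenshot, here is a minimal sketch of what such a program typically looks like, assuming the hdfs://<CentOS-IP>:9000 address and the /java target folder from earlier. The class name and the 192.168.1.100 address are placeholders, not the original author's code:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsMkdirDemo {
        public static void main(String[] args) throws Exception {
            // Address of the HDFS NameNode in the CentOS VM (placeholder IP).
            URI uri = new URI("hdfs://192.168.1.100:9000");
            Configuration conf = new Configuration();
            // Connect as the "root" user, as stipulated above.
            FileSystem fs = FileSystem.get(uri, conf, "root");
            // Create the /java folder under the HDFS root.
            boolean created = fs.mkdirs(new Path("/java"));
            System.out.println("mkdir /java: " + created);
            fs.close();
        }
    }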

After saving, run the program by selecting "Java Application". The running result is as follows:

Checking HDFS again, the java folder has now been created:

Finally, if you want to download everything in one bundle, you can use this link (the file is relatively large):

Link: https://pan.baidu.com/s/1DIHvbSoWvYRLBaP7qQI9ng

Extraction code: olgh

That is how to use the Java API to read and write the HDFS in a virtual machine and create a folder. I hope the content above is helpful and that you have learned something from it. If you think the article is good, share it for more people to see.
