
Setting up a Hadoop plug-in development environment in Eclipse


Shulou(Shulou.com)06/03 Report--

First, set up the Hadoop environment on Windows 10 and make sure Hadoop can run locally.

Extract the hadoop-2.7.7 installation package and its source package. After extraction, create an empty directory and copy into it all of the jar files under share/hadoop in the installation package, except those under the kms directory. There are about 120 jars in total.

Copy hadoop.dll from the bin directory of the Hadoop installation on Windows 10 to C:\Windows\System32, then restart the computer.

Check that the previously installed local Hadoop environment has its environment variables configured and that HADOOP_USER_NAME is set to root (the user assumed by default here), and confirm that hadoop.dll is present in C:\Windows\System32.
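
If the HADOOP_USER_NAME variable cannot be set system-wide, it can also be forced from client code before the first FileSystem.get() call. The sketch below is not part of the original article; the class name HdfsUserCheck is made up for illustration, and it assumes the Hadoop 2.x client jars and the cluster configuration files are already on the classpath.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HdfsUserCheck {
    public static void main(String[] args) throws IOException {
        // Hadoop 2.x reads HADOOP_USER_NAME from the environment or, failing that,
        // from a JVM system property, so this mirrors setting the variable in Windows.
        System.setProperty("HADOOP_USER_NAME", "root");

        Configuration conf = new Configuration(true); // true: load core-site.xml/hdfs-site.xml from the classpath
        try (FileSystem fs = FileSystem.get(conf)) {
            // Printing the working directory confirms both the connection and the effective user.
            System.out.println("Working directory: " + fs.getWorkingDirectory());
        }
    }
}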

Under the Eclipse installation directory, place hadoop-eclipse-plugin-2.6.0.jar (download the plug-in version matching your own setup) into both the eclipse/plugins and eclipse/dropins directories, then start Eclipse.

If the plug-in loaded correctly, Eclipse shows a Hadoop Map/Reduce entry under Preferences and a Map/Reduce Locations view, which means the installation succeeded.

6. In Eclipse, open Window -> Preferences -> Hadoop Map/Reduce and specify the local Hadoop installation path there.

7. First confirm that the Hadoop cluster is started, then:

Create a new location in the Map/Reduce Locations view added in step 2.

Then click Finish, and you can see that Eclipse connects to Hadoop.

If the connection does not show up, right-click the location (localhadoop) and choose Reconnect.

8. Import the Hadoop jar packages.

In Eclipse, open Window -> Preferences -> Java -> Build Path -> User Libraries and create a user library (hadoopLib) from the jars collected earlier.

Then create a project in Eclipse: File -> New -> Project -> Java -> Java Project.

When you later export the project as a jar, you do not need to include the hadoopLib jars; package only your own program.

Right-click the project and add the JUnit 4 library to it. Then create a conf folder in the project, and create an HA directory under the conf directory.

Copy core-site.xml and hdfs-site.xml from the Hadoop cluster into the HA directory.

Right-click the HA folder and add it to the build path (Build Path -> Use as Source Folder) so that the configuration files end up on the classpath.
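
If the HA folder is not added to the build path, the two cluster files can instead be loaded explicitly with Configuration.addResource. This is a minimal alternative sketch, not from the original article; the ExplicitConfDemo class name and the conf/HA/... relative paths are assumptions based on the project layout described above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExplicitConfDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Paths are resolved relative to the working directory (the Eclipse project root by default).
        conf.addResource(new Path("conf/HA/core-site.xml"));
        conf.addResource(new Path("conf/HA/hdfs-site.xml"));

        try (FileSystem fs = FileSystem.get(conf)) {
            // fs.defaultFS comes from core-site.xml and shows which cluster is being addressed.
            System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
        }
    }
}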

Test the code:

package com.test.hadoop.hdfs;

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class TestHDFS {

    Configuration conf;
    FileSystem fs;

    @Before
    public void conn() throws IOException {
        // Loads core-site.xml and hdfs-site.xml from the classpath (the HA folder).
        conf = new Configuration(true);
        fs = FileSystem.get(conf);
    }

    @After
    public void close() throws IOException {
        fs.close();
    }

    // create a directory
    @Test
    public void mkDir() throws IOException {
        Path ifile = new Path("/ecpliseMkdir");
        if (fs.exists(ifile)) {
            fs.delete(ifile, true);
        }
        fs.mkdirs(ifile);
    }

    // upload a local file
    @Test
    public void upload() throws IOException {
        Path ifile = new Path("/ecpliseMkdir/hello.txt");
        FSDataOutputStream output = fs.create(ifile);
        InputStream input = new BufferedInputStream(
                new FileInputStream("d:\\ywcj_chnl_risk_map_estimate_model.sql"));
        IOUtils.copyBytes(input, output, conf, true);
    }

    // download to the local disk
    @Test
    public void downLocal() throws IOException {
        Path ifile = new Path("/ecpliseMkdir/hello.txt");
        FSDataInputStream open = fs.open(ifile);
        File newFile = new File("d:\\test.txt");
        if (!newFile.exists()) {
            newFile.createNewFile();
        }
        BufferedOutputStream output = new BufferedOutputStream(new FileOutputStream(newFile));
        IOUtils.copyBytes(open, output, conf, true);
    }

    // get block location information
    @Test
    public void blockInfo() throws IOException {
        Path ifile = new Path("/ecpliseMkdir/hello.txt");
        FileStatus fsu = fs.getFileStatus(ifile);
        BlockLocation[] fileBlockLocations = fs.getFileBlockLocations(ifile, 0, fsu.getLen());
        for (BlockLocation b : fileBlockLocations) {
            System.out.println(b);
        }
    }

    // delete the file
    @Test
    public void deleteFile() throws IOException {
        Path ifile = new Path("/ecpliseMkdir/hello.txt");
        boolean delete = fs.delete(ifile, true);
        if (delete) {
            System.out.println("deleted successfully");
        }
    }
}
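
To run the tests, right-click TestHDFS in Eclipse and choose Run As -> JUnit Test. Note that the upload test reads a local file (d:\ywcj_chnl_risk_map_estimate_model.sql in the listing above); replace that path with a file that actually exists on your machine before running it.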
