
HDFS read and write file operation

2025-04-09 Update From: SLTechnology News&Howtos


1 description of the operating environment

1.1 hardware and software environment

1.2 Machine network environment

2 written assignment 1: compile and run example 3.2 in the Definitive Guide

2.1 written assignment 1 content

2.2 run the code

2.3 implementation process

2.3.1 create a code directory

2.3.2 create an example file and upload it to hdfs

2.3.3 configure the local environment

2.3.4 write code

2.3.5 compile the code

2.3.6 use compiled code to read files

3 written assignment 2: write part of a local file to HDFS as a new file

3.1 content of written assignment 2

3.2 run the code

3.3 implementation process

3.3.1 write code

3.3.2 compile the code

3.3.3 create test files

3.3.4 upload file contents to hdfs using compiled code

3.3.5 verify whether it is successful

4 written assignment 3: reverse operation of assignment 2

4.1 content of written assignment 3

4.2 Program Code

4.3 implementation process

4.3.1 write code

4.3.2 compile the code

4.3.3 create a test file

4.3.4 output the contents of the file from hdfs to the file system using compiled code

4.3.5 verify success

1 operating environment description

1.1 hardware and software environment

- Host operating system: Windows 64-bit, dual-core 4-thread, 2.2 GHz main frequency, 6 GB memory
- Virtual software: VMware Workstation 9.0.0 build-812388
- Virtual machine operating system: CentOS 64-bit, single core, 1 GB memory
- JDK: 1.7.0_55, 64-bit
- Hadoop: 1.1.2

1.2 Machine network environment

The development machine is in a local area network that can connect to the Internet. The specific information is as follows:

Serial number | IP address | Machine name | Type | User name | Running processes
1 | 10.88.147.220 | Hadoop0 | Stand-alone node | hadoop | NN, SNN, TaskTracker, DN, JobTracker

2 written assignment 1: compile and run example 3.2 in the Definitive Guide

2.1 written assignment 1 content

Compile and run example 3.2 of Hadoop: The Definitive Guide in a Hadoop cluster.

2.2 run the code

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;

public class FileSystemCat {

    public static void main(String[] args) throws Exception {
        String uri = args[0];
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        InputStream in = null;
        try {
            in = fs.open(new Path(uri));
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}

2.3 implementation process

Compile and run example 3.2 of Hadoop: The Definitive Guide in a Hadoop cluster.

2.3.1 create a code directory

Start Hadoop and create the myclass and input directories under the /usr/local/hadoop-1.1.2 directory using the following commands:

mkdir myclass
mkdir input

2.3.2 create an example file and upload it to hdfs

Go to the /usr/local/hadoop-1.1.2/input directory and create a quangle.txt file in that directory, which reads:

Create a /usr/hadoop/ folder in HDFS using the following commands:

hadoop fs -mkdir /usr/hadoop/
hadoop fs -ls /usr/

Upload the sample file to the /usr/hadoop/ folder of HDFS:

hadoop fs -copyFromLocal ../input/quangle.txt /usr/hadoop/quangle.txt
hadoop fs -ls /usr/hadoop

2.3.3 configure the local environment

Configure hadoop-env.sh in the /usr/local/hadoop-1.1.2/conf directory as follows:

ls
vi hadoop-env.sh

Add the HADOOP_CLASSPATH variable with the value /usr/local/hadoop-1.1.2/myclass.
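Concretely, this amounts to appending one line to hadoop-env.sh (a sketch; the path matches the install directory used throughout this walkthrough):

```shell
# add compiled classes in myclass to the classpath the hadoop launcher builds
export HADOOP_CLASSPATH=/usr/local/hadoop-1.1.2/myclass
```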

2.3.4 write code

Go to the /usr/local/hadoop-1.1.2/myclass directory and create the FileSystemCat.java code file in that directory with the following commands:

cd /usr/local/hadoop-1.1.2/myclass/
vi FileSystemCat.java

Enter the code content:

2.3.5 compile the code

In the /usr/local/hadoop-1.1.2/myclass directory, compile the code using the following commands:

javac -classpath ../hadoop-core-1.1.2.jar FileSystemCat.java
ls

2.3.6 use compiled code to read files

Use the following command to read the quangle.txt contents:

hadoop FileSystemCat /usr/hadoop/quangle.txt

3 written assignment 2: write part of a local file to HDFS as a new file

3.1 written assignment 2 content

Generate a text file of about 100 bytes in the local file system, write a program (you can use the Java API or the C API) that reads this file and writes its bytes 101 to 120 into HDFS as a new file, and provide a screenshot of the code and execution results.

3.2 run the code

Note: please delete Chinese comments before compilation!

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class LocalFile2Hdfs {

    public static void main(String[] args) throws Exception {
        // get the locations of the source and target files from the arguments
        String local = args[0];
        String uri = args[1];

        FileInputStream in = null;
        OutputStream out = null;
        Configuration conf = new Configuration();
        try {
            // open the input file
            in = new FileInputStream(new File(local));

            // get a handle to the target file on HDFS
            FileSystem fs = FileSystem.get(URI.create(uri), conf);
            out = fs.create(new Path(uri), new Progressable() {
                @Override
                public void progress() {
                    System.out.println("*");
                }
            });

            // skip the first 100 bytes
            in.skip(100);
            byte[] buffer = new byte[20];

            // read up to 20 bytes, starting at byte 101, into the buffer
            int bytesRead = in.read(buffer);
            if (bytesRead >= 0) {
                out.write(buffer, 0, bytesRead);
            }
        } finally {
            IOUtils.closeStream(in);
            IOUtils.closeStream(out);
        }
    }
}
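One detail the program glosses over: the contract of InputStream.skip allows it to skip fewer bytes than requested, so robust code loops until the full count has been consumed. A minimal sketch of such a helper (the class and method names here are illustrative, not part of the assignment code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SkipFullyDemo {

    // Loop until n bytes have been skipped; skip() may legally return less.
    static void skipFully(InputStream in, long n) throws IOException {
        while (n > 0) {
            long skipped = in.skip(n);
            if (skipped > 0) {
                n -= skipped;
            } else if (in.read() >= 0) {
                n--; // fall back to consuming one byte at a time
            } else {
                throw new IOException("EOF before skipping requested bytes");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // 130 sequential byte values stand in for the input file
        byte[] data = new byte[130];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        InputStream in = new ByteArrayInputStream(data);
        skipFully(in, 100);
        System.out.println(in.read()); // prints 100, the first byte after the skip
    }
}
```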

3.3 implementation process

3.3.1 write code

Go to the /usr/local/hadoop-1.1.2/myclass directory and create the LocalFile2Hdfs.java code file in that directory with the following commands:

cd /usr/local/hadoop-1.1.2/myclass/
vi LocalFile2Hdfs.java

Enter the code content:

3.3.2 compile the code

In the /usr/local/hadoop-1.1.2/myclass directory, compile the code using the following commands:

javac -classpath ../hadoop-core-1.1.2.jar LocalFile2Hdfs.java
ls

3.3.3 create test files

Go to the /usr/local/hadoop-1.1.2/input directory and create a local2hdfs.txt file in that directory with the following commands:

cd /usr/local/hadoop-1.1.2/input/
vi local2hdfs.txt

3.3.4 upload file contents to hdfs using compiled code

Use the following commands to read bytes 101 to 120 of local2hdfs.txt and write them to HDFS as a new file:

cd /usr/local/hadoop-1.1.2/bin/
hadoop LocalFile2Hdfs ../input/local2hdfs.txt /usr/hadoop/local2hdfs_part.txt

3.3.5 verify whether it is successful

Use the following command to read the local2hdfs_part.txt contents:

hadoop fs -cat /usr/hadoop/local2hdfs_part.txt

4 written assignment 3: the reverse operation of assignment 2

4.1 written assignment 3 content

Generate a text file of about 100 bytes in HDFS, write a program (you can use the Java API or the C API) that reads this file and writes its bytes 101 to 120 to the local file system as a new file, and provide the code and screenshots of the execution results.

4.2 Program Code

4.3 implementation process

4.3.1 write code

Go to the /usr/local/hadoop-1.1.2/myclass directory and create the Hdfs2LocalFile.java code file in that directory with the following commands:

cd /usr/local/hadoop-1.1.2/myclass/
vi Hdfs2LocalFile.java

Enter the code content:
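The code screenshot is not reproduced here. The logic is LocalFile2Hdfs in reverse: open the HDFS file with FileSystem.open, skip 100 bytes, and copy up to 20 bytes into a local FileOutputStream. The core copy step can be sketched at the stream level using only java.io, so it stands alone; in the real program, `in` would be the FSDataInputStream returned by fs.open(new Path(uri)) and `out` a FileOutputStream for the local target (the class and method names here are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class Hdfs2LocalFileSketch {

    // Copy bytes 101-120 of the input stream to the output stream:
    // skip the first 100 bytes, then transfer at most 20 bytes.
    static int copyRange(InputStream in, OutputStream out) throws IOException {
        in.skip(100);                    // position at byte 101
        byte[] buffer = new byte[20];
        int bytesRead = in.read(buffer); // read up to 20 bytes
        if (bytesRead > 0) {
            out.write(buffer, 0, bytesRead);
        }
        return bytesRead;
    }

    public static void main(String[] args) throws IOException {
        // 130 sequential byte values stand in for the HDFS input
        byte[] data = new byte[130];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int n = copyRange(new ByteArrayInputStream(data), out);
        System.out.println(n + " bytes copied, first = " + out.toByteArray()[0]);
        // prints: 20 bytes copied, first = 100
    }
}
```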

4.3.2 compile the code

In the /usr/local/hadoop-1.1.2/myclass directory, compile the code using the following commands:

javac -classpath ../hadoop-core-1.1.2.jar Hdfs2LocalFile.java
ls

4.3.3 create a test file

Go to the /usr/local/hadoop-1.1.2/input directory and create a hdfs2local.txt file in that directory with the following commands:

cd /usr/local/hadoop-1.1.2/input/
vi hdfs2local.txt

Upload the file to the /usr/hadoop/ folder in HDFS:

cd /usr/local/hadoop-1.1.2/bin/
hadoop fs -copyFromLocal ../input/hdfs2local.txt /usr/hadoop/hdfs2local.txt
hadoop fs -ls /usr/hadoop

4.3.4 output the file contents from HDFS to the local file system using compiled code

Use the following commands to read bytes 101 to 120 of hdfs2local.txt and write them to the local file system as a new file:

hadoop Hdfs2LocalFile /usr/hadoop/hdfs2local.txt ../input/hdfs2local_part.txt
ls ../input

4.3.5 verify success

Use the following command to read the hdfs2local_part.txt contents:

cat ../input/hdfs2local_part.txt
