Hadoop learning -- modifying file replication and block size through configuration files -- day04


import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.URL;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

import org.junit.Test;

public class replication {

    /**
     * Set a custom replication factor and block size for a file through the API.
     * For the final test, change dfs.namenode.fs-limits.min-block-size in the
     * cluster's hdfs-site.xml to 10K.
     * Note that manually refreshing the nodes after changing the cluster
     * configuration file does not make it take effect; the cluster must be restarted.
     * Finally, check the result through the web UI at hadoop01:50070.
     */
    @Test
    public void customReplicationNum() throws Exception {
        // Create the configuration object. Configuration has a default loading order:
        // core-default.xml first, then the files in the src directory, which we supply here.
        Configuration conf = new Configuration();
        // Set the replication factor for the file about to be written
        conf.set("dfs.replication", "4");
        // Set the block size for the file about to be written
        conf.set("dfs.blocksize", "20480");
        // Lowering the namenode's minimum block size has to be done in the cluster
        // configuration; setting it here on the client side does not work.
        // conf.set("dfs.namenode.fs-limits.min-block-size", "1024010");
        // Create the distributed file system fs from the conf object. If no file system
        // is specified, it defaults to the local file system.
        FileSystem fs = FileSystem.get(conf);
        // Define the target URL as a string
        String file = "hdfs://hadoop01:9000/user/hadoop/data2/kala-copy.jpg";
        // Construct a Path object from the URL string
        Path path = new Path(file);
        FSDataOutputStream out = fs.create(path);
        IOUtils.copyBytes(new FileInputStream("E:/zhaopian.jpg"), out, 1024);
        out.close();
    }
}
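The Javadoc above says to lower dfs.namenode.fs-limits.min-block-size in the cluster's hdfs-site.xml so that the 20480-byte block size is accepted, and to restart the cluster afterwards. As a minimal sketch of that change (the value 10240 is an assumption for "10K"; only the property name comes from the text above), the entry on the namenode might look like:

<!-- hdfs-site.xml on the cluster; sketch, value assumed to mean 10240 bytes -->
<property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>10240</value>
</property>

A plain refresh of the nodes is not enough for this property, which is why the comment above calls for a full cluster restart.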
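The test above verifies the result through the web UI at hadoop01:50070. As an alternative, a small sketch like the following (not part of the original article; the class name ReplicationCheck is made up here, and the path and namenode address are reused from the code above) could read the replication factor and block size back through the FileSystem API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumption: point the client at the same namenode used in the test above.
        conf.set("fs.defaultFS", "hdfs://hadoop01:9000");
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/user/hadoop/data2/kala-copy.jpg");
        // FileStatus exposes the replication factor and block size recorded by the namenode.
        FileStatus status = fs.getFileStatus(path);
        System.out.println("replication = " + status.getReplication());
        System.out.println("blocksize   = " + status.getBlockSize());
        fs.close();
    }
}

The same information should also appear in the web UI file browser, or on the command line via hdfs fsck /user/hadoop/data2/kala-copy.jpg -files -blocks.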
