1. By default Hadoop stores its temporary data under the Unix /tmp directory (cd /tmp and you will see hadoop-root and other files). Since /tmp is typically cleared when the machine reboots, Hadoop may misbehave after the Linux system restarts, so you should move Hadoop's temporary file directory elsewhere.
2. Configure core-site.xml (vim core-site.xml) as follows, then restart the Hadoop cluster; the namenode does not need to be reformatted:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/hadoop</value>
</property>

Note on clusterIDs: when the namenode performs a format operation it generates a new clusterID, but the clusterID value recorded by each datanode keeps its old value. The resulting mismatch between the namenode and datanode clusterIDs makes the datanodes fail to start. To recover, manually edit the VERSION file under the datanode's /var/hadoop/dfs/data/current directory so that its clusterID matches the namenode's, then start the cluster normally.
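To compare the two IDs before editing anything, note that the VERSION files are plain key=value files that java.util.Properties can read. A minimal sketch; ClusterIdCheck is a hypothetical helper, and the namenode path /var/hadoop/dfs/name/current/VERSION is an assumption based on the hadoop.tmp.dir value above, so adjust both paths to your layout:

package com.skcc.hadoop;

import java.io.FileReader;
import java.util.Properties;

public class ClusterIdCheck {
    public static void main(String[] args) throws Exception {
        // Paths assume hadoop.tmp.dir = /var/hadoop; adjust for your cluster
        Properties nn = new Properties();
        nn.load(new FileReader("/var/hadoop/dfs/name/current/VERSION"));
        Properties dn = new Properties();
        dn.load(new FileReader("/var/hadoop/dfs/data/current/VERSION"));
        System.out.println("namenode clusterID: " + nn.getProperty("clusterID"));
        System.out.println("datanode clusterID: " + dn.getProperty("clusterID"));
    }
}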
3. For testing, you can turn off HDFS permission checking (otherwise the client has no permission to access the paths). Add the following to hdfs-site.xml (vim hdfs-site.xml) on the namenode node:

<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
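If you would rather keep permission checking enabled, the client can instead open the FileSystem as the user that owns the target paths via FileSystem.get(uri, conf, user). A minimal sketch; the NameNode address matches the code later in this article, while the user name "root" and the class name AsUserExample are assumptions:

package com.skcc.hadoop;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AsUserExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Act as a named HDFS user instead of the local OS account;
        // "root" is an assumption, substitute the owner of your HDFS paths
        FileSystem fs = FileSystem.get(new URI("hdfs://172.26.19.40:9000"), conf, "root");
        System.out.println("/ exists: " + fs.exists(new Path("/")));
    }
}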
The Maven pom.xml of the wordcount example project:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.skcc</groupId>
  <artifactId>wordcount</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>wordcount</name>
  <description>Count the word</description>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hadoop.version>2.7.3</hadoop.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
  </dependencies>
</project>
HelloHDFS.java, a small client that exercises the HDFS API:

package com.skcc.hadoop;

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.text.NumberFormat;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HelloHDFS {

    public static FileSystem getFileSystemInstance() {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://172.26.19.40:9000");
        FileSystem fileSystem = null;
        try {
            fileSystem = FileSystem.get(conf);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return fileSystem;
    }

    public static void getFileFromHDFS() throws Exception {
        // URL handles the http protocol by default; FsUrlStreamHandlerFactory
        // teaches it the hdfs protocol (may be installed only once per JVM)
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
        URL url = new URL("hdfs://172.26.19.40:9000/10803060234.txt");
        InputStream inputStream = url.openStream();
        IOUtils.copyBytes(inputStream, System.out, 4096);
    }

    public static void getFileFromBaiDu() throws IOException {
        URL url = new URL("http://skynet.skhynix-cq.com.cn/plusWare/Main.aspx");
        InputStream inputStream = url.openStream();
        IOUtils.copyBytes(inputStream, System.out, 4096);
    }

    public static void testHadoop() throws Exception {
        FileSystem fileSystem = getFileSystemInstance();
        boolean success = fileSystem.mkdirs(new Path("/skcc"));
        System.out.println("mkdirs is " + success);
        success = fileSystem.exists(new Path("/10803060234.txt"));
        System.out.println("file exists is " + success);
        success = fileSystem.delete(new Path("/test2.data"), true);
        System.out.println("delete dirs is " + success);
        success = fileSystem.exists(new Path("/skcc"));
        System.out.println("dirs exists is " + success);
    }

    public static void uploadFileToHDFS() throws Exception {
        FileSystem fileSystem = getFileSystemInstance();
        String filename = "/test2.data";
        // overwrite == true
        FSDataOutputStream outputStream = fileSystem.create(new Path(filename), true);
        FileInputStream fis = new FileInputStream("D:\\2018\\u001.zip");
        // IOUtils.copyBytes(fis, outputStream, 4096, true); // one-call copy, no progress output
        long totalLen = fis.getChannel().size();
        long tmpSize = 0;
        NumberFormat numberFormat = NumberFormat.getInstance();
        numberFormat.setMaximumFractionDigits(0);
        System.out.println("totalLen : " + totalLen + " available : " + fis.available());
        byte[] buf = new byte[4096];
        int len = fis.read(buf);
        while (len != -1) {
            tmpSize = tmpSize + len;
            String result = numberFormat.format((float) tmpSize / (float) totalLen * 100);
            outputStream.write(buf, 0, len);
            System.out.println("Upload Percent : " + result + " %");
            len = fis.read(buf);
        }
        // close both streams so the last block is flushed to HDFS
        fis.close();
        outputStream.close();
    }
}
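A small driver to try the class out; the methods come from HelloHDFS above, while the class name HelloHDFSDriver and the call order are my own choices (run it with the cluster reachable at the address hard-coded above):

package com.skcc.hadoop;

public class HelloHDFSDriver {
    public static void main(String[] args) throws Exception {
        HelloHDFS.testHadoop();        // mkdirs / exists / delete round trip
        HelloHDFS.uploadFileToHDFS();  // upload D:\2018\u001.zip with a progress printout
        HelloHDFS.getFileFromHDFS();   // stream a file back through the hdfs:// URL handler
    }
}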