This article shares a few Hadoop errors I have encountered and how they were fixed. The notes are practical, so they are recorded here for reference.

1. Version inconsistencies after recompiling Hadoop

In recent weeks I have been modifying Hadoop's capacity scheduling algorithm and ran into problems of one kind or another. The version I modified is hadoop-0.20.2, and the working procedure was:

Step 1: load the Hadoop source code into Eclipse and configure it to build with ant.

Step 2: modify the source code as needed.

Step 3: compile the modifications with ant. One reminder: make sure the JDK on the build machine is the same as the JDK on the cluster (a quick way to check is sketched at the end of this section). To build, change into $HadoopHome/src/contrib/capacity-scheduler and run ant directly; after a successful build you are told that the produced JAR is in $HadoopHome/build/contrib/capacity-scheduler. Importantly, if you modified anything outside the capacity scheduler (mainly in MapReduce), you must first rebuild the whole of the modified Hadoop and only then build capacity-scheduler. After the full build you will find hadoop-0.20.2-dev-capacity-scheduler.jar under the build folder.

Step 4: rename hadoop-0.20.2-dev-capacity-scheduler.jar to hadoop-0.20.2-capacity-scheduler.jar, overwrite the corresponding JAR on the cluster with it, and restart the cluster.

Those are the steps that worked, but the process was dotted with errors along the way. Here is a detailed analysis of the ones I hit. I compiled under Windows while the cluster ran on Ubuntu. At first the JDKs of the two platforms were inconsistent, which produced a java.lang.UnsupportedClassVersionError, so keep the JDKs consistent. You may also sometimes encounter:

org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible build versions: namenode BV = ; datanode BV = 911707

This error is due to a build-version mismatch between the DataNode and the NameNode. The fix in my case was to rename the generated hadoop-0.20.2-dev-capacity-scheduler.jar and overwrite the JAR in $HadoopHome on the DataNode as well.
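Because java.lang.UnsupportedClassVersionError means the classes were compiled for a newer JVM than the one running them, it can be worth verifying the class-file version before redeploying. Below is a minimal stand-alone diagnostic sketch; it is my own illustration, not part of Hadoop, and the class name ClassVersionCheck is made up. It prints the running JVM version and the class-file version recorded in a compiled .class file:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersionCheck {
    public static void main(String[] args) throws IOException {
        System.out.println("Running JVM: " + System.getProperty("java.version"));
        DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
        try {
            int magic = in.readInt();             // 0xCAFEBABE for a valid class file
            int minor = in.readUnsignedShort();   // minor version comes first in the format
            int major = in.readUnsignedShort();   // 50 = Java 6, 51 = Java 7, 52 = Java 8
            System.out.printf("magic=%08x, class file version %d.%d%n", magic, major, minor);
        } finally {
            in.close();
        }
    }
}

Running it as java ClassVersionCheck SomeClass.class on both the build machine and a cluster node quickly shows whether the two JDKs agree.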
2. Getting the annual maximum temperature

The mapper extracts the year and the temperature from each fixed-width input line, and the reducer keeps the per-year maximum:

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String year = line.substring(15, 19);   // characters 15-18 hold the year
        int tmp;
        if (line.charAt(25) == '+') {
            // skip the explicit '+' sign before parsing
            tmp = Integer.parseInt(line.substring(26, 30));
            System.out.println(tmp);
        } else {
            // Integer.parseInt handles a leading '-' by itself
            tmp = Integer.parseInt(line.substring(25, 30));
            System.out.println(line.substring(25, 30));
        }
        context.write(new Text(year), new IntWritable(tmp));
    }
}

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int maxValue = Integer.MIN_VALUE;       // start below any possible temperature
        for (IntWritable val : values) {
            maxValue = Math.max(maxValue, val.get());
        }
        context.write(key, new IntWritable(maxValue));
    }
}

Running the job threw this exception:

java.lang.StringIndexOutOfBoundsException: String index out of range: 19
    at java.lang.String.substring(String.java:1907)
    at com.zzg.test.GetMaxTemp$TokenizerMapper.map(GetMaxTemp.java:21)
    at com.zzg.test.GetMaxTemp$TokenizerMapper.map(GetMaxTemp.java:1)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)

The exception is caused by blank lines in the data file: substring(15, 19) fails as soon as a line is shorter than 19 characters.
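One straightforward fix is to guard the mapper against blank or truncated records before slicing the line. The variant below is a sketch of that idea (my own suggestion, not from the original notes), assuming the same fixed-width layout as above; the name SafeTokenizerMapper is made up, and the class drops into the same job class with the same imports:

public static class SafeTokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        // Blank or truncated lines cannot contain the year (columns 15-18)
        // and temperature (columns 25-29) fields, so skip them instead of crashing.
        if (line.length() < 30) {
            return;
        }
        String year = line.substring(15, 19);
        int tmp = (line.charAt(25) == '+')
                ? Integer.parseInt(line.substring(26, 30))
                : Integer.parseInt(line.substring(25, 30));
        context.write(new Text(year), new IntWritable(tmp));
    }
}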
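For completeness, the driver that wires these classes into a job is not shown above. Under the hadoop-0.20.2 new API (org.apache.hadoop.mapreduce) it could look roughly like the sketch below; the class name GetMaxTemp comes from the stack trace, and the input and output paths are whatever you pass on the command line:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class GetMaxTemp {
    // TokenizerMapper and IntSumReducer as defined above ...

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "max temperature");   // constructor form used by 0.20.2-era code
        job.setJarByClass(GetMaxTemp.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // must not exist yet
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}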
Thank you for reading! That is the end of this article on Hadoop errors; I hope the notes above are of some help to you.