This article walks through Hadoop's ID classes (hadoop-ID) with an example analysis. Many readers are not very familiar with these classes, so they are shared here for reference; hopefully you will learn something useful by the end. Let's get started.
Let's start by analyzing the internal operation of Hadoop MapReduce. A user submits a Job to Hadoop, and the job executes under the control of the JobTracker. The job is broken down into Tasks, which are distributed across the cluster and run under the control of TaskTrackers. A Task is either a MapTask or a ReduceTask; these are where MapReduce's Map and Reduce operations are actually performed. This division of labor mirrors the split between NameNode and DataNode in HDFS: the NameNode corresponds to the JobTracker, and the DataNode corresponds to the TaskTracker. The JobTracker, the TaskTrackers, and the MapReduce clients all communicate through RPC; for details, refer to the earlier analysis of HDFS.
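To make this flow a bit more tangible, here is a minimal, hypothetical sketch (not from the original article) that submits a job through the old org.apache.hadoop.mapred client API and prints the JobID the cluster assigns to it. The class name, job name, and input/output paths are placeholders.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class SubmitSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder configuration: mapper/reducer setup is omitted,
    // and the paths are hypothetical.
    JobConf conf = new JobConf(SubmitSketch.class);
    conf.setJobName("id-demo");
    FileInputFormat.setInputPaths(conf, new Path("/tmp/in"));
    FileOutputFormat.setOutputPath(conf, new Path("/tmp/out"));

    // The JobClient talks to the JobTracker over RPC; submitJob() hands the
    // job over and returns a handle whose identifier is a JobID.
    RunningJob running = new JobClient(conf).submitJob(conf);
    JobID id = running.getID();
    System.out.println("job id     : " + id);                   // e.g. job_200707121733_0003
    System.out.println("jobtracker : " + id.getJtIdentifier()); // start time, or "local"
    System.out.println("job number : " + id.getId());
  }
}
```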
Before analyzing the job flow in detail, let's first look at some auxiliary classes, starting with the ID-related ones. The inheritance tree is straightforward: ID is the abstract superclass, and JobID, TaskID and TaskAttemptID all derive from it. The source of org.apache.hadoop.mapreduce.ID is as follows:
```java
/* Licensed to the Apache Software Foundation (ASF) under one ... */
package org.apache.hadoop.mapreduce;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

/**
 * A general identifier, which internally stores the id
 * as an integer. This is the super class of {@link JobID},
 * {@link TaskID} and {@link TaskAttemptID}.
 *
 * @see JobID
 * @see TaskID
 * @see TaskAttemptID
 */
public abstract class ID implements WritableComparable<ID> {
  protected static final char SEPARATOR = '_';
  protected int id;

  /** constructs an ID object from the given int */
  public ID(int id) {
    this.id = id;
  }

  protected ID() {
  }

  /** returns the int which represents the identifier */
  public int getId() {
    return id;
  }

  @Override
  public String toString() {
    return String.valueOf(id);
  }

  @Override
  public int hashCode() {
    return Integer.valueOf(id).hashCode();
  }

  @Override
  public boolean equals(Object o) {
    if (this == o)
      return true;
    if (o == null)
      return false;
    if (o.getClass() == this.getClass()) {
      ID that = (ID) o;
      return this.id == that.id;
    }
    else
      return false;
  }

  /** Compare IDs by associated numbers */
  public int compareTo(ID that) {
    return this.id - that.id;
  }

  public void readFields(DataInput in) throws IOException {
    this.id = in.readInt();
  }

  public void write(DataOutput out) throws IOException {
    out.writeInt(id);
  }
}
```
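As a quick, hypothetical illustration of what the class provides (DemoID below is not a Hadoop class, just a throwaway subclass for this sketch), the following round-trips an ID through write()/readFields() and compares two ids numerically:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

import org.apache.hadoop.mapreduce.ID;

public class IdSketch {
  // Hypothetical concrete subclass; ID itself is abstract.
  static class DemoID extends ID {
    DemoID() { }
    DemoID(int id) { super(id); }
  }

  public static void main(String[] args) throws Exception {
    DemoID a = new DemoID(3);
    DemoID b = new DemoID(7);

    // compareTo() simply subtracts the numeric ids, so a < b here.
    System.out.println(a.compareTo(b));   // negative value

    // Writable round trip: write() emits the int, readFields() restores it.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    a.write(new DataOutputStream(bytes));
    DemoID copy = new DemoID();
    copy.readFields(new DataInputStream(
        new ByteArrayInputStream(bytes.toByteArray())));
    System.out.println(a.equals(copy));   // true
  }
}
```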
Next comes JobID, which uniquely and immutably identifies a job. It consists of two parts: a jobtracker identifier (the JobTracker start time on a cluster, or "local" in local mode) and a job number, yielding strings such as job_200707121733_0003. Its source is as follows:

```java
/* Licensed to the Apache Software Foundation (ASF) under one ... */
package org.apache.hadoop.mapreduce;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.text.NumberFormat;

import org.apache.hadoop.io.Text;

/**
 * JobID represents the immutable and unique identifier for
 * the job. JobID consists of two parts. First part
 * represents the jobtracker identifier, so that jobID to jobtracker map
 * is defined. For cluster setup this string is the jobtracker
 * start time, for local setting, it is "local".
 * Second part of the JobID is the job number.
 *
 * An example JobID is:
 * job_200707121733_0003, which represents the third job
 * running at the jobtracker started at 200707121733.
 *
 * Applications should never construct or parse JobID strings, but rather
 * use appropriate constructors or {@link #forName(String)} method.
 *
 * @see TaskID
 * @see TaskAttemptID
 * @see org.apache.hadoop.mapred.JobTracker#getNewJobId()
 * @see org.apache.hadoop.mapred.JobTracker#getStartTime()
 */
public class JobID extends org.apache.hadoop.mapred.ID
                   implements Comparable<ID> {
  protected static final String JOB = "job";
  private final Text jtIdentifier;

  protected static final NumberFormat idFormat = NumberFormat.getInstance();
  static {
    idFormat.setGroupingUsed(false);
    idFormat.setMinimumIntegerDigits(4);
  }

  /**
   * Constructs a JobID object
   * @param jtIdentifier jobTracker identifier
   * @param id job number
   */
  public JobID(String jtIdentifier, int id) {
    super(id);
    this.jtIdentifier = new Text(jtIdentifier);
  }

  public JobID() {
    jtIdentifier = new Text();
  }

  public String getJtIdentifier() {
    return jtIdentifier.toString();
  }

  @Override
  public boolean equals(Object o) {
    if (!super.equals(o))
      return false;
    JobID that = (JobID) o;
    return this.jtIdentifier.equals(that.jtIdentifier);
  }

  /** Compare JobIds by first jtIdentifiers, then by job numbers */
  @Override
  public int compareTo(ID o) {
    JobID that = (JobID) o;
    int jtComp = this.jtIdentifier.compareTo(that.jtIdentifier);
    if (jtComp == 0) {
      return this.id - that.id;
    }
    else
      return jtComp;
  }

  /**
   * Add the stuff after the "job" prefix to the given builder. This is useful,
   * because the sub-ids use this substring at the start of their string.
   * @param builder the builder to append to
   * @return the builder that was passed in
   */
  public StringBuilder appendTo(StringBuilder builder) {
    builder.append(SEPARATOR);
    builder.append(jtIdentifier);
    builder.append(SEPARATOR);
    builder.append(idFormat.format(id));
    return builder;
  }

  @Override
  public int hashCode() {
    return jtIdentifier.hashCode() + id;
  }

  @Override
  public String toString() {
    return appendTo(new StringBuilder(JOB)).toString();
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    super.readFields(in);
    this.jtIdentifier.readFields(in);
  }

  @Override
  public void write(DataOutput out) throws IOException {
    super.write(out);
    jtIdentifier.write(out);
  }

  /**
   * Construct a JobId object from given string
   * @return constructed JobId object or null if the given String is null
   * @throws IllegalArgumentException if the given string is malformed
   */
  public static JobID forName(String str) throws IllegalArgumentException {
    if (str == null)
      return null;
    try {
      String[] parts = str.split("_");
      if (parts.length == 3) {
        if (parts[0].equals(JOB)) {
          return new org.apache.hadoop.mapred.JobID(parts[1],
                                                    Integer.parseInt(parts[2]));
        }
      }
    } catch (Exception ex) {
      // fall below
    }
    throw new IllegalArgumentException("JobId string : " + str
        + " is not properly formed");
  }
}
```
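To see the string format in action, here is a small sketch (not from the original article) that builds a JobID, prints it, and parses the javadoc's example string back with forName():

```java
import org.apache.hadoop.mapreduce.JobID;

public class JobIdSketch {
  public static void main(String[] args) {
    // idFormat pads the job number to at least four digits,
    // so this prints job_200707121733_0003.
    JobID id = new JobID("200707121733", 3);
    System.out.println(id);

    // forName() splits the string on '_' and rebuilds an equivalent id
    // (internally an org.apache.hadoop.mapred.JobID instance).
    JobID parsed = JobID.forName("job_200707121733_0003");
    System.out.println(parsed.getJtIdentifier());                 // 200707121733
    System.out.println(parsed.getId());                           // 3
    System.out.println(id.toString().equals(parsed.toString()));  // true
  }
}
```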
That concludes "Example Analysis of hadoop-ID". Thank you for reading! Hopefully the article has been helpful; if you would like to learn more, you are welcome to follow the industry information channel.