
What are the knowledge points related to the working mechanism of MapReduce

2025-01-24 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 05/31 Report --

This article mainly explains "what are the knowledge points related to the working mechanism of MapReduce". Interested readers may wish to take a look: the approach introduced here is simple, quick, and practical. Now let the editor walk you through it.

Job submission

You can run a MapReduce job with just one line of code:

JobClient.runJob(conf);
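For context, below is a minimal, self-contained word-count driver built around that single call, written as a sketch against the old org.apache.hadoop.mapred API (the Hadoop 1.x line this article describes). The class names WordCount, Map, and Reduce are placeholders chosen for the example, not taken from the article.

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Placeholder word-count job (not from the article): counts how often each word appears.
public class WordCount {

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Emit (word, 1) for every token in the input line.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            // Sum the counts emitted for each word.
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // Input and output paths come from the command line.
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // The one-line submission mentioned above: submits the job and waits for completion.
        JobClient.runJob(conf);
    }
}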

Job scheduling: how Hadoop job scheduling evolved

1. Earlier versions of Hadoop used a very simple method to schedule users' jobs: the FIFO (first-in, first-out) scheduling algorithm, which runs jobs in the order in which they are submitted.

2. Later, a job's priority could be set via the mapred.job.priority property or the setJobPriority() method on JobClient (a short sketch follows the scheduler list below).

3. The types of schedulers available in Hadoop

In Hadoop, the MapReduce scheduler is pluggable and can be chosen from the following:

1) The default scheduler, the original queue-based FIFO scheduler.

2) The Fair Scheduler.

3) The Capacity Scheduler.
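As a rough sketch of the priority mechanism mentioned in point 2 above, assuming the old Hadoop 1.x mapred API: the JobPriority enum, the typed setter on JobConf, and the mapred.jobtracker.taskScheduler property named in the comments are assumptions about that API line, not details from the article.

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobPriority;

public class PriorityExample {
    public static void main(String[] args) {
        JobConf conf = new JobConf();

        // Option 1: set the configuration property directly.
        // Typical values: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW.
        conf.set("mapred.job.priority", "HIGH");

        // Option 2: use the typed setter on JobConf (same effect).
        conf.setJobPriority(JobPriority.HIGH);

        // Note: priority only changes where a job sits in the FIFO queue.
        // Which scheduler runs at all (FIFO, Fair, Capacity) is a cluster-side
        // setting, e.g. mapred.jobtracker.taskScheduler in mapred-site.xml.
        System.out.println(conf.get("mapred.job.priority"));
    }
}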

Knowledge points related to the working mechanism of MapReduce:

1. What code is called to run a MapReduce job?

JobClient.runJob(conf)

2. Diagram of the whole process of running a job in Hadoop.

3. How many independent entities does the whole process of running a job in Hadoop involve? It involves the following four separate entities (a client-side submission sketch follows this list):

1) Client: submits the MapReduce job.

2) Jobtracker: coordinates the running of the job. The jobtracker is a Java application whose main class is JobTracker.

3) Tasktracker: runs the tasks that the job has been split into. Each tasktracker is a Java application whose main class is TaskTracker.

4) Distributed filesystem (usually HDFS): used to share job files among the other entities.
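To make the client's role concrete, here is a small sketch (an assumption built on the old mapred API, not code from the article) in which the client submits a job asynchronously and then polls the jobtracker for progress:

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class SubmitAndPoll {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SubmitAndPoll.class);
        // ... mapper, reducer, input and output paths would be configured here ...

        JobClient client = new JobClient(conf);

        // submitJob() returns immediately; the jobtracker coordinates the run.
        RunningJob job = client.submitJob(conf);

        // The client can poll the jobtracker for map and reduce progress.
        while (!job.isComplete()) {
            System.out.printf("map %.0f%%  reduce %.0f%%%n",
                    job.mapProgress() * 100, job.reduceProgress() * 100);
            Thread.sleep(5000);
        }
        System.out.println(job.isSuccessful() ? "Job succeeded" : "Job failed");
    }
}

The convenience call JobClient.runJob(conf) shown earlier wraps essentially this submit-and-poll loop, printing progress to the console until the job finishes.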

At this point, I believe you have a deeper understanding of "what are the knowledge points related to the working mechanism of MapReduce", and you might as well try it out in practice. Follow us to keep learning; more related content is available through the site's channels.


