A MapReduce job flow on YARN involves the following components.
- A Client node, which submits the Mapreduce job.
- The YARN Resource Manager, which allocates the cluster resources to jobs.
- The YARN Node Managers, which launch and monitor the tasks of jobs.
- The MapReduce Application Master, which coordinates the tasks running in the MapReduce job. The Application Master and the MapReduce tasks run in containers that are scheduled by the Resource Manager and managed by the Node Managers.
- HDFS, which is used for sharing job files between the above entities.
Job Startup:
- The call to Job.waitForCompletion() in the main driver class is where execution starts. The driver is the only piece of code that runs on the local client machine, and this call starts the communication with the Resource Manager (a minimal driver sketch follows this list).
- The client retrieves a new job ID (application ID) from the Resource Manager.
- The client node copies the job resources specified via the -files, -archives, and -libjars command-line options, as well as the job JAR file, onto HDFS.
- Finally, the job is submitted by calling the submitApplication() method on the Resource Manager.
- The Resource Manager hands the request to its scheduler, which allocates a container for the job. The Resource Manager then launches the Application Master in that container, and the container is managed by a Node Manager from here onwards.
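As an illustration of where this startup sequence begins, below is a minimal driver sketch; the WordCountDriver, WordCountMapper, and WordCountReducer names are hypothetical, and the call to Job.waitForCompletion() is what kicks off the submission steps described above.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count"); // a new application ID is obtained from the RM at submission
        job.setJarByClass(WordCountDriver.class);      // the job JAR is one of the resources copied to HDFS
        job.setMapperClass(WordCountMapper.class);     // hypothetical Mapper implementation
        job.setReducerClass(WordCountReducer.class);   // hypothetical Reducer implementation
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // waitForCompletion() submits the job (ultimately calling submitApplication()
        // on the Resource Manager) and then polls for progress until the job finishes.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```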
Input Split:
- In this phase, the input files are divided into fixed-size splits by the job's InputFormat. The split size is derived from the minimum split size (mapreduce.input.fileinputformat.split.minsize) property, the maximum split size, and the HDFS block size (see the sketch after this list).
- Each split is processed by its own map task if the file is splittable. If the file is not splittable, the entire file is provided as input to a single map task.
- These map tasks are created by the MapReduce Application Master (the MRAppMaster Java class); the Application Master also creates the reduce tasks, whose number is set by the mapreduce.job.reduces property.
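The split-size rule itself is not spelled out above; in FileInputFormat the effective split size follows max(minSize, min(maxSize, blockSize)). The following is a small illustrative sketch of that rule with assumed default values, not the actual Hadoop source.

```java
// Illustrative sketch of the split-size rule used by FileInputFormat:
// splitSize = max(minSize, min(maxSize, blockSize)).
public final class SplitSizeExample {

    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        // minSize   ~ mapreduce.input.fileinputformat.split.minsize (effectively 1 by default)
        // maxSize   ~ mapreduce.input.fileinputformat.split.maxsize (Long.MAX_VALUE by default)
        // blockSize = HDFS block size of the input file (e.g. 128 MB)
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024; // assume a 128 MB HDFS block
        long minSize = 1L;
        long maxSize = Long.MAX_VALUE;
        // With the defaults, the split size equals the block size, so each map task
        // of a splittable file typically processes one HDFS block.
        System.out.println("split size = " + computeSplitSize(blockSize, minSize, maxSize));
    }
}
```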
Role of an Application Master:
- Before any task starts, the job setup method of the job's OutputCommitter is called to create the job's output directory.
- As noted above, both map tasks and reduce tasks are created by the Application Master.
- If the submitted job is small, the Application Master runs it in the same JVM in which it is itself running. This avoids the overhead of allocating new containers and running tasks in parallel when that would cost more than it saves. Such a job is said to run as an uber task.
- By default, a small job is one with fewer than 10 mappers, at most one reducer, and an input size smaller than one HDFS block. These thresholds can be changed via the mapreduce.job.ubertask.maxmaps, mapreduce.job.ubertask.maxreduces, and mapreduce.job.ubertask.maxbytes properties in mapred-site.xml, and uber tasks must be switched on with mapreduce.job.ubertask.enable (a configuration sketch follows this list).
- If the job does not qualify to run as an uber task, the Application Master requests containers from the Resource Manager for all map and reduce tasks.
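The uber-task properties can also be set on the job's Configuration instead of mapred-site.xml; a minimal sketch is below, where the threshold values shown are only the commonly cited defaults, used for illustration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class UberTaskConfigExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Uberization is off by default and must be enabled explicitly.
        conf.setBoolean("mapreduce.job.ubertask.enable", true);
        // Thresholds a job must stay within to run as an uber task
        // (values shown are typical defaults, used here only for illustration).
        conf.setInt("mapreduce.job.ubertask.maxmaps", 9);
        conf.setInt("mapreduce.job.ubertask.maxreduces", 1);
        conf.setLong("mapreduce.job.ubertask.maxbytes", 128L * 1024 * 1024); // defaults to the HDFS block size
        Job job = Job.getInstance(conf, "small job");
        // ... set mapper, reducer, input and output paths as usual ...
        System.out.println("uber task enabled: "
                + job.getConfiguration().getBoolean("mapreduce.job.ubertask.enable", false));
    }
}
```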
Task Execution:
- Once containers have been assigned to tasks, the Application Master starts each container by contacting the corresponding Node Manager.
- The Node Manager localizes the job resources (such as the job JAR file and distributed cache files) from HDFS and then runs the map or reduce task.
- Running tasks keep reporting their progress and status (including counters) to the Application Master, which collects this information from all tasks and propagates the aggregated values to the client node or user; see the mapper sketch below for how counters and status are reported from task code.
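To make the reporting path concrete, here is a hypothetical mapper sketch that increments a custom counter and updates its status string; these values are sent to the Application Master, which aggregates them and forwards the totals to the client.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper that flags lines containing "error". The counter increments
// and status updates below are reported to the Application Master along with task
// progress, and the aggregated values become visible to the client.
public class ErrorLineMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        if (value.toString().toLowerCase().contains("error")) {
            // Custom counter; the group and counter names are illustrative.
            context.getCounter("QualityStats", "ERROR_LINES").increment(1);
            context.write(new Text("error"), ONE);
        }
        // The status string appears in the job UI alongside the task's progress.
        context.setStatus("processed input up to byte offset " + key.get());
    }
}
```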