Hi,

Sorry for the delayed response.

Once jobs are submitted, they are set up by running the setup task.
These setup tasks are launched in order of submission. However, the
setup task itself runs on any free map or reduce slot on any node. I
can imagine scenarios where the setup task of a job that was submitted
later completes first. When that happens, that job can start running
out of order.
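
To make that concrete, here is a small standalone sketch (plain Java,
not actual Hadoop scheduler code; the job names, submit times and setup
delays are invented for illustration) of how a later job's maps can
become runnable first when its setup task happens to finish sooner:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SetupOrderSketch {

    static class Job {
        final String name;
        final long submitTime;  // when the job entered the FIFO queue
        final long setupDelay;  // how long its setup task took to complete
        Job(String name, long submitTime, long setupDelay) {
            this.name = name;
            this.submitTime = submitTime;
            this.setupDelay = setupDelay;
        }
        long mapStartTime() {
            // map tasks can only be scheduled once setup has completed
            return submitTime + setupDelay;
        }
    }

    public static void main(String[] args) {
        List<Job> jobs = new ArrayList<>();
        // submitted one second apart, FIFO order: job1, job2, job3, job4
        jobs.add(new Job("job1", 0, 2));
        jobs.add(new Job("job2", 1, 2));
        jobs.add(new Job("job3", 2, 6));  // setup lands on a busy node
        jobs.add(new Job("job4", 3, 1));  // setup lands on a free slot
        // order in which the jobs' maps actually become runnable
        jobs.sort(Comparator.comparingLong(Job::mapStartTime));
        for (Job j : jobs) {
            System.out.println(j.name + " maps start at t=" + j.mapStartTime());
        }
    }
}

This prints job1, job2, job4, job3: job4's maps start before job3's
even though job3 was submitted first, which matches what you are
seeing.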

Thanks
Hemanth

On Fri, Oct 15, 2010 at 3:41 PM, He Chen <airb...@gmail.com> wrote:
> Hi Hemanth
>
> all jobs were submitted within a minute, with a few
> seconds between jobs. The Hadoop version is 0.20.2.
>
> Thanks
>
>
> On Thu, Oct 14, 2010 at 11:45 PM, Hemanth Yamijala <yhema...@gmail.com> wrote:
>
>> Hi,
>>
>> On Thu, Oct 14, 2010 at 10:59 PM, He Chen <airb...@gmail.com> wrote:
>> > They arrived within 1 minute. I understand there will be a setup phase
>> > which will use any free slot, whether map or reduce.
>> >
>>
>> You mean all jobs were submitted within a minute? That means a few
>> seconds between jobs? Or do you mean each job was submitted a minute
>> after the earlier job? Also, which version of Hadoop is this?
>>
>> > My queue time is the period between the time a job is submitted and the
>> > start of its Map stage. Because the setup phase has higher priority than
>> > map and reduce tasks, any job submitted to the queue will run setup no
>> > matter how many earlier map and reduce tasks still need to be assigned.
>> >
>> > Now, I am sure job3's setup stage finished earlier than job4's. However,
>> > job3's map stage started later than job4's. BTW, they request the same
>> > number of blocks.
>> >
>> >
>> > On Thu, Oct 14, 2010 at 12:10 PM, abhishek sharma <absha...@usc.edu> wrote:
>> >
>> >> What is the inter-arrival time between these jobs?
>> >>
>> >> There is a "set up" phase for jobs before they are launched. It is
>> >> possible that the order of jobs can change due to slightly different
>> >> set up times. Apart from the number of blocks, it may matter "where"
>> >> these blocks lie.
>> >>
>> >> Abhishek
>> >>
>> >> On Thu, Oct 14, 2010 at 10:06 AM, He Chen <airb...@gmail.com> wrote:
>> >> > Hi all
>> >> >
>> >> > I am testing the performance of my Hadoop clusters with Hadoop's
>> >> > default FIFO scheduler, but I find an interesting phenomenon.
>> >> >
>> >> > When I submit a series of jobs, some jobs are executed earlier even
>> >> > though they were submitted later. All jobs request the same number of
>> >> > blocks. For example:
>> >> > job 1 submitted at time 0
>> >> > job 2 submitted at time 1
>> >> > job 3 submitted at time 2
>> >> > job 4 submitted at time 3
>> >> >
>> >> >
>> >> > job 4's queue time is smaller than job 3's queue time. This disobeys
>> >> > the FIFO principle. Can anyone give a hint?
>> >> >
>> >> > Thanks
>> >> >
>> >> > Chen
>> >> >
>> >>
>> >
>>
>
