> long-running jobs don't take up the
> whole of the cluster capacity, and submit shorter, smaller jobs to a
> fast-moving queue with something like a 10% user limit, which allows 10
> concurrent users per queue.
>
> The actual distribution of the capacity across longer/shorter jobs
> depends on your workload. That way your job1 will never
> exceed the first queue's capacity.
>
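The split described above might look roughly like this in capacity-scheduler.xml (queue names, the 70/30 split, and the user-limit value are illustrative assumptions, not settings from this thread):

```xml
<!-- Sketch of a two-queue setup: a large queue for long-running jobs and a
     smaller, fast-moving queue for short jobs. Values are illustrative. -->
<property>
  <name>mapred.capacity-scheduler.queue.longjobs.capacity</name>
  <value>70</value>
</property>
<property>
  <name>mapred.capacity-scheduler.queue.shortjobs.capacity</name>
  <value>30</value>
</property>
<property>
  <!-- Each user may take at most 10% of the short queue, so up to
       10 users can run in it concurrently. -->
  <name>mapred.capacity-scheduler.queue.shortjobs.minimum-user-limit-percent</name>
  <value>10</value>
</property>
```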
> On 4/28/11 11:48 PM, "Rosanna Man" wrote:
>
>> Hi all,
>>
>> We are using capacity scheduler to schedule resources among different queues
>> for 1 user (hadoop) only. We have set the queues to have equal share of the
>> resources. However, when the 1st task starts in the first queue and is
>> consuming all the resources, the 2nd task started in the 2nd queue will have
>> to wait until the resources are released.
Hi all,
I have a large table partitioned by date in S3, and I would like to copy it
to a local partitioned table stored in HDFS. Any hints on how to do it
efficiently?
Thanks,
Rosanna
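For the S3-to-HDFS question, one common pattern (a sketch assuming these are Hive tables; all table, column, and bucket names are illustrative) is to point an external table at the S3 data and use a dynamic-partition insert into an HDFS-backed table:

```sql
-- External table over the existing S3 data (schema and location are assumed).
CREATE EXTERNAL TABLE src_s3 (id STRING, payload STRING)
PARTITIONED BY (dt STRING)
LOCATION 's3n://my-bucket/path/to/table/';

-- Register the existing S3 partitions with the metastore.
MSCK REPAIR TABLE src_s3;

-- Local table backed by HDFS with the same layout.
CREATE TABLE dst_hdfs (id STRING, payload STRING)
PARTITIONED BY (dt STRING);

-- Copy all partitions in one pass using dynamic partitioning.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE dst_hdfs PARTITION (dt)
SELECT id, payload, dt FROM src_s3;
```

Alternatively, `hadoop distcp` can copy the raw files directly and the partitions can be added to the local table afterwards.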