queue is still unusable.
It seems like an issue with YARN, but it's specifically Spark that
leaves the queue in this state. I've run a Hadoop job in a for loop 10x,
while specifying the queue explicitly, just to double-check.
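For concreteness, that check might be sketched as below. The examples jar path is an assumption, and an `echo` dry run is used so the commands are visible without a cluster (drop the `echo` to actually submit):

```shell
# Dry-run sketch of the "Hadoop job in a for loop 10x" check against the
# same queue. The examples jar path is an assumption; -Dmapreduce.job.queuename
# selects the queue explicitly.
submit_mr_pi() {
  echo hadoop jar "$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples.jar" \
    pi -Dmapreduce.job.queuename=root.thequeue 2 10
}

for i in $(seq 1 10); do
  submit_mr_pi
done
```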
On Tue, Jun 9, 2015 at 4:45 PM, Matt Kapilevich wrote:
From the RM scheduler, I see 3 applications currently stuck in the
"root.thequeue" queue.
Used Resources:
Num Active Applications: 0
Num Pending Applications: 3
Min Resources:
Max Resources:
Steady Fair Share:
Instantaneous Fair Share:
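The same queue state can be pulled from the ResourceManager's REST API (`/ws/v1/cluster/scheduler` and `/ws/v1/cluster/apps` are the standard endpoints; the RM host below is a placeholder, and 8088 is the usual RM web port):

```shell
# Build RM REST URLs for inspecting scheduler/queue state.
# Host name is a placeholder; adjust the port if your RM differs.
rm_url() {
  local host="${1:-rm.example.com}"
  echo "http://${host}:8088/ws/v1/cluster/${2:-scheduler}"
}

rm_url my-rm   # -> http://my-rm:8088/ws/v1/cluster/scheduler

# On a real cluster:
# curl -s "$(rm_url my-rm scheduler)"               # fair-share / queue stats
# curl -s "$(rm_url my-rm 'apps?states=ACCEPTED')"  # apps stuck in ACCEPTED
```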
On Tue, Jun 9, 2015 at 4:30 PM, Matt Kapilevich wrote:
Yes! If I either specify a different queue or don't specify a queue at all,
it works.
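A submit command matching that experiment might look like the following; `--queue` is the standard spark-submit flag for choosing a YARN queue, while the examples jar path is an assumption (an `echo` dry run is shown so the command is visible without a cluster):

```shell
# Dry-run sketch: build the spark-submit command with or without an explicit
# queue. Drop the leading "echo" to actually submit; jar path is an assumption.
build_submit() {
  local queue="$1"
  echo spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn-cluster ${queue:+--queue "$queue"} \
    lib/spark-examples.jar 10
}

build_submit root.thequeue   # explicit queue: gets stuck after a few runs
build_submit                 # no queue (or a different one): works
```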
On Tue, Jun 9, 2015 at 4:25 PM, Marcelo Vanzin wrote:
> Does it work if you don't specify a queue?
>
> On Tue, Jun 9, 2015 at 1:21 PM, Matt Kapilevich wrote:
>
>> Hi Marcelo,
>> seen anything like this.
>
>
> On Tue, Jun 9, 2015 at 1:01 PM, Marcelo Vanzin wrote:
>
>> On Tue, Jun 9, 2015 at 11:31 AM, Matt Kapilevich wrote:
>>
>>> Like I mentioned earlier, I'm able to execute Hadoop jobs fine even now
>>> ...y available on any single NM.
>
> On Tue, Jun 9, 2015 at 7:56 AM, Matt Kapilevich wrote:
Hi all,
I'm manually building Spark from source against the 1.4 branch and submitting
the job against YARN. I am seeing very strange behavior. The first 2 or 3
times I submit the job, it runs fine, computes Pi, and exits. The next time
I run it, it gets stuck in the "ACCEPTED" state.
I'm kicking off