Does it work if you don't specify a queue?
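
In other words, try a submission that omits the --queue flag entirely, so
the app lands in the scheduler's default queue. A minimal submission would
look roughly like this (the class and jar names are placeholders):

  spark-submit \
    --master yarn-cluster \
    --class <your.main.Class> \
    <your-app.jar>

If that works, but adding --queue <name> back reproduces the hang, the
problem is likely in that queue's configuration or capacity rather than in
Spark itself.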

On Tue, Jun 9, 2015 at 1:21 PM, Matt Kapilevich <matve...@gmail.com> wrote:

> Hi Marcelo,
>
> Yes, restarting YARN fixes this behavior, and submissions work again for
> the first few tries. The only consistent pattern is that once Spark job
> submissions stop working, they stay broken for good.
>
> On Tue, Jun 9, 2015 at 4:12 PM, Marcelo Vanzin <van...@cloudera.com>
> wrote:
>
>> Apologies, I see you already posted everything from the RM logs that
>> mention your stuck app.
>>
>> Have you tried restarting the YARN cluster to see if that changes
>> anything? Does it go back to the "first few tries work" behaviour?
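>>
>> (If this is a package-based CDH install that isn't managed by Cloudera
>> Manager, a restart would look roughly like the commands below; the
>> service names are assumptions about your setup, and with Cloudera
>> Manager you would restart YARN from its UI instead.)
>>
>>   sudo service hadoop-yarn-resourcemanager restart
>>   # and on each NodeManager host:
>>   sudo service hadoop-yarn-nodemanager restart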
>>
>> I run 1.4 on top of CDH 5.4 pretty often and haven't seen anything like
>> this.
>>
>>
>> On Tue, Jun 9, 2015 at 1:01 PM, Marcelo Vanzin <van...@cloudera.com>
>> wrote:
>>
>>> On Tue, Jun 9, 2015 at 11:31 AM, Matt Kapilevich <matve...@gmail.com>
>>> wrote:
>>>
>>>>  Like I mentioned earlier, I'm able to execute Hadoop jobs fine even
>>>> now - this problem is specific to Spark.
>>>>
>>>
>>> That doesn't necessarily mean anything. Spark apps have different
>>> resource requirements than Hadoop apps.
>>>
>>> Check your RM logs for any line that mentions your Spark app id. That
>>> may give you some insight into what is (or isn't) happening.
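>>>
>>> For example, something along these lines (the application id and the
>>> RM log path are placeholders that depend on your install):
>>>
>>>   grep application_<cluster-ts>_<id> /var/log/hadoop-yarn/*resourcemanager*.log*
>>>   yarn application -status application_<cluster-ts>_<id>
>>>
>>> If the app is sitting in the ACCEPTED state, that usually means the
>>> scheduler has not granted it an ApplicationMaster container yet.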
>>>
>>> --
>>> Marcelo
>>>
>>
>>
>>
>> --
>> Marcelo
>>
>
>


-- 
Marcelo
