Well, what's happening here is that jobs become "un-finished": they
complete, and then later on pop back into the "Active" section, showing a
small number of complete/in-progress tasks.

In my screenshot, Job #1 completed as normal, and then later on switched
back to active with only 92 tasks... it never seems to change again; it's
stuck in this frozen, active state.


On Mon, May 22, 2017 at 12:50 PM, Vadim Semenov <vadim.seme...@datadoghq.com
> wrote:

> I believe it shows only the tasks that have actually been executed; if
> there were tasks with no data, they don't get reported.
>
> I might be mistaken; if somebody has a good explanation, I would also
> like to hear it.
>
> On Fri, May 19, 2017 at 5:45 PM, Miles Crawford <mil...@allenai.org>
> wrote:
>
>> Hey y'all,
>>
>> Trying to migrate from Spark 1.6.1 to 2.1.0.
>>
>> I use EMR, and launched a new cluster using EMR 5.5, which runs spark
>> 2.1.0.
>>
>> I updated my dependencies, fixed a few API changes related to
>> accumulators, and presto! My application was running on the new cluster.
>>
>> But the application UI shows crazy output:
>> https://www.dropbox.com/s/egtj1056qeudswj/sparkwut.png?dl=0
>>
>> The applications seem to complete successfully, but does anyone have an
>> idea of what might be going wrong?
>>
>> Thanks,
>> -Miles
>>
>
>
