Bill,

were you able to process all the data in time, or did some unprocessed
data pile up? When I saw this once, the reason seemed to be that more
data had been received than fit in memory while waiting for processing,
so old blocks were dropped. By the time those blocks were due to be
processed, they no longer existed. Could that be the reason in your
case?
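
If that is the cause, one mitigation that helped me was giving the
receiver a storage level that spills to disk instead of dropping
blocks. Here is a minimal sketch; the socket source on localhost:9999
is only a stand-in since I don't know your actual input source, and
defaults may differ between Spark versions:

  import org.apache.spark.SparkConf
  import org.apache.spark.storage.StorageLevel
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  object PersistentReceiverSketch {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf().setAppName("PersistentReceiverSketch")
      // 1-minute batches, matching your job
      val ssc = new StreamingContext(conf, Seconds(60))

      // MEMORY_AND_DISK_SER spills received blocks to disk under
      // memory pressure instead of evicting them, so they should
      // still exist when their batch is finally processed.
      val lines = ssc.socketTextStream("localhost", 9999,
        StorageLevel.MEMORY_AND_DISK_SER)

      lines.count().print()

      ssc.start()
      ssc.awaitTermination()
    }
  }

The trade-off is extra disk I/O under load rather than lost blocks and
a failed batch.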

Tobias

On Sat, Jun 28, 2014 at 5:59 AM, Bill Jay <bill.jaypeter...@gmail.com> wrote:
> Hi,
>
> I am running a Spark Streaming job with 1 minute as the batch size. It ran
> for around 84 minutes and was killed because of an exception with the
> following information:
>
> java.lang.Exception: Could not compute split, block input-0-1403893740400
> not found
>
>
> Before it was killed, it was able to correctly generate output for each
> batch.
>
> Any help on this will be greatly appreciated.
>
> Bill
>
