My 2*2 cents:

1. You can try repartition() with a larger partition count, so each
partition (and each task) holds less data; sketch below.
2. OOM errors can often be avoided by increasing executor memory or by
using off-heap storage; see the config sketch below.
3. How are you persisting? You can try persisting with the DISK_ONLY
storage level (data persisted to disk is always stored serialized, so
there is no separate DISK_ONLY_SER level); sketch below.
4. You may want to take another look at the algorithm. "Tasks typically
perform both operations several hundred thousand times." Why can that not
be done in a distributed way? See the last sketch below.
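
For 1, a minimal sketch. The input path is a placeholder and 2000 is an
arbitrary partition count; tune both to your data volume and cluster size:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RepartitionSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("RepartitionSketch");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Placeholder input; in your job this would be the RDD feeding
        // the memory-heavy map.
        JavaRDD<String> records = sc.textFile("hdfs:///path/to/input");

        // More partitions means fewer records, and less memory, per task.
        JavaRDD<String> smaller = records.repartition(2000);

        System.out.println("Partitions: " + smaller.partitions().size());
        sc.stop();
    }
}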
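
For 2, a sketch of raising executor memory through SparkConf inside your
driver code. The 8g value is just an example and has to fit what your
cluster manager allows; you can equally pass --executor-memory 8g to
spark-submit:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

SparkConf conf = new SparkConf()
    .setAppName("HeavyMapJob")               // placeholder app name
    .set("spark.executor.memory", "8g");     // example value; size to your nodes
JavaSparkContext sc = new JavaSparkContext(conf);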
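
For 3, a minimal sketch; "mapped" and "MyObject" are placeholders for the
RDD produced by your heavy map step and its element type:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.StorageLevels;

// Persist the map output to disk only, keeping it off the executor heap.
JavaRDD<MyObject> persisted = mapped.persist(StorageLevels.DISK_ONLY);

// An action forces the map stage to run to completion and write its
// output to disk before the join/aggregateByKey stage reads it back.
persisted.count();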
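
For 4, the idea is to emit one record per unit of work instead of looping
hundreds of thousands of times inside a single task, and let
aggregateByKey (or reduceByKey) combine the results across the cluster.
Purely illustrative; "work", "Work", "Unit", "splitIntoUnits" and "cost"
are all hypothetical names:

import java.util.ArrayList;
import java.util.List;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.function.PairFlatMapFunction;
import scala.Tuple2;

JavaPairRDD<String, Long> units = work.flatMapToPair(
    new PairFlatMapFunction<Work, String, Long>() {
        @Override
        public Iterable<Tuple2<String, Long>> call(Work w) {
            // One output pair per unit of work, so the follow-up
            // aggregateByKey can spread the computation over many tasks.
            List<Tuple2<String, Long>> out = new ArrayList<Tuple2<String, Long>>();
            for (Unit u : w.splitIntoUnits()) {
                out.add(new Tuple2<String, Long>(w.key(), u.cost()));
            }
            return out; // Iterable return type matches the Spark 1.x Java API
        }
    });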

On Thu, May 7, 2015 at 3:16 PM, Steve Lewis <lordjoe2...@gmail.com> wrote:

> I am performing a job where I perform a number of steps in succession.
> One step is a map on a JavaRDD which generates objects taking up
> significant memory.
> This is followed by a join and an aggregateByKey.
> The problem is that the system is getting OutOfMemoryErrors - most
> tasks work but a few fail. Tasks typically perform both operations
> several hundred thousand times.
> I am convinced things would work if the map ran to completion and shuffled
> results to disk before starting the aggregateByKey.
> I tried calling persist and then count on the results of the map to force
> execution but this does not seem to help. Smaller partitions might also
> help if these could be forced.
> Any ideas?
>



-- 
Best Regards,
Ayan Guha
