I am having similar issues with much smaller data sets. I am using the Spark
EC2 scripts to launch clusters, but I almost always end up with straggling
executors that take over a node's CPU and memory and never finish.
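The one knob I know of that is aimed squarely at stragglers is speculative
execution, which re-launches unusually slow tasks on other nodes. A minimal
sketch of enabling it (standard Spark 0.9 properties; the app name and the
values are illustrative, not tuned recommendations):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("StragglerMitigation")            // hypothetical app name
      .set("spark.speculation", "true")             // re-launch tasks that run unusually slowly
      .set("spark.speculation.interval", "1000")    // how often (ms) to check for stragglers
      .set("spark.speculation.quantile", "0.75")    // fraction of tasks that must finish before checking
      .set("spark.speculation.multiplier", "1.5")   // how many times slower than the median counts as slow
    val sc = new SparkContext(conf)

Speculation only helps when the straggler is a slow or overloaded node,
though; it does nothing for skewed partitions, so it may or may not apply here.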
On Thu, Mar 20, 2014 at 1:54 PM, Soila Pertet Kavulya wrote:
Hi Reynold,
Nice! What Spark configuration parameters did you use to get your job to
run successfully on a large dataset? My job is failing on 1TB of input data
(uncompressed) on a 4-node cluster (64GB memory per node). There are no
OutOfMemory errors, just lost executors.
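For concreteness, a minimal sketch of the kinds of settings I'm asking about
(the property names are standard Spark 0.9 settings, but the values are only
illustrative guesses for 64GB nodes, not numbers known to work):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("LargeAggregation")               // hypothetical app name
      .set("spark.executor.memory", "48g")          // leave headroom for the OS and daemons on a 64GB node
      .set("spark.storage.memoryFraction", "0.4")   // shrink the cache so shuffles have more room
      .set("spark.default.parallelism", "2000")     // more, smaller partitions lower per-task memory
      .set("spark.akka.frameSize", "100")           // MB; large task results can exceed the small default
      .set("spark.akka.timeout", "300")             // seconds; long GC pauses can otherwise get executors marked as lost
    val sc = new SparkContext(conf)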
Thanks,
Soila
On Mar 20, 2014 11:
Understood, of course.
Did the data fit comfortably in memory, or did you experience memory
pressure? I've had to do a fair amount of tuning under memory pressure
in the past (0.7.x) and was hoping that the handling of this scenario
has improved in later Spark versions.
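The kind of tuning I mean is roughly the following (a sketch against the
current API, with a placeholder input path and a hypothetical app name):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val conf = new SparkConf()
      .setAppName("MemoryPressureTuning")           // hypothetical app name
      // Kryo is much more compact than Java serialization for cached/shuffled data.
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sc = new SparkContext(conf)

    val records = sc.textFile("hdfs:///path/to/input")   // placeholder input

    // Serialized, spill-to-disk caching degrades gracefully instead of
    // losing executors when partitions no longer fit in memory.
    records.persist(StorageLevel.MEMORY_AND_DISK_SER)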
On Thu, Mar 20, 2014 a
I'm not really at liberty to discuss details of the job. It involves some
expensive aggregated statistics, and it took 10 hours to complete (mostly
bottlenecked by network and I/O).
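In general terms only: the network cost in this kind of job is the shuffle
behind the aggregation. A toy sketch (hypothetical app name, input path, and
record layout) of keeping that shuffle small by combining map-side with
reduceByKey instead of grouping raw values:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._               // pair-RDD functions such as reduceByKey

    val sc = new SparkContext(new SparkConf().setAppName("AggregatedStats"))

    val events = sc.textFile("hdfs:///path/to/events")   // placeholder input

    // Per-key mean as (sum, count): reduceByKey combines on the map side,
    // so only one partial pair per key per partition crosses the network.
    val means = events
      .map { line =>
        val fields = line.split('\t')                    // made-up record layout
        (fields(0), (fields(1).toDouble, 1L))
      }
      .reduceByKey { case ((s1, c1), (s2, c2)) => (s1 + s2, c1 + c2) }
      .mapValues { case (sum, count) => sum / count }

    means.saveAsTextFile("hdfs:///path/to/output")       // placeholder output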
On Thu, Mar 20, 2014 at 11:12 AM, Surendranauth Hiraman <
suren.hira...@velos.io> wrote:
Reynold,
How complex was that job (I guess in terms of number of transforms and
actions) and how long did that take to process?
-Suren
On Thu, Mar 20, 2014 at 2:08 PM, Reynold Xin wrote:
> Actually we just ran a job with 70TB+ compressed data on 28 worker nodes -
> I didn't count the size of