…on setting, but I couldn't find the configuration options to change the
behavior back to what it was before.
Best regards,
*Babak Alipour,*
*University of Florida*
…framework. Has anyone else encountered this?
*Babak Alipour,*
*University of Florida*
On Sun, Oct 2, 2016 at 1:38 PM, Babak Alipour wrote:
> Thanks Vadim for sharing your experience, but I have tried a multi-JVM setup
> (2 workers) and various sizes for spark.executor.memory (8g, 16g, 20g, 32g,
> …). The data (only 1 column) is not that big, nor are the
> original files. Any ideas?
> Babak
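[Note for readers debugging similar symptoms: it can be worth confirming that
such values actually took effect, since some settings only apply if supplied
before the SparkContext starts. A minimal sketch, assuming a live SparkSession
bound to the name `spark`:]

```scala
// Sketch: read back what the running session actually sees.
// Assumes an existing SparkSession named `spark`.
println(spark.conf.getOption("spark.executor.memory"))  // None if never set
println(spark.sparkContext.getConf.get("spark.executor.cores", "<default>"))
```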
*Babak Alipour,*
*University of Florida*
On Sun, Oct 2, 2016 at 1:45 AM, Vadim Semenov wrote:
> Oh, and try to run even smaller executors, i.e. with
> `spark.executor.memory` <= 16GiB. I wonder w…
… at java.lang.Thread.run(Thread.java:745)
> Babak
*Babak Alipour,*
*University of Florida*
On Sat, Oct 1, 2016 at 11:35 PM, Babak Alipour wrote:
> Do you mean running a multi-JVM 'cluster' on the single machine? How would
> that affect performance/memory consumption? …
> …why use a bigger page size?
> Babak
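[Note for readers unfamiliar with the multi-JVM-on-one-machine idea: one
lightweight way to try it is Spark's test-oriented `local-cluster` master,
where each worker runs in its own JVM with its own heap. A sketch; the worker
count and sizes below are illustrative assumptions, not values from this
thread:]

```scala
// Sketch: a multi-JVM "cluster" on a single machine via the local-cluster master.
// Syntax: local-cluster[numWorkers, coresPerWorker, memoryPerWorkerMB].
// Requires a full Spark distribution; it is mainly used in Spark's own tests.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("single-machine-multi-jvm")  // hypothetical app name
  .master("local-cluster[2,4,16384]")   // 2 worker JVMs, 4 cores and 16 GiB each
  .getOrCreate()
```

The production equivalent is a standalone master with several workers started
on the same host; either way, each executor JVM manages a smaller heap of its
own instead of one very large heap.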
*Babak Alipour,*
*University of Florida*
On Fri, Sep 30, 2016 at 3:03 PM, Vadim Semenov wrote:
> Run more, smaller executors: change `spark.executor.memory` to 32g and
> `spark.executor.cores` to 2-4, for example.
>
> Changing driver…
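[For concreteness, a sketch of what that suggestion looks like when building a
session; the master URL and app name are assumptions, the memory and core
values follow the quoted advice:]

```scala
// Sketch of the "more, smaller executors" advice from the message above.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("smaller-executors")           // hypothetical app name
  .master("spark://localhost:7077")       // standalone master (assumption)
  .config("spark.executor.memory", "32g") // smaller per-executor heap
  .config("spark.executor.cores", "4")    // 2-4 cores per executor
  .getOrCreate()
```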
…142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
I'm running Spark in local mode, so there is only one executor (the driver),
and spark.driver.memory is set to 64g. Changing the driver's memory does not
help. … I hope someone with more knowledge of Spark can shed some light on this.
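[One detail worth flagging for local mode, offered as a hedge rather than a
diagnosis: the driver JVM hosts the lone executor, so spark.driver.memory only
takes effect if supplied before the JVM starts, e.g. via
`spark-submit --driver-memory 64g`; setting it on an already-running JVM does
nothing. A one-line check of what the process actually received:]

```scala
// Prints the heap the driver JVM was actually granted, in GiB. If this is far
// below the intended 64g, the memory setting never reached the JVM.
println(Runtime.getRuntime.maxMemory / (1024.0 * 1024 * 1024))
```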
Thank you!
*Best regards,*
*Babak Alipour,*
*University of Florida*