Hi, 

is it a physical server or an AWS/Azure instance? What parameters are you passing to 
the spark-shell command? And which Hadoop distro/version and Spark version are you running?
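
It would also help to know how spark.local.dir is set on that machine. For reference, spreading the scratch space over both disks usually looks roughly like the sketch below (the paths, master URL and app name are only placeholders, not your actual setup):

    import org.apache.spark.{SparkConf, SparkContext}

    // Placeholder paths: one scratch directory per physical disk.
    // In standalone mode, SPARK_LOCAL_DIRS in conf/spark-env.sh (if set)
    // takes precedence over spark.local.dir.
    val conf = new SparkConf()
      .setAppName("local-dirs-example")                       // placeholder app name
      .setMaster("spark://your-master:7077")                  // placeholder master URL
      .set("spark.local.dir", "/mnt/disk1/spark-tmp,/mnt/disk2/spark-tmp")
    val sc = new SparkContext(conf)

    // Quick check of what the running context actually picked up:
    println(sc.getConf.get("spark.local.dir", "<not set, defaults to /tmp>"))

Spark hashes shuffle and spill files across all directories listed in spark.local.dir, so a comma-separated value like the one above is what actually spreads the I/O over both disks.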

Kind Regards,
Jan


> On 15 Apr 2016, at 16:15, luca_guerra <lgue...@bitbang.com> wrote:
> 
> Hi,
> I'm looking for a way to improve my Spark cluster's performance. I have read
> on http://spark.apache.org/docs/latest/hardware-provisioning.html: "We
> recommend having 4-8 disks per node". I have tried both one and two disks,
> but with 2 disks the execution time doubles. Is there any explanation for
> this?
> 
> This is my configuration:
> 1 machine with 140 GB of RAM, 2 disks, and 32 CPUs (I know it is an unusual
> configuration), and on it I run a standalone Spark cluster with 1 worker.
> 
> Thank you very much for the help.


