Depending on how much memory your nodes have, you could allocate 60-80%
of it to the Spark worker process. The DataNode doesn't require much
memory.
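As a rough sketch (assuming Spark standalone mode and a hypothetical 64 GB
node; the exact numbers depend on your workload), the split could look
something like this:

    # conf/spark-env.sh on each worker node: give roughly 75% of RAM
    # to the Spark worker (i.e. to executors launched on this node)
    export SPARK_WORKER_MEMORY=48g

    # etc/hadoop/hadoop-env.sh: cap the DataNode heap at a few GB;
    # it mostly streams blocks, and the remaining ~12 GB is better
    # left to the OS page cache, which also serves HDFS reads
    export HADOOP_DATANODE_OPTS="-Xmx4g $HADOOP_DATANODE_OPTS"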
On 23 Jun 2015 21:26, "maxdml" <max...@cs.duke.edu> wrote:

> I'm wondering if there is a real benefit in splitting my memory in two
> between the datanode and the workers.
>
> Datanodes and the OS need memory to do their work. I suppose there
> could be a loss of performance if they came to compete for memory with
> the worker(s).
>
> Any opinion? :-)
>
>
>
