Perfect!! That makes so much sense to me now. Thanks a ton
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Executors-not-utilized-properly-tp7744p7793.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
coalesce is being processed faster than repartition, which is
unusual.
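For context, here is a minimal conceptual sketch (plain Python, not Spark code) of why coalesce is often cheaper: coalesce merges whole existing partitions without a shuffle, while repartition redistributes every individual record by hash, moving all the data across the network.

```python
# Conceptual sketch of coalesce vs. repartition on a partitioned dataset.
# This is NOT Spark code -- just an illustration of the data movement.

def coalesce(partitions, n):
    """Merge existing partitions into n buckets without redistributing
    individual records (no shuffle): each old partition is assigned
    whole to one new bucket."""
    out = [[] for _ in range(n)]
    for i, part in enumerate(partitions):
        out[i % n].extend(part)   # a whole partition moves as one unit
    return out

def repartition(partitions, n):
    """Redistribute every record by hash (a full shuffle): each record
    may land in a different bucket, which is why it costs more."""
    out = [[] for _ in range(n)]
    for part in partitions:
        for record in part:
            out[hash(record) % n].append(record)
    return out

parts = [[0, 1], [2, 3], [4, 5], [6, 7]]
print(coalesce(parts, 2))     # [[0, 1, 4, 5], [2, 3, 6, 7]]
print(repartition(parts, 2))  # record-by-record placement
```

So coalesce being faster than repartition is actually the expected case; repartition only wins when you need to *increase* parallelism or rebalance skewed partitions, which requires the shuffle.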
working
super fast (56 sec). So union() was the overhead.
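A hedged note on why union() can be the overhead: RDD.union simply concatenates the partition lists of its inputs, so chaining many unions inflates the partition count (and per-partition task overhead) without adding data. A conceptual sketch in plain Python (not Spark code):

```python
# Conceptual sketch: union() concatenates partition lists, so chaining
# many unions piles up lots of tiny partitions, each of which becomes
# its own task in every downstream stage.

def union(a_parts, b_parts):
    # RDD.union concatenates the two partition lists unchanged
    return a_parts + b_parts

rdds = [[[i]] for i in range(100)]   # 100 single-partition "RDDs"
result = rdds[0]
for r in rdds[1:]:
    result = union(result, r)
print(len(result))  # 100 tiny partitions -> 100 tasks per stage
```

This is consistent with the earlier observation in the thread: following the combined dataset with coalesce() collapses those tiny partitions back down, so the scheduler launches far fewer tasks.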
I thought that because all of the executors are not being utilized properly,
my Spark program is running slower than MapReduce. I can provide my code
skeleton for your reference. Please help me with this.
Can someone help me with this? Any help is appreciated.
Can
anyone throw some light on the executor configuration, if there is any? How can I use all
the executors? I am running Spark on YARN with Hadoop 2.4.0.
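For what it's worth, on YARN the executor count is set at submit time, and older Spark-on-YARN releases default to only 2 executors unless you ask for more. A minimal sketch of a submit command (the resource numbers and `my_app.jar` are placeholders, not from this thread; tune them to the cluster):

```shell
# Placeholders: adjust the numbers to your cluster; my_app.jar is hypothetical.
# --num-executors:   total executors to request (old YARN default is 2)
# --executor-cores:  concurrent tasks per executor
# --executor-memory: heap per executor
spark-submit \
  --master yarn \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 4g \
  my_app.jar
```

Note that even with enough executors requested, a job can still under-utilize them if the data has too few partitions for the available cores.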