Hi,
You can try setting the JVM heap size to a higher value.
Are you using an Ubuntu machine?
In your ~/.bashrc, set the following option:
export _JAVA_OPTIONS=-Xmx2g
This should set your heap size to a higher value.
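As a quick sanity check (a minimal sketch in plain Scala; the object and method names are made up), you can confirm from inside the JVM that the larger heap actually took effect:

```scala
// Sketch: report the maximum heap the JVM was started with.
object HeapCheck {
  // Maximum heap in whole megabytes, as seen by the running JVM.
  def maxHeapMb: Long = Runtime.getRuntime.maxMemory / (1024L * 1024L)

  def main(args: Array[String]): Unit =
    println(s"max heap = ${maxHeapMb} MB")
}
```

With -Xmx2g in effect this should print a value close to 2048.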
Regards,
Madhura
...partitions which look like this:
part1: 1,2,3,...,10
part2: 8,9,10,...,20
part3: 18,19,20,...,30 and so on...
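The overlapping layout above can be sketched in plain Scala (a sketch only: `windows`, `size`, and `step` are assumed names, and the window boundaries here do not exactly match the parts listed above):

```scala
// Sketch: build overlapping windows from a flat sequence of values.
// sliding(size, step) with step < size yields windows that share
// (size - step) elements with their neighbours.
object OverlapDemo {
  def windows(values: Seq[Int], size: Int, step: Int): Seq[Seq[Int]] =
    values.sliding(size, step).toSeq

  def main(args: Array[String]): Unit =
    windows(1 to 30, 13, 10).foreach(w => println(w.mkString(",")))
}
```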
Thanks and regards,
Madhura
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Need-help-with-coalesce-tp10243.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
I am running my program on a Spark cluster, but when I look at the UI while the job is running I see that only one worker does most of the tasks. My cluster has one master and 4 workers, where the master is also a worker.
I want my task to complete as quickly as possible, and I believe that if the n
> ...something similar to the following:
>
> val keyval=dRDD.mapPartitionsWithIndex { (ind,iter) =>
> iter.map(x => process(ind,x.trim().split(' ').map(_.toDouble),q,m,r))
> }
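Without a Spark cluster to hand, the shape of that call can be sketched in plain Scala (`process`, `q`, `m`, and `r` from the quoted snippet are stand-ins; here a hypothetical `process` just sums the parsed values):

```scala
// Sketch: simulate mapPartitionsWithIndex over a Seq of "partitions".
// Each partition is handed its index plus an iterator over its lines,
// mirroring the (ind, iter) pair that Spark passes in.
object PartitionIndexDemo {
  // Hypothetical stand-in for the process(...) in the quoted snippet.
  def process(ind: Int, xs: Array[Double]): (Int, Double) = (ind, xs.sum)

  def keyval(partitions: Seq[Seq[String]]): Seq[(Int, Double)] =
    partitions.zipWithIndex.flatMap { case (part, ind) =>
      part.iterator.map(line => process(ind, line.trim.split(' ').map(_.toDouble)))
    }

  def main(args: Array[String]): Unit =
    println(keyval(Seq(Seq("1 2 3"), Seq("4 5 6"))))
}
```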
>
> -Xiangrui
>
> On Sun, Jul 13, 2014 at 11:26 PM, Madhura <[hidden email]> wrote:
[error] /SimpleApp.scala:427: value _1 is not a member of Nothing
[error] var final= res(0)._1
[error] ^
[error] /home/madhura/DTWspark/src/main/scala/SimpleApp.scala:428: value _2 is not a member of Nothing
[err
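For what it's worth, that error usually means the compiler inferred the element type of `res` as Nothing, typically because an empty collection was created without a type annotation; note also that `final` is a reserved word in Scala, so `var final = ...` will not compile in any case. A minimal sketch of the problem and the fix (all names here are made up):

```scala
// Sketch: how a value ends up typed Nothing, and the fix.
object NothingDemo {
  // val res = Array()          // inferred Array[Nothing]
  // var finalRes = res(0)._1   // error: value _1 is not a member of Nothing

  // Annotating the element type makes ._1 and ._2 available again.
  val res: Array[(Double, Int)] = Array((3.14, 1))

  def main(args: Array[String]): Unit = {
    var finalRes = res(0)._1
    println(finalRes) // 3.14
  }
}
```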