each key to all fit in memory at once. In this case, if you're going to
reduce right after, you should use reduceByKey, which will be more
efficient.
>>
>> Matei
>>
>> > On Jun 1, 2015, at 2:21 PM, octa
Key, which will be more
> efficient.
>
> Matei
>
> > On Jun 1, 2015, at 2:21 PM, octavian.ganea > <mailto:octavian.ga...@inf.ethz.ch>> wrote:
> >
> > Dear all,
> >
> > Does anyone know how can I force Spark to use only the disk when doing a
> >
.groupByKey.reduce(_ + _) ? Thank you!
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/map-reduce-only-with-disk-tp23102.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> --