I guess I can, but it would be nicer if that were made configurable. I can
create the issue, add tests, and open a PR if you guys think it's appropriate.
On Wed, Jan 7, 2015 at 1:41 PM, Sean Owen wrote:
Ah, Fernando means the usersOut / productsOut RDDs, not the intermediate
links RDDs.
Can you unpersist() them, and persist() again at the desired level? the
downside is that this might mean recomputing and repersisting the RDDs.
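Something like this should work (a rough sketch, assuming `model` is your
trained MatrixFactorizationModel; usersOut / productsOut become its public
userFeatures / productFeatures RDDs):

import org.apache.spark.storage.StorageLevel

// Drop the hardcoded MEMORY_AND_DISK level first; Spark won't change
// the storage level of an RDD that already has one assigned.
model.userFeatures.unpersist()
model.productFeatures.unpersist()

// Re-persist at whatever level you want, e.g. with replication:
model.userFeatures.persist(StorageLevel.MEMORY_AND_DISK_2)
model.productFeatures.persist(StorageLevel.MEMORY_AND_DISK_2)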
1.2
In run() you have:
usersOut.setName("usersOut").persist(StorageLevel.MEMORY_AND_DISK)
productsOut.setName("productsOut").persist(StorageLevel.MEMORY_AND_DISK)
On Wed, Jan 7, 2015, 02:11 Xiangrui Meng wrote:
Which Spark version are you using? We made this configurable in 1.1:
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/ALS.scala#L202
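For example (a sketch; assumes `ratings: RDD[Rating]` is already loaded):

import org.apache.spark.mllib.recommendation.{ALS, Rating}
import org.apache.spark.storage.StorageLevel

val model = new ALS()
  .setRank(10)
  .setIterations(10)
  // storage level of the intermediate RDDs, configurable since 1.1:
  .setIntermediateRDDStorageLevel(StorageLevel.MEMORY_ONLY)
  .run(ratings)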
-Xiangrui
On Tue, Jan 6, 2015 at 12:57 PM, Fernando O. wrote:
Hi,
I was doing some tests with ALS and noticed that if I persist the inner
RDDs from a MatrixFactorizationModel, the RDDs are not replicated; it seems
the storage level is hardcoded to MEMORY_AND_DISK. Do you think it makes
sense to make that configurable?
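Roughly what I mean (a sketch; `ratings` is an RDD[Rating]):

import org.apache.spark.mllib.recommendation.{ALS, Rating}
import org.apache.spark.storage.StorageLevel

val model = ALS.train(ratings, 10, 10)
// The model's RDDs come back already persisted at MEMORY_AND_DISK from
// inside run(), so a replicated level can't be applied afterwards
// (Spark refuses to change an already-assigned storage level):
model.userFeatures.persist(StorageLevel.MEMORY_AND_DISK_2)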