If memory cannot hold all of the cached RDD blocks, the BlockManager
will evict some of the older blocks to make room for new RDD blocks.
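To illustrate the idea, here is a minimal toy sketch of LRU-style block eviction in Python. This is only an analogy, not Spark's actual BlockManager; the names (ToyBlockStore, put, get) are invented for illustration:

```python
from collections import OrderedDict

class ToyBlockStore:
    """Toy LRU block store illustrating eviction; not Spark's real BlockManager."""

    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.used_mb = 0
        self.blocks = OrderedDict()  # block_id -> size_mb, oldest first

    def put(self, block_id, size_mb):
        evicted = []
        # Evict least-recently-used blocks until the new block fits.
        while self.used_mb + size_mb > self.capacity_mb and self.blocks:
            old_id, old_size = self.blocks.popitem(last=False)
            self.used_mb -= old_size
            evicted.append(old_id)
        self.blocks[block_id] = size_mb
        self.used_mb += size_mb
        return evicted

    def get(self, block_id):
        # Accessing a block marks it as recently used.
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
            return True
        return False

store = ToyBlockStore(capacity_mb=100)
store.put("rdd_0_0", 60)
store.put("rdd_0_1", 30)
evicted = store.put("rdd_1_0", 50)  # exceeds capacity, oldest block is dropped
print(evicted)  # -> ['rdd_0_0']
```

In real Spark, an evicted block is either dropped entirely or spilled to disk, depending on the RDD's storage level (e.g. MEMORY_ONLY vs. MEMORY_AND_DISK).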


Hope that helps.

2015-06-24 13:22 GMT+08:00 [email protected] <[email protected]>:

> I am kind of confused about when a cached RDD will unpersist its data. I
> know we can explicitly unpersist it with RDD.unpersist, but can it be
> unpersisted automatically by the Spark framework?
> Thanks.
>
> ------------------------------
> [email protected]
>



-- 
王海华
