Hi Patrick - is that cleanup present in 1.1? The overhead I'm referring to is what I believe is shuffle-related metadata. If I watch the execution log, I see small broadcast variables created for every stage of execution, a few KB at a time, alongside the driver's remaining available memory reported in MB. As the job runs, that available memory steadily goes down, and these variables are never erased. The only RDDs that persist are those that are explicitly cached; the RDDs generated iteratively are neither retained nor referenced, so I would expect them to be cleaned up, but they are not. The items consuming memory are not RDDs but what appears to be shuffle metadata.
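To make this concrete, the only explicit cleanup hook I am aware of is Broadcast.unpersist. Below is a minimal sketch of what I would hope to be able to do for the per-iteration broadcasts, assuming Spark 1.1+ (the withBroadcast helper is hypothetical, not a Spark API):

import scala.reflect.ClassTag
import org.apache.spark.SparkContext
import org.apache.spark.broadcast.Broadcast

// Hypothetical helper: broadcast a value, run a block of work with it,
// then eagerly unpersist so the executor-side copies are dropped instead
// of lingering until the driver's GC triggers the ContextCleaner.
def withBroadcast[T: ClassTag, R](sc: SparkContext, value: T)(body: Broadcast[T] => R): R = {
  val bc = sc.broadcast(value)
  try body(bc)
  finally bc.unpersist(blocking = true) // safe once no running stage still uses bc
}

My understanding is that unpersist only removes the executor-side copies, and the driver-side copy is reclaimed only after a GC on the driver kicks the ContextCleaner - if that is wrong, please correct me.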
I have a script that parses the logs to show memory consumption over time, and it shows memory being consumed very steadily over many hours, one small bit at a time, without ever being cleared.

The specific computation I am doing is the generation of dot products between two RDDs of vectors. I need this product for every combination of entries between the two RDDs, but both RDDs are too big to fit in memory. Consequently, I iteratively generate the product of one entry from the first RDD against all entries of the second, and retain the pared-down result within an accumulator (by keeping only the top N results, it is possible to effectively store the Cartesian product, which would otherwise be too large even for disk). After a certain number of iterations, these intermediate results are written to disk. Each of these steps is tractable in itself, but because memory accumulates across iterations, the overall job becomes intractable. A rough sketch of this loop appears below the quoted thread.

I would appreciate any suggestions as to how to clean up these intermediate broadcast variables. Thank you.

On Sun, Dec 28, 2014 at 1:56 PM Patrick Wendell <pwend...@gmail.com> wrote:
> What do you mean when you say "the overhead of spark shuffles starts to
> accumulate"? Could you elaborate more?
>
> In newer versions of Spark, shuffle data is cleaned up automatically
> when an RDD goes out of scope. It is safe to remove shuffle data at
> this point because the RDD can no longer be referenced. If you are
> seeing a large build-up of shuffle data, it's possible you are
> inadvertently retaining references to older RDDs. Could you explain
> what your job is actually doing?
>
> - Patrick
>
> On Mon, Dec 22, 2014 at 2:36 PM, Ganelin, Ilya
> <ilya.gane...@capitalone.com> wrote:
> > Hi all, I have a long-running job iterating over a huge dataset. Parts of
> > this operation are cached. Since the job runs for so long, eventually the
> > overhead of Spark shuffles starts to accumulate, culminating in the driver
> > starting to swap.
> >
> > I am aware of the spark.cleaner.ttl parameter that allows me to configure
> > when cleanup happens, but the issue with doing this is that it isn't done
> > safely; e.g., I can be in the middle of processing a stage when this cleanup
> > happens and my cached RDDs get cleared. This ultimately causes a
> > KeyNotFoundException when I try to reference the now-cleared cached RDD.
> > This behavior doesn't make much sense to me; I would expect the cached RDD
> > to either get regenerated or, at the very least, for there to be an option
> > to execute this cleanup without deleting those RDDs.
> >
> > Is there a programmatically safe way of doing this cleanup that doesn't
> > break everything?
> >
> > If I instead tear down the SparkContext and bring up a new context for
> > every iteration (assuming that each iteration is sufficiently long-lived),
> > would memory get released appropriately?
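For reference, here is the rough shape of the iteration described above. This is a simplified sketch, not my actual code: the names (topNDotProducts, rddA, rddB, outDir) are made up, and for brevity it keeps the top N results with RDD.top rather than the custom accumulator I mentioned.

import scala.collection.mutable.ArrayBuffer
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

// Simplified sketch: for each vector in rddA (streamed to the driver one
// at a time), compute its dot product against every vector in rddB, keep
// only the top N matches, and periodically flush results to disk.
def topNDotProducts(sc: SparkContext,
                    rddA: RDD[(Long, Array[Double])],
                    rddB: RDD[(Long, Array[Double])],
                    n: Int,
                    flushEvery: Int,
                    outDir: String): Unit = {
  val batch = ArrayBuffer.empty[(Long, Array[(Long, Double)])]
  var i = 0
  for ((idA, vecA) <- rddA.toLocalIterator) {
    val bcA = sc.broadcast(vecA) // one small broadcast per iteration
    val top = rddB.map { case (idB, vecB) =>
      val dot = bcA.value.zip(vecB).map { case (a, b) => a * b }.sum
      (idB, dot)
    }.top(n)(Ordering.by[(Long, Double), Double](_._2))
    bcA.unpersist(blocking = true) // released, yet driver memory still shrinks
    batch += ((idA, top))
    i += 1
    if (i % flushEvery == 0) { // spill intermediate results to disk
      sc.parallelize(batch.toSeq).saveAsObjectFile(s"$outDir/batch-$i")
      batch.clear()
    }
  }
  if (batch.nonEmpty) {
    sc.parallelize(batch.toSeq).saveAsObjectFile(s"$outDir/batch-final")
  }
}

Even with the explicit unpersist call shown here, the driver's available memory in the logs keeps going down, which is why I suspect the accumulation is stage- or shuffle-level metadata rather than the broadcast values themselves.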