On Jan 4, 5:55 am, Paul McGuire <pt...@austin.rr.com> wrote:
> Just wanted to share some experience I had in doing some memory and
> performance tuning of a graphics script. I've been running some
> long-running scripts on high-resolution images, and added memoizing to
> optimize/minimize object creation (my objects are immutable, so better
> to reuse objects from a cache than constantly create and discard
> instances). This helped my script early on, but as the data files got
> larger and larger, and runs got longer and longer, the object memoize
> cache started to suck up some serious memory, and performance started
> to degrade for a different reason - low memory -> page thrashing ->
> program crawling.
>
> It occurred to me that in my program, a given object may get used for
> a while, but once its use declines, it doesn't come back for a while.
> At first blush, I thought about using some sort of LRU cache, but
> first I tried the WeakValueDictionary from the weakref module. This
> worked great!
>
> Now I get fairly optimal reuse of my immutable instances, but my
> object cache doesn't grow without bounds.
>
> Does this sound like a correct interpretation of this behavior? If
> so, it would seem that WeakValueDictionary would be a good
> recommendation to go along with any memoizing implementations.
It depends on what you use memoizing for. If your memoized function returns data from a dataset, then yes, you want to use a weakref scheme: when such data is removed from the dataset, you want the memory associated with it to be freed.

However, if you use memoizing on a function that creates its return value, you don't want a weakref scheme, because it can defeat the memoizing: if no caller of the memoized function keeps a reference to the returned value, there will be no memoizing taking place for the next call of the function with the same arguments.

To make it short: it depends.
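To make that trade-off concrete, here is a minimal sketch (the names memoize_weak and Color are purely illustrative, standing in for the OP's immutable graphics objects) of a memoizer backed by weakref.WeakValueDictionary, showing both the reuse and the "entry vanishes when nobody holds it" behaviour:

import weakref

class Color(object):
    # hypothetical immutable value object, like the OP's instances
    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

def memoize_weak(fn):
    # cache entries live only as long as some caller still holds
    # a strong reference to the returned object
    cache = weakref.WeakValueDictionary()
    def wrapper(*args):
        try:
            return cache[args]
        except KeyError:
            value = fn(*args)   # must be weak-referenceable, i.e. a
            cache[args] = value # class instance, not a tuple or int
            return value
    return wrapper

@memoize_weak
def make_color(r, g, b):
    return Color(r, g, b)

a = make_color(1, 2, 3)
b = make_color(1, 2, 3)
assert a is b           # reused while a strong reference exists

del a, b                # drop all strong references...
c = make_color(1, 2, 3) # ...the weak cache entry is gone, so the
                        # object is built again from scratch

Whether that last step is a feature (the cache stays bounded, as in the OP's script) or a bug (the memoizing silently stops paying off) is exactly the "it depends" above.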