Raymond Hettinger <raymond.hettin...@gmail.com> added the comment:

> In my use case, the objects hold references to large blocks 
> of GPU memory that should be freed as soon as possible. 

That makes sense.  I can see why you reached for weak references.

Would it be a workable alternative to have an explicit close() method to free 
the underlying resources?  We do this with StringIO instances because relying 
on reference counting or GC is a bit fragile.
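A minimal sketch of that pattern (GPUBuffer is a hypothetical stand-in for the 
OP's resource class, and the bytearray stands in for a real GPU allocation) 
would free the memory deterministically and also support the with-statement:

```python
class GPUBuffer:
    """Hypothetical resource holder with an explicit close()."""

    def __init__(self, size):
        self._mem = bytearray(size)   # stand-in for a real GPU allocation
        self._closed = False

    def close(self):
        # Release the underlying resource as soon as the caller is done,
        # without depending on reference counting or the GC.
        if not self._closed:
            self._mem = None
            self._closed = True

    # Context-manager support so callers can write `with GPUBuffer(...) as b:`
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()
        return False
```

That mirrors what io.StringIO and file objects do: deterministic cleanup in 
user code rather than relying on finalizers.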


> Additionally, I use cached_method to cache expensive hash 
> calculations which the alternatives cannot do because they 
> would run into bottomless recursion.

This is a new concern.  Can you provide a minimal example that recurses 
infinitely with @lru_cache but not with @cached_method?  Offhand, I can't think 
of an example.
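One candidate shape, offered here only as a guess at what is being described: 
putting @lru_cache directly on __hash__ recurses, because the cache lookup 
must hash its key, which includes self, which calls __hash__ again:

```python
from functools import lru_cache

class Point:
    def __init__(self, x):
        self.x = x

    @lru_cache(maxsize=None)
    def __hash__(self):
        # Looking up the cache entry hashes the arguments (self),
        # which re-enters this wrapped __hash__ -> RecursionError.
        return hash(self.x)
```

Calling hash(Point(1)) raises RecursionError.  A per-instance cache that is 
keyed by the remaining arguments only (not self) would not have this problem.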


> Do you think, in the face of these ambiguities, that the
> stdlib should potentially just not cover this?

Perhaps this should be put on PyPI rather than in the standard library. 

The core problem is that having @cached_method in the standard library tells 
users that this is the recommended and preferred way to cache methods.  However, 
in most cases they would be worse off.  And it isn't easy to know when 
@cached_method would be a win, or to assess the loss of other optimizations like 
__slots__ and key-sharing dicts.
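To illustrate the __slots__ interaction: functools.cached_property, which 
caches in the instance __dict__ (as a per-instance @cached_method presumably 
would), simply cannot work on a slotted class:

```python
from functools import cached_property

class Slotted:
    __slots__ = ("x",)          # no per-instance __dict__

    def __init__(self, x):
        self.x = x

    @cached_property
    def double(self):
        return 2 * self.x

# The class defines fine, but accessing .double raises TypeError
# because cached_property has no __dict__ to store the result in.
```

So opting into this style of caching silently forces users to give up 
__slots__ and the memory savings that come with it.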

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue45588>
_______________________________________