Raymond Hettinger added the comment:

> I think we should increase the priority of this issue.

I don't think so at all.  The LRU cache we have now is plenty efficient for its 
intended use cases (caching I/O bound functions and expensive functions).  It 
is only unsuitable for functions that are already blazingly fast.  
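For context, the intended use cases look roughly like this (the functions below are illustrative examples, not code from the issue itself):

```python
from functools import lru_cache
import urllib.request


@lru_cache(maxsize=128)
def fetch(url):
    # I/O-bound work: each distinct URL is fetched over the network
    # only once; repeat calls are served from the cache.
    with urllib.request.urlopen(url) as resp:
        return resp.read()


@lru_cache(maxsize=None)
def fib(n):
    # Expensive recursive computation; memoization turns the
    # exponential call tree into a linear number of real calls.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

For a trivially cheap function, by contrast, the overhead of key construction and cache lookup can exceed the cost of just calling the function.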

Getting the locks right and carefully looking for re-entrancy issues is 
important.  Also, keeping the memory footprint of the keys small is important 
(if people didn't care about space, they wouldn't be using an LRU at all).

I will look at this but currently have much higher priorities elsewhere in 
Python (adding C accelerators for tricky code is less important for the time 
being -- we have a long time until 3.5).

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue14373>
_______________________________________