Serhiy Storchaka <storchaka+cpyt...@gmail.com> added the comment:

That was an explanation of the possible history, not a justification of the bugs. Of 
course the bugs should be fixed. Thank you for rechecking this code and for your 
fix.

As for the optimization in lru_cache_make_key(), consider the following example:

from functools import lru_cache

@lru_cache()
def f(x):
    return x

print(f(1))
print(f(1.0))

Currently the C implementation memoizes only one result, and f(1.0) returns 1. 
With the Python implementation, and with the proposed changes, it will return 
1.0. I am not saying that either answer is definitely wrong, but we should be 
aware of this, and it would be better if both implementations were consistent. I 
am sure this example has already been discussed, but I cannot find it now.
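
For illustration, a minimal sketch of why a plain dict keyed on the argument 
tuple conflates the two calls (this is not the actual make_key logic, just the 
underlying dict semantics): since 1 == 1.0 and hash(1) == hash(1.0), the float 
key finds the entry stored under the int key.

```python
# A dict keyed on the argument tuple, as a cache would be.
d = {}
d[(1,)] = "result for f(1)"

# The float-keyed lookup hits the int-keyed entry, because tuple
# equality and hashing see 1 and 1.0 as the same key.
print(d[(1.0,)])                          # -> result for f(1)
print((1,) == (1.0,))                     # -> True
print(hash((1,)) == hash((1.0,)))         # -> True
```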

I still have not analyzed the new code for adding to the cache. I agree that the 
old code contains flaws.

I am not sure about the addition of _PyDict_GetItem_KnownHash(). Every dict 
operation can execute arbitrary code and re-enter 
bounded_lru_cache_wrapper(). Would an API that atomically checks and updates 
the dict (like getdefault()/setdefault()) be useful here?
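
A minimal pure-Python sketch of the re-entrancy hazard (the Tricky class here 
is a made-up example, not from the patch): hashing the key tuple runs the 
argument's __hash__, which can re-enter the cached function mid-lookup.

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def f(x):
    return x

class Tricky:
    """A key whose __hash__ re-enters the cached function."""
    def __init__(self):
        self.reentered = False

    def __hash__(self):
        # Arbitrary Python runs during the cache's dict lookup;
        # here it re-enters f() with a different argument, mutating
        # the cache while the outer call is still in progress.
        if not self.reentered:
            self.reentered = True
            f(42)
        return 1

t = Tricky()
f(t)  # the lookup for this call triggers a nested f(42) call
```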

----------
assignee: serhiy.storchaka -> 

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue35780>
_______________________________________