Raymond Hettinger added the comment:
I concur with Victor. The proposed API change defeats the purpose of the
cache. By design, the intent of the cache is to reuse the previously computed
value.
I can add something like this to the docs: """In general, the LRU cache should
only be used when you want to reuse previously computed values. Accordingly, it
doesn't make sense to cache functions with side-effects, functions that need to
create distinct mutable objects on each call, or impure functions such as time()
or random()."""
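A minimal sketch of that guidance applied to the example from this report: make
the memoized function return an immutable tuple instead of a list, so callers
cannot modify the cached value in place.

>>> import functools
>>> @functools.lru_cache()
... def f(x):
...     return (x, x + 1)   # immutable result, safe to share across callers
...
>>> a = f(4)
>>> a
(4, 5)
>>> a += (6,)   # rebinds a to a new tuple; the cached value is untouched
>>> f(4)
(4, 5)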
bolorsociedad added the comment:
I understand it may be inefficient sometimes. Perhaps it would be nice to add
an argument to lru_cache that requests a deep copy of the cached result? Something like

def lru_cache(..., deepcopy=False):
    ...
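A rough sketch of how such an option could be layered on top of the existing
decorator; lru_cache_deepcopy and its parameters are hypothetical names used
only for illustration, not an existing API:

import copy
import functools

def lru_cache_deepcopy(maxsize=128, typed=False):
    # Hypothetical sketch: cache results as usual, but hand each caller a
    # deep copy so mutations never reach the stored value.
    def decorator(func):
        cached = functools.lru_cache(maxsize=maxsize, typed=typed)(func)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return copy.deepcopy(cached(*args, **kwargs))
        wrapper.cache_info = cached.cache_info
        wrapper.cache_clear = cached.cache_clear
        return wrapper
    return decorator

@lru_cache_deepcopy()
def f(x):
    return [x, x + 1]

a = f(4)
a.append(6)
assert f(4) == [4, 5]   # the cached value was not affected by the append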
STINNER Victor added the comment:
> When the returned mutable object is modified, the cache is modified as well.
> In my opinion, functools.lru_cache should store a deep copy of the returned
> object.
It would be inefficient to deep copy the mutable result, and doing so can defeat
the purpose of the cache.
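If a particular caller really needs to mutate the result, one alternative (a
sketch, not something decided in this thread) is to copy at the call site, so
only that caller pays for the copy and the cached value stays intact:

>>> import copy, functools
>>> @functools.lru_cache()
... def f(x):
...     return [x, x + 1]
...
>>> a = copy.deepcopy(f(4))   # copy only where mutation is actually needed
>>> a.append(6)
>>> f(4)                      # the cached list is unchanged
[4, 5]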
New submission from bolorsociedad:
The decorator functools.lru_cache does not work properly when the memoized
function returns a mutable object.
For instance:
>>> import functools
>>> @functools.lru_cache()
... def f(x):
...     return [x, x + 1]
...
>>> a = f(4)
>>> print(a)
[4, 5]
>>> a.append(6)
>>> f(4)
[4, 5, 6]
When the returned mutable object is modified, the cache is modified as well.
In my opinion, functools.lru_cache should store a deep copy of the returned
object.
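For what it's worth, this follows directly from how lru_cache stores results:
every hit returns the very same object, not a copy, which the example above can
confirm:

>>> f(4) is a   # the cache hands back the same list object every time
True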
Change by bolorsociedad:
--
components: Library (Lib)
nosy: bolorsociedad
priority: normal
severity: normal
status: open
title: Bug with memoization and mutable objects
type: behavior
versions: Python 3.4, Python 3.5, Python 3.6, Python 3.7, Python 3.8