Virgil Dupras <hs...@hardcoded.net> added the comment:

About duplicated code and performance:

When I look at the duplicated code, I don't see anything that remotely
looks like a performance tweak. Just to make sure, I ran a small
benchmark:

#!/usr/bin/env python
import sys
sys.path.insert(0, 'Lib')  # use the in-tree Lib rather than the installed one
import timeit
import weakref

class Foo(object): pass

def setup():
    L = [Foo() for i in range(1000)]
    global d
    d = weakref.WeakValueDictionary(enumerate(L))
    del L[:500]  # drop strong references so the dict holds some dead weakrefs

def do():
    d.values()

print timeit.timeit(do, setup, number=100000)

Results without the patch:
$ ./python.exe weakref_bench.py
0.804216861725

Results with the patch:
$ ./python.exe weakref_bench.py
0.813000202179

I think the small difference in performance is more attributable to the
extra processing the weakref dict does than to the deduplication of the
code itself.

About the test_weak_*_dict_flushed_dead_items_when_iters_go_out:

If a weakref dict keeps its referents alive, it's not an
implementation detail, it's a bug. The whole point of using such dicts
is to not keep keys or values alive once they go out of scope.
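
For illustration, here is a minimal sketch (not from the patch, names
mirror the benchmark above) of the behavior I mean: once the last
strong reference to a value is gone, its entry must disappear from the
dict.

import gc
import weakref

class Foo(object): pass

L = [Foo() for i in range(10)]
d = weakref.WeakValueDictionary(enumerate(L))
del L[:5]     # the first 5 values lose their last strong reference
gc.collect()  # not strictly needed in CPython, just makes collection explicit
print len(d)  # expected: 5 -- the dead entries must not linger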

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue839159>
_______________________________________