Antoine Pitrou added the comment:

I just tried bumping KEEPALIVE_SIZE_LIMIT to 200. It recovers part of
the speedup, but the PyVarObject version is still faster. Moreover,
and perhaps more importantly, it uses less memory (800KB less in
pybench, 3MB less when running the whole test suite).
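
For reference, the keep-alive logic in unicodeobject.c looks roughly
like the sketch below. It is simplified, not the actual code: the
free-list variable names and the free-list cap are approximate, and
details such as clearing the cached hash and defenc are elided.

/* Sketch of the keep-alive free list in 2.x unicodeobject.c. */
#include "Python.h"

#define KEEPALIVE_SIZE_LIMIT  9       /* the default being discussed */
#define MAX_FREELIST_SIZE     1024    /* approximate */

static PyUnicodeObject *free_list = NULL;
static int numfree = 0;

static void
unicode_dealloc(PyUnicodeObject *unicode)
{
    if (PyUnicode_CheckExact(unicode) && numfree < MAX_FREELIST_SIZE) {
        /* Keep-alive optimization: buffers below the limit stay
           allocated, so the next small string skips a malloc(). */
        if (unicode->length >= KEEPALIVE_SIZE_LIMIT) {
            PyObject_DEL(unicode->str);
            unicode->str = NULL;
        }
        /* Chain the object itself onto the free list. */
        *(PyUnicodeObject **)unicode = free_list;
        free_list = unicode;
        numfree++;
    }
    else {
        PyObject_DEL(unicode->str);
        Py_TYPE(unicode)->tp_free((PyObject *)unicode);
    }
}

Bumping the limit to 200 just means buffers of up to 200 units survive
on the free list, which is why it narrows the gap without closing it.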

(Then there are of course microbenchmarks. For example:
# With KEEPALIVE_SIZE_LIMIT = 200
./python -m timeit -s "s=open('LICENSE', 'r').read()" "s.split()"
1000 loops, best of 3: 556 usec per loop
# With this patch
./python -m timeit -s "s=open('LICENSE', 'r').read()" "s.split()"
1000 loops, best of 3: 391 usec per loop
)

I don't understand the argument that codecs have to resize the unicode
object. Codecs also have to resize the bytes object when encoding, so
the argument should apply to bytes objects as well; yet bytes (and str
in 2.6) is a PyVarObject.
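
For concreteness, the resize pattern in question, as it appears inside
unicodeobject.c's decoders (e.g. PyUnicode_DecodeUTF8), is roughly the
following sketch; error handling and the decoding loop are omitted:

/* Overallocate for the worst case, decode, then shrink. */
#include "Python.h"

static PyObject *
decode_sketch(const char *s, Py_ssize_t size)
{
    /* Worst case: one Py_UNICODE per input byte. */
    PyUnicodeObject *unicode = _PyUnicode_New(size);
    Py_UNICODE *p;

    if (unicode == NULL)
        return NULL;
    p = PyUnicode_AS_UNICODE(unicode);

    /* ... decode from s, writing through p ... */

    /* Shrink to the number of units actually produced. */
    if (_PyUnicode_Resize(&unicode, p - PyUnicode_AS_UNICODE(unicode)) < 0)
        return NULL;
    return (PyObject *)unicode;
}

The bytes/str counterpart is _PyString_Resize(), which realloc()s the
whole PyVarObject in place when the reference count is one, so the
same shrink works fine without a separate buffer pointer.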

I admit I don't know the exact reasons for PyUnicode's design. I just
know that the patch doesn't break the API, and runs the test suite fine.
Are there any specific things to look for?
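
To make the comparison concrete, the gist of the two layouts is below.
This is schematic only: the first struct follows the 2.x headers, the
second is a hypothetical rendering of what the patch does, and the
struct names are made up.

#include "Python.h"

/* Current design: fixed-size header plus a separately malloc'ed
   buffer, i.e. two allocations per string. */
typedef struct {
    PyObject_HEAD
    Py_ssize_t length;      /* number of Py_UNICODE units */
    Py_UNICODE *str;        /* separate heap allocation */
    long hash;              /* cached hash, -1 if unset */
    PyObject *defenc;       /* cached default-encoded version */
} PyUnicodeObject_current;

/* PyVarObject design: characters inline after the header, as in
   PyStringObject, i.e. one allocation and no str pointer. */
typedef struct {
    PyObject_VAR_HEAD       /* ob_size holds the length */
    long hash;
    PyObject *defenc;
    Py_UNICODE str[1];      /* buffer allocated inline */
} PyUnicodeObject_patched;

One malloc() and one pointer fewer per string is presumably where the
800KB/3MB savings come from.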

__________________________________
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1943>
__________________________________