A. Skrobov added the comment:

I've now tried it with "perf.py -r -m" (rigorous mode, with memory tracking), and the memory savings, per the "Mem max" figures (peak memory, in kB), are as follows:

### 2to3 ###
Mem max: 45976.000 -> 47440.000: 1.0318x larger

### chameleon_v2 ###
Mem max: 436968.000 -> 401088.000: 1.0895x smaller

### django_v3 ###
Mem max: 23808.000 -> 22584.000: 1.0542x smaller

### fastpickle ###
Mem max: 10768.000 -> 9248.000: 1.1644x smaller

### fastunpickle ###
Mem max: 10988.000 -> 9328.000: 1.1780x smaller

### json_dump_v2 ###
Mem max: 10892.000 -> 10612.000: 1.0264x smaller

### json_load ###
Mem max: 11012.000 -> 9908.000: 1.1114x smaller

### nbody ###
Mem max: 8696.000 -> 7944.000: 1.0947x smaller

### regex_v8 ###
Mem max: 12504.000 -> 9432.000: 1.3257x smaller

### tornado_http ###
Mem max: 27636.000 -> 27608.000: 1.0010x smaller


So, on these benchmarks, the saving is not threefold, of course, but it is still 
quite substantial: up to roughly 25% (regex_v8), with 2to3 as the one outlier, 
growing by about 3%.
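
For clarity, here is how perf.py's "Nx smaller" ratio translates into a 
percentage saving, using the regex_v8 numbers above (a quick sketch; the 
variable names are mine, not perf.py's):

    old_kb = 12504.0   # regex_v8 "Mem max" before the change
    new_kb = 9432.0    # regex_v8 "Mem max" after the change

    ratio = old_kb / new_kb          # perf.py's "1.3257x smaller"
    saving = 1.0 - new_kb / old_kb   # fraction of the old peak saved

    print("%.4fx smaller" % ratio)          # 1.3257x smaller
    print("%.1f%% saved" % (saving * 100))  # 24.6% saved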


The run-time difference, on these benchmarks, is between "1.04x slower" and 
"1.06x faster", for reasons beyond my understanding (possibly variability in 
background load?).
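
If it is just noise, repeated timings of the two builds should overlap. A 
minimal sketch of such a check (the sample lists are placeholders, not real 
measurements; substitute the per-run timings that perf.py -r reports):

    import statistics

    # Hypothetical per-run timings, in seconds, for the two builds.
    baseline = [1.02, 1.05, 1.03, 1.06, 1.04]
    patched = [1.04, 1.01, 1.06, 1.03, 1.05]

    for name, runs in (("baseline", baseline), ("patched", patched)):
        mean = statistics.mean(runs)
        stdev = statistics.stdev(runs)
        print("%s: %.3fs +/- %.3fs" % (name, mean, stdev))

    # If the mean +/- stdev intervals overlap, a 1.04x-1.06x swing
    # is indistinguishable from run-to-run variability.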
