Martin v. Löwis <mar...@v.loewis.de> added the comment:

>> Correct. I copied the algorithm from _io.FileIO, under the assumption
>> that there was a reason for not using a simpler O(n log n) doubling
>> strategy. Do you know of any reason for this? Or is it safe to ignore it?
>
> I don't know, but I'd say it's safe to ignore it.

To elaborate: ISTM that it's actually a bug in FileIO. I can imagine
where it comes from (i.e. somebody feeling that the overhead shouldn't
grow unbounded), but I think that's ill-advised: *if* somebody really
produces multi-gigabyte data (and some people eventually will), they
still deserve good performance, and they will be able to afford the
memory overhead (else they couldn't store the actual output, either).

> Generally we use a less-than-doubling strategy, to conserve memory (see
> e.g. bytearray.c).

Most definitely. In case it isn't clear (but it probably is here):
any constant growth factor > 1.0 gives amortized linear complexity,
i.e. amortized O(1) copying per appended byte; a smaller factor just
trades a larger constant (roughly 1/(factor - 1) copies per byte) for
less over-allocated memory.
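
Just to illustrate the point, here is a quick simulation I sketched
(it is not the code in bytearray.c or _io/fileio.c; the 1.125 factor
and the total_copy_cost helper are made up for the example). It counts
how many bytes get copied by reallocations while appending n bytes one
at a time, for a doubling and a less-than-doubling growth factor:

    # Sketch only: simulate total reallocation copy cost for a buffer
    # whose capacity is multiplied by `factor` whenever it fills up.
    # The 1.125 factor is an illustrative "less-than-doubling" choice,
    # not CPython's exact growth rule.

    def total_copy_cost(n, factor):
        """Bytes copied while appending n bytes one at a time."""
        capacity, size, copied = 1, 0, 0
        for _ in range(n):
            if size == capacity:
                copied += size  # a reallocation copies the old contents
                capacity = max(capacity + 1, int(capacity * factor))
            size += 1
        return copied

    if __name__ == "__main__":
        for n in (10**4, 10**5, 10**6):
            for factor in (2.0, 1.125):
                cost = total_copy_cost(n, factor)
                # copied/n stays roughly constant as n grows: amortized
                # O(1) per byte, i.e. linear overall, for any factor > 1.0.
                print(f"n={n:>8}  factor={factor:<5}  copied/n={cost / n:.2f}")

In this simulation copied/n settles at roughly 2 for doubling and
roughly 8-9 for 1.125 (about 1/(factor - 1) in both cases), and is
essentially independent of n - which is the amortized-linear claim -
while the smaller factor keeps the slack memory to roughly 12% of the
buffer instead of up to 100%.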

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue6715>
_______________________________________