On 31 July 2013 00:01, <wxjmfa...@gmail.com> wrote:

> I am pretty sure that once you have typed your 127504
> ascii characters, you are very happy the buffer of your
> editor does not waste time in reencoding the buffer as
> soon as you enter an €, the 127505th char. Sorry, I wanted
> to say z instead of euro, just to show that backspacing the
> last char and reentering a new char implies twice a reencoding.
And here we come to the root of your complete misunderstanding and mischaracterisation of the FSR. You don't appear to understand that strings in Python are immutable, and that adding a character to an existing string requires copying the entire string plus the new character.

In your hypothetical situation above, you have already performed 127504 copy-plus-new-character operations before you ever get to a single widening operation. The overhead of that copy, repeated 127504 times, dwarfs the overhead of a single widening operation.

Given that misunderstanding, it's no surprise that you are focused on microbenchmarks demonstrating that copying an entire string and adding a character can be slower in some situations than in others. When the only use case you have is implementing the buffer of an editor using an immutable string, I can fully understand why you would be concerned about the performance of adding and removing individual characters. However, in that case *you're focused on the wrong problem*.

Until you can demonstrate an understanding that doing the above in any language with immutable strings is completely insane, you will have no credibility, and the only interest anyone will pay to your posts is refuting your FUD so that people new to the language are not driven off by you.

Tim Delaney
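Both halves of the argument can be observed directly in CPython: every append to an immutable string copies it, and widening to fit a non-Latin-1 character is just one more copy of the same O(n) cost. A small sketch (exact `sys.getsizeof` values are CPython implementation details, so only relative comparisons are checked):

```python
import sys

# Under CPython's Flexible String Representation (PEP 393), an
# all-ASCII string is stored one byte per character.
s = "z" * 127504

# Strings are immutable: s + "z" builds a brand-new string, copying
# all 127504 existing characters plus the new one. The result is
# still one byte per character.
ascii_appended = s + "z"

# Appending a non-Latin-1 character such as "€" also copies the
# whole string, but the copy is stored at two bytes per character.
widened = s + "€"

# The widened copy is roughly twice the size in memory, but it is
# still just one copy -- the same O(n) cost every append to an
# immutable string already pays.
assert sys.getsizeof(widened) > sys.getsizeof(ascii_appended)

# A mutable buffer (here, simply a list of characters) is the sane
# way to build an editor buffer: append and delete are O(1), and a
# string is materialised only when actually needed.
buf = list(s)
buf.append("€")
buf[-1] = "z"  # "backspace and retype" without re-copying anything
assert "".join(buf) == s + "z"
```

The list-based buffer is only an illustration; a real editor would use a rope, gap buffer, or similar structure, but any of them avoids the copy-per-keystroke that the quoted scenario assumes.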
-- http://mail.python.org/mailman/listinfo/python-list