Hi Bruno,

On 4/7/24 7:23 AM, Bruno Haible wrote:
> I like the first one a little more. So I asked the prior knowledge
> summarization engine (ChatGPT):
I like that description, "prior knowledge summarization engine".

> Since you show me how to do benchmarks, and since ChatGPT warns about large
> lists, I ran the same benchmark with a list of length 100 or 1000. See
> attachment.

The 'timeit' module is a nice way to quickly experiment with things.
There are more detailed performance testing tools, but I am not familiar
with them [1].

> The result is mysterious: Appending to a list of length 100 or 1000 takes
> about the same time either way. In particular, the 'var += ["a"]' is *not*
> particularly slow when the list is long. It's impossible to clone a list of
> length 1000 and add an item to it, in 41 nanoseconds; that makes no sense.

I think that I see the issue here. I believe this is a case of the
"prior knowledge summarization engine" being incorrect. Given this
statement:

    a += [var]

it seems to think that a new copy of 'a' is being created. This is
incorrect. The given statement is equivalent to:

    operator.iadd(a, [var])

The Python documentation states that "For mutable targets such as lists
and dictionaries, the in-place method will perform the update, so no
subsequent assignment is necessary" [2].

It seems that 'a += [var]' would be the same as 'a.extend([var])'. I
think that the difference between 'a.extend([var])' and 'a.append(var)'
would better explain the differences in timing that we see with a large
number of repetitions. I've found the C source code for those functions
if you would like to look [3] [4].

[1] https://github.com/psf/pyperf
[2] https://docs.python.org/3/library/operator.html#in-place-operators
[3] https://github.com/python/cpython/blob/558b517bb571999ef36123635e0245d083a28305/Objects/listobject.c#L980
[4] https://github.com/python/cpython/blob/558b517bb571999ef36123635e0245d083a28305/Objects/listobject.c#L838

Collin
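
P.S. In case it helps, here is a small sketch of what I mean (my own
illustration, not taken from your attachment; the list sizes and
statements are just made up for the example). It checks that 'a += [var]'
keeps the same list object, unlike 'a + [var]', and then times the three
forms with 'timeit':

    import operator
    import timeit

    a = ["x"] * 1000
    before = id(a)

    a += ["y"]               # augmented assignment: extends 'a' in place
    operator.iadd(a, ["z"])  # the same in-place update, as described in [2]
    assert id(a) == before   # still the very same list object

    b = a + ["w"]            # binary '+', by contrast, builds a new list
    assert id(b) != id(a)

    # Rough timing of the three forms; the list grows as the statement is
    # repeated, similar to appending in a loop.
    setup = "a = ['x'] * 1000"
    for stmt in ("a.append('y')", "a += ['y']", "a.extend(['y'])"):
        t = timeit.timeit(stmt, setup=setup, number=1_000_000)
        print(f"{stmt:<16} {t:.3f} s")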