Steven D'Aprano <steve+pyt...@pearwood.info> added the comment:

Regarding performance, on my computer, the overhead of calling tuple() on a 
list comp ranges from about 30% for tiny sequences down to about 5% for largish 
sequences.

Tiny sequences are fast either way:

[steve@ando cpython]$ ./python -m timeit "[i for i in (1,2,3)]"
50000 loops, best of 5: 5.26 usec per loop

[steve@ando cpython]$ ./python -m timeit "tuple([i for i in (1,2,3)])"
50000 loops, best of 5: 6.95 usec per loop


and for large sequences the time is dominated by the comprehension, not the 
call to tuple:

[steve@ando cpython]$ ./python -m timeit "[i for i in range(1000000)]"
1 loop, best of 5: 1.04 sec per loop

[steve@ando cpython]$ ./python -m timeit "tuple([i for i in range(1000000)])"
1 loop, best of 5: 1.1 sec per loop

(As the size of the list increases, the proportion of the time spent in calling 
tuple() approaches zero.)

So it is true that there is an opportunity to optimize the creation of a tuple. 
But we should all be aware of the dangers of premature optimization, and of 
wasting our effort on optimizing something that doesn't matter.

Marco, can you demonstrate an actual real piece of code, not a made-up 
contrived example, where the overhead of calling tuple is a bottleneck, or even 
a significant slow-down?



In real code, I would expect the processing inside the comprehension to be 
significant, which would decrease the proportional cost of calling tuple even 
more.


[steve@ando cpython]$ ./python -m timeit "[i**3 + 7*i**2 - 45*i + 11 
    for i in range(500) if (i%7 in (2, 3, 5))]"
100 loops, best of 5: 3.02 msec per loop


[steve@ando cpython]$ ./python -m timeit "tuple(
    [i**3 + 7*i**2 - 45*i + 11 
    for i in range(500) if (i%7 in (2, 3, 5))])"
100 loops, best of 5: 3.03 msec per loop


Remember too that timings of Python code on real computers are subject to a 
significant amount of variability and noise from all the other processes 
running at the same time, from the OS down to other applications. In my tests, 
I found no fewer than five pairs of measurements where the call to tuple was 
faster than NOT calling tuple. That is of course impossible, but it 
demonstrates that the overhead of calling tuple is small enough to fall within 
the range of random timing variation.
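
If you want to see that noise directly, here is a rough sketch (untested; the 
repeat and number values are arbitrary) using timeit.repeat to show the spread 
between the fastest and slowest runs of the very same statement:

import timeit

setup = "r = range(500)"
listcomp = "[i**3 + 7*i**2 - 45*i + 11 for i in r if i % 7 in (2, 3, 5)]"
tupled = "tuple(" + listcomp + ")"

for label, stmt in (("list comp", listcomp), ("tuple(...)", tupled)):
    # time the statement 1000 times, and repeat that measurement 7 times
    times = timeit.repeat(stmt, setup=setup, repeat=7, number=1000)
    spread = (max(times) - min(times)) / min(times) * 100
    print("%-10s best=%.4fs worst=%.4fs run-to-run spread=%.0f%%"
          % (label, min(times), max(times), spread))

On a busy machine the run-to-run spread can easily exceed the few percent that 
the extra tuple() call costs.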

The bottom line is that while this would be a genuine micro-optimization, it is 
doubtful that it would make a significant difference to the performance of real 
programs, as opposed to contrived benchmarks.

(On the other hand, C is so fast overall because everything in C is 
micro-optimized.)

If adding tuple comprehensions were free of any other cost, I would say "Sure, 
why not? It can't hurt." But they are not free: inventing new, ugly, fragile 
syntax is a tax on the programmer's productivity and on newcomers' ability to 
learn the language. Readability matters.

So I am a very strong -1 vote on the proposed syntax, even if it would 
micro-optimize the creation of a tuple.

Marco, if you still want to argue for this, you will need to

(1) take it back to Python-Ideas, hopefully to get consensus on syntax or at 
least a couple of options to choose between;

(2) find a core developer willing to sponsor a PEP;

(3) write a PEP;

(4) and get the PEP accepted.


I'm closing this as Pending/Postponed. If you get a PEP accepted, you can 
re-open it.

----------
resolution:  -> postponed
status: open -> pending

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue39784>
_______________________________________