On Sun, 25 Feb 2018 18:33:47 -0800, Rick Johnson wrote:

> On Friday, February 23, 2018 at 10:41:45 AM UTC-6, Steven D'Aprano
> wrote:
[...]
>> There are dozens of languages that have made the design choice to
>> limit their default integers to 16- 32- or 64-bit fixed size, and
>> let the user worry about overflow. Bart, why does it upset you so
>> that Python made a different choice?
>
> A default "integer-diversity-and-inclusivity-doctrine" is all fine
> and dandy by me, (Hey, even integers need safe spaces), but i do wish
> we pythonistas had a method to turn off this (and other) cycle
> burning "features" -- you know -- in the 99.99999 percent of time
> that we don't need them.
Ah, you mean just like the way things were in Python 1.0 through 2.1?

Hands up anyone who has seen an integer OverflowError in the last 10
years? Anyone?

    [steve@ando ~]$ python1.5 -c "print 2**64"
    Traceback (innermost last):
      File "<string>", line 1, in ?
    OverflowError: integer pow()

I really miss having to either add, or delete, an "L" suffix from my
long ints, and having to catch OverflowError to get any work done, and
generally spending half my time worrying how my algorithms will behave
when integer operations overflow.

Good times. I miss those days. I also miss the days when everyone had
scabies.

As someone who wrote Python code when bignums were *not* the default,
I can tell you that:

- it was a real PITA for those who cared about their code working
  correctly and being bug-free;

- and the speed-up actually wasn't that meaningful.

As is so often the case with these things, using fixed-size ints looks
good in benchmark games, but what's fast in a toy benchmark and what's
fast in real code are not always the same.

-- 
Steve

-- 
https://mail.python.org/mailman/listinfo/python-list
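For contrast with the Python 1.5 traceback above, here is a minimal
sketch (assuming any current Python 3 interpreter) of the behaviour
being defended: ints promote to arbitrary precision transparently, and
anyone who wants C-style fixed-width wraparound has to opt in with a
mask:

    # Python 3: ints are arbitrary-precision by default, so 2**64 just
    # evaluates -- no OverflowError, no "L" suffix.
    big = 2 ** 64
    print(big)          # 18446744073709551615
    print(type(big))    # <class 'int'>

    # C-style uint64 wraparound is still available, but only if you ask
    # for it explicitly by masking to 64 bits:
    MASK64 = (1 << 64) - 1
    print((2 ** 64) & MASK64)   # 0 -- wraps the way a C uint64_t would

(The indented snippet is illustrative, not from the original post; the
name MASK64 is made up for the example.)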