On 05/06/2015 09:55 AM, Chris Angelico wrote:
On Wed, May 6, 2015 at 11:12 PM, Dave Angel <da...@davea.name> wrote:
I had guessed that the order of multiplication would make a big difference,
once the product started getting bigger than the machine word size.
The reason I thought that is that if you multiply starting at the top value
(and end by multiplying by 2), you're spending more of the time multiplying
big-ints.
That's why I made sure that both Cecil's and my implementations were
counting up, so that wouldn't be a distinction.
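
(A minimal sketch of the two orderings in question; fact_up and fact_down
are illustrative names, not the actual implementations being timed, and
20000 is only chosen to keep the demo quick:)

import timeit

def fact_up(n):
    # Count up: the partial product stays small for many iterations
    # (2 * 3 * 4 * ...), so the early multiplications are cheap.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def fact_down(n):
    # Count down: the partial product is huge almost immediately
    # (n * (n-1) * ...), so more of the loop multiplies a large big-int.
    result = 1
    for i in range(n, 1, -1):
        result *= i
    return result

for f in (fact_up, fact_down):
    t = timeit.timeit(lambda: f(20000), number=3)
    print(f.__name__, round(t, 3), 'seconds for 3 runs')
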
I'm still puzzled, as it seems your results imply that big-int*int is faster
than int*int where the product is also int.
Are you using Python 2 or Python 3 for your testing? In Py3, there's
no type difference, and barely any performance difference as you cross
the word-size boundary. (Bigger numbers are a bit slower to work with,
but not hugely.)
Both Cecil and I are using 3.x; I'm using 3.4 in particular. And I know
int covers both big-int and int32. That's why I called it big-int,
rather than long.
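
A quick way to see that, for anyone following along (there's a single int
type either way, even though the internal representation still grows with
the value):

import sys

small = 42          # fits in a machine word
big = 2 ** 100      # well past any machine word

print(type(small), type(big))    # both <class 'int'> in 3.x
print(type(small) is type(big))  # True
print(sys.getsizeof(small), sys.getsizeof(big))  # internal sizes still differ
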
I was, however, mistaken. It's not that threshold we're crossing
here, but another one, for MUCH larger numbers. Factorial of 100000 and
of 200000 have 456574 and 973351 digits, respectively. In binary, that
works out to about 190k bytes and 404k bytes, respectively.
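
Those figures are easy enough to sanity-check (using a bit-length estimate
for the digit count, since converting numbers this size to decimal strings
is slow):

import math

LOG10_2 = math.log10(2)

for n in (100000, 200000):
    f = math.factorial(n)
    bits = f.bit_length()
    # Digit count estimated from the bit length; exact to within one
    # digit, and much faster than len(str(f)) at this size.
    digits = int(bits * LOG10_2) + 1
    print(n, digits, 'digits, about', bits // 8, 'bytes')
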
I was seeing factorial of 200000 taking about 4.5 times as long as
factorial of 100000. All the other increments seemed fairly proportional.
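
For anyone who wants to reproduce the comparison, something along these
lines (a plain count-up loop standing in for the actual implementations;
exact times and the ratio will of course depend on the machine and build):

import timeit

def fact(n):
    # Plain count-up loop, as a stand-in for the code actually timed.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

t1 = timeit.timeit(lambda: fact(100000), number=1)
t2 = timeit.timeit(lambda: fact(200000), number=1)
print('100000!:', round(t1, 2), 'seconds')
print('200000!:', round(t2, 2), 'seconds')
print('ratio:  ', round(t2 / t1, 2))
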
I'll bet the difference is something like the memory allocator using a
different algorithm for blocks above 256k. Or the cache logic hitting a
threshold.
If it's caching, of course the threshold will differ wildly between
machine architectures.
If it's the memory allocator, that could easily vary between Python
versions as well.
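
A quick way to see which side of a size threshold like that the two results
land on (this only looks at the final objects, not every intermediate
product, but it's a start):

import math
import sys

for n in (100000, 200000):
    f = math.factorial(n)
    # getsizeof reports the size of the int object itself, digit array
    # included, i.e. what the allocator actually has to hand out.
    print(n, sys.getsizeof(f), 'bytes')
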
--
DaveA