On 26.11.14 02:46, Dave Angel wrote:
Unfortunately, for many values, the version of the function with >>1 is
slower. It's only when the argument is bigger than 10**40 that it's as
much as 1% faster. But it's true that for really large values, it's
quicker.
Note that this path is used only for
Stephen Tucker wrote:
> Another _possible_ performance improvement that is staring us in the face
> is that 2*b could be replaced with b<<1. Given that b+b (an earlier
> suggestion of mine) involves two table look-ups for b, whereas b<<1 only
> involves one, it seems that the scope here for improvement is significant.
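Dave's timing claim is easy to probe directly; this micro-benchmark sketch is mine, not a measurement from the thread, and times `n // 2` against `n >> 1` at several integer sizes:

```python
import timeit

# Micro-benchmark sketch (not the thread's measurements): time n // 2
# against n >> 1 for integers of increasing bit length.
for bits in (30, 1000, 100_000):
    n = (1 << bits) - 1
    t_div = timeit.timeit('n // 2', globals={'n': n}, number=50_000)
    t_shift = timeit.timeit('n >> 1', globals={'n': n}, number=50_000)
    print(f'{bits:7d} bits: //2 {t_div:.4f}s   >>1 {t_shift:.4f}s')
```

Both expressions compute the same value for non-negative integers, so any difference is pure interpreter overhead, which is why the gap only shows up for very large operands.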
Another _possible_ performance improvement that is staring us in the face
is that 2*b could be replaced with b<<1. Given that b+b (an earlier
suggestion of mine) involves two table look-ups for b, whereas b<<1 only
involves one, it seems that the scope here for improvement is significant.
By the w
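The three spellings Stephen compares agree for non-negative integers, and whether `b << 1` actually wins is platform-dependent; a quick sketch (mine, not Stephen's) to measure it:

```python
import timeit

b = (1 << 2048) - 1   # a large integer, like the values in the isqrt loop
# Compare the three equivalent ways of doubling b:
for expr in ('2 * b', 'b + b', 'b << 1'):
    t = timeit.timeit(expr, globals={'b': b}, number=100_000)
    print(f'{expr:8} {t:.4f}s')

# All three produce identical values:
assert 2 * b == b + b == b << 1
```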
On 11/25/2014 02:31 PM, Serhiy Storchaka wrote:
On Friday, 21-Nov-2014 at 08:15:57, you wrote:
This looks very good indeed. As a matter of interest, is there any
particular reason you have used 2*b instead of b+b? Might b+b be faster
than b*2?
Yes, it is slightly faster, but the effect is indiscernible in total time.
On Friday, 21-Nov-2014 at 08:15:57, you wrote:
This looks very good indeed. As a matter of interest, is there any
particular reason you have used 2*b instead of b+b? Might b+b be faster
than b*2?
Yes, it is slightly faster, but the effect is indiscernible in total
time. But there is no harm to
On 11/21/2014 03:15 AM, Stephen Tucker wrote:
On Thu, Nov 20, 2014 at 6:00 PM, Serhiy Storchaka
wrote:
On 01.11.14 03:29, Steven D'Aprano wrote:
There is an algorithm for calculating the integer square root of any
positive integer using only integer operations:
Here is my optimized implementation inspired by Christian's ideas.
Serhiy,
This looks very good indeed. As a matter of interest, is there any
particular reason you have used 2*b instead of b+b? Might b+b be faster
than b*2?
Also, in various lines, you use //2. Would >>1 be quicker? On reflection,
perhaps you have had to use //2 because >>1 cannot be used in thos
On 01.11.14 03:29, Steven D'Aprano wrote:
There is an algorithm for calculating the integer square root of any
positive integer using only integer operations:
Here is my optimized implementation inspired by Christian's ideas.
import math, sys
C1 = sys.float_info.radix ** sys.float_info.mant_dig
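For reference, on a typical IEEE-754 double platform (an assumption about the machine, not stated in the snippet) this constant works out to 2**53, the first power of two beyond which not every integer is exactly representable as a float:

```python
import sys

# On a typical IEEE-754 double platform: radix = 2, mant_dig = 53,
# so C1 == 2**53. Every non-negative integer below C1 is exact as a float,
# which is what makes the float shortcut safe for small n.
C1 = sys.float_info.radix ** sys.float_info.mant_dig
print(C1)
```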
Steven D'Aprano writes:
> There is an algorithm for calculating the integer square root of any
> positive integer using only integer operations:
>
> def isqrt(n):
> if n < 0: raise ValueError
> if n == 0:
> return 0
> bits = n.bit_length()
> a, b = divmod(bits, 2)
> x = 2**(a+b)
On Sat, Nov 1, 2014 at 7:38 PM, Christian Gollwitzer wrote:
> Am 01.11.14 09:33, schrieb Chris Angelico:
>>
>> On Sat, Nov 1, 2014 at 7:25 PM, Christian Gollwitzer
>> wrote:
Part of the point of that algorithm is that it never uses FP, and is
therefore not limited by FP restriction
Am 01.11.14 09:33, schrieb Chris Angelico:
On Sat, Nov 1, 2014 at 7:25 PM, Christian Gollwitzer wrote:
Part of the point of that algorithm is that it never uses FP, and is
therefore not limited by FP restrictions.
which are???
Most notably, the inability to represent every integer beyond 2**53,
On Sat, Nov 1, 2014 at 7:25 PM, Christian Gollwitzer wrote:
>> Part of the point of that algorithm is that it never uses FP, and is
>> therefore not limited by FP restrictions.
>
>
> which are???
Most notably, the inability to represent every integer beyond 2**53,
and the inability to represent a
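Chris's point is directly demonstrable: a double has a 53-bit significand, so 2**53 + 1 is the first positive integer that a float cannot hold exactly:

```python
# Doubles have a 53-bit significand, so 2**53 is exact but 2**53 + 1 is not:
assert float(2**53) == 2**53              # exact
assert float(2**53 + 1) == float(2**53)   # rounds back down -- info lost
assert int(float(2**53 + 1)) != 2**53 + 1
```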
Am 01.11.14 09:13, schrieb Chris Angelico:
On Sat, Nov 1, 2014 at 7:02 PM, Christian Gollwitzer wrote:
Your above algorithm is obviously doing Heron- or Newton-Raphson iterations,
so the same as with floating point math. The first line before the while
loop computes some approximation to sqrt(n). Instead of doing bit shuffling, you
On Sat, Nov 1, 2014 at 7:02 PM, Christian Gollwitzer wrote:
> Your above algorithm is obviously doing Heron- or Newton-Raphson iterations,
> so the same as with floating point math. The first line before the while
> loop computes some approximation to sqrt(n). Instead of doing bit shuffling,
> you
Addendum: If my method below works, you can also use it to speed up
computations for n > 2**1022, by splitting off an even power of two from
the integer and computing the FP sqrt of the mantissa for the seed, i.e.
doing the FP manually.
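Christian's splitting idea can be sketched as follows; the helper name and exact shift choice are mine, a sketch under the assumption of IEEE-754 doubles rather than code from the thread:

```python
import math

def isqrt_seed(n):
    # Hypothetical helper (not from the thread verbatim): split off an even
    # power of two so the remaining top bits fit a double exactly, take the
    # float sqrt of that part, then shift half the split back in. The result
    # is a close seed for an integer Newton iteration.
    if n < 2**52:
        return int(math.sqrt(n))          # n itself is exact as a float
    shift = (n.bit_length() - 52) // 2 * 2  # even shift, leaves ~52 bits
    return int(math.sqrt(n >> shift)) << (shift // 2)
```

The seed has a relative error around 2**-26, so only a couple of Newton steps are needed to reach the exact integer square root, even for n far beyond the float range if the shifting is done first.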
Am 01.11.14 09:02, schrieb Christian Gollwitzer:
Hi Steven,
Hi Steven,
let me start by answering from reverse:
> Q3: What is the largest value of n beyond which you can never use the float
> optimization?
>
A3: There is no such value, besides the upper limit of floats (DBL_MAX ~ 10^308).
P3: If you feed a perfect square into the floating point square
On Sat, Nov 1, 2014 at 12:29 PM, Steven D'Aprano
wrote:
> Q1: What is the largest value of M, such that
>
> all(isqrt(n) == int(math.sqrt(n)) for n in range(M))
>
> returns True?
>
> I have done a brute force test, and in nine hours confirmed that M is at
> least 7627926244.
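Such an M must exist, because the float shortcut eventually fails. A concrete counterexample (mine, using `math.isqrt`, added in Python 3.8, as the exact reference):

```python
import math

n = 2**54 - 1
# float(n) rounds up to 2**54 (54 significant bits don't fit in a double),
# so the float-based square root overshoots by one:
assert math.isqrt(n) == 2**27 - 1     # exact integer square root
assert int(math.sqrt(n)) == 2**27     # float shortcut is wrong here
```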
There is an algorithm for calculating the integer square root of any
positive integer using only integer operations:
def isqrt(n):
if n < 0: raise ValueError
if n == 0:
return 0
bits = n.bit_length()
a, b = divmod(bits, 2)
x = 2**(a+b)
while True:
y = (x + n
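The snippet cuts off mid-loop; the body is the standard Newton (Heron) iteration that Christian identifies later in the thread. A runnable completion, reconstructed on that assumption rather than copied from the original post:

```python
def isqrt(n):
    # Integer square root via Newton's method, integer arithmetic only.
    if n < 0:
        raise ValueError
    if n == 0:
        return 0
    bits = n.bit_length()
    a, b = divmod(bits, 2)
    x = 2 ** (a + b)           # initial guess: a power of two >= sqrt(n)
    while True:
        y = (x + n // x) // 2  # Newton step (the original breaks off here)
        if y >= x:             # sequence stopped decreasing: x is the floor
            return x
        x = y
```

Starting from a guess at or above the true root, the iterates decrease monotonically to floor(sqrt(n)), so the first non-decreasing step signals convergence.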