Hi,

it looks like BN_gcd() doesn't implement the 'full-strength'
Euclidean algorithm (subtract k.b from a in each loop, i.e. reduce
a mod b) but a simplification (subtract b from a in each loop).

So if the initial a and b differ by, say, a factor of 10000, you get
10000 iterations instead of 1, and many more iterations are needed
afterwards as well.
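
To illustrate what I mean, here is a minimal sketch of a gcd loop that
reduces with x mod y in each step (using BN_nnmod). This is not the
BN_gcd() code itself, just the 'full-strength' reduction step; error
handling is mostly omitted:

  #include <openssl/bn.h>

  /* sketch only, not the real BN_gcd(): reduce with x mod y per
     iteration instead of repeated subtraction */
  int gcd_by_mod(BIGNUM *r, const BIGNUM *a, const BIGNUM *b, BN_CTX *ctx)
  {
      BIGNUM *x = BN_dup(a), *y = BN_dup(b), *t = BN_new();
      int ok = 0;

      if (x == NULL || y == NULL || t == NULL)
          goto done;

      while (!BN_is_zero(y)) {
          if (!BN_nnmod(t, x, y, ctx))   /* t = x mod y: strips all
                                            multiples of y at once */
              goto done;
          BN_copy(x, y);                 /* (x, y) <- (y, x mod y) */
          BN_copy(y, t);
      }
      ok = (BN_copy(r, x) != NULL);

  done:
      BN_free(x);
      BN_free(y);
      BN_free(t);
      return ok;
  }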

It works more than fast enough for small numbers (1024 bits or so),
but for numbers of about 67,000,000 bits (*) it would take months or
years.

Q: can someone confirm this? Is there a faster (experimental)
algorithm? Would you be interested if I wrote one?

Cheers,
Stef

(*) I'm trying to recover an RSA public key: I know the public
exponent e = 2^16+1 and I can obtain signatures.
So I have to calculate the 1024-bit modulus by signing some inputs
a1, a2, ... and obtaining signatures s1, s2, ...
So we have:
  s1^e % m = a1   ->   s1^e - a1 = k1.m
  s2^e % m = a2   ->   s2^e - a2 = k2.m
  ...
and then calculate m = gcd(s1^e-a1, s2^e-a2, ...)
The calculations of si^e take a few hours, which is acceptable, but
they produce numbers of about 65 million bits.
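
For what it's worth, this is roughly what the calculation looks like
with the BN API. It's only a sketch: recover_modulus is just an
illustrative name, only two signatures are shown, and error handling
is minimal:

  #include <openssl/bn.h>

  int recover_modulus(BIGNUM *m,
                      const BIGNUM *s1, const BIGNUM *a1,
                      const BIGNUM *s2, const BIGNUM *a2,
                      BN_CTX *ctx)
  {
      int ok = 0;
      BIGNUM *e = BN_new(), *d1 = BN_new(), *d2 = BN_new();

      if (e == NULL || d1 == NULL || d2 == NULL)
          goto done;

      BN_set_word(e, 65537);                 /* e = 2^16 + 1 */

      /* d1 = s1^e - a1 = k1.m, d2 = s2^e - a2 = k2.m; BN_exp works
         over the plain integers (the modulus is unknown), which is
         where the huge intermediate numbers come from */
      if (!BN_exp(d1, s1, e, ctx) || !BN_sub(d1, d1, a1))
          goto done;
      if (!BN_exp(d2, s2, e, ctx) || !BN_sub(d2, d2, a2))
          goto done;

      /* m divides both differences; if gcd(k1, k2) == 1 the result
         is exactly the modulus */
      ok = BN_gcd(m, d1, d2, ctx);

  done:
      BN_free(e);
      BN_free(d1);
      BN_free(d2);
      return ok;
  }

If k1 and k2 happen to share a common factor, the gcd comes out as a
small multiple of m, so a third signature (or trial division by small
primes) may be needed.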