On 02/03/2011 16:39, Ben123 wrote:
...........
Languages can't support infinitely large or small numbers, so try multiplying
the inner variables by 10^n to increase their values, if this does not
interfere with the method. For example, I did this when I was calculating
geometric means of computer benchmarks.
Currently I have values between 1 and 1E-300 (not infinitely small). I
don't see how scaling by powers of 10 will increase precision.
That way you would be storing the number of zeros as n.
Are you saying Python cares whether I express a number as 0.001 or
scaled by 10^5 to read 100? If this is the case, I'm still stuck: I
need the full range of eigenvalues from 1 to 1E-300, so the entire
range could be scaled by 1E300, but I would still need better precision
than 1 part in 1E19.
.......
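On the scaling point: a double carries the same ~15-17 significant digits at
every magnitude, so multiplying the whole range by 1E300 only moves the
exponent around. A quick check in plain CPython (the numbers here are made up
for illustration):

tiny = 1.23456789012345678e-300   # more digits than a double can hold
big = tiny * 1e300                # the suggested rescaling
print(tiny)                       # ~1.2345678901234568e-300: already truncated
print(big)                        # ~1.2345678901234568: same digits, no precision gained

Rescaling only helps if an intermediate step would otherwise underflow or
overflow; it never buys extra digits.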
If you enter a number as 1e-19 then Python will treat it as a float; by default
that float is IEEE double precision, so you're getting a 53 bit mantissa. That
means you've already lost any idea of arbitrary precision.
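You can confirm the float details from the interpreter (stdlib only, typical
IEEE-754 platform):

import sys
print(sys.float_info.mant_dig)    # 53: bits in the double's significand
print(sys.float_info.dig)         # 15: decimal digits guaranteed to round-trip
print(1.0 + 1e-19 == 1.0)         # True: 1e-19 is absorbed entirely next to 1.0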
When you say you have numbers like 1E-300, are those actually numerically zero,
or do you have valid inputs that vary over a huge range? It should be possible
to compute determinants, inverses etc. to arbitrary precision, as those are
polynomial (or, for inverses, rational) functions of the input elements.
However, eigenvalues are not.
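To make that concrete, here's a sketch of an exact determinant over the
rationals using the stdlib fractions module. Cofactor expansion is O(n!), so
this is an illustration rather than something to run on a big matrix, and the
sample matrix is made up:

from fractions import Fraction

def det(m):
    # exact determinant by cofactor expansion along the first row
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

a = [[Fraction(1), Fraction(1, 10**300)],
     [Fraction(1, 3), Fraction(2)]]
print(det(a))   # exact rational, 2 - 1/(3*10**300): no loss at any scale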
e.g. the matrix

[0 2]
[1 0]

has eigenvalues +/- sqrt(2), so even though you can represent the matrix in
finite precision, the eigenvalues require infinite precision.
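You can still approximate such an eigenvalue to any precision you choose, e.g.
with the stdlib decimal module:

from decimal import Decimal, getcontext

getcontext().prec = 50           # ask for 50 significant digits
lam = Decimal(2).sqrt()          # the positive eigenvalue above
print(lam)                       # 1.4142135623730950488016887242096980785696718753769
print(lam * lam)                 # ~2, off only in the last working digits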
Eigenvalues are roots of a polynomial in the elements, and root solving may
require an infinite number of steps (for degree five and up there is no
closed-form solution at all), so it will be difficult to keep arbitrary
precision with arbitrary matrices.
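To illustrate the root-solving point: Newton's method on the characteristic
polynomial x**2 - 2 of the matrix above roughly doubles the number of correct
digits per step, so you stop when your chosen working precision is filled,
never at the exact root. A sketch with stdlib decimal:

from decimal import Decimal, getcontext

getcontext().prec = 60              # the precision we choose to stop at

x = Decimal(1)                      # crude starting guess for the positive root
for i in range(8):
    x = x - (x * x - 2) / (2 * x)   # Newton step for p(x) = x**2 - 2
    print(i, x)
# correct digits roughly double each step; by the last line x is sqrt(2)
# to all 60 working digits, but no finite number of steps gives it exactly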
--
Robin Becker