On 1 Feb 2011, at 21:37, Andy Wingo wrote:
> In (* inum flonum bigflonum), with what precision would the first
> multiplication be performed? Note that currently the compiler compiles
> it as (* (* inum flonum) bigflonum).
An idea that comes to my mind is to set a minimum float precision,
which might be IEEE 64-bit (double) or its equivalent. Then, for
multiprecision floats, I think a user-friendly solution would be to
make the output precision of an operation a function of the operation
and the precisions of its operands, based on an error analysis with
some margin, so as not to introduce too much round-off error. Experts
might want something else, but that would be less user-friendly.
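
To make this concrete, here is a rough sketch in Scheme of such a
precision rule for multiplication. The names (%min-float-precision,
mul-result-precision) and the two-bit guard margin are assumptions for
illustration only, not existing Guile API:

    ;; Minimum working precision: the 53 significand bits of an IEEE
    ;; 64-bit double (an assumed floor; any comparable one would do).
    (define %min-float-precision 53)

    ;; Hypothetical rule: the result precision of a multiplication is
    ;; the widest operand precision plus a small guard margin to absorb
    ;; round-off, and never below the minimum precision.
    (define (mul-result-precision . precs)
      (max %min-float-precision
           (+ (apply max precs) 2)))

    (mul-result-precision 53 53)   ;=> 55
    (mul-result-precision 53 256)  ;=> 258

Note that with the left-to-right folding Andy describes, the inner
(* inum flonum) would still be performed at the lower precision before
the bigflonum is ever seen, whatever rule picks the output precision.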