On Thu, 2012-03-15 at 03:07 +0100, Vincent Lefevre wrote:
> On 2012-03-14 14:40:06 +0000, Joseph S. Myers wrote:
> > On Wed, 14 Mar 2012, Vincent Lefevre wrote:
> >
> > > For double-double (IBM long double), I don't think the notion of
> > > correct rounding makes much sense anyway. Actually, double-double
> > > arithmetic is mainly useful for the basic operations, in order to
> > > be able to implement elementary functions accurately (the first
> > > step in Ziv's strategy, and possibly a second step as well). IMHO,
> > > on such a platform, if expl() (for instance) just calls exp(),
> > > this is OK.

Why would that be OK? If we have a higher-precision long double, then
the libm should deliver that higher precision.
> > expl just calling exp - losing 53 bits of precision - seems rather
> > extreme. But I'd think it would be fine to say: when asked to
> > compute f(x), take x' within 10 ulp of x, and return a number
> > within 10 ulp of f(x'), where ulp is interpreted as if the mantissa
> > were a fixed 106 bits (fewer bits for subnormals, of course). (And
> > as a consequence, accurate range reduction for large arguments
> > would be considered not to matter for IBM long double; sin and cos
> > could return any value in the range [-1, 1] for sufficiently large
> > arguments.)
>
> After thinking about this, you could assume that you have a 106-bit
> floating-point system (BTW, LDBL_MANT_DIG = 106) and use the same
> method to generate code that provides an accurate implementation
> (if the code generator doesn't assume an IEEE 754 compatible FP
> system). Concerning sin and cos, I think there should be a minimum
> of specification and some consistency (such as sin(x)² + cos(x)²
> being close to 1).

Actually, back in 2007 I overrode slowexp and slowpow for
powerpc/power4 (and later) to use expl and powl on the slow path of
exp/pow, instead of the mpa.h implementation. This provides a nice
performance improvement, but it does imply some rounding mode issues.