Oh dear, it sounds like everyone has an opinion, though not necessarily a
well-informed one.

Here are some facts/opinions.
1. If speed is an issue, you should use hardware floats.
2. On some (most?) processors, changing the rounding mode is more expensive
than doing arithmetic.  Sometimes MUCH more expensive.
3. You can simulate rounding toward negative infinity during addition by
changing signs and rounding toward positive infinity.  Etc.  (A small
sketch of this trick follows the list.)
4. "Ball" interval arithmetic, usually called midpoint-radius, has been
well explored in the literature.  Is it cruder or faster?  Eh.
5. There is a huge literature on interval arithmetic.  See "reliable
computation".  The likelihood that you would invent something new off the
top of your head is very close to zero.  The likelihood that you would
implement a perhaps slightly defective version of something well
documented is much higher.
6. The problem of evaluating a series or polynomial at an interval point is
not solved by using "balls".  There are ways; see SUE (Single Use
Expression) or completing the square for quadratics.  For higher degree
there are no neat ways, but there are Taylor series methods and
root-finding.  You can also find a bound on the accumulated roundoff error
in evaluating a polynomial p by evaluating a related polynomial, at a cost
of about a factor of 2: a modified Horner's rule (also sketched after the
list).

All in the literature.
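
Point 3 in code, purely as an illustration (the function names and the
exact-rational crutch inside add_round_up are mine, not anything from Sage
or MPFR; a real implementation would use the FPU rounding mode or MPFR's
directed rounding):

from fractions import Fraction
import math

def add_round_up(a: float, b: float) -> float:
    """Return a + b rounded toward +infinity (correct but slow stand-in)."""
    s = a + b                        # round-to-nearest result
    exact = Fraction(a) + Fraction(b)
    if Fraction(s) < exact:          # nearest rounding went down: bump up
        s = math.nextafter(s, math.inf)
    return s

def add_round_down(a: float, b: float) -> float:
    """Round toward -infinity using only round-up addition and sign flips.

    Negation is exact in binary floating point, so
    RD(a + b) == -RU((-a) + (-b)).
    """
    return -add_round_up(-a, -b)

lo, hi = add_round_down(0.1, 0.2), add_round_up(0.1, 0.2)
assert lo <= 0.1 + 0.2 <= hi and lo < hi   # a genuine enclosure of the sum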

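The last remark in point 6, sketched along the lines of the classical
running/a-priori bounds in the numerical-analysis literature (see e.g.
Higham, "Accuracy and Stability of Numerical Algorithms", ch. 5); the
function name and the example polynomial are mine:

import sys

def horner_with_error_bound(coeffs, x):
    """coeffs = [a_n, ..., a_1, a_0].  Return (p(x), roundoff error bound).

    Runs a second Horner recurrence on the "related polynomial"
    sum_i |a_i| |x|^i, so the cost is about a factor of 2.  The classical
    a priori bound is |error| <= gamma_{2n} * sum_i |a_i| |x|^i, with
    gamma_k = k*u/(1 - k*u) and u the unit roundoff.
    """
    u = sys.float_info.epsilon / 2   # unit roundoff for double precision
    n = len(coeffs) - 1              # degree of p
    y = coeffs[0]                    # Horner for p(x)
    m = abs(coeffs[0])               # Horner for sum |a_i| |x|^i
    ax = abs(x)
    for a in coeffs[1:]:
        y = y * x + a
        m = m * ax + abs(a)
    gamma = 2 * n * u / (1 - 2 * n * u)
    # For a fully rigorous bound, this last step should itself be rounded
    # upward (cf. point 3), or computed with intervals/MPFI.
    return y, gamma * m

# p(t) = (t - 1)^5, evaluated near its root where cancellation is severe.
val, err = horner_with_error_bound([1, -5, 10, -10, 5, -1], 1.0001)
print(val, "+/-", err)
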
7. If you want not-especially-fast but rigorous intervals for arbitrary
precision, MPFI does it.  I don't know if this is what is currently used.

8. I haven't looked at the particular citation (trac?) but maybe you could
do some error analysis of the expression(s) and determine a formula for the
error without intervals.

RJF


On Tuesday, October 29, 2013 7:41:52 AM UTC-7, Jeroen Demeyer wrote:
>
> On 2013-10-29 15:28, Jori Mantysalo wrote: 
> > On Tue, 29 Oct 2013, Vincent Delecroix wrote: 
> > 
> >> all rounding is implemented in the CPU (is that true? perhaps 
> >> changing the rounding often makes it slower). Do you have timings? 
> > 
> > Seems to be more complicated than just a slow-or-fast question. See 
> > http://www.intel.co.uk/content/dam/doc/manual/64-ia-32-architectures-optimization-manual.pdf
> > pages 3-98..3-100: 
> > 
> > "On the Pentium III processor, the FLDCW instruction is an expensive 
> > operation. On early generations of Pentium 4 processors, FLDCW is 
> > improved only for situations where an application alternates between two 
> > constant values of the x87 FPU control word (FCW), such as when 
> > performing conversions to integers. On Pentium M, Intel Core Solo, Intel 
> > Core Duo and Intel Core 2 Duo processors, FLDCW is improved over 
> > previous generations." 
> > 
> > This manual even contains an example of an algorithm that avoids 
> > changing the rounding mode. 
> > 
> > However, two rounding modes are enough for RIF. Or are they? 
> RIF (or RR for that matter) actually has nothing to do with the CPU 
> rounding mode. It "emulates" arbitrary precision floating-point 
> arithmetic independent of the processor. 
>
> Jeroen. 
>
>
