"Nick Maclaren" <[EMAIL PROTECTED]> wrote: [Tim Roberts] > |> Actually, this is a very well studied part of computer science called > |> "interval arithmetic". As you say, you do every computation twice, once to > |> compute the minimum, once to compute the maximum. When you're done, you > |> can be confident that the true answer lies within the interval.
>
> The problem with it is that it is an unrealistically pessimal model,
> and there are huge classes of algorithm that it can't handle at all;
> anything involving iterative convergence, for a start.  It has been
> around for yonks (I first dabbled with it 30+ years ago), and it has
> never reached viability for most real applications.  In 30 years, it
> has got almost nowhere.
>
> Don't confuse interval methods with interval arithmetic, because you
> don't need the latter for the former, despite the claims that you do.
>
> |> For people just getting into it, it can be shocking to realize just
> |> how wide the interval can become after some computations.
>
> Yes.  Even when you can prove (mathematically) that the bounds are
> actually quite tight :-)

This sounds like one of those pesky "but you should be able to do
better" kinds of things...

- Hendrik
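
For anyone curious what that widening looks like in practice, here is a
minimal sketch in plain Python (the Interval class is illustrative, not
any particular library, and it ignores the directed rounding a real
implementation needs):

class Interval:
    """A closed interval [lo, hi] with outward-bounding arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtract the other interval's *upper* bound from our lower
        # bound (and vice versa) so the result brackets every case.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # With signs unknown, any corner product can be the extreme.
        corners = (self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi)
        return Interval(min(corners), max(corners))

    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)

x = Interval(0.9, 1.1)
print(x - x)          # [-0.2, 0.2], not [0, 0]
y = x
for _ in range(5):    # repeated squaring: y ends up as x**32
    y = y * y
print(y)              # roughly [0.0343, 21.1]

The x - x result is the classic "dependency problem": the arithmetic
forgets that both operands are the same number, which is one reason the
bounds balloon under iteration even when, as Nick says, the true answer
is provably tight.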