By the way, I really like the underlying theme that modern floating-point arithmetic has properties as rigorous and well defined as those of integer arithmetic, and these properties can be relied on.
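Just to make that concrete: one of the simplest such theorems says that, with IEEE-754 round-to-nearest addition (and no overflow), the rounding error of a sum is itself representable and can be computed exactly. Here's a minimal Racket sketch of Knuth's TwoSum (the name two-sum is mine):

    #lang racket

    ;; Knuth's TwoSum, an "error-free transformation": for IEEE-754
    ;; doubles a and b (round-to-nearest, no overflow), it returns
    ;; s and err such that a + b = s + err *exactly*, where s is the
    ;; correctly rounded sum.  This works only because each + and -
    ;; below is itself guaranteed to be correctly rounded --
    ;; precisely the kind of property the theorems pin down.
    (define (two-sum a b)
      (let* ([s  (+ a b)]
             [b* (- s a)]
             [a* (- s b*)]
             [db (- b b*)]
             [da (- a a*)])
        (values s (+ da db))))

    ;; The error of 1.0 + 1e-16 (which rounds to 1.0) is recovered exactly:
    (two-sum 1.0 1e-16)  ; => (values 1.0 1e-16)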
Over the years I've had several people working on multi-precision arithmetic tell me that they just don't "trust" floating-point arithmetic, whether or not there are theorems that describe its behavior precisely. And in 2003 the great Arnold Schoenhage replied to an email of mine with:

> By the way, you may be interested in a very nice paper by Colin
> Percival, which has the following review in Math Reviews: .....
> For modern processors where much effort has been placed to make
> floating-point arithmetic very fast (often faster than integer
> arithmetic), this paper might tip the speed balance to
> floating-point-based FFT algorithms.
>
> The idea to use floating-point arithmetic because of its actual
> speed due to extra silicon efforts by the processor manufacturers
> is like recommending to a sportsman to `run' faster by driving a
> car. --- Seriously speaking, it is somewhat questionable to
> develop our algorithms under the biases of existing hardware;
> rather the hardware should be designed according to basic and
> clean algorithmic principles! --- Imagine how fast our
> multi-precision routines would be if some company would be willing
> to spend that much silicon for a TP32 in hardware!

So not using floating-point arithmetic was also a cultural issue for him!

TP32 is a virtual machine that Schoenhage designed to program multi-precision arithmetic algorithms, much as Knuth designed MIX to implement his algorithms. (They're roughly at the same level, too; it's like programming in assembler.) The difference is that TP32 has a relatively fast interpreter.

Brad