On Mon, 2005-05-30 at 23:10 -0400, Robert Dewar wrote:
> Toon Moene wrote:
>
> >> But even this were fixed, many users would still complain.
> >> That's why I think that the Linux kernel should set the CPU
> >> in double-precision mode, like some other OS's (MS Windows,
> >> *BSD) -- but this is off-topic here.
> >
> > It's not off-topic. In fact, Jim Wilson argued this point here:
> >
> > http://gcc.gnu.org/ml/gcc/2003-08/msg01282.html
>
> There are good arguments on either side of this issue. If you set
> double precision mode, then you get more predictable precision
> (though range is still unpredictable), at the expense of not being
> able to make use of extended precision (there are many algorithms
> which can take very effective advantage of extended precision (e.g.
> you can use log/exp to compute x**y if you have extended precision
> but not otherwise).
Such algorithms usually require very detailed control of what is going on
at the machine level; with current high-level programming languages, that
means using assembler.  Also, I don't remember the details, but I believe
user code can change the default when needed, so knowledgeable users
should still be able to do what is necessary (set and restore the state;
see the sketch at the end), albeit perhaps with some loss of performance.
I also assume it is nearly impossible to get FP algorithms (e.g. ones
relying on FP equality) working with the current (broken) compilers that
operate in extended precision, whereas it is much easier when the FPU
mode is set to round to 64-bit doubles.

> Given that there are good arguments on both sides for what the
> default should be, I see no good argument for changing the
> default, which will cause even more confusion, since programs
> that work now will suddenly stop working.

Or many programs that currently work on many other OSes will start to
work the same way under Linux, instead of giving strange (and possibly
wrong) results.

Laurent
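
P.S. To illustrate the "set and restore the state" part above, here is a
rough sketch of what user code can do on Linux/x86 with glibc's
<fpu_control.h>.  The helper names are mine, and this only touches the
x87 control word (SSE's MXCSR has no precision-control field):

#include <fpu_control.h>

/* Switch the x87 FPU to 53-bit (double) precision and return the
   previous control word so the caller can restore it later.  */
static fpu_control_t
fpu_set_double_precision (void)
{
  fpu_control_t old_cw, new_cw;

  _FPU_GETCW (old_cw);
  /* _FPU_EXTENDED masks both precision-control bits;
     _FPU_DOUBLE selects the 53-bit significand.  */
  new_cw = (old_cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
  _FPU_SETCW (new_cw);
  return old_cw;
}

/* Restore a control word previously saved by the function above.  */
static void
fpu_restore (fpu_control_t cw)
{
  _FPU_SETCW (cw);
}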