[Steven D'Aprano]
> (All previous quoting ruthlessly snipped.)

And ruthlessly appreciated ;-)
> A question for Tim Peters, as I guess he'll have the most experience in
> this sort of thing.
>
> With all the cross-platform hassles due to the various C compilers not
> implementing the IEEE standard completely or correctly, I wonder how much
> work would be involved for some kind soul to implement their own maths
> library to do the lot, allowing Python to bypass the C libraries
> altogether.
>
> Are you falling over laughing Tim, or thinking what a great idea?

Neither, really. Doing basic + - * / in SW is way too slow to sell to
programmers who take floats seriously in their work.

For libm, K-C Ng at Sun was writing fdlibm at the same time Peter Tang & I
were writing a "spirit of 754" libm for Kendall Square Research (early
90's). KSR is long gone, and the code was proprietary anyway; fdlibm lives
on, with an MIT-like ("do whatever you want") license, although it doesn't
appear to have enjoyed maintenance work for years now:

    http://www.netlib.org/fdlibm/

fdlibm is excellent (albeit largely inscrutable to non-specialists) work.
I believe that, at some point, glibc replaced its math functions with
fdlibm's, and went on to improve them. Taking advantage of those
improvements may (or may not) raise licensing issues Python can't live
with.

There are at least two other potential issues with using it:

1. Speed again. The libm I wrote for KSR was as accurate and relentlessly
   754-conforming as fdlibm, but approximately 10x faster. There's an
   enormous amount of optimization you can do if you can exploit every
   quirk of the HW you're working on -- and the code I wrote was entirely
   unportable, non-standard C, which couldn't possibly run on any HW other
   than KSR's custom FPU. C compiler vendors at least used to spend a lot
   of money similarly crafting libraries that exploited quirks of the HW
   they were targeting, and lots of platforms still have relatively fast
   libms as a result. fdlibm aims to run on "almost any" 32-bit 754 box,
   and pays for that in comparative runtime sloth. Since fdlibm was
   written at Sun over a decade ago, you can guess that it wasn't primarily
   aiming at the Pentium architecture.

2. Compatibility with the platform libm. Some users will be unhappy unless
   the stuff they get from Python is quirk-for-quirk and bug-for-bug
   identical to the stuff they get from other languages on their platform.
   There's really no way to do that unless Python uses the same libm. For
   example, many people have no real idea what they're doing with libm
   functions, and value reproducibility over anything else -- "different
   outcomes means one of them must be in error" is the deepest analysis
   they can make, or maybe just have time to make. Alas, for many uses of
   libm, that's a defensible (albeit appalling <0.6 wink>) attitude (e.g.,
   someone slings sin() and cos() to plot a circle in a GUI -- when
   snapping pixels to the closest grid point, the tiniest possible rounding
   difference can make a pixel "jump" to a neighboring pixel, and then
   "it's a bug" if Python doesn't reproduce the same pixel-plotting
   accidents as, e.g., the platform C or JavaScript).
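   To make that concrete, here's a tiny constructed sketch (the numbers are
   hand-picked for illustration, not real sin()/cos() output) of how two
   results differing only in the last bit or two can snap to different
   pixels:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Pretend two libms disagree by an ulp or two on a scaled cos()
           result, landing on opposite sides of a half-pixel boundary. */
        double ours   = nextafter(70.5, 0.0);     /* just below 70.5 */
        double theirs = nextafter(70.5, 1000.0);  /* just above 70.5 */

        /* Snap each candidate x coordinate to the nearest pixel column. */
        printf("pixel %ld vs pixel %ld\n", lround(ours), lround(theirs));
        /* prints "pixel 70 vs pixel 71" */
        return 0;
    }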
> What sort of work is needed? Is it, say, as big a job as maintaining
> Python? Bigger? One weekend spent working solidly?

Half a full-time year if done from scratch by a numeric programming expert.
Maybe a few weeks if building on fdlibm, which probably needs patches to
deal with modern compilers (e.g., http://www.netlib.org/fdlibm/readme has
no date on it, but lists this as "NOT FIXED YET":

    3. Compiler failure on non-standard code
       Statements like
            *(1+(int*)&t1) = 0;
       are not standard C and cause some optimizing compilers (e.g. GCC)
       to generate bad code under optimization. These cases are to be
       addressed in the next release.

).
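For what it's worth, a minimal sketch of the kind of patch that readme item
is asking for, assuming the intent at a given site is "store into half of a
double's representation" (the helper name and the "clear the low 32 bits"
operation are made up for illustration; this isn't fdlibm source): go
through memcpy (or a union) instead of casting pointers, which sidesteps
the aliasing problem that trips up optimizing compilers:

    /* Portable replacement for the pointer-cast idiom: copy the double's
       bits into an integer, twiddle them, and copy them back. Modern
       compilers reduce the memcpy calls to plain register moves. */
    #include <stdint.h>
    #include <string.h>

    static double clear_low_word(double t1)   /* hypothetical helper */
    {
        uint64_t bits;
        memcpy(&bits, &t1, sizeof bits);    /* read the representation */
        bits &= 0xFFFFFFFF00000000ULL;      /* zero the low 32 mantissa bits */
        memcpy(&t1, &bits, sizeof t1);      /* write it back */
        return t1;
    }

Whether that matches what fdlibm actually intends at each such site would
have to be checked case by case.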