Hi all, I recently checked the string-to-number (double-precision float) conversion capabilities of several spreadsheet programs and programming languages, using a 1M sample and extended-precision packages for reference. Among all the spreadsheet programs tested, Gnumeric (Linux version 1.10.17 on Ubuntu 10.04) gave the most correct results: it failed on only 46 numbers (for comparison: Excel 2010/2002 directly, several thousand errors; VBA7 in XL2010 x64, 443 errors; VBA6 in XL2002 x32, 323 errors).
A closer look revealed that all of Gnumeric's conversion errors were caused by not rounding up the least significant bit when doing so would have led to a closer double. Inspecting an 80-bit mantissa representation showed that in all 46 cases the 54th bit was set, followed by at least 8 zero bits, and then by at least one further set bit lower down (so the exact value lay strictly above the halfway point between the two candidate doubles). This leads me to two questions:

1.) Are there any theoretical objections against rounding up when the 54th bit is set?

2.) Where do I find the source code of Gnumeric's built-in "value" function, in particular the part that actually converts a string to a float? The "latest source" link on the home page points to "http://hammurabi.acc.umu.se/pub/GNOME/sources/gnumeric/1.10/gnumeric-1.10.17.tar.gz", but that yields a 404 error. On git.gnome.org I found value.c, parser.y, etc., but I still have not found what I'm looking for: the source code of the conversion routine from string to float. Can somebody please tell me where it is, or whether an internal C function is used for that purpose or ...?

Thank you
Schorsch

_______________________________________________
gnumeric-list mailing list
[email protected]
http://mail.gnome.org/mailman/listinfo/gnumeric-list
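For what it's worth, the rounding behavior described above can be checked independently of any spreadsheet. The sketch below is a minimal illustration (Python, not Gnumeric code; "0.1" is just a convenient input whose binary expansion has a 1 past the 53rd mantissa bit followed by further set bits, not one of the 46 failing samples). It compares the correctly rounded double against the value a truncating converter would produce, using exact rational arithmetic to measure which is closer:

```python
import math
from fractions import Fraction

s = "0.1"
exact = Fraction(1, 10)      # exact rational value of the decimal string

rounded = float(s)           # correctly rounded (round-to-nearest) conversion
# For this input the correctly rounded double lies above the exact value,
# so the neighboring double one ulp toward zero is what truncation would give.
truncated = math.nextafter(rounded, 0.0)

# Distances to the exact value, computed without any floating-point error
err_rounded = abs(Fraction(rounded) - exact)
err_truncated = abs(Fraction(truncated) - exact)

print(err_rounded < err_truncated)  # rounding up yields the closer double
```

A converter that simply drops everything past the 53rd mantissa bit would return `truncated` here, which is measurably farther from the exact decimal value; this matches the failure pattern described above, where the bits beyond the cutoff start with a 1 and are not all zero afterwards.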
