In working with small numbers, I usually look for a solution involving a log transformation. For instance, when a or b is very small, we can compute the ratio a / (a + b) more safely as 1 / (1 + exp(log(b) - log(a))); a small sketch of this follows below.

Alternative data types are sure to cause issues with portability, and it's not worth the headache if you can find an analytical solution like the one above. Some supposedly standard data types (including long double) are simply not standardized with regard to their size in memory. This short Wikipedia article ought to scare anyone who cares about portability: <http://en.wikipedia.org/wiki/Long_double>.

If you are doing specialty computations involving very large or very small numbers (e.g. computing Bell numbers), you might consider the CRAN package gmp, which wraps the GNU multiple precision arithmetic library; a sketch of that follows below as well. Or consider using a language like Ruby, where arbitrary-precision arithmetic is automatic: <http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic>.
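First, a small R sketch of the log-ratio identity above. The particular values are made up for illustration, standing in for quantities (log-likelihoods, say) that are only available on the log scale:

    ## a and b are far too small to represent as doubles:
    log_a <- -1000
    log_b <- -1002
    exp(log_a) / (exp(log_a) + exp(log_b))   # NaN: both terms underflow to 0
    1 / (1 + exp(log_b - log_a))             # 0.8807971, computed safely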
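Second, a sketch of the gmp approach, computing Bell numbers exactly via the Bell triangle. The bell() function is only my illustration, not part of gmp; the package supplies as.bigz() and exact arithmetic on bigz values:

    library(gmp)                       # CRAN package wrapping GNU MP

    ## Bell triangle: each row begins with the last entry of the
    ## previous row; B(n) is the first entry of row n + 1.
    bell <- function(n) {
      row <- list(as.bigz(1))          # row 1 of the triangle: [1]
      for (i in seq_len(n)) {
        new <- vector("list", i + 1)
        new[[1]] <- row[[i]]           # last entry of the previous row
        for (j in seq_len(i))
          new[[j + 1]] <- new[[j]] + row[[j]]   # exact bigz addition
        row <- new
      }
      row[[1]]
    }

    bell(10)   # 115975, the known value of B(10)
    bell(50)   # exact; a plain double would round away most of the digits

No intermediate result ever overflows or rounds; everything stays an exact integer.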
-Matt

On Thu, 2010-10-07 at 14:28 -0400, Richard D. Morey wrote:
> I'm using .Call() to call C code from R under Windows (on an Intel
> Core 2 duo). The C code involves some very small numbers, and I think
> I'm losing precision using doubles. I thought I might use long doubles
> to see if I can get that precision back. I have a few questions:
>
> 1. Does this affect the portability to other OSs or processors?
> 2. I'm returning the results in a matrix. Will a matrix of REALs be
>    sufficient for holding long doubles, or will it be cast back to
>    doubles?
> 3. Will calls to FORTRAN BLAS (like dsymv, dpotrf, dpotri) still work
>    with long doubles?
>
> Thanks for any help you can provide.

--
Matthew S. Shotwell
Graduate Student
Division of Biostatistics and Epidemiology
Medical University of South Carolina