We (CodeSourcery) are currently working on developing ColdFire-targeted GNU toolchains (gcc, etc).
Currently gcc nominally uses a 12-byte "extended" precision type for the C "long double" floating point type. This is inherited from the m68k gcc port, but doesn't really make much sense for ColdFire. It's also broken: the ColdFire FPU only has 64-bit registers, and the current gcc soft-float routines are just wrappers around the 64-bit "double" routines.

So we're proposing changing long double to be something more sensible. There are two options:

1) Make long double == double. This is what ARM does, amongst others. This pretty much just works and should reduce the amount of support code required. Anyone wanting more than IEEE double precision has to use a third-party bignum/MP/quad library, of which there are several, but with no standard ABI.

2) Choose a sensible format for long double. The obvious candidate is a 128-bit PPC/MIPS-style almost-quad precision type implemented with a pair of 64-bit doubles. This provides a higher-precision type for those that want it, at the expense of additional complexity and support code for those that don't.

This email is an RFC to try and gauge which of the two options is most useful to the ColdFire community, i.e. are there significant users that would benefit from (2)? Also, if there's anyone who really wants to keep the existing long double, we'd like to hear from you, and why you think it should be kept.

Paul