https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98384

--- Comment #11 from Patrick Palka <ppalka at gcc dot gnu.org> ---
(In reply to Jakub Jelinek from comment #9)
> I think glibc %a printing uses 0x1.xxxx (for normalized values) at least for
> float/double and the IEEE quad long doubles, but uses 0xf.xxxx printing etc.
> for the 80-bit long doubles.  My personal preference would be to always use
> 0x1.xxxx for normalized numbers and for denormals 0x0.xxxx, I think it is
> less surprising to users, and transforming one form to another is pretty
> easy.

Yeah, currently to_chars hex output mimics glibc's choice of leading hex digit.
But always using 0/1, even for the 80-bit long double, sounds good to me too.

FWIW, I think the shortest hex form of a given number is at most 3 characters
shorter than any other conforming hex form, e.g. 1.2p+12 vs 9p+9.

> And agree on the tests just trying to parse the returned string back to see
> if it is the original value.

I posted a patch at
https://gcc.gnu.org/pipermail/gcc-patches/2021-February/565726.html that does
this, but also salvages the verification via printf by first checking if the
leading hex digit of the printf output agrees with that of to_chars. 
Conveniently, the patch sidesteps the question of choosing a consistent
representation vs shortest representation :)
