FAIL: gcc.dg/tree-ssa/builtin-sprintf.c execution test
FAIL: test_a_double:364: "%a" expected result for "0x0.0000000000000p+0"
doesn't match function call return value: 20 != 6
FAIL: test_a_double:365: "%a" expected result for "0x1.0000000000000p+0"
doesn't match function call return value: 20 != 6
FAIL: test_a_double:366: "%a" expected result for "0x1.0000000000000p+1"
doesn't match function call return value: 20 != 6
FAIL: test_a_long_double:375: "%La" expected result for
"0x0.0000000000000000000000000000p+0" doesn't match function call return
value: 35 != 6
FAIL: test_a_long_double:376: "%La" expected result for
"0x1.0000000000000000000000000000p+0" doesn't match function call return
value: 35 != 6
FAIL: test_a_long_double:377: "%La" expected result for
"0x1.0000000000000000000000000000p+1" doesn't match function call return
value: 35 != 6
I don't know about these. It looks like the Solaris printf doesn't
handle the %a directive correctly and the tests (and the related
checks/optimization) might need to be disabled, which in turn might
involve extending the existing printf hook or adding a new one.
I've found the following in Solaris 10 (and up) printf(3C):
a, A A double argument representing a floating-point
number is converted in the style "[-]0xh.hhhhp+d",
where the single hexadecimal digit preceding the
radix point is 0 if the value converted is zero and
1 otherwise and the number of hexadecimal digits
after it is equal to the precision; if the precision
is missing, the number of digits printed after the
radix point is 13 for the conversion of a double
value, 16 for the conversion of a long double value
on x86, and 28 for the conversion of a long double
value on SPARC; if the precision is zero and the '#'
flag is not specified, no decimal-point character
will appear. The letters "abcdef" are used for a
conversion and the letters "ABCDEF" for A conver-
sion. The A conversion specifier produces a number
with 'X' and 'P' instead of 'x' and 'p'. The
exponent will always contain at least one digit, and
only as many more digits as necessary to represent
the decimal exponent of 2. If the value is zero, the
exponent is zero.
The converted value is rounded to fit the specified
output format according to the prevailing floating
point rounding direction mode. If the conversion is
not exact, an inexact exception is raised.
A double argument representing an infinity or NaN is
converted in the SUSv3 style of an e or E conversion
specifier.
I tried to check the relevant sections of the latest C99 and C11 drafts
to see whether this handling of a missing precision is allowed by the
standard, but I couldn't even fully parse the language there.
I don't have access to Solaris to fully debug and test this there.
Would you mind helping with it?
Not at all: if it turns out that Solaris has bugs in this area, I can
easily file them, too.
I think it's actually a defect in the C standard. It doesn't
specify how many hexadecimal digits an implementation must produce
on output for a plain %a directive (i.e., when the precision isn't
explicitly specified). With Glibc, for instance, printf("%a",
1.0) prints 0x8p-3 while on Solaris it prints 0x8.000000p-3.
Both seem reasonable but neither is actually specified. In
theory, an implementation is allowed to print any number of zeros
after the radix point, which the standard should (IMO) not
permit. There should be a cap (e.g., at most 6 hexadecimal
digits when the precision is not specified with %a, just like
there is with %e). I'll propose a change to the standard and
forward it to the C committee. Until then, I've worked
around it in the patch for pr77735 (under review). If you
have a moment and could try it out on Solaris and let me
know how it goes, I'd be grateful.
Thanks
Martin