On Thu, Sep 22, 2022 at 06:49:10PM +0200, Aldy Hernandez wrote:
> It has been suggested that if we start bumping numbers by one ULP when
> calculating open ranges (for example the numbers less than 3.0) that
> dumping these will become increasingly harder to read, and instead we
> should opt for the hex representation.  I still find the floating
> point representation easier to read for most numbers, but perhaps we
> could have both?
> 
> With this patch this is the representation for [15.0, 20.0]:
> 
>      [frange] float [1.5e+1 (0x0.fp+4), 2.0e+1 (0x0.ap+5)]
> 
> Would you find this useful, or should we stick to the hex
> representation only (or something altogether different)?

I think dumping both is the way to go, but real_to_hexadecimal doesn't
do anything useful with decimal floats, so that part should be
guarded on !DECIMAL_FLOAT_TYPE_P (type).

Why do you build a tree + dump_generic_node for decimal instead of
calling real_to_decimal_for_mode directly?
The former I think calls:
            char string[100];
            real_to_decimal (string, &d, sizeof (string), 0, 1);
so perhaps:
  char s[100];
  real_to_decimal_for_mode (s, &r, sizeof (s), 0, 1, TYPE_MODE (type));
  pp_string (pp, s);
  if (!DECIMAL_FLOAT_TYPE_P (type))
    {
      real_to_hexadecimal (s, &r, sizeof (s), 0, 1);
      pp_printf (pp, " (%s)", s);
    }
?

        Jakub
