On Wed, Dec 21, 2022 at 09:40:24PM +0000, Joseph Myers wrote:
> On Wed, 21 Dec 2022, Segher Boessenkool wrote:
> 
> > > --- a/gcc/tree.cc
> > > +++ b/gcc/tree.cc
> > > @@ -9442,15 +9442,6 @@ build_common_tree_nodes (bool signed_char)
> > >        if (!targetm.floatn_mode (n, extended).exists (&mode))
> > >   continue;
> > >        int precision = GET_MODE_PRECISION (mode);
> > > -      /* Work around the rs6000 KFmode having precision 113 not
> > > -  128.  */
> > 
> > It has precision 126 now fwiw.
> > 
> > Joseph: what do you think about this patch?  Is the workaround it
> > removes still useful in any way, do we need to do that some other way if
> > we remove this?
> 
> I think it's best for the TYPE_PRECISION, for any type with the binary128 
> format, to be 128 (not 126).

Agreed.

> It's necessary that _Float128, _Float64x and long double all have the same 
> TYPE_PRECISION when they have the same (binary128) format, or at least 
> that TYPE_PRECISION for _Float128 >= that for long double >= that for 
> _Float64x, so that the rules in c_common_type apply properly.
> 
> How the TYPE_PRECISION compares to that of __ibm128, or of long double 
> when that's double-double, is less important.

I guess it can affect the common type for {long double (when binary128),
_Float128, _Float64x, __float128, __ieee128} vs. {long double (when
ibm128), __ibm128}, especially in C (for C++ only when the non-standard
types __float128, __ieee128 or __ibm128 are involved).
But I think unless we error (e.g. in C++ when we see unordered floating
point types), preferring binary128 is better: it certainly has a much
bigger exponent range than __ibm128, and most of the time also more
precision (__ibm128 wastes some bits on the second exponent).

        Jakub
