Hi Jakub,

Thanks for the comments!
on 2022/12/14 17:36, Jakub Jelinek wrote:
> On Wed, Dec 14, 2022 at 04:46:07PM +0800, Kewen.Lin via Gcc-patches wrote:
>> on 2022/12/6 19:27, Kewen.Lin via Gcc-patches wrote:
>>> Hi Mike,
>>>
>>> Thanks for fixing this, some comments are inlined below.
>>>
>>> on 2022/11/2 10:42, Michael Meissner wrote:
>>>> This patch fixes the issue that GCC cannot build when the default
>>>> long double is IEEE 128-bit.  It fails in building libgcc,
>>>> specifically when it is trying to build the __mulkc3 function in
>>>> libgcc.  It is failing in gimple-range-fold.cc during the evrp pass.
>>>> Ultimately it is failing because the code declared the type to use
>>>> TFmode but it used F128 functions (i.e. KFmode).
>>
>> By looking into this further, I found that although __float128 and
>> _Float128 are two different types, they have the same mode TFmode; the
>> unexpected thing is that these two types have different precisions.  I
>> noticed it's due to the "workaround" in build_common_tree_nodes:
>>
>>   /* Work around the rs6000 KFmode having precision 113 not 128.  */
>>   const struct real_format *fmt = REAL_MODE_FORMAT (mode);
>>   gcc_assert (fmt->b == 2 && fmt->emin + fmt->emax == 3);
>>   int min_precision = fmt->p + ceil_log2 (fmt->emax - fmt->emin);
>>   if (!extended)
>>     gcc_assert (min_precision == n);
>>   if (precision < min_precision)
>>     precision = min_precision;
>>
>> Since function useless_type_conversion_p considers two float types
>> compatible if they have the same mode, it doesn't require explicit
>> conversions between these two types.  I think that's exactly what we
>> want.  And to me, it looks unexpected to have two types with the same
>> mode but different precisions.
>>
>> So could we consider disabling the above workaround to make _Float128
>> have the same precision as __float128 (long double) (the underlying
>> TFmode)?  I tried the below change:
>
> The hacks with different precisions of powerpc 128-bit floating types are
> very unfortunate, it is I assume because the middle-end asserted that
> scalar floating point types with different modes have different precision.
> We no longer assert that, as BFmode and HFmode (__bf16 and _Float16) have
> the same 16-bit precision as well and e.g. C++ FE knows to treat standard
> vs. extended floating point types vs. other unknown floating point types
> differently in finding result type of binary operations or in which type
> comparisons will be done.

That's good news.  For now those three long double modes on Power have
different precisions; if they can have the same precision, I'd expect the
ICE to be gone.

> That said, we'd need some target hooks to
> preserve the existing behavior with __float128/__ieee128 vs. __ibm128
> vs. _Float128 with both -mabi=ibmlongdouble and -mabi=ieeelongdouble.
>
> I bet the above workaround in generic code was added for a reason, it
> would surprise me if _Float128 worked at all without that hack.

OK, I'll have a look at those nan failures soon.

> Shouldn't float128_type_node be adjusted instead the same way?

Not sure; the regression testing only exposed nan-related failures.

BR,
Kewen