Thanks all for the information.
Based on the information so far, my understanding is that we cannot revert
r12-979-g782e57f2c09,
since it enables YMM and ZMM registers to be used for by_pieces
operations on x86.
Let me know if I am missing anything here.

FYI. 

This issue was found during my work to backport all the
-ftrivial-auto-var-init patches so far from
GCC12 to GCC11.

The following small test case (_Complex long double):

_Complex long double result;

_Complex long double foo()
{
  _Complex long double temp3;

  result = temp3;
  return result;
}

fails with -ftrivial-auto-var-init=pattern on GCC11, at line
3087 below, at the call to “build_nonstandard_integer_type”
in “expand_DEFERRED_INIT”:

3076       if (TREE_CODE (TREE_TYPE (lhs)) != BOOLEAN_TYPE
3077           && tree_fits_uhwi_p (var_size)
3078           && (init_type == AUTO_INIT_PATTERN
3079               || !is_gimple_reg_type (var_type))
3080           && int_mode_for_size (tree_to_uhwi (var_size) * BITS_PER_UNIT,
3081                                 0).exists ())
3082         {
3083           unsigned HOST_WIDE_INT total_bytes = tree_to_uhwi (var_size);
3084           unsigned char *buf = (unsigned char *) xmalloc (total_bytes);
3085           memset (buf, (init_type == AUTO_INIT_PATTERN
3086                         ? INIT_PATTERN_VALUE : 0), total_bytes);
3087           tree itype = build_nonstandard_integer_type
3088                          (total_bytes * BITS_PER_UNIT, 1);

The exact failure point is the assertion in the function
“set_min_and_max_values_for_integral_type”:

2851   gcc_assert (precision <= WIDE_INT_MAX_PRECISION);

For _Complex long double, “precision” is 256.
In GCC11, “WIDE_INT_MAX_PRECISION” is 192; in GCC12, it is 512.
As a result, the assertion above fails on GCC11.

I am wondering what the best fix for this issue in GCC11 would be?

Qing


> On Nov 5, 2021, at 5:01 AM, Richard Biener via Gcc-patches 
> <gcc-patches@gcc.gnu.org> wrote:
> 
> On Fri, Nov 5, 2021 at 7:54 AM Jakub Jelinek via Gcc-patches
> <gcc-patches@gcc.gnu.org> wrote:
>> 
>> On Thu, Nov 04, 2021 at 11:05:35PM -0700, Andrew Pinski via Gcc-patches 
>> wrote:
>>>> I noticed that the macro “WIDE_INT_MAX_ELTS” has different values in GCC11 
>>>> and GCC12 (on the same x86 machine).
>>>> 
>>>> For gcc11:
>>>> 
>>>> wide int max elts =3
>>>> 
>>>> For gcc12:
>>>> 
>>>> wide int max elts =9
>>>> 
>>>> Does anyone know what’s the reason for this difference?
>>>> 
>>>> Thanks a lot for any help.
>>> 
>>> Yes originally, the x86 backend only used OI and XI modes for vectors
>>> during data movement.
>>> This changed with r10-5741-gc57b4c22089 which added the use of OI mode
>>> for TImode adding with overflow and then MAX_BITSIZE_MODE_ANY_INT
>>> changed from 128 to 160 (in r10-6178-gc124b345e46078) to fix the ICE
>>> introduced by that change.
>>> And then with r12-979-g782e57f2c09 removed the define of
>>> MAX_BITSIZE_MODE_ANY_INT.
>>> What was not mentioned in r12-979-g782e57f2c09 (or before) is why
>>> MAX_BITSIZE_MODE_ANY_INT was defined in the first place for x86. HJL
>>> assumed there was some problem with how it was defined, not realizing
>>> that memory usage was the reason.
>>> It was defined to keep the memory usage down as you see that it is now
>>> almost a 3x memory increase for all wi::wide_int.
>>> I do think r12-979-g782e57f2c09 should be reverted with an added
>>> comment on saying defining MAX_BITSIZE_MODE_ANY_INT here is to
>>> decrease the memory footprint.
>> 
>> I completely agree.
> 
> Do we have permanent objects embedding wide[st]_int?  I know of
> class loop and loop_bound.  Btw, there are other targets with large
> integer modes (aarch64 with XImode) that do not define
> MAX_BITSIZE_MODE_ANY_INT.
> 
> Richard.
> 
>>        Jakub
