On Fri, Aug 04, 2023 at 01:25:07PM +0000, Richard Biener wrote:
> > @@ -144,6 +144,9 @@ DEFTREECODE (BOOLEAN_TYPE, "boolean_type
> >     and TYPE_PRECISION (number of bits used by this type).  */
> >  DEFTREECODE (INTEGER_TYPE, "integer_type", tcc_type, 0)

Thanks.

> > +/* Bit-precise integer type.  */
> > +DEFTREECODE (BITINT_TYPE, "bitint_type", tcc_type, 0)
> > +
> 
> So what was the main reason to not make BITINT_TYPE equal to INTEGER_TYPE?

The fact that they do or can have different calling conventions from normal
integers; they e.g. don't promote to int, so IFN_VA_ARG handling is
affected (it is lowered only during the stdarg pass after IPA), and the
calling conventions are target dependent (with a single finalized target it
is premature to hardcode how all the others will behave).  And even on
x86_64, where up to 128-bit _BitInt is passed/returned mostly the same as
the corresponding integer types, _BitInt(128) has alignof like long long,
while __int128 has twice as large alignment.
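Illustrative only (matching the x86_64 case described above):

  _Static_assert (_Alignof (_BitInt(128)) == _Alignof (long long), "");
  _Static_assert (_Alignof (__int128) == 2 * _Alignof (long long), "");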

So, the above was the main reason to make BITINT_TYPE <-> non-BITINT_TYPE
conversions non-useless such that calls have the right type of arguments.

I'll try to adjust the comments and mention it in generic.texi.

> Maybe note that in the comment as
> 
> "While bit-precise integer types share the same properties as
> INTEGER_TYPE ..."
> 
> ?
> 
> Note INTEGER_TYPE is documented in generic.texi but unless I missed
> it the changelog above doesn't mention documentation for BITINT_TYPE
> added there.

> > +  if (bitint_type_cache == NULL)
> > +    vec_safe_grow_cleared (bitint_type_cache, 2 * MAX_INT_CACHED_PREC + 2);
> > +
> > +  if (precision <= MAX_INT_CACHED_PREC)
> > +    {
> > +      itype = (*bitint_type_cache)[precision + unsignedp];
> > +      if (itype)
> > +   return itype;
> 
> I think we added this kind of cache for standard INTEGER_TYPE because
> the middle-end builds those all over the place and going through
> the type_hash is expensive.  Is that true for _BitInt as well?  If
> not it doesn't seem worth the extra caching.

As even the very large _BitInts are used in the pre-IPA passes, IPA passes
and a few post-IPA passes similarly to other integral types, I think the
caching is very useful.  But if you want, I could gather some statistics
on those.  Most importantly, (almost) no price is paid if one doesn't use
those types in the source.

> In fact, I wonder whether the middle-end does/should treat
> _BitInt<N> and an INTEGER_TYPE with precision N any different?

See above.

> Aka, should we build an INTEGER_TYPE whenever N is say less than
> the number of bits in word_mode?
> 
> > +      if (TREE_CODE (pval) == INTEGER_CST
> > +     && TREE_CODE (TREE_TYPE (pval)) == BITINT_TYPE)
> > +   {
> > +     unsigned int prec = TYPE_PRECISION (TREE_TYPE (pval));
> > +     struct bitint_info info;
> > +     gcc_assert (targetm.c.bitint_type_info (prec, &info));
> > +     scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> > +     unsigned int limb_prec = GET_MODE_PRECISION (limb_mode);
> > +     if (prec > limb_prec)
> > +       {
> > +         scalar_int_mode arith_mode
> > +           = (targetm.scalar_mode_supported_p (TImode)
> > +              ? TImode : DImode);
> > +         if (prec > GET_MODE_PRECISION (arith_mode))
> > +           pval = tree_output_constant_def (pval);
> > +       }
> 
> A comment would be helpful to understand what we are doing here.

Ok, will add that.  Note, this particular spot is an area for future
improvement; I've spent half a day on it but then gave up for now.
In the lowering pass I'm trying to optimize the common case where a lot
of constants don't need all the limbs and can be represented as one limb
or several limbs in memory, with all the higher limbs then filled with 0s
or -1s.  For the argument passing, it would even be useful to have smaller
_BitInt constants passed without having them in memory at all, just
pushing a couple of constants (i.e. the store_by_pieces way).  But trying
to do that in emit_push_insn wasn't really easy...
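For instance (illustrative only), for constants like

  unsigned _BitInt(256) a = 42uwb;
  _BitInt(256) b = -5wb;

a single explicit limb is enough; all the higher limbs of a are 0s and all
the higher limbs of b are -1s, so the full limb array doesn't have to be
emitted for them.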

> > --- gcc/config/i386/i386.cc.jj      2023-07-19 10:01:17.380467993 +0200
> > +++ gcc/config/i386/i386.cc 2023-07-27 15:03:24.230234508 +0200
> > @@ -2121,7 +2121,8 @@ classify_argument (machine_mode mode, co
> >     return 0;
> >      }
> 
> splitting out target support to a separate patch might be helpful

Ok.

> > --- gcc/doc/tm.texi.jj      2023-05-30 17:52:34.474857301 +0200
> > +++ gcc/doc/tm.texi 2023-07-27 15:03:24.284233753 +0200
> > @@ -1020,6 +1020,11 @@ Return a value, with the same meaning as
> >  @code{FLT_EVAL_METHOD} that describes which excess precision should be
> >  applied.
> >  
> > +@deftypefn {Target Hook} bool TARGET_C_BITINT_TYPE_INFO (int @var{n}, struct bitint_info *@var{info})
> > +This target hook returns true if _BitInt(N) is supported and provides some
> > +details on it.
> > +@end deftypefn
> > +
> 
> document the "details" here please?

Will do.

> > @@ -20523,6 +20546,22 @@ rtl_for_decl_init (tree init, tree type)
> >         return NULL;
> >       }
> >  
> > +      /* RTL can't deal with BLKmode INTEGER_CSTs.  */
> > +      if (TREE_CODE (init) == INTEGER_CST
> > +     && TREE_CODE (TREE_TYPE (init)) == BITINT_TYPE
> > +     && TYPE_MODE (TREE_TYPE (init)) == BLKmode)
> > +   {
> > +     if (tree_fits_shwi_p (init))
> > +       {
> > +         bool uns = TYPE_UNSIGNED (TREE_TYPE (init));
> > +         tree type
> > +           = build_nonstandard_integer_type (HOST_BITS_PER_WIDE_INT, uns);
> > +         init = fold_convert (type, init);
> > +       }
> > +     else
> > +       return NULL;
> > +   }
> > +
> 
> it feels like we should avoid the above and fix expand_expr instead.
> The assert immediately following seems to "support" a NULL_RTX return
> value so the above trick should work there, too, and we can possibly
> avoid creating a new INTEGER_TYPE and INTEGER_CST?  Another option
> would be to simply use immed_wide_int_const or simply
> build a VOIDmode CONST_INT directly here?

Not really sure in this case.  I guess I could instead deal with BLKmode
BITINT_TYPE INTEGER_CSTs in expand_expr* and emit those into memory, but
I think dwarf2out would be upset if a constant was forced into memory;
it really wants some DWARF constant.
Sure, I could create a CONST_INT directly.  What to do for larger ones
is, I'm afraid, an area for future DWARF improvements.

> > --- gcc/expr.cc.jj  2023-07-02 12:07:08.455164393 +0200
> > +++ gcc/expr.cc     2023-07-27 15:03:24.253234187 +0200
> > @@ -10828,6 +10828,8 @@ expand_expr_real_1 (tree exp, rtx target
> >        ssa_name = exp;
> >        decl_rtl = get_rtx_for_ssa_name (ssa_name);
> >        exp = SSA_NAME_VAR (ssa_name);
> > +      if (!exp || VAR_P (exp))
> > +   reduce_bit_field = false;
> 
> That needs an explanation.  Can we do this and related changes
> as prerequisite instead?

I can add a comment, but those 2 lines are an optimization for the other
hunks in the same function.  The intent is to do the zero/sign extensions
of _BitInt objects with precision smaller than the mode precision (note,
this is about the small/middle ones which aren't, or aren't much, lowered
in the lowering pass) when reading them from memory or function arguments
(or RESULT_DECLs?), because the ABI says those bits are undefined there,
but not to do that for temporaries (SSA_NAMEs other than the
parameters/RESULT_DECLs), because RTL expansion has already done those
extensions when storing them into the pseudos.
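A minimal sketch of the intended behavior (hypothetical example, assuming
a psABI where the padding bits of _BitInt(37) are undefined in memory and
in argument registers):

  int
  f (_BitInt(37) a)
  {
    /* Reading the parameter needs an explicit sign extension, because its
       padding bits are undefined.  */
    _BitInt(37) t = a + 1wb;
    /* The temporary doesn't; RTL expansion already extended it when
       storing it into its pseudo.  */
    return (int) t;
  }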

> >        goto expand_decl_rtl;
> >  
> >      case VAR_DECL:
> > @@ -10961,6 +10963,13 @@ expand_expr_real_1 (tree exp, rtx target
> >         temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
> >                                           MEM_ALIGN (temp), NULL_RTX, NULL);
> >  
> > +     if (TREE_CODE (type) == BITINT_TYPE
> > +         && reduce_bit_field
> > +         && mode != BLKmode
> > +         && modifier != EXPAND_MEMORY
> > +         && modifier != EXPAND_WRITE
> > +         && modifier != EXPAND_CONST_ADDRESS)
> > +       return reduce_to_bit_field_precision (temp, NULL_RTX, type);
> 
> I wonder how much work it would be to "lower" 'reduce_bit_field' earlier
> on GIMPLE...

I know that the expr.cc hacks aren't nice, but I'm afraid it would be a lot
of work and a lot of code.  And I'm not really sure how to make sure further
GIMPLE passes wouldn't optimize that away.
> 
> > @@ -11192,6 +11215,13 @@ expand_expr_real_1 (tree exp, rtx target
> >         && align < GET_MODE_ALIGNMENT (mode))
> >       temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
> >                                         align, NULL_RTX, NULL);
> > +   if (TREE_CODE (type) == BITINT_TYPE
> > +       && reduce_bit_field
> > +       && mode != BLKmode
> > +       && modifier != EXPAND_WRITE
> > +       && modifier != EXPAND_MEMORY
> > +       && modifier != EXPAND_CONST_ADDRESS)
> > +     return reduce_to_bit_field_precision (temp, NULL_RTX, type);
> 
> so this is quite repetitive, I suppose the checks ensure we apply
> it to rvalues only, but I don't really get why we only reduce
> BITINT_TYPE, esp. as we are not considering BLKmode here?

There could be a macro for that or something to avoid the repetition.
The reason to do it for BITINT_TYPE only is that for everything else
RTL unfortunately handles this completely differently.  There is separate
code for reading from bit-fields (which does those extensions), but for
anything else RTL assumes that sub-mode integers are always extended to the
corresponding mode.  Say, for the case where non-mode integers leak into
code (C long long/__int128 bit-fields larger than 32 bits) and where say
FRE/SRA optimizes them into SSA_NAMEs, everything assumes that when such a
value is spilled to memory it is already extended, and re-extends it after
every binary/unary operation.
Unfortunately, the x86-64 psABI (and the plans in other psABIs) says the
padding bits are undefined, and so for ABI compatibility we can't rely
on those bits.  Now, for the large/huge ones where lowering occurs I believe
this shouldn't be a problem; those are VCEd to full limbs and then
explicitly extended from the smaller number of bits on reads.
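A sketch of the contrast (illustrative only):

  /* RTL assumes the value of the 37-bit bit-field below is kept extended
     when spilled to memory and re-extends it after every operation...  */
  struct S { long long f : 37; };

  /* ...while the padding bits of a _BitInt(37) object are undefined per
     the x86-64 psABI, so the same can't be assumed for it.  */
  _BitInt(37) v;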

> > @@ -11253,18 +11283,21 @@ expand_expr_real_1 (tree exp, rtx target
> >     set_mem_addr_space (temp, as);
> >     if (TREE_THIS_VOLATILE (exp))
> >       MEM_VOLATILE_P (temp) = 1;
> > -   if (modifier != EXPAND_WRITE
> > -       && modifier != EXPAND_MEMORY
> > -       && !inner_reference_p
> > +   if (modifier == EXPAND_WRITE || modifier == EXPAND_MEMORY)
> > +     return temp;
> > +   if (!inner_reference_p
> >         && mode != BLKmode
> >         && align < GET_MODE_ALIGNMENT (mode))
> >       temp = expand_misaligned_mem_ref (temp, mode, unsignedp, align,
> >                                         modifier == EXPAND_STACK_PARM
> >                                         ? NULL_RTX : target, alt_rtl);
> > -   if (reverse
> > -       && modifier != EXPAND_MEMORY
> > -       && modifier != EXPAND_WRITE)
> > +   if (reverse)
> 
> the above two look like a useful prerequisite, OK to push separately.

Ok, will do.

> > +enum bitint_prec_kind {
> > +  bitint_prec_small,
> > +  bitint_prec_middle,
> > +  bitint_prec_large,
> > +  bitint_prec_huge
> > +};
> > +
> > +/* Caches to speed up bitint_precision_kind.  */
> > +
> > +static int small_max_prec, mid_min_prec, large_min_prec, huge_min_prec;
> > +static int limb_prec;
> 
> I would appreciate the lowering pass to be in a separate patch in
> case we need to iterate on it.

I guess that is possible; as long as the C + testcases patches go last,
nothing will really create those types.
> 
> > +/* Categorize _BitInt(PREC) as small, middle, large or huge.  */
> > +
> > +static bitint_prec_kind
> > +bitint_precision_kind (int prec)
> > +{
> > +  if (prec <= small_max_prec)
> > +    return bitint_prec_small;
> > +  if (huge_min_prec && prec >= huge_min_prec)
> > +    return bitint_prec_huge;
> > +  if (large_min_prec && prec >= large_min_prec)
> > +    return bitint_prec_large;
> > +  if (mid_min_prec && prec >= mid_min_prec)
> > +    return bitint_prec_middle;
> > +
> > +  struct bitint_info info;
> > +  gcc_assert (targetm.c.bitint_type_info (prec, &info));
> > +  scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
> > +  if (prec <= GET_MODE_PRECISION (limb_mode))
> > +    {
> > +      small_max_prec = prec;
> > +      return bitint_prec_small;
> > +    }
> > +  scalar_int_mode arith_mode = (targetm.scalar_mode_supported_p (TImode)
> > +                           ? TImode : DImode);
> > +  if (!large_min_prec
> > +      && GET_MODE_PRECISION (arith_mode) > GET_MODE_PRECISION (limb_mode))
> > +    large_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
> > +  if (!limb_prec)
> > +    limb_prec = GET_MODE_PRECISION (limb_mode);
> > +  if (!huge_min_prec)
> > +    {
> > +      if (4 * limb_prec >= GET_MODE_PRECISION (arith_mode))
> > +   huge_min_prec = 4 * limb_prec;
> > +      else
> > +   huge_min_prec = GET_MODE_PRECISION (arith_mode) + 1;
> > +    }
> > +  if (prec <= GET_MODE_PRECISION (arith_mode))
> > +    {
> > +      if (!mid_min_prec || prec < mid_min_prec)
> > +   mid_min_prec = prec;
> > +      return bitint_prec_middle;
> > +    }
> > +  if (large_min_prec && prec <= large_min_prec)
> > +    return bitint_prec_large;
> > +  return bitint_prec_huge;
> > +}
> > +
> > +/* Same for a TYPE.  */
> > +
> > +static bitint_prec_kind
> > +bitint_precision_kind (tree type)
> > +{
> > +  return bitint_precision_kind (TYPE_PRECISION (type));
> > +}
> > +
> > +/* Return minimum precision needed to describe INTEGER_CST
> > +   CST.  All bits above that precision up to precision of
> > +   TREE_TYPE (CST) are cleared if EXT is set to 0, or set
> > +   if EXT is set to -1.  */
> > +
> > +static unsigned
> > +bitint_min_cst_precision (tree cst, int &ext)
> > +{
> > +  ext = tree_int_cst_sgn (cst) < 0 ? -1 : 0;
> > +  wide_int w = wi::to_wide (cst);
> > +  unsigned min_prec = wi::min_precision (w, TYPE_SIGN (TREE_TYPE (cst)));
> > +  /* For signed values, we don't need to count the sign bit,
> > +     we'll use constant 0 or -1 for the upper bits.  */
> > +  if (!TYPE_UNSIGNED (TREE_TYPE (cst)))
> > +    --min_prec;
> > +  else
> > +    {
> > +      /* For unsigned values, also try signed min_precision
> > +    in case the constant has lots of most significant bits set.  */
> > +      unsigned min_prec2 = wi::min_precision (w, SIGNED) - 1;
> > +      if (min_prec2 < min_prec)
> > +   {
> > +     ext = -1;
> > +     return min_prec2;
> > +   }
> > +    }
> > +  return min_prec;
> > +}
> > +
> > +namespace {
> > +
> > +/* If OP is middle _BitInt, cast it to corresponding INTEGER_TYPE
> > +   cached in TYPE and return it.  */
> > +
> > +tree
> > +maybe_cast_middle_bitint (gimple_stmt_iterator *gsi, tree op, tree &type)
> > +{
> > +  if (op == NULL_TREE
> > +      || TREE_CODE (TREE_TYPE (op)) != BITINT_TYPE
> > +      || bitint_precision_kind (TREE_TYPE (op)) != bitint_prec_middle)
> > +    return op;
> > +
> > +  int prec = TYPE_PRECISION (TREE_TYPE (op));
> > +  int uns = TYPE_UNSIGNED (TREE_TYPE (op));
> > +  if (type == NULL_TREE
> > +      || TYPE_PRECISION (type) != prec
> > +      || TYPE_UNSIGNED (type) != uns)
> > +    type = build_nonstandard_integer_type (prec, uns);
> > +
> > +  if (TREE_CODE (op) != SSA_NAME)
> > +    {
> > +      tree nop = fold_convert (type, op);
> > +      if (is_gimple_val (nop))
> > +   return nop;
> > +    }
> > +
> > +  tree nop = make_ssa_name (type);
> > +  gimple *g = gimple_build_assign (nop, NOP_EXPR, op);
> > +  gsi_insert_before (gsi, g, GSI_SAME_STMT);
> > +  return nop;
> > +}
> > +
> > +/* Return true if STMT can be handled in a loop from least to most
> > +   significant limb together with its dependencies.  */
> > +
> > +bool
> > +mergeable_op (gimple *stmt)
> > +{
> > +  if (!is_gimple_assign (stmt))
> > +    return false;
> > +  switch (gimple_assign_rhs_code (stmt))
> > +    {
> > +    case PLUS_EXPR:
> > +    case MINUS_EXPR:
> > +    case NEGATE_EXPR:
> > +    case BIT_AND_EXPR:
> > +    case BIT_IOR_EXPR:
> > +    case BIT_XOR_EXPR:
> > +    case BIT_NOT_EXPR:
> > +    case SSA_NAME:
> > +    case INTEGER_CST:
> > +      return true;
> > +    case LSHIFT_EXPR:
> > +      {
> > +   tree cnt = gimple_assign_rhs2 (stmt);
> > +   if (tree_fits_uhwi_p (cnt)
> > +       && tree_to_uhwi (cnt) < (unsigned HOST_WIDE_INT) limb_prec)
> > +     return true;
> > +      }
> > +      break;
> > +    CASE_CONVERT:
> > +    case VIEW_CONVERT_EXPR:
> > +      {
> > +   tree lhs_type = TREE_TYPE (gimple_assign_lhs (stmt));
> > +   tree rhs_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
> > +   if (TREE_CODE (gimple_assign_rhs1 (stmt)) == SSA_NAME
> > +       && TREE_CODE (lhs_type) == BITINT_TYPE
> > +       && TREE_CODE (rhs_type) == BITINT_TYPE
> > +       && bitint_precision_kind (lhs_type) >= bitint_prec_large
> > +       && bitint_precision_kind (rhs_type) >= bitint_prec_large
> > +       && tree_int_cst_equal (TYPE_SIZE (lhs_type), TYPE_SIZE (rhs_type)))
> > +     {
> > +       if (TYPE_PRECISION (rhs_type) >= TYPE_PRECISION (lhs_type))
> > +         return true;
> > +       if ((unsigned) TYPE_PRECISION (lhs_type) % (2 * limb_prec) != 0)
> > +         return true;
> > +       if (bitint_precision_kind (lhs_type) == bitint_prec_large)
> > +         return true;
> > +     }
> > +   break;
> > +      }
> > +    default:
> > +      break;
> > +    }
> > +  return false;
> > +}
> > +
> > +/* Return non-zero if stmt is .{ADD,SUB,MUL}_OVERFLOW call with
> > +   _Complex large/huge _BitInt lhs which has at most two immediate uses,
> > +   at most one use in REALPART_EXPR stmt in the same bb and exactly one
> > +   IMAGPART_EXPR use in the same bb with a single use which casts it to
> > +   non-BITINT_TYPE integral type.  If there is a REALPART_EXPR use,
> > +   return 2.  Such cases (most common uses of those builtins) can be
> > +   optimized by marking their lhs and lhs of IMAGPART_EXPR and maybe lhs
> > +   of REALPART_EXPR as not needed to be backed up by a stack variable.
> > +   For .UBSAN_CHECK_{ADD,SUB,MUL} return 3.  */
> > +
> > +int
> > +optimizable_arith_overflow (gimple *stmt)
> > +{
> > +  bool is_ubsan = false;
> > +  if (!is_gimple_call (stmt) || !gimple_call_internal_p (stmt))
> > +    return false;
> > +  switch (gimple_call_internal_fn (stmt))
> > +    {
> > +    case IFN_ADD_OVERFLOW:
> > +    case IFN_SUB_OVERFLOW:
> > +    case IFN_MUL_OVERFLOW:
> > +      break;
> > +    case IFN_UBSAN_CHECK_ADD:
> > +    case IFN_UBSAN_CHECK_SUB:
> > +    case IFN_UBSAN_CHECK_MUL:
> > +      is_ubsan = true;
> > +      break;
> > +    default:
> > +      return 0;
> > +    }
> > +  tree lhs = gimple_call_lhs (stmt);
> > +  if (!lhs)
> > +    return 0;
> > +  if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs))
> > +    return 0;
> > +  tree type = is_ubsan ? TREE_TYPE (lhs) : TREE_TYPE (TREE_TYPE (lhs));
> > +  if (TREE_CODE (type) != BITINT_TYPE
> > +      || bitint_precision_kind (type) < bitint_prec_large)
> > +    return 0;
> > +
> > +  if (is_ubsan)
> > +    {
> > +      use_operand_p use_p;
> > +      gimple *use_stmt;
> > +      if (!single_imm_use (lhs, &use_p, &use_stmt)
> > +     || gimple_bb (use_stmt) != gimple_bb (stmt)
> > +     || !gimple_store_p (use_stmt)
> > +     || !is_gimple_assign (use_stmt)
> > +     || gimple_has_volatile_ops (use_stmt)
> > +     || stmt_ends_bb_p (use_stmt))
> > +   return 0;
> > +      return 3;
> > +    }
> > +
> > +  imm_use_iterator ui;
> > +  use_operand_p use_p;
> > +  int seen = 0;
> > +  FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
> > +    {
> > +      gimple *g = USE_STMT (use_p);
> > +      if (is_gimple_debug (g))
> > +   continue;
> > +      if (!is_gimple_assign (g) || gimple_bb (g) != gimple_bb (stmt))
> > +   return 0;
> > +      if (gimple_assign_rhs_code (g) == REALPART_EXPR)
> > +   {
> > +     if ((seen & 1) != 0)
> > +       return 0;
> > +     seen |= 1;
> > +   }
> > +      else if (gimple_assign_rhs_code (g) == IMAGPART_EXPR)
> > +   {
> > +     if ((seen & 2) != 0)
> > +       return 0;
> > +     seen |= 2;
> > +
> > +     use_operand_p use2_p;
> > +     gimple *use_stmt;
> > +     tree lhs2 = gimple_assign_lhs (g);
> > +     if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (lhs2))
> > +       return 0;
> > +     if (!single_imm_use (lhs2, &use2_p, &use_stmt)
> > +         || gimple_bb (use_stmt) != gimple_bb (stmt)
> > +         || !gimple_assign_cast_p (use_stmt))
> > +       return 0;
> > +
> > +     lhs2 = gimple_assign_lhs (use_stmt);
> > +     if (!INTEGRAL_TYPE_P (TREE_TYPE (lhs2))
> > +         || TREE_CODE (TREE_TYPE (lhs2)) == BITINT_TYPE)
> > +       return 0;
> > +   }
> > +      else
> > +   return 0;
> > +    }
> > +  if ((seen & 2) == 0)
> > +    return 0;
> > +  return seen == 3 ? 2 : 1;
> > +}
> > +
> > +/* If STMT is some kind of comparison (GIMPLE_COND, comparison
> > +   assignment or COND_EXPR) comparing large/huge _BitInt types,
> > +   return the comparison code and if non-NULL fill in the comparison
> > +   operands to *POP1 and *POP2.  */
> > +
> > +tree_code
> > +comparison_op (gimple *stmt, tree *pop1, tree *pop2)
> > +{
> > +  tree op1 = NULL_TREE, op2 = NULL_TREE;
> > +  tree_code code = ERROR_MARK;
> > +  if (gimple_code (stmt) == GIMPLE_COND)
> > +    {
> > +      code = gimple_cond_code (stmt);
> > +      op1 = gimple_cond_lhs (stmt);
> > +      op2 = gimple_cond_rhs (stmt);
> > +    }
> > +  else if (is_gimple_assign (stmt))
> > +    {
> > +      code = gimple_assign_rhs_code (stmt);
> > +      op1 = gimple_assign_rhs1 (stmt);
> > +      if (TREE_CODE_CLASS (code) == tcc_comparison
> > +     || TREE_CODE_CLASS (code) == tcc_binary)
> > +   op2 = gimple_assign_rhs2 (stmt);
> > +      switch (code)
> > +   {
> > +   default:
> > +     break;
> > +   case COND_EXPR:
> > +     tree cond = gimple_assign_rhs1 (stmt);
> > +     code = TREE_CODE (cond);
> > +     op1 = TREE_OPERAND (cond, 0);
> > +     op2 = TREE_OPERAND (cond, 1);
> 
> this should ICE, COND_EXPRs now have is_gimple_reg conditions.

COND_EXPR was a case I haven't managed to reproduce (I think if it is
created at all, it is usually created later).
I see tree-cfg.cc was changed for this in GCC 13, but I see tons
of spots which still try to handle a COMPARISON_CLASS_P rhs1 of COND_EXPR
(e.g. in tree-ssa-math-opts.cc).  Does the rhs1 have to be boolean,
or could it be any integral type (so, would I e.g. need to be prepared
for a BITINT_TYPE rhs1, which would need a lowered != 0 comparison)?

> > +/* Return a tree how to access limb IDX of VAR corresponding to BITINT_TYPE
> > +   TYPE.  If WRITE_P is true, it will be a store, otherwise a read.  */
> > +
> > +tree
> > +bitint_large_huge::limb_access (tree type, tree var, tree idx, bool write_p)
> > +{
> > +  tree atype = (tree_fits_uhwi_p (idx)
> > +           ? limb_access_type (type, idx) : m_limb_type);
> > +  tree ret;
> > +  if (DECL_P (var) && tree_fits_uhwi_p (idx))
> > +    {
> > +      tree ptype = build_pointer_type (strip_array_types (TREE_TYPE (var)));
> > +      unsigned HOST_WIDE_INT off = tree_to_uhwi (idx) * m_limb_size;
> > +      ret = build2 (MEM_REF, m_limb_type,
> > +               build_fold_addr_expr (var),
> > +               build_int_cst (ptype, off));
> > +      if (TREE_THIS_VOLATILE (var) || TREE_THIS_VOLATILE (TREE_TYPE (var)))
> > +   TREE_THIS_VOLATILE (ret) = 1;
> 
> Note if we have
> 
> volatile int i;
> x = *(int *)&i;
> 
> we get a non-volatile load from 'i', likewise in the reverse case
> where we get a volatile load from a non-volatile decl.  The above
> gets this wrong - the volatileness should be derived from the
> original reference with just TREE_THIS_VOLATILE checking
> (and not on the type).
> 
> You possibly also want to set TREE_SIDE_EFFECTS (not sure when
> that was exactly set), forwprop for example makes sure to copy
> that (and also TREE_THIS_NOTRAP in some cases).

Ok.

> How do "volatile" _BitInt(n) work?  People expect 'volatile'
> objects to be operated on in whole, thus a 'volatile int'
> load not split into two, etc.  I guess if we split a volatile
> _BitInt access it's reasonable to remove the 'volatile'?

They work like volatile bit-fields or volatile __int128 or long long
on 32-bit arches; we don't really guarantee a single load or store there
(unless one uses the __atomic* APIs, which are lock-free).
The intent for volatile, and what I've checked e.g. by eyeballing dumps,
was that volatile _BitInt loads or stores aren't merged with other
operations (if they were merged and we e.g. had z = x + y where all 3
vars were volatile, we'd first read the LSB limb of all of them, store the
result, etc.; when not merged, each "load" or "store" isn't interleaved
with the others), and that e.g. even _BitInt bit-field loads/stores don't
read the same memory multiple times (which is what can happen e.g. for
shifts or </<=/>/>= comparisons when they aren't iterating on limbs
strictly upwards from least significant to most).
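E.g. for the merging case mentioned above:

  volatile _BitInt(192) x, y, z;

  void
  g (void)
  {
    /* The volatile loads and the volatile store aren't merged with the
       addition: x is read limb by limb into a temporary, then y, and only
       then is the sum stored into z, instead of interleaving limb reads
       of x and y with limb stores into z.  */
    z = x + y;
  }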

> > +  else
> > +    {
> > +      var = unshare_expr (var);
> > +      if (TREE_CODE (TREE_TYPE (var)) != ARRAY_TYPE
> > +     || !useless_type_conversion_p (m_limb_type,
> > +                                    TREE_TYPE (TREE_TYPE (var))))
> > +   {
> > +     unsigned HOST_WIDE_INT nelts
> > +       = tree_to_uhwi (TYPE_SIZE (type)) / limb_prec;
> > +     tree atype = build_array_type_nelts (m_limb_type, nelts);
> > +     var = build1 (VIEW_CONVERT_EXPR, atype, var);
> > +   }
> > +      ret = build4 (ARRAY_REF, m_limb_type, var, idx, NULL_TREE, NULL_TREE);
> > +    }
> 
> maybe the volatile handling can be commonized here?

From my experience with it, the volatile handling didn't have to be added
in this case because it works through the VIEW_CONVERT_EXPRs.
It was just the optimizations for decls and MEM_REFs with constant indexes
where I had to do something about volatile.

> > +    case SSA_NAME:
> > +      if (m_names == NULL
> > +     || !bitmap_bit_p (m_names, SSA_NAME_VERSION (op)))
> > +   {
> > +     if (gimple_code (SSA_NAME_DEF_STMT (op)) == GIMPLE_NOP)
> 
> SSA_NAME_IS_DEFAULT_DEF

Ok.
> 
> > +       {
> > +         if (m_first)
> > +           {
> > +             tree v = create_tmp_var (m_limb_type);
> 
> create_tmp_reg?

I see create_tmp_reg just calls create_tmp_var, but if you prefer it,
sure; it isn't an addressable var, and so either is fine.

> > +     edge e1 = split_block (gsi_bb (m_gsi), g);
> > +     edge e2 = split_block (e1->dest, (gimple *) NULL);
> > +     edge e3 = make_edge (e1->src, e2->dest, EDGE_TRUE_VALUE);
> > +     e3->probability = profile_probability::likely ();
> > +     if (min_prec >= (prec - rem) / 2)
> > +       e3->probability = e3->probability.invert ();
> > +     e1->flags = EDGE_FALSE_VALUE;
> > +     e1->probability = e3->probability.invert ();
> > +     set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +     m_gsi = gsi_after_labels (e1->dest);
> > +     if (min_prec > (unsigned) limb_prec)
> > +       {
> > +         c = limb_access (TREE_TYPE (op), c, idx, false);
> > +         g = gimple_build_assign (make_ssa_name (TREE_TYPE (c)), c);
> > +         insert_before (g);
> > +         c = gimple_assign_lhs (g);
> > +       }
> > +     tree c2 = build_int_cst (m_limb_type, ext);
> > +     m_gsi = gsi_after_labels (e2->dest);
> > +     t = make_ssa_name (m_limb_type);
> > +     gphi *phi = create_phi_node (t, e2->dest);
> > +     add_phi_arg (phi, c, e2, UNKNOWN_LOCATION);
> > +     add_phi_arg (phi, c2, e3, UNKNOWN_LOCATION);
> 
> Not sure if I get to see more than the two cases above but maybe
> a helper to emit a (half-)diamond for N values (PHI results) would be
> helpful (possibly indicating the fallthru edge truth value if any)?

I've added a helper to create a loop, but indeed doing this for the
ifs might be a good idea too; it is just quite a lot of work to get it
right because it is now used in many places.
I think the code uses 3 shapes, one is to create
C1
|\
|B1
|/
+
another
  C1
 / \
B1 B2
 \ /
  +
and another
  C1
 / \
 | C2
 | |\
 | | \
 |B1 B2
 \ | /
  \|/
   +
and needs to remember the edges for later, to create phis if needed.
And, sometimes the B1 or B2 bbs are split to deal with EH edges.  So I will
need to think about the best interface for these.  Could this be done
incrementally when/if it is committed to trunk?

> > +      tree in = add_cast (rhs1_type, data_in);
> > +      lhs = make_ssa_name (rhs1_type);
> > +      g = gimple_build_assign (lhs, code, rhs1, rhs2);
> > +      insert_before (g);
> > +      rhs1 = make_ssa_name (rhs1_type);
> > +      g = gimple_build_assign (rhs1, code, lhs, in);
> > +      insert_before (g);
> 
> I'll just note there's now gimple_build overloads inserting at an
> iterator:
> 
> extern tree gimple_build (gimple_stmt_iterator *, bool,
>                           enum gsi_iterator_update,
>                           location_t, code_helper, tree, tree, tree);
> 
> I guess there's not much folding possibilities during the building,
> but it would allow to write

Changing that would mean rewriting everything, I'm afraid.  Indeed, as you
wrote, it is very rare that something could be folded during the lowering.
> 
>   rhs1 = gimple_build (&gsi, true, GSI_SAME_STMT, m_loc, code, rhs1_type, lhs, in);
> 
> instead of
> 
> > +      rhs1 = make_ssa_name (rhs1_type);
> > +      g = gimple_build_assign (rhs1, code, lhs, in);
> > +      insert_before (g);
> 
> just in case you forgot about those.  I think we're missing some
> gimple-build "state" class to keep track of common arguments, like
> 
>   gimple_build gb (&gsi, true, GSI_SAME_STMT, m_loc);
>   rhs1 = gb.build (code, rhs1_type, lhs, in);
> ...
> 
> anyway, just wanted to note this - no need to change the patch.

> > +  switch (gimple_code (stmt))
> > +    {
> > +    case GIMPLE_ASSIGN:
> > +      if (gimple_assign_load_p (stmt))
> > +   {
> > +     rhs1 = gimple_assign_rhs1 (stmt);
> 
> so TREE_THIS_VOLATILE/TREE_SIDE_EFFECTS (rhs1) would be the thing
> to eventually preserve

limb_access should do that.

> > +tree
> > +bitint_large_huge::create_loop (tree init, tree *idx_next)
> > +{
> > +  if (!gsi_end_p (m_gsi))
> > +    gsi_prev (&m_gsi);
> > +  else
> > +    m_gsi = gsi_last_bb (gsi_bb (m_gsi));
> > +  edge e1 = split_block (gsi_bb (m_gsi), gsi_stmt (m_gsi));
> > +  edge e2 = split_block (e1->dest, (gimple *) NULL);
> > +  edge e3 = make_edge (e1->dest, e1->dest, EDGE_TRUE_VALUE);
> > +  e3->probability = profile_probability::very_unlikely ();
> > +  e2->flags = EDGE_FALSE_VALUE;
> > +  e2->probability = e3->probability.invert ();
> > +  tree idx = make_ssa_name (sizetype);
> 
> maybe you want integer_type_node instead?

The indexes are certainly unsigned, and given that they are used
as array indexes, I thought sizetype would avoid zero or sign extensions
in lots of places.

> > +  gphi *phi = create_phi_node (idx, e1->dest);
> > +  add_phi_arg (phi, init, e1, UNKNOWN_LOCATION);
> > +  *idx_next = make_ssa_name (sizetype);
> > +  add_phi_arg (phi, *idx_next, e3, UNKNOWN_LOCATION);
> > +  m_gsi = gsi_after_labels (e1->dest);
> > +  m_bb = e1->dest;
> > +  m_preheader_bb = e1->src;
> > +  class loop *loop = alloc_loop ();
> > +  loop->header = e1->dest;
> > +  add_loop (loop, e1->src->loop_father);
> 
> There is create_empty_loop_on_edge, it does a little bit more
> than the above though.

That looks much larger than what I need.
> 
> > +  return idx;
> > +}
> > +
> > +/* Lower large/huge _BitInt statement mergeable or similar STMT which can be
> > +   lowered using iteration from the least significant limb up to the most
> > +   significant limb.  For large _BitInt it is emitted as straight line code
> > +   before current location, for huge _BitInt as a loop handling two limbs
> > +   at once, followed by handling up to limbs in straight line code (at most
> > +   one full and one partial limb).  It can also handle EQ_EXPR/NE_EXPR
> > +   comparisons, in that case CMP_CODE should be the comparison code and
> > +   CMP_OP1/CMP_OP2 the comparison operands.  */
> > +
> > +tree
> > +bitint_large_huge::lower_mergeable_stmt (gimple *stmt, tree_code &cmp_code,
> > +                                    tree cmp_op1, tree cmp_op2)
> > +{
> > +  bool eq_p = cmp_code != ERROR_MARK;
> > +  tree type;
> > +  if (eq_p)
> > +    type = TREE_TYPE (cmp_op1);
> > +  else
> > +    type = TREE_TYPE (gimple_assign_lhs (stmt));
> > +  gcc_assert (TREE_CODE (type) == BITINT_TYPE);
> > +  bitint_prec_kind kind = bitint_precision_kind (type);
> > +  gcc_assert (kind >= bitint_prec_large);
> > +  gimple *g;
> > +  tree lhs = gimple_get_lhs (stmt);
> > +  tree rhs1, lhs_type = lhs ? TREE_TYPE (lhs) : NULL_TREE;
> > +  if (lhs
> > +      && TREE_CODE (lhs) == SSA_NAME
> > +      && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > +      && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
> > +    {
> > +      int p = var_to_partition (m_map, lhs);
> > +      gcc_assert (m_vars[p] != NULL_TREE);
> > +      m_lhs = lhs = m_vars[p];
> > +    }
> > +  unsigned cnt, rem = 0, end = 0, prec = TYPE_PRECISION (type);
> > +  bool sext = false;
> > +  tree ext = NULL_TREE, store_operand = NULL_TREE;
> > +  bool eh = false;
> > +  basic_block eh_pad = NULL;
> > +  if (gimple_store_p (stmt))
> > +    {
> > +      store_operand = gimple_assign_rhs1 (stmt);
> > +      eh = stmt_ends_bb_p (stmt);
> > +      if (eh)
> > +   {
> > +     edge e;
> > +     edge_iterator ei;
> > +     basic_block bb = gimple_bb (stmt);
> > +
> > +     FOR_EACH_EDGE (e, ei, bb->succs)
> > +       if (e->flags & EDGE_EH)
> > +         {
> > +           eh_pad = e->dest;
> > +           break;
> > +         }
> > +   }
> > +    }
> > +  if ((store_operand
> > +       && TREE_CODE (store_operand) == SSA_NAME
> > +       && (m_names == NULL
> > +      || !bitmap_bit_p (m_names, SSA_NAME_VERSION (store_operand)))
> > +       && gimple_assign_cast_p (SSA_NAME_DEF_STMT (store_operand)))
> > +      || gimple_assign_cast_p (stmt))
> > +    {
> > +      rhs1 = gimple_assign_rhs1 (store_operand
> > +                            ? SSA_NAME_DEF_STMT (store_operand)
> > +                            : stmt);
> > +      /* Optimize mergeable ops ending with widening cast to _BitInt
> > +    (or followed by store).  We can lower just the limbs of the
> > +    cast operand and widen afterwards.  */
> > +      if (TREE_CODE (rhs1) == SSA_NAME
> > +     && (m_names == NULL
> > +         || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1)))
> > +     && TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
> > +     && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
> > +     && (CEIL ((unsigned) TYPE_PRECISION (TREE_TYPE (rhs1)),
> > +               limb_prec) < CEIL (prec, limb_prec)
> > +         || (kind == bitint_prec_huge
> > +             && TYPE_PRECISION (TREE_TYPE (rhs1)) < prec)))
> > +   {
> > +     store_operand = rhs1;
> > +     prec = TYPE_PRECISION (TREE_TYPE (rhs1));
> > +     kind = bitint_precision_kind (TREE_TYPE (rhs1));
> > +     if (!TYPE_UNSIGNED (TREE_TYPE (rhs1)))
> > +       sext = true;
> > +   }
> > +    }
> > +  tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
> > +  if (kind == bitint_prec_large)
> > +    cnt = CEIL (prec, limb_prec);
> > +  else
> > +    {
> > +      rem = (prec % (2 * limb_prec));
> > +      end = (prec - rem) / limb_prec;
> > +      cnt = 2 + CEIL (rem, limb_prec);
> > +      idx = idx_first = create_loop (size_zero_node, &idx_next);
> > +    }
> > +
> > +  basic_block edge_bb = NULL;
> > +  if (eq_p)
> > +    {
> > +      gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > +      gsi_prev (&gsi);
> > +      edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
> > +      edge_bb = e->src;
> > +      if (kind == bitint_prec_large)
> > +   {
> > +     m_gsi = gsi_last_bb (edge_bb);
> > +     if (!gsi_end_p (m_gsi))
> > +       gsi_next (&m_gsi);
> > +   }
> > +    }
> > +  else
> > +    m_after_stmt = stmt;
> > +  if (kind != bitint_prec_large)
> > +    m_upwards_2limb = end;
> > +
> > +  for (unsigned i = 0; i < cnt; i++)
> > +    {
> > +      m_data_cnt = 0;
> > +      if (kind == bitint_prec_large)
> > +   idx = size_int (i);
> > +      else if (i >= 2)
> > +   idx = size_int (end + (i > 2));
> > +      if (eq_p)
> > +   {
> > +     rhs1 = handle_operand (cmp_op1, idx);
> > +     tree rhs2 = handle_operand (cmp_op2, idx);
> > +     g = gimple_build_cond (NE_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
> > +     insert_before (g);
> > +     edge e1 = split_block (gsi_bb (m_gsi), g);
> > +     e1->flags = EDGE_FALSE_VALUE;
> > +     edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
> > +     e1->probability = profile_probability::unlikely ();
> > +     e2->probability = e1->probability.invert ();
> > +     if (i == 0)
> > +       set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
> > +     m_gsi = gsi_after_labels (e1->dest);
> > +   }
> > +      else
> > +   {
> > +     if (store_operand)
> > +       rhs1 = handle_operand (store_operand, idx);
> > +     else
> > +       rhs1 = handle_stmt (stmt, idx);
> > +     tree l = limb_access (lhs_type, lhs, idx, true);
> > +     if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
> > +       rhs1 = add_cast (TREE_TYPE (l), rhs1);
> > +     if (sext && i == cnt - 1)
> > +       ext = rhs1;
> > +     g = gimple_build_assign (l, rhs1);
> > +     insert_before (g);
> > +     if (eh)
> > +       {
> > +         maybe_duplicate_eh_stmt (g, stmt);
> > +         if (eh_pad)
> > +           {
> > +             edge e = split_block (gsi_bb (m_gsi), g);
> > +             m_gsi = gsi_after_labels (e->dest);
> > +             make_edge (e->src, eh_pad, EDGE_EH)->probability
> > +               = profile_probability::very_unlikely ();
> > +           }
> > +       }
> > +   }
> > +      m_first = false;
> > +      if (kind == bitint_prec_huge && i <= 1)
> > +   {
> > +     if (i == 0)
> > +       {
> > +         idx = make_ssa_name (sizetype);
> > +         g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
> > +                                  size_one_node);
> > +         insert_before (g);
> > +       }
> > +     else
> > +       {
> > +         g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
> > +                                  size_int (2));
> > +         insert_before (g);
> > +         g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
> > +                                NULL_TREE, NULL_TREE);
> > +         insert_before (g);
> > +         if (eq_p)
> > +           m_gsi = gsi_after_labels (edge_bb);
> > +         else
> > +           m_gsi = gsi_for_stmt (stmt);
> > +       }
> > +   }
> > +    }
> > +
> > +  if (prec != (unsigned) TYPE_PRECISION (type)
> > +      && (CEIL ((unsigned) TYPE_PRECISION (type), limb_prec)
> > +     > CEIL (prec, limb_prec)))
> > +    {
> > +      if (sext)
> > +   {
> > +     ext = add_cast (signed_type_for (m_limb_type), ext);
> > +     tree lpm1 = build_int_cst (unsigned_type_node,
> > +                                limb_prec - 1);
> > +     tree n = make_ssa_name (TREE_TYPE (ext));
> > +     g = gimple_build_assign (n, RSHIFT_EXPR, ext, lpm1);
> > +     insert_before (g);
> > +     ext = add_cast (m_limb_type, n);
> > +   }
> > +      else
> > +   ext = build_zero_cst (m_limb_type);
> > +      kind = bitint_precision_kind (type);
> > +      unsigned start = CEIL (prec, limb_prec);
> > +      prec = TYPE_PRECISION (type);
> > +      idx = idx_first = idx_next = NULL_TREE;
> > +      if (prec <= (start + 2) * limb_prec)
> > +   kind = bitint_prec_large;
> > +      if (kind == bitint_prec_large)
> > +   cnt = CEIL (prec, limb_prec) - start;
> > +      else
> > +   {
> > +     rem = prec % limb_prec;
> > +     end = (prec - rem) / limb_prec;
> > +     cnt = 1 + (rem != 0);
> > +     idx = create_loop (size_int (start), &idx_next);
> > +   }
> > +      for (unsigned i = 0; i < cnt; i++)
> > +   {
> > +     if (kind == bitint_prec_large)
> > +       idx = size_int (start + i);
> > +     else if (i == 1)
> > +       idx = size_int (end);
> > +     rhs1 = ext;
> > +     tree l = limb_access (lhs_type, lhs, idx, true);
> > +     if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs1)))
> > +       rhs1 = add_cast (TREE_TYPE (l), rhs1);
> > +     g = gimple_build_assign (l, rhs1);
> > +     insert_before (g);
> > +     if (kind == bitint_prec_huge && i == 0)
> > +       {
> > +         g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
> > +                                  size_one_node);
> > +         insert_before (g);
> > +         g = gimple_build_cond (NE_EXPR, idx_next, size_int (end),
> > +                                NULL_TREE, NULL_TREE);
> > +         insert_before (g);
> > +         m_gsi = gsi_for_stmt (stmt);
> > +       }
> > +   }
> > +    }
> > +
> > +  if (gimple_store_p (stmt))
> > +    {
> > +      unlink_stmt_vdef (stmt);
> > +      release_ssa_name (gimple_vdef (stmt));
> > +      gsi_remove (&m_gsi, true);
> > +    }
> > +  if (eq_p)
> > +    {
> > +      lhs = make_ssa_name (boolean_type_node);
> > +      basic_block bb = gimple_bb (stmt);
> > +      gphi *phi = create_phi_node (lhs, bb);
> > +      edge e = find_edge (gsi_bb (m_gsi), bb);
> > +      unsigned int n = EDGE_COUNT (bb->preds);
> > +      for (unsigned int i = 0; i < n; i++)
> > +   {
> > +     edge e2 = EDGE_PRED (bb, i);
> > +     add_phi_arg (phi, e == e2 ? boolean_true_node : boolean_false_node,
> > +                  e2, UNKNOWN_LOCATION);
> > +   }
> > +      cmp_code = cmp_code == EQ_EXPR ? NE_EXPR : EQ_EXPR;
> > +      return lhs;
> > +    }
> > +  else
> > +    return NULL_TREE;
> > +}
> > +
> > +/* Handle a large/huge _BitInt comparison statement STMT other than
> > +   EQ_EXPR/NE_EXPR.  CMP_CODE, CMP_OP1 and CMP_OP2 meaning is like in
> > +   lower_mergeable_stmt.  The {GT,GE,LT,LE}_EXPR comparisons are
> > +   lowered by iteration from the most significant limb downwards to
> > +   the least significant one, for large _BitInt in straight line code,
> > +   otherwise with most significant limb handled in
> > +   straight line code followed by a loop handling one limb at a time.
> > +   Comparisons with unsigned huge _BitInt with precisions which are
> > +   multiples of limb precision can use just the loop and don't need to
> > +   handle most significant limb before the loop.  The loop or straight
> > +   line code jumps to final basic block if a particular pair of limbs
> > +   is not equal.  */
> > +
> > +tree
> > +bitint_large_huge::lower_comparison_stmt (gimple *stmt, tree_code &cmp_code,
> > +                                     tree cmp_op1, tree cmp_op2)
> > +{
> > +  tree type = TREE_TYPE (cmp_op1);
> > +  gcc_assert (TREE_CODE (type) == BITINT_TYPE);
> > +  bitint_prec_kind kind = bitint_precision_kind (type);
> > +  gcc_assert (kind >= bitint_prec_large);
> > +  gimple *g;
> > +  if (!TYPE_UNSIGNED (type)
> > +      && integer_zerop (cmp_op2)
> > +      && (cmp_code == GE_EXPR || cmp_code == LT_EXPR))
> > +    {
> > +      unsigned end = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec) - 1;
> > +      tree idx = size_int (end);
> > +      m_data_cnt = 0;
> > +      tree rhs1 = handle_operand (cmp_op1, idx);
> > +      if (TYPE_UNSIGNED (TREE_TYPE (rhs1)))
> > +   {
> > +     tree stype = signed_type_for (TREE_TYPE (rhs1));
> > +     rhs1 = add_cast (stype, rhs1);
> > +   }
> > +      tree lhs = make_ssa_name (boolean_type_node);
> > +      g = gimple_build_assign (lhs, cmp_code, rhs1,
> > +                          build_zero_cst (TREE_TYPE (rhs1)));
> > +      insert_before (g);
> > +      cmp_code = NE_EXPR;
> > +      return lhs;
> > +    }
> > +
> > +  unsigned cnt, rem = 0, end = 0;
> > +  tree idx = NULL_TREE, idx_next = NULL_TREE;
> > +  if (kind == bitint_prec_large)
> > +    cnt = CEIL ((unsigned) TYPE_PRECISION (type), limb_prec);
> > +  else
> > +    {
> > +      rem = ((unsigned) TYPE_PRECISION (type) % limb_prec);
> > +      if (rem == 0 && !TYPE_UNSIGNED (type))
> > +   rem = limb_prec;
> > +      end = ((unsigned) TYPE_PRECISION (type) - rem) / limb_prec;
> > +      cnt = 1 + (rem != 0);
> > +    }
> > +
> > +  basic_block edge_bb = NULL;
> > +  gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > +  gsi_prev (&gsi);
> > +  edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
> > +  edge_bb = e->src;
> > +  m_gsi = gsi_last_bb (edge_bb);
> > +  if (!gsi_end_p (m_gsi))
> > +    gsi_next (&m_gsi);
> > +
> > +  edge *edges = XALLOCAVEC (edge, cnt * 2);
> > +  for (unsigned i = 0; i < cnt; i++)
> > +    {
> > +      m_data_cnt = 0;
> > +      if (kind == bitint_prec_large)
> > +   idx = size_int (cnt - i - 1);
> > +      else if (i == cnt - 1)
> > +   idx = create_loop (size_int (end - 1), &idx_next);
> > +      else
> > +   idx = size_int (end);
> > +      tree rhs1 = handle_operand (cmp_op1, idx);
> > +      tree rhs2 = handle_operand (cmp_op2, idx);
> > +      if (i == 0
> > +     && !TYPE_UNSIGNED (type)
> > +     && TYPE_UNSIGNED (TREE_TYPE (rhs1)))
> > +   {
> > +     tree stype = signed_type_for (TREE_TYPE (rhs1));
> > +     rhs1 = add_cast (stype, rhs1);
> > +     rhs2 = add_cast (stype, rhs2);
> > +   }
> > +      g = gimple_build_cond (GT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +      edge e1 = split_block (gsi_bb (m_gsi), g);
> > +      e1->flags = EDGE_FALSE_VALUE;
> > +      edge e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
> > +      e1->probability = profile_probability::likely ();
> > +      e2->probability = e1->probability.invert ();
> > +      if (i == 0)
> > +   set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
> > +      m_gsi = gsi_after_labels (e1->dest);
> > +      edges[2 * i] = e2;
> > +      g = gimple_build_cond (LT_EXPR, rhs1, rhs2, NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +      e1 = split_block (gsi_bb (m_gsi), g);
> > +      e1->flags = EDGE_FALSE_VALUE;
> > +      e2 = make_edge (e1->src, gimple_bb (stmt), EDGE_TRUE_VALUE);
> > +      e1->probability = profile_probability::unlikely ();
> > +      e2->probability = e1->probability.invert ();
> > +      m_gsi = gsi_after_labels (e1->dest);
> > +      edges[2 * i + 1] = e2;
> > +      m_first = false;
> > +      if (kind == bitint_prec_huge && i == cnt - 1)
> > +   {
> > +     g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> > +     insert_before (g);
> > +     g = gimple_build_cond (NE_EXPR, idx, size_zero_node,
> > +                            NULL_TREE, NULL_TREE);
> > +     insert_before (g);
> > +     edge true_edge, false_edge;
> > +     extract_true_false_edges_from_block (gsi_bb (m_gsi),
> > +                                          &true_edge, &false_edge);
> > +     m_gsi = gsi_after_labels (false_edge->dest);
> > +   }
> > +    }
> > +
> > +  tree lhs = make_ssa_name (boolean_type_node);
> > +  basic_block bb = gimple_bb (stmt);
> > +  gphi *phi = create_phi_node (lhs, bb);
> > +  for (unsigned int i = 0; i < cnt * 2; i++)
> > +    {
> > +      tree val = ((cmp_code == GT_EXPR || cmp_code == GE_EXPR)
> > +             ^ (i & 1)) ? boolean_true_node : boolean_false_node;
> > +      add_phi_arg (phi, val, edges[i], UNKNOWN_LOCATION);
> > +    }
> > +  add_phi_arg (phi, (cmp_code == GE_EXPR || cmp_code == LE_EXPR)
> > +               ? boolean_true_node : boolean_false_node,
> > +          find_edge (gsi_bb (m_gsi), bb), UNKNOWN_LOCATION);
> > +  cmp_code = NE_EXPR;
> > +  return lhs;
> > +}
> > +
> > +/* Lower large/huge _BitInt left and right shift except for left
> > +   shift by < limb_prec constant.  */
> > +
> > +void
> > +bitint_large_huge::lower_shift_stmt (tree obj, gimple *stmt)
> > +{
> > +  tree rhs1 = gimple_assign_rhs1 (stmt);
> > +  tree lhs = gimple_assign_lhs (stmt);
> > +  tree_code rhs_code = gimple_assign_rhs_code (stmt);
> > +  tree type = TREE_TYPE (rhs1);
> > +  gimple *final_stmt = gsi_stmt (m_gsi);
> > +  gcc_assert (TREE_CODE (type) == BITINT_TYPE
> > +         && bitint_precision_kind (type) >= bitint_prec_large);
> > +  int prec = TYPE_PRECISION (type);
> > +  tree n = gimple_assign_rhs2 (stmt), n1, n2, n3, n4;
> > +  gimple *g;
> > +  if (obj == NULL_TREE)
> > +    {
> > +      int part = var_to_partition (m_map, lhs);
> > +      gcc_assert (m_vars[part] != NULL_TREE);
> > +      obj = m_vars[part];
> > +    }
> > +  /* Preparation code common for both left and right shifts.
> > +     unsigned n1 = n % limb_prec;
> > +     size_t n2 = n / limb_prec;
> > +     size_t n3 = n1 != 0;
> > +     unsigned n4 = (limb_prec - n1) % limb_prec;
> > +     (for power of 2 limb_prec n4 can be -n1 & (limb_prec - 1)).  */
> > +  if (TREE_CODE (n) == INTEGER_CST)
> > +    {
> > +      tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
> > +      n1 = int_const_binop (TRUNC_MOD_EXPR, n, lp);
> > +      n2 = fold_convert (sizetype, int_const_binop (TRUNC_DIV_EXPR, n, lp));
> > +      n3 = size_int (!integer_zerop (n1));
> > +      n4 = int_const_binop (TRUNC_MOD_EXPR,
> > +                       int_const_binop (MINUS_EXPR, lp, n1), lp);
> > +    }
> > +  else
> > +    {
> > +      n1 = make_ssa_name (TREE_TYPE (n));
> > +      n2 = make_ssa_name (sizetype);
> > +      n3 = make_ssa_name (sizetype);
> > +      n4 = make_ssa_name (TREE_TYPE (n));
> > +      if (pow2p_hwi (limb_prec))
> > +   {
> > +     tree lpm1 = build_int_cst (TREE_TYPE (n), limb_prec - 1);
> > +     g = gimple_build_assign (n1, BIT_AND_EXPR, n, lpm1);
> > +     insert_before (g);
> > +     g = gimple_build_assign (useless_type_conversion_p (sizetype,
> > +                                                         TREE_TYPE (n))
> > +                              ? n2 : make_ssa_name (TREE_TYPE (n)),
> > +                              RSHIFT_EXPR, n,
> > +                              build_int_cst (TREE_TYPE (n),
> > +                                             exact_log2 (limb_prec)));
> > +     insert_before (g);
> > +     if (gimple_assign_lhs (g) != n2)
> > +       {
> > +         g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
> > +         insert_before (g);
> > +       }
> > +     g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
> > +                              NEGATE_EXPR, n1);
> > +     insert_before (g);
> > +     g = gimple_build_assign (n4, BIT_AND_EXPR, gimple_assign_lhs (g),
> > +                              lpm1);
> > +     insert_before (g);
> > +   }
> > +      else
> > +   {
> > +     tree lp = build_int_cst (TREE_TYPE (n), limb_prec);
> > +     g = gimple_build_assign (n1, TRUNC_MOD_EXPR, n, lp);
> > +     insert_before (g);
> > +     g = gimple_build_assign (useless_type_conversion_p (sizetype,
> > +                                                         TREE_TYPE (n))
> > +                              ? n2 : make_ssa_name (TREE_TYPE (n)),
> > +                              TRUNC_DIV_EXPR, n, lp);
> > +     insert_before (g);
> > +     if (gimple_assign_lhs (g) != n2)
> > +       {
> > +         g = gimple_build_assign (n2, NOP_EXPR, gimple_assign_lhs (g));
> > +         insert_before (g);
> > +       }
> > +     g = gimple_build_assign (make_ssa_name (TREE_TYPE (n)),
> > +                              MINUS_EXPR, lp, n1);
> > +     insert_before (g);
> > +     g = gimple_build_assign (n4, TRUNC_MOD_EXPR, gimple_assign_lhs (g),
> > +                              lp);
> > +     insert_before (g);
> > +   }
> > +      g = gimple_build_assign (make_ssa_name (boolean_type_node), NE_EXPR, n1,
> > +                          build_zero_cst (TREE_TYPE (n)));
> > +      insert_before (g);
> > +      g = gimple_build_assign (n3, NOP_EXPR, gimple_assign_lhs (g));
> > +      insert_before (g);
> > +    }
> > +  tree p = build_int_cst (sizetype,
> > +                     prec / limb_prec - (prec % limb_prec == 0));
> > +  if (rhs_code == RSHIFT_EXPR)
> > +    {
> > +      /* Lower
> > +      dst = src >> n;
> > +    as
> > +      unsigned n1 = n % limb_prec;
> > +      size_t n2 = n / limb_prec;
> > +      size_t n3 = n1 != 0;
> > +      unsigned n4 = (limb_prec - n1) % limb_prec;
> > +      size_t idx;
> > +      size_t p = prec / limb_prec - (prec % limb_prec == 0);
> > +      int signed_p = (typeof (src) -1) < 0;
> > +      for (idx = n2; idx < ((!signed_p && (prec % limb_prec == 0))
> > +                            ? p : p - n3); ++idx)
> > +        dst[idx - n2] = (src[idx] >> n1) | (src[idx + n3] << n4);
> > +      limb_type ext;
> > +      if (prec % limb_prec == 0)
> > +        ext = src[p];
> > +      else if (signed_p)
> > +        ext = ((signed limb_type) (src[p] << (limb_prec
> > +                                              - (prec % limb_prec))))
> > +              >> (limb_prec - (prec % limb_prec));
> > +      else
> > +        ext = src[p] & (((limb_type) 1 << (prec % limb_prec)) - 1);
> > +      if (!signed_p && (prec % limb_prec == 0))
> > +        ;
> > +      else if (idx < prec / 64)
> > +        {
> > +          dst[idx - n2] = (src[idx] >> n1) | (ext << n4);
> > +          ++idx;
> > +        }
> > +      idx -= n2;
> > +      if (signed_p)
> > +        {
> > +          dst[idx] = ((signed limb_type) ext) >> n1;
> > +          ext = ((signed limb_type) ext) >> (limb_prec - 1);
> > +        }
> > +      else
> > +        {
> > +          dst[idx] = ext >> n1;
> > +          ext = 0;
> > +        }
> > +      for (++idx; idx <= p; ++idx)
> > +        dst[idx] = ext;  */
> > +      tree pmn3;
> > +      if (TYPE_UNSIGNED (type) && prec % limb_prec == 0)
> > +   pmn3 = p;
> > +      else if (TREE_CODE (n3) == INTEGER_CST)
> > +   pmn3 = int_const_binop (MINUS_EXPR, p, n3);
> > +      else
> > +   {
> > +     pmn3 = make_ssa_name (sizetype);
> > +     g = gimple_build_assign (pmn3, MINUS_EXPR, p, n3);
> > +     insert_before (g);
> > +   }
> > +      g = gimple_build_cond (LT_EXPR, n2, pmn3, NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +      edge e1 = split_block (gsi_bb (m_gsi), g);
> > +      edge e2 = split_block (e1->dest, (gimple *) NULL);
> > +      edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > +      e3->probability = profile_probability::unlikely ();
> > +      e1->flags = EDGE_TRUE_VALUE;
> > +      e1->probability = e3->probability.invert ();
> > +      set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +      m_gsi = gsi_after_labels (e1->dest);
> > +      tree idx_next;
> > +      tree idx = create_loop (n2, &idx_next);
> > +      tree idxmn2 = make_ssa_name (sizetype);
> > +      tree idxpn3 = make_ssa_name (sizetype);
> > +      g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> > +      insert_before (g);
> > +      g = gimple_build_assign (idxpn3, PLUS_EXPR, idx, n3);
> > +      insert_before (g);
> > +      m_data_cnt = 0;
> > +      tree t1 = handle_operand (rhs1, idx);
> > +      m_first = false;
> > +      g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                          RSHIFT_EXPR, t1, n1);
> > +      insert_before (g);
> > +      t1 = gimple_assign_lhs (g);
> > +      if (!integer_zerop (n3))
> > +   {
> > +     m_data_cnt = 0;
> > +     tree t2 = handle_operand (rhs1, idxpn3);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              LSHIFT_EXPR, t2, n4);
> > +     insert_before (g);
> > +     t2 = gimple_assign_lhs (g);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              BIT_IOR_EXPR, t1, t2);
> > +     insert_before (g);
> > +     t1 = gimple_assign_lhs (g);
> > +   }
> > +      tree l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
> > +      g = gimple_build_assign (l, t1);
> > +      insert_before (g);
> > +      g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
> > +      insert_before (g);
> > +      g = gimple_build_cond (LT_EXPR, idx_next, pmn3, NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +      idx = make_ssa_name (sizetype);
> > +      m_gsi = gsi_for_stmt (final_stmt);
> > +      gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
> > +      e1 = find_edge (e1->src, gsi_bb (m_gsi));
> > +      e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> > +      add_phi_arg (phi, n2, e1, UNKNOWN_LOCATION);
> > +      add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> > +      m_data_cnt = 0;
> > +      tree ms = handle_operand (rhs1, p);
> > +      tree ext = ms;
> > +      if (!types_compatible_p (TREE_TYPE (ms), m_limb_type))
> > +   ext = add_cast (m_limb_type, ms);
> > +      if (!(TYPE_UNSIGNED (type) && prec % limb_prec == 0)
> > +     && !integer_zerop (n3))
> > +   {
> > +     g = gimple_build_cond (LT_EXPR, idx, p, NULL_TREE, NULL_TREE);
> > +     insert_before (g);
> > +     e1 = split_block (gsi_bb (m_gsi), g);
> > +     e2 = split_block (e1->dest, (gimple *) NULL);
> > +     e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > +     e3->probability = profile_probability::unlikely ();
> > +     e1->flags = EDGE_TRUE_VALUE;
> > +     e1->probability = e3->probability.invert ();
> > +     set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +     m_gsi = gsi_after_labels (e1->dest);
> > +     m_data_cnt = 0;
> > +     t1 = handle_operand (rhs1, idx);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              RSHIFT_EXPR, t1, n1);
> > +     insert_before (g);
> > +     t1 = gimple_assign_lhs (g);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              LSHIFT_EXPR, ext, n4);
> > +     insert_before (g);
> > +     tree t2 = gimple_assign_lhs (g);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              BIT_IOR_EXPR, t1, t2);
> > +     insert_before (g);
> > +     t1 = gimple_assign_lhs (g);
> > +     idxmn2 = make_ssa_name (sizetype);
> > +     g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> > +     insert_before (g);
> > +     l = limb_access (TREE_TYPE (lhs), obj, idxmn2, true);
> > +     g = gimple_build_assign (l, t1);
> > +     insert_before (g);
> > +     idx_next = make_ssa_name (sizetype);
> > +     g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
> > +     insert_before (g);
> > +     m_gsi = gsi_for_stmt (final_stmt);
> > +     tree nidx = make_ssa_name (sizetype);
> > +     phi = create_phi_node (nidx, gsi_bb (m_gsi));
> > +     e1 = find_edge (e1->src, gsi_bb (m_gsi));
> > +     e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> > +     add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
> > +     add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> > +     idx = nidx;
> > +   }
> > +      g = gimple_build_assign (make_ssa_name (sizetype), MINUS_EXPR, idx, n2);
> > +      insert_before (g);
> > +      idx = gimple_assign_lhs (g);
> > +      tree sext = ext;
> > +      if (!TYPE_UNSIGNED (type))
> > +   sext = add_cast (signed_type_for (m_limb_type), ext);
> > +      g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
> > +                          RSHIFT_EXPR, sext, n1);
> > +      insert_before (g);
> > +      t1 = gimple_assign_lhs (g);
> > +      if (!TYPE_UNSIGNED (type))
> > +   {
> > +     t1 = add_cast (m_limb_type, t1);
> > +     g = gimple_build_assign (make_ssa_name (TREE_TYPE (sext)),
> > +                              RSHIFT_EXPR, sext,
> > +                              build_int_cst (TREE_TYPE (n),
> > +                                             limb_prec - 1));
> > +     insert_before (g);
> > +     ext = add_cast (m_limb_type, gimple_assign_lhs (g));
> > +   }
> > +      else
> > +   ext = build_zero_cst (m_limb_type);
> > +      l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > +      g = gimple_build_assign (l, t1);
> > +      insert_before (g);
> > +      g = gimple_build_assign (make_ssa_name (sizetype), PLUS_EXPR, idx,
> > +                          size_one_node);
> > +      insert_before (g);
> > +      idx = gimple_assign_lhs (g);
> > +      g = gimple_build_cond (LE_EXPR, idx, p, NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +      e1 = split_block (gsi_bb (m_gsi), g);
> > +      e2 = split_block (e1->dest, (gimple *) NULL);
> > +      e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > +      e3->probability = profile_probability::unlikely ();
> > +      e1->flags = EDGE_TRUE_VALUE;
> > +      e1->probability = e3->probability.invert ();
> > +      set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +      m_gsi = gsi_after_labels (e1->dest);
> > +      idx = create_loop (idx, &idx_next);
> > +      l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > +      g = gimple_build_assign (l, ext);
> > +      insert_before (g);
> > +      g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_one_node);
> > +      insert_before (g);
> > +      g = gimple_build_cond (LE_EXPR, idx_next, p, NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +    }
> > +  else
> > +    {
> > +      /* Lower
> > +      dst = src << n;
> > +    as
> > +      unsigned n1 = n % limb_prec;
> > +      size_t n2 = n / limb_prec;
> > +      size_t n3 = n1 != 0;
> > +      unsigned n4 = (limb_prec - n1) % limb_prec;
> > +      size_t idx;
> > +      size_t p = prec / limb_prec - (prec % limb_prec == 0);
> > +      for (idx = p; (ssize_t) idx >= (ssize_t) (n2 + n3); --idx)
> > +        dst[idx] = (src[idx - n2] << n1) | (src[idx - n2 - n3] >> n4);
> > +      if (n1)
> > +        {
> > +          dst[idx] = src[idx - n2] << n1;
> > +          --idx;
> > +        }
> > +      for (; (ssize_t) idx >= 0; --idx)
> > +        dst[idx] = 0;  */
> > +      tree n2pn3;
> > +      if (TREE_CODE (n2) == INTEGER_CST && TREE_CODE (n3) == INTEGER_CST)
> > +   n2pn3 = int_const_binop (PLUS_EXPR, n2, n3);
> > +      else
> > +   {
> > +     n2pn3 = make_ssa_name (sizetype);
> > +     g = gimple_build_assign (n2pn3, PLUS_EXPR, n2, n3);
> > +     insert_before (g);
> > +   }
> > +      /* For LSHIFT_EXPR, we can use handle_operand with non-INTEGER_CST
> > +    idx even to access the most significant partial limb.  */
> > +      m_var_msb = true;
> > +      if (integer_zerop (n3))
> > +   /* For n3 == 0, p >= n2 + n3 is always true for all valid shift
> > +      counts.  Emit an if (true) condition that can be optimized later.  */
> > +   g = gimple_build_cond (NE_EXPR, boolean_true_node, boolean_false_node,
> > +                          NULL_TREE, NULL_TREE);
> > +      else
> > +   g = gimple_build_cond (LE_EXPR, n2pn3, p, NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +      edge e1 = split_block (gsi_bb (m_gsi), g);
> > +      edge e2 = split_block (e1->dest, (gimple *) NULL);
> > +      edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > +      e3->probability = profile_probability::unlikely ();
> > +      e1->flags = EDGE_TRUE_VALUE;
> > +      e1->probability = e3->probability.invert ();
> > +      set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +      m_gsi = gsi_after_labels (e1->dest);
> > +      tree idx_next;
> > +      tree idx = create_loop (p, &idx_next);
> > +      tree idxmn2 = make_ssa_name (sizetype);
> > +      tree idxmn2mn3 = make_ssa_name (sizetype);
> > +      g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> > +      insert_before (g);
> > +      g = gimple_build_assign (idxmn2mn3, MINUS_EXPR, idxmn2, n3);
> > +      insert_before (g);
> > +      m_data_cnt = 0;
> > +      tree t1 = handle_operand (rhs1, idxmn2);
> > +      m_first = false;
> > +      g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                          LSHIFT_EXPR, t1, n1);
> > +      insert_before (g);
> > +      t1 = gimple_assign_lhs (g);
> > +      if (!integer_zerop (n3))
> > +   {
> > +     m_data_cnt = 0;
> > +     tree t2 = handle_operand (rhs1, idxmn2mn3);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              RSHIFT_EXPR, t2, n4);
> > +     insert_before (g);
> > +     t2 = gimple_assign_lhs (g);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              BIT_IOR_EXPR, t1, t2);
> > +     insert_before (g);
> > +     t1 = gimple_assign_lhs (g);
> > +   }
> > +      tree l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > +      g = gimple_build_assign (l, t1);
> > +      insert_before (g);
> > +      g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> > +      insert_before (g);
> > +      tree sn2pn3 = add_cast (ssizetype, n2pn3);
> > +      g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next), sn2pn3,
> > +                        NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +      idx = make_ssa_name (sizetype);
> > +      m_gsi = gsi_for_stmt (final_stmt);
> > +      gphi *phi = create_phi_node (idx, gsi_bb (m_gsi));
> > +      e1 = find_edge (e1->src, gsi_bb (m_gsi));
> > +      e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> > +      add_phi_arg (phi, p, e1, UNKNOWN_LOCATION);
> > +      add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> > +      m_data_cnt = 0;
> > +      if (!integer_zerop (n3))
> > +   {
> > +     g = gimple_build_cond (NE_EXPR, n3, size_zero_node,
> > +                            NULL_TREE, NULL_TREE);
> > +     insert_before (g);
> > +     e1 = split_block (gsi_bb (m_gsi), g);
> > +     e2 = split_block (e1->dest, (gimple *) NULL);
> > +     e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > +     e3->probability = profile_probability::unlikely ();
> > +     e1->flags = EDGE_TRUE_VALUE;
> > +     e1->probability = e3->probability.invert ();
> > +     set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +     m_gsi = gsi_after_labels (e1->dest);
> > +     idxmn2 = make_ssa_name (sizetype);
> > +     g = gimple_build_assign (idxmn2, MINUS_EXPR, idx, n2);
> > +     insert_before (g);
> > +     m_data_cnt = 0;
> > +     t1 = handle_operand (rhs1, idxmn2);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              LSHIFT_EXPR, t1, n1);
> > +     insert_before (g);
> > +     t1 = gimple_assign_lhs (g);
> > +     l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > +     g = gimple_build_assign (l, t1);
> > +     insert_before (g);
> > +     idx_next = make_ssa_name (sizetype);
> > +     g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> > +     insert_before (g);
> > +     m_gsi = gsi_for_stmt (final_stmt);
> > +     tree nidx = make_ssa_name (sizetype);
> > +     phi = create_phi_node (nidx, gsi_bb (m_gsi));
> > +     e1 = find_edge (e1->src, gsi_bb (m_gsi));
> > +     e2 = EDGE_PRED (gsi_bb (m_gsi), EDGE_PRED (gsi_bb (m_gsi), 0) == e1);
> > +     add_phi_arg (phi, idx, e1, UNKNOWN_LOCATION);
> > +     add_phi_arg (phi, idx_next, e2, UNKNOWN_LOCATION);
> > +     idx = nidx;
> > +   }
> > +      g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx),
> > +                        ssize_int (0), NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +      e1 = split_block (gsi_bb (m_gsi), g);
> > +      e2 = split_block (e1->dest, (gimple *) NULL);
> > +      e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > +      e3->probability = profile_probability::unlikely ();
> > +      e1->flags = EDGE_TRUE_VALUE;
> > +      e1->probability = e3->probability.invert ();
> > +      set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +      m_gsi = gsi_after_labels (e1->dest);
> > +      idx = create_loop (idx, &idx_next);
> > +      l = limb_access (TREE_TYPE (lhs), obj, idx, true);
> > +      g = gimple_build_assign (l, build_zero_cst (m_limb_type));
> > +      insert_before (g);
> > +      g = gimple_build_assign (idx_next, PLUS_EXPR, idx, size_int (-1));
> > +      insert_before (g);
> > +      g = gimple_build_cond (GE_EXPR, add_cast (ssizetype, idx_next),
> > +                        ssize_int (0), NULL_TREE, NULL_TREE);
> > +      insert_before (g);
> > +    }
> > +}
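
FWIW, the LSHIFT_EXPR branch above just emits the GIMPLE equivalent of the
pseudo code from its comment; in plain C that would be roughly (a sketch
only, with limb_prec assumed to be 64, ptrdiff_t standing in for ssize_t
and a valid shift count n < prec assumed):

  #include <stddef.h>
  #include <stdint.h>

  static void
  bitint_shl (uint64_t *dst, const uint64_t *src, unsigned prec, unsigned n)
  {
    const unsigned limb_prec = 64;
    unsigned n1 = n % limb_prec;
    size_t n2 = n / limb_prec;
    size_t n3 = n1 != 0;
    unsigned n4 = (limb_prec - n1) % limb_prec;
    size_t p = prec / limb_prec - (prec % limb_prec == 0);
    size_t idx;
    /* Limbs which get bits from two adjacent source limbs.  */
    for (idx = p; (ptrdiff_t) idx >= (ptrdiff_t) (n2 + n3); --idx)
      dst[idx] = (src[idx - n2] << n1) | (src[idx - n2 - n3] >> n4);
    /* Lowest limb still containing source bits, when the shift isn't
       a whole number of limbs.  */
    if (n1)
      {
        dst[idx] = src[idx - n2] << n1;
        --idx;
      }
    /* The rest is zero filled.  */
    for (; (ptrdiff_t) idx >= 0; --idx)
      dst[idx] = 0;
  }
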
> > +
> > +/* Lower large/huge _BitInt multiplication or division.  */
> > +
> > +void
> > +bitint_large_huge::lower_muldiv_stmt (tree obj, gimple *stmt)
> > +{
> > +  tree rhs1 = gimple_assign_rhs1 (stmt);
> > +  tree rhs2 = gimple_assign_rhs2 (stmt);
> > +  tree lhs = gimple_assign_lhs (stmt);
> > +  tree_code rhs_code = gimple_assign_rhs_code (stmt);
> > +  tree type = TREE_TYPE (rhs1);
> > +  gcc_assert (TREE_CODE (type) == BITINT_TYPE
> > +         && bitint_precision_kind (type) >= bitint_prec_large);
> > +  int prec = TYPE_PRECISION (type), prec1, prec2;
> > +  rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec1);
> > +  rhs2 = handle_operand_addr (rhs2, stmt, NULL, &prec2);
> > +  if (obj == NULL_TREE)
> > +    {
> > +      int part = var_to_partition (m_map, lhs);
> > +      gcc_assert (m_vars[part] != NULL_TREE);
> > +      obj = m_vars[part];
> > +      lhs = build_fold_addr_expr (obj);
> > +    }
> > +  else
> > +    {
> > +      lhs = build_fold_addr_expr (obj);
> > +      lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
> > +                                 NULL_TREE, true, GSI_SAME_STMT);
> > +    }
> > +  tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
> > +  gimple *g;
> > +  switch (rhs_code)
> > +    {
> > +    case MULT_EXPR:
> > +      g = gimple_build_call_internal (IFN_MULBITINT, 6,
> > +                                 lhs, build_int_cst (sitype, prec),
> > +                                 rhs1, build_int_cst (sitype, prec1),
> > +                                 rhs2, build_int_cst (sitype, prec2));
> > +      insert_before (g);
> > +      break;
> > +    case TRUNC_DIV_EXPR:
> > +      g = gimple_build_call_internal (IFN_DIVMODBITINT, 8,
> > +                                 lhs, build_int_cst (sitype, prec),
> > +                                 null_pointer_node,
> > +                                 build_int_cst (sitype, 0),
> > +                                 rhs1, build_int_cst (sitype, prec1),
> > +                                 rhs2, build_int_cst (sitype, prec2));
> > +      if (!stmt_ends_bb_p (stmt))
> > +   gimple_call_set_nothrow (as_a <gcall *> (g), true);
> > +      insert_before (g);
> > +      break;
> > +    case TRUNC_MOD_EXPR:
> > +      g = gimple_build_call_internal (IFN_DIVMODBITINT, 8, null_pointer_node,
> > +                                 build_int_cst (sitype, 0),
> > +                                 lhs, build_int_cst (sitype, prec),
> > +                                 rhs1, build_int_cst (sitype, prec1),
> > +                                 rhs2, build_int_cst (sitype, prec2));
> > +      if (!stmt_ends_bb_p (stmt))
> > +   gimple_call_set_nothrow (as_a <gcall *> (g), true);
> > +      insert_before (g);
> > +      break;
> > +    default:
> > +      gcc_unreachable ();
> > +    }
> > +  if (stmt_ends_bb_p (stmt))
> > +    {
> > +      maybe_duplicate_eh_stmt (g, stmt);
> > +      edge e1;
> > +      edge_iterator ei;
> > +      basic_block bb = gimple_bb (stmt);
> > +
> > +      FOR_EACH_EDGE (e1, ei, bb->succs)
> > +   if (e1->flags & EDGE_EH)
> > +     break;
> > +      if (e1)
> > +   {
> > +     edge e2 = split_block (gsi_bb (m_gsi), g);
> > +     m_gsi = gsi_after_labels (e2->dest);
> > +     make_edge (e2->src, e1->dest, EDGE_EH)->probability
> > +       = profile_probability::very_unlikely ();
> > +   }
> > +    }
> > +}
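
Just to spell out the calling convention used above: a large/huge _BitInt
operand is passed to the library by the address of its limb array together
with its precision in bits, a negative precision denoting a signed
(possibly negative) operand as elsewhere in the lowering, and for
TRUNC_DIV_EXPR resp. TRUNC_MOD_EXPR the unused remainder resp. quotient
pointer is null with 0 precision.  As a sketch (the names and the limb
typedef below are only placeholders, the real prototypes aren't part of
this hunk):

  typedef unsigned long long BITINT_LIMB;	/* placeholder limb type */

  /* Store u * v truncated to retprec bits into ret's limbs.  */
  void __mulbitint3 (BITINT_LIMB *ret, int retprec,
		     const BITINT_LIMB *u, int uprec,
		     const BITINT_LIMB *v, int vprec);

  /* q = u / v, r = u % v; a null q or r pointer with 0 precision means
     that part of the result isn't wanted.  */
  void __divmodbitint4 (BITINT_LIMB *q, int qprec,
			BITINT_LIMB *r, int rprec,
			const BITINT_LIMB *u, int uprec,
			const BITINT_LIMB *v, int vprec);
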
> > +
> > +/* Lower large/huge _BitInt conversion to/from floating point.  */
> > +
> > +void
> > +bitint_large_huge::lower_float_conv_stmt (tree obj, gimple *stmt)
> > +{
> > +  tree rhs1 = gimple_assign_rhs1 (stmt);
> > +  tree lhs = gimple_assign_lhs (stmt);
> > +  tree_code rhs_code = gimple_assign_rhs_code (stmt);
> > +  if (DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (rhs1)))
> > +      || DECIMAL_FLOAT_MODE_P (TYPE_MODE (TREE_TYPE (lhs))))
> > +    {
> > +      sorry_at (gimple_location (stmt),
> > +           "unsupported conversion between %<_BitInt(%d)%> and %qT",
> > +           rhs_code == FIX_TRUNC_EXPR
> > +           ? TYPE_PRECISION (TREE_TYPE (lhs))
> > +           : TYPE_PRECISION (TREE_TYPE (rhs1)),
> > +           rhs_code == FIX_TRUNC_EXPR
> > +           ? TREE_TYPE (rhs1) : TREE_TYPE (lhs));
> > +      if (rhs_code == FLOAT_EXPR)
> > +   {
> > +     gimple *g
> > +       = gimple_build_assign (lhs, build_zero_cst (TREE_TYPE (lhs)));
> > +     gsi_replace (&m_gsi, g, true);
> > +   }
> > +      return;
> > +    }
> > +  tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
> > +  gimple *g;
> > +  if (rhs_code == FIX_TRUNC_EXPR)
> > +    {
> > +      int prec = TYPE_PRECISION (TREE_TYPE (lhs));
> > +      if (!TYPE_UNSIGNED (TREE_TYPE (lhs)))
> > +   prec = -prec;
> > +      if (obj == NULL_TREE)
> > +   {
> > +     int part = var_to_partition (m_map, lhs);
> > +     gcc_assert (m_vars[part] != NULL_TREE);
> > +     obj = m_vars[part];
> > +     lhs = build_fold_addr_expr (obj);
> > +   }
> > +      else
> > +   {
> > +     lhs = build_fold_addr_expr (obj);
> > +     lhs = force_gimple_operand_gsi (&m_gsi, lhs, true,
> > +                                     NULL_TREE, true, GSI_SAME_STMT);
> > +   }
> > +      scalar_mode from_mode
> > +   = as_a <scalar_mode> (TYPE_MODE (TREE_TYPE (rhs1)));
> > +#ifdef HAVE_SFmode
> > +      /* IEEE single is a full superset of both IEEE half and
> > +    bfloat formats, so convert to float first and then to _BitInt
> > +    to avoid the need for another 2 library routines.  */
> > +      if ((REAL_MODE_FORMAT (from_mode) == &arm_bfloat_half_format
> > +      || REAL_MODE_FORMAT (from_mode) == &ieee_half_format)
> > +     && REAL_MODE_FORMAT (SFmode) == &ieee_single_format)
> > +   {
> > +     tree type = lang_hooks.types.type_for_mode (SFmode, 0);
> > +     if (type)
> > +       rhs1 = add_cast (type, rhs1);
> > +   }
> > +#endif
> > +      g = gimple_build_call_internal (IFN_FLOATTOBITINT, 3,
> > +                                 lhs, build_int_cst (sitype, prec),
> > +                                 rhs1);
> > +      insert_before (g);
> > +    }
> > +  else
> > +    {
> > +      int prec;
> > +      rhs1 = handle_operand_addr (rhs1, stmt, NULL, &prec);
> > +      g = gimple_build_call_internal (IFN_BITINTTOFLOAT, 2,
> > +                                 rhs1, build_int_cst (sitype, prec));
> > +      gimple_call_set_lhs (g, lhs);
> > +      if (!stmt_ends_bb_p (stmt))
> > +   gimple_call_set_nothrow (as_a <gcall *> (g), true);
> > +      gsi_replace (&m_gsi, g, true);
> > +    }
> > +}
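
At the user level this handles e.g. (only an illustration):

  _BitInt(512) x, y;
  double d = x;		/* FLOAT_EXPR, becomes
			   d = .BITINTTOFLOAT (&<limbs of x>, <precision of x>);  */
  _Float16 h = 1.0f16;
  y = h;		/* FIX_TRUNC_EXPR; h is first widened to float (see the
			   IEEE half/bfloat comment above) and then
			   .FLOATTOBITINT (&<limbs of y>, -512, (float) h)
			   stores the result, the precision being negated
			   because y's type is signed.  */
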
> > +
> > +/* Helper method for lower_addsub_overflow and lower_mul_overflow.
> > +   If CHECK_ZERO is true, the caller wants to check whether all bits
> > +   in [START, END) are zero, otherwise whether they are either all
> > +   zero or all ones.  L is the limb with index LIMB; START and END
> > +   are measured in bits.  */
> > +
> > +tree
> > +bitint_large_huge::arith_overflow_extract_bits (unsigned int start,
> > +                                           unsigned int end, tree l,
> > +                                           unsigned int limb,
> > +                                           bool check_zero)
> > +{
> > +  unsigned startlimb = start / limb_prec;
> > +  unsigned endlimb = (end - 1) / limb_prec;
> > +  gimple *g;
> > +
> > +  if ((start % limb_prec) == 0 && (end % limb_prec) == 0)
> > +    return l;
> > +  if (startlimb == endlimb && limb == startlimb)
> > +    {
> > +      if (check_zero)
> > +   {
> > +     wide_int w = wi::shifted_mask (start % limb_prec,
> > +                                    end - start, false, limb_prec);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              BIT_AND_EXPR, l,
> > +                              wide_int_to_tree (m_limb_type, w));
> > +     insert_before (g);
> > +     return gimple_assign_lhs (g);
> > +   }
> > +      unsigned int shift = start % limb_prec;
> > +      if ((end % limb_prec) != 0)
> > +   {
> > +     unsigned int lshift = (-end) % limb_prec;
> > +     shift += lshift;
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              LSHIFT_EXPR, l,
> > +                              build_int_cst (unsigned_type_node,
> > +                                             lshift));
> > +     insert_before (g);
> > +     l = gimple_assign_lhs (g);
> > +   }
> > +      l = add_cast (signed_type_for (m_limb_type), l);
> > +      g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
> > +                          RSHIFT_EXPR, l,
> > +                          build_int_cst (unsigned_type_node, shift));
> > +      insert_before (g);
> > +      return add_cast (m_limb_type, gimple_assign_lhs (g));
> > +    }
> > +  else if (limb == startlimb)
> > +    {
> > +      if ((start % limb_prec) == 0)
> > +   return l;
> > +      if (!check_zero)
> > +   l = add_cast (signed_type_for (m_limb_type), l);
> > +      g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
> > +                          RSHIFT_EXPR, l,
> > +                          build_int_cst (unsigned_type_node,
> > +                                         start % limb_prec));
> > +      insert_before (g);
> > +      l = gimple_assign_lhs (g);
> > +      if (!check_zero)
> > +   l = add_cast (m_limb_type, l);
> > +      return l;
> > +    }
> > +  else if (limb == endlimb)
> > +    {
> > +      if ((end % limb_prec) == 0)
> > +   return l;
> > +      if (check_zero)
> > +   {
> > +     wide_int w = wi::mask (end % limb_prec, false, limb_prec);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                              BIT_AND_EXPR, l,
> > +                              wide_int_to_tree (m_limb_type, w));
> > +     insert_before (g);
> > +     return gimple_assign_lhs (g);
> > +   }
> > +      unsigned int shift = (-end) % limb_prec;
> > +      g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                          LSHIFT_EXPR, l,
> > +                          build_int_cst (unsigned_type_node, shift));
> > +      insert_before (g);
> > +      l = add_cast (signed_type_for (m_limb_type), gimple_assign_lhs (g));
> > +      g = gimple_build_assign (make_ssa_name (TREE_TYPE (l)),
> > +                          RSHIFT_EXPR, l,
> > +                          build_int_cst (unsigned_type_node, shift));
> > +      insert_before (g);
> > +      return add_cast (m_limb_type, gimple_assign_lhs (g));
> > +    }
> > +  return l;
> > +}
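
For the single limb case the above boils down to the following (a sketch
with limb_prec assumed 64 and start/end taken as offsets within that limb;
the !check_zero variant shifts the field up and arithmetically back down so
that the caller can test "all zeros or all ones" cheaply, e.g. no overflow
iff (x + 1) <= 1):

  #include <stdint.h>

  static uint64_t
  extract_bits (uint64_t l, unsigned start, unsigned end, int check_zero)
  {
    const unsigned limb_prec = 64;
    if (check_zero)
      {
        /* Just mask the bits in [start, end); non-zero means overflow.  */
        uint64_t mask = (end - start == limb_prec
                         ? ~(uint64_t) 0
                         : ((uint64_t) 1 << (end - start)) - 1) << start;
        return l & mask;
      }
    /* Move bit end - 1 into the sign bit, then shift arithmetically right
       so everything above end - 1 becomes a copy of it.  */
    unsigned lshift = (limb_prec - end % limb_prec) % limb_prec;
    return (uint64_t) (((int64_t) (l << lshift)) >> (lshift + start));
  }
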
> > +
> > +/* Helper method for lower_addsub_overflow and lower_mul_overflow.  Store
> > +   result including overflow flag into the right locations.  */
> > +
> > +void
> > +bitint_large_huge::finish_arith_overflow (tree var, tree obj, tree type,
> > +                                     tree ovf, tree lhs, tree orig_obj,
> > +                                     gimple *stmt, tree_code code)
> > +{
> > +  gimple *g;
> > +
> > +  if (obj == NULL_TREE
> > +      && (TREE_CODE (type) != BITINT_TYPE
> > +     || bitint_precision_kind (type) < bitint_prec_large))
> > +    {
> > +      /* Add support for 3 or more limbs filled in from normal integral
> > +    type if this assert fails.  If no target chooses limb mode smaller
> > +    than half of largest supported normal integral type, this will not
> > +    be needed.  */
> > +      gcc_assert (TYPE_PRECISION (type) <= 2 * limb_prec);
> > +      tree lhs_type = type;
> > +      if (TREE_CODE (type) == BITINT_TYPE
> > +     && bitint_precision_kind (type) == bitint_prec_middle)
> > +   lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (type),
> > +                                              TYPE_UNSIGNED (type));
> > +      tree r1 = limb_access (NULL_TREE, var, size_int (0), true);
> > +      g = gimple_build_assign (make_ssa_name (m_limb_type), r1);
> > +      insert_before (g);
> > +      r1 = gimple_assign_lhs (g);
> > +      if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
> > +   r1 = add_cast (lhs_type, r1);
> > +      if (TYPE_PRECISION (lhs_type) > limb_prec)
> > +   {
> > +     tree r2 = limb_access (NULL_TREE, var, size_int (1), true);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type), r2);
> > +     insert_before (g);
> > +     r2 = gimple_assign_lhs (g);
> > +     r2 = add_cast (lhs_type, r2);
> > +     g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
> > +                              build_int_cst (unsigned_type_node,
> > +                                             limb_prec));
> > +     insert_before (g);
> > +     g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
> > +                              gimple_assign_lhs (g));
> > +     insert_before (g);
> > +     r1 = gimple_assign_lhs (g);
> > +   }
> > +      if (lhs_type != type)
> > +   r1 = add_cast (type, r1);
> > +      ovf = add_cast (lhs_type, ovf);
> > +      if (lhs_type != type)
> > +   ovf = add_cast (type, ovf);
> > +      g = gimple_build_assign (lhs, COMPLEX_EXPR, r1, ovf);
> > +      m_gsi = gsi_for_stmt (stmt);
> > +      gsi_replace (&m_gsi, g, true);
> > +    }
> > +  else
> > +    {
> > +      unsigned HOST_WIDE_INT nelts = 0;
> > +      tree atype = NULL_TREE;
> > +      if (obj)
> > +   {
> > +     nelts = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
> > +     if (orig_obj == NULL_TREE)
> > +       nelts >>= 1;
> > +     atype = build_array_type_nelts (m_limb_type, nelts);
> > +   }
> > +      if (var && obj)
> > +   {
> > +     tree v1, v2;
> > +     tree zero;
> > +     if (orig_obj == NULL_TREE)
> > +       {
> > +         zero = build_zero_cst (build_pointer_type (TREE_TYPE (obj)));
> > +         v1 = build2 (MEM_REF, atype,
> > +                      build_fold_addr_expr (unshare_expr (obj)), zero);
> > +       }
> > +     else if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
> > +       v1 = build1 (VIEW_CONVERT_EXPR, atype, unshare_expr (obj));
> > +     else
> > +       v1 = unshare_expr (obj);
> > +     zero = build_zero_cst (build_pointer_type (TREE_TYPE (var)));
> > +     v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), zero);
> > +     g = gimple_build_assign (v1, v2);
> > +     insert_before (g);
> > +   }
> > +      if (orig_obj == NULL_TREE && obj)
> > +   {
> > +     ovf = add_cast (m_limb_type, ovf);
> > +     tree l = limb_access (NULL_TREE, obj, size_int (nelts), true);
> > +     g = gimple_build_assign (l, ovf);
> > +     insert_before (g);
> > +     if (nelts > 1)
> > +       {
> > +         atype = build_array_type_nelts (m_limb_type, nelts - 1);
> > +         tree off = build_int_cst (build_pointer_type (TREE_TYPE (obj)),
> > +                                   (nelts + 1) * m_limb_size);
> > +         tree v1 = build2 (MEM_REF, atype,
> > +                           build_fold_addr_expr (unshare_expr (obj)),
> > +                           off);
> > +         g = gimple_build_assign (v1, build_zero_cst (atype));
> > +         insert_before (g);
> > +       }
> > +   }
> > +      else if (TREE_CODE (TREE_TYPE (lhs)) == COMPLEX_TYPE)
> > +   {
> > +     imm_use_iterator ui;
> > +     use_operand_p use_p;
> > +     FOR_EACH_IMM_USE_FAST (use_p, ui, lhs)
> > +       {
> > +         g = USE_STMT (use_p);
> > +         if (!is_gimple_assign (g)
> > +             || gimple_assign_rhs_code (g) != IMAGPART_EXPR)
> > +           continue;
> > +         tree lhs2 = gimple_assign_lhs (g);
> > +         gimple *use_stmt;
> > +         single_imm_use (lhs2, &use_p, &use_stmt);
> > +         lhs2 = gimple_assign_lhs (use_stmt);
> > +         gimple_stmt_iterator gsi = gsi_for_stmt (use_stmt);
> > +         if (useless_type_conversion_p (TREE_TYPE (lhs2), TREE_TYPE (ovf)))
> > +           g = gimple_build_assign (lhs2, ovf);
> > +         else
> > +           g = gimple_build_assign (lhs2, NOP_EXPR, ovf);
> > +         gsi_replace (&gsi, g, true);
> > +         break;
> > +       }
> > +   }
> > +      else if (ovf != boolean_false_node)
> > +   {
> > +     g = gimple_build_cond (NE_EXPR, ovf, boolean_false_node,
> > +                            NULL_TREE, NULL_TREE);
> > +     insert_before (g);
> > +     edge e1 = split_block (gsi_bb (m_gsi), g);
> > +     edge e2 = split_block (e1->dest, (gimple *) NULL);
> > +     edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > +     e3->probability = profile_probability::very_likely ();
> > +     e1->flags = EDGE_TRUE_VALUE;
> > +     e1->probability = e3->probability.invert ();
> > +     set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +     m_gsi = gsi_after_labels (e1->dest);
> > +     tree zero = build_zero_cst (TREE_TYPE (lhs));
> > +     tree fn = ubsan_build_overflow_builtin (code, m_loc,
> > +                                             TREE_TYPE (lhs),
> > +                                             zero, zero, NULL);
> > +     force_gimple_operand_gsi (&m_gsi, fn, true, NULL_TREE,
> > +                               true, GSI_SAME_STMT);
> > +     m_gsi = gsi_after_labels (e2->dest);
> > +   }
> > +    }
> > +  if (var)
> > +    {
> > +      tree clobber = build_clobber (TREE_TYPE (var), CLOBBER_EOL);
> > +      g = gimple_build_assign (var, clobber);
> > +      gsi_insert_after (&m_gsi, g, GSI_SAME_STMT);
> > +    }
> > +}
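
So for e.g. a _Complex _BitInt(256) result with 64-bit limbs, the
obj != NULL_TREE path above ends up with the following layout (which is
also what lower_cplxpart_stmt further below expects):

  limbs 0-3   real part, i.e. the wrapped arithmetic result
  limb  4     overflow flag zero-extended to a limb
  limbs 5-7   zeros
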
> > +
> > +/* Helper function for lower_addsub_overflow and lower_mul_overflow.
> > +   Given the precision of the result TYPE (PREC), argument 0 precision
> > +   PREC0, argument 1 precision PREC1 and the minimum precision needed
> > +   for the infinitely precise result PREC2, compute *START, *END,
> > +   *CHECK_ZERO and return the statically known overflow flag
> > +   (boolean_false_node), or NULL_TREE when the bits in [*START, *END)
> > +   need to be checked at runtime.  */
> > +
> > +static tree
> > +arith_overflow (tree_code code, tree type, int prec, int prec0, int prec1,
> > +           int prec2, unsigned *start, unsigned *end, bool *check_zero)
> > +{
> > +  *start = 0;
> > +  *end = 0;
> > +  *check_zero = true;
> > +  /* Ignore this special rule for subtraction: even if both
> > +     prec0 >= 0 and prec1 >= 0, their subtraction can be negative
> > +     in infinite precision.  */
> > +  if (code != MINUS_EXPR && prec0 >= 0 && prec1 >= 0)
> > +    {
> > +      /* Result in [0, prec2) is unsigned; if prec > prec2,
> > +    all bits above it will be zero.  */
> > +      if ((prec - !TYPE_UNSIGNED (type)) >= prec2)
> > +   return boolean_false_node;
> > +      else
> > +   {
> > +     /* ovf if any of bits in [start, end) is non-zero.  */
> > +     *start = prec - !TYPE_UNSIGNED (type);
> > +     *end = prec2;
> > +   }
> > +    }
> > +  else if (TYPE_UNSIGNED (type))
> > +    {
> > +      /* If result in [0, prec2) is signed and if prec > prec2,
> > +    all bits above it will be sign bit copies.  */
> > +      if (prec >= prec2)
> > +   {
> > +     /* ovf if bit prec - 1 is non-zero.  */
> > +     *start = prec - 1;
> > +     *end = prec;
> > +   }
> > +      else
> > +   {
> > +     /* ovf if any of bits in [start, end) is non-zero.  */
> > +     *start = prec;
> > +     *end = prec2;
> > +   }
> > +    }
> > +  else if (prec >= prec2)
> > +    return boolean_false_node;
> > +  else
> > +    {
> > +      /* ovf if [start, end) bits aren't all zeros or all ones.  */
> > +      *start = prec - 1;
> > +      *end = prec2;
> > +      *check_zero = false;
> > +    }
> > +  return NULL_TREE;
> > +}
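
A worked example for the above (assuming range info doesn't narrow the
operands):

  unsigned _BitInt(256) a, b, r;
  bool ovf = __builtin_add_overflow (a, b, &r);

gives prec = prec0 = prec1 = 256 and prec2 = 257; code is PLUS_EXPR and
both precisions are non-negative, and as prec - 0 = 256 < prec2 we get
*start = 256, *end = 257, *check_zero = true, i.e. overflow iff the carry
bit 256 of the infinitely precise sum is non-zero.
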
> > +
> > +/* Lower a .{ADD,SUB}_OVERFLOW call with at least one large/huge _BitInt
> > +   argument or return type _Complex large/huge _BitInt.  */
> > +
> > +void
> > +bitint_large_huge::lower_addsub_overflow (tree obj, gimple *stmt)
> > +{
> > +  tree arg0 = gimple_call_arg (stmt, 0);
> > +  tree arg1 = gimple_call_arg (stmt, 1);
> > +  tree lhs = gimple_call_lhs (stmt);
> > +  gimple *g;
> > +
> > +  if (!lhs)
> > +    {
> > +      gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > +      gsi_remove (&gsi, true);
> > +      return;
> > +    }
> > +  gimple *final_stmt = gsi_stmt (m_gsi);
> > +  tree type = TREE_TYPE (lhs);
> > +  if (TREE_CODE (type) == COMPLEX_TYPE)
> > +    type = TREE_TYPE (type);
> > +  int prec = TYPE_PRECISION (type);
> > +  int prec0 = range_to_prec (arg0, stmt);
> > +  int prec1 = range_to_prec (arg1, stmt);
> > +  int prec2 = ((prec0 < 0) == (prec1 < 0)
> > +          ? MAX (prec0 < 0 ? -prec0 : prec0,
> > +                 prec1 < 0 ? -prec1 : prec1) + 1
> > +          : MAX (prec0 < 0 ? -prec0 : prec0 + 1,
> > +                 prec1 < 0 ? -prec1 : prec1 + 1) + 1);
> > +  int prec3 = MAX (prec0 < 0 ? -prec0 : prec0,
> > +              prec1 < 0 ? -prec1 : prec1);
> > +  prec3 = MAX (prec3, prec);
> > +  tree var = NULL_TREE;
> > +  tree orig_obj = obj;
> > +  if (obj == NULL_TREE
> > +      && TREE_CODE (type) == BITINT_TYPE
> > +      && bitint_precision_kind (type) >= bitint_prec_large
> > +      && m_names
> > +      && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
> > +    {
> > +      int part = var_to_partition (m_map, lhs);
> > +      gcc_assert (m_vars[part] != NULL_TREE);
> > +      obj = m_vars[part];
> > +      if (TREE_TYPE (lhs) == type)
> > +   orig_obj = obj;
> > +    }
> > +  if (TREE_CODE (type) != BITINT_TYPE
> > +      || bitint_precision_kind (type) < bitint_prec_large)
> > +    {
> > +      unsigned HOST_WIDE_INT nelts = CEIL (prec, limb_prec);
> > +      tree atype = build_array_type_nelts (m_limb_type, nelts);
> > +      var = create_tmp_var (atype);
> > +    }
> > +
> > +  enum tree_code code;
> > +  switch (gimple_call_internal_fn (stmt))
> > +    {
> > +    case IFN_ADD_OVERFLOW:
> > +    case IFN_UBSAN_CHECK_ADD:
> > +      code = PLUS_EXPR;
> > +      break;
> > +    case IFN_SUB_OVERFLOW:
> > +    case IFN_UBSAN_CHECK_SUB:
> > +      code = MINUS_EXPR;
> > +      break;
> > +    default:
> > +      gcc_unreachable ();
> > +    }
> > +  unsigned start, end;
> > +  bool check_zero;
> > +  tree ovf = arith_overflow (code, type, prec, prec0, prec1, prec2,
> > +                        &start, &end, &check_zero);
> > +
> > +  unsigned startlimb, endlimb;
> > +  if (ovf)
> > +    {
> > +      startlimb = ~0U;
> > +      endlimb = ~0U;
> > +    }
> > +  else
> > +    {
> > +      startlimb = start / limb_prec;
> > +      endlimb = (end - 1) / limb_prec;
> > +    }
> > +
> > +  int prec4 = ovf != NULL_TREE ? prec : prec3;
> > +  bitint_prec_kind kind = bitint_precision_kind (prec4);
> > +  unsigned cnt, rem = 0, fin = 0;
> > +  tree idx = NULL_TREE, idx_first = NULL_TREE, idx_next = NULL_TREE;
> > +  bool last_ovf = (ovf == NULL_TREE
> > +              && CEIL (prec2, limb_prec) > CEIL (prec3, limb_prec));
> > +  if (kind != bitint_prec_huge)
> > +    cnt = CEIL (prec4, limb_prec) + last_ovf;
> > +  else
> > +    {
> > +      rem = (prec4 % (2 * limb_prec));
> > +      fin = (prec4 - rem) / limb_prec;
> > +      cnt = 2 + CEIL (rem, limb_prec) + last_ovf;
> > +      idx = idx_first = create_loop (size_zero_node, &idx_next);
> > +    }
> > +
> > +  if (kind == bitint_prec_huge)
> > +    m_upwards_2limb = fin;
> > +
> > +  tree type0 = TREE_TYPE (arg0);
> > +  tree type1 = TREE_TYPE (arg1);
> > +  if (TYPE_PRECISION (type0) < prec3)
> > +    {
> > +      type0 = build_bitint_type (prec3, TYPE_UNSIGNED (type0));
> > +      if (TREE_CODE (arg0) == INTEGER_CST)
> > +   arg0 = fold_convert (type0, arg0);
> > +    }
> > +  if (TYPE_PRECISION (type1) < prec3)
> > +    {
> > +      type1 = build_bitint_type (prec3, TYPE_UNSIGNED (type1));
> > +      if (TREE_CODE (arg1) == INTEGER_CST)
> > +   arg1 = fold_convert (type1, arg1);
> > +    }
> > +  unsigned int data_cnt = 0;
> > +  tree last_rhs1 = NULL_TREE, last_rhs2 = NULL_TREE;
> > +  tree cmp = build_zero_cst (m_limb_type);
> > +  unsigned prec_limbs = CEIL ((unsigned) prec, limb_prec);
> > +  tree ovf_out = NULL_TREE, cmp_out = NULL_TREE;
> > +  for (unsigned i = 0; i < cnt; i++)
> > +    {
> > +      m_data_cnt = 0;
> > +      tree rhs1, rhs2;
> > +      if (kind != bitint_prec_huge)
> > +   idx = size_int (i);
> > +      else if (i >= 2)
> > +   idx = size_int (fin + (i > 2));
> > +      if (!last_ovf || i < cnt - 1)
> > +   {
> > +     if (type0 != TREE_TYPE (arg0))
> > +       rhs1 = handle_cast (type0, arg0, idx);
> > +     else
> > +       rhs1 = handle_operand (arg0, idx);
> > +     if (type1 != TREE_TYPE (arg1))
> > +       rhs2 = handle_cast (type1, arg1, idx);
> > +     else
> > +       rhs2 = handle_operand (arg1, idx);
> > +     if (i == 0)
> > +       data_cnt = m_data_cnt;
> > +     if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs1)))
> > +       rhs1 = add_cast (m_limb_type, rhs1);
> > +     if (!useless_type_conversion_p (m_limb_type, TREE_TYPE (rhs2)))
> > +       rhs2 = add_cast (m_limb_type, rhs2);
> > +     last_rhs1 = rhs1;
> > +     last_rhs2 = rhs2;
> > +   }
> > +      else
> > +   {
> > +     m_data_cnt = data_cnt;
> > +     if (TYPE_UNSIGNED (type0))
> > +       rhs1 = build_zero_cst (m_limb_type);
> > +     else
> > +       {
> > +         rhs1 = add_cast (signed_type_for (m_limb_type), last_rhs1);
> > +         if (TREE_CODE (rhs1) == INTEGER_CST)
> > +           rhs1 = build_int_cst (m_limb_type,
> > +                                 tree_int_cst_sgn (rhs1) < 0 ? -1 : 0);
> > +         else
> > +           {
> > +             tree lpm1 = build_int_cst (unsigned_type_node,
> > +                                        limb_prec - 1);
> > +             g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs1)),
> > +                                      RSHIFT_EXPR, rhs1, lpm1);
> > +             insert_before (g);
> > +             rhs1 = add_cast (m_limb_type, gimple_assign_lhs (g));
> > +           }
> > +       }
> > +     if (TYPE_UNSIGNED (type1))
> > +       rhs2 = build_zero_cst (m_limb_type);
> > +     else
> > +       {
> > +         rhs2 = add_cast (signed_type_for (m_limb_type), last_rhs2);
> > +         if (TREE_CODE (rhs2) == INTEGER_CST)
> > +           rhs2 = build_int_cst (m_limb_type,
> > +                                 tree_int_cst_sgn (rhs2) < 0 ? -1 : 0);
> > +         else
> > +           {
> > +             tree lpm1 = build_int_cst (unsigned_type_node,
> > +                                        limb_prec - 1);
> > +             g = gimple_build_assign (make_ssa_name (TREE_TYPE (rhs2)),
> > +                                      RSHIFT_EXPR, rhs2, lpm1);
> > +             insert_before (g);
> > +             rhs2 = add_cast (m_limb_type, gimple_assign_lhs (g));
> > +           }
> > +       }
> > +   }
> > +      tree rhs = handle_plus_minus (code, rhs1, rhs2, idx);
> > +      if (ovf != boolean_false_node)
> > +   {
> > +     if (tree_fits_uhwi_p (idx))
> > +       {
> > +         unsigned limb = tree_to_uhwi (idx);
> > +         if (limb >= startlimb && limb <= endlimb)
> > +           {
> > +             tree l = arith_overflow_extract_bits (start, end, rhs,
> > +                                                   limb, check_zero);
> > +             tree this_ovf = make_ssa_name (boolean_type_node);
> > +             if (ovf == NULL_TREE && !check_zero)
> > +               {
> > +                 cmp = l;
> > +                 g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                                          PLUS_EXPR, l,
> > +                                          build_int_cst (m_limb_type, 1));
> > +                 insert_before (g);
> > +                 g = gimple_build_assign (this_ovf, GT_EXPR,
> > +                                          gimple_assign_lhs (g),
> > +                                          build_int_cst (m_limb_type, 1));
> > +               }
> > +             else
> > +               g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
> > +             insert_before (g);
> > +             if (ovf == NULL_TREE)
> > +               ovf = this_ovf;
> > +             else
> > +               {
> > +                 tree b = make_ssa_name (boolean_type_node);
> > +                 g = gimple_build_assign (b, BIT_IOR_EXPR, ovf, this_ovf);
> > +                 insert_before (g);
> > +                 ovf = b;
> > +               }
> > +           }
> > +       }
> > +     else if (startlimb < fin)
> > +       {
> > +         if (m_first && startlimb + 2 < fin)
> > +           {
> > +             tree data_out;
> > +             ovf = prepare_data_in_out (boolean_false_node, idx, &data_out);
> > +             ovf_out = m_data.pop ();
> > +             m_data.pop ();
> > +             if (!check_zero)
> > +               {
> > +                 cmp = prepare_data_in_out (cmp, idx, &data_out);
> > +                 cmp_out = m_data.pop ();
> > +                 m_data.pop ();
> > +               }
> > +           }
> > +         if (i != 0 || startlimb != fin - 1)
> > +           {
> > +             tree_code cmp_code;
> > +             bool single_comparison
> > +               = (startlimb + 2 >= fin || (startlimb & 1) != (i & 1));
> > +             if (!single_comparison)
> > +               {
> > +                 cmp_code = GE_EXPR;
> > +                 if (!check_zero && (start % limb_prec) == 0)
> > +                   single_comparison = true;
> > +               }
> > +             else if ((startlimb & 1) == (i & 1))
> > +               cmp_code = EQ_EXPR;
> > +             else
> > +               cmp_code = GT_EXPR;
> > +             g = gimple_build_cond (cmp_code, idx, size_int (startlimb),
> > +                                    NULL_TREE, NULL_TREE);
> > +             insert_before (g);
> > +             edge e1 = split_block (gsi_bb (m_gsi), g);
> > +             edge e2 = split_block (e1->dest, (gimple *) NULL);
> > +             edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > +             edge e4 = NULL;
> > +             e3->probability = profile_probability::unlikely ();
> > +             e1->flags = EDGE_TRUE_VALUE;
> > +             e1->probability = e3->probability.invert ();
> > +             set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +             if (!single_comparison)
> > +               {
> > +                 m_gsi = gsi_after_labels (e1->dest);
> > +                 g = gimple_build_cond (EQ_EXPR, idx,
> > +                                        size_int (startlimb), NULL_TREE,
> > +                                        NULL_TREE);
> > +                 insert_before (g);
> > +                 e2 = split_block (gsi_bb (m_gsi), g);
> > +                 basic_block bb = create_empty_bb (e2->dest);
> > +                 add_bb_to_loop (bb, e2->dest->loop_father);
> > +                 e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
> > +                 set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
> > +                 e4->probability = profile_probability::unlikely ();
> > +                 e2->flags = EDGE_FALSE_VALUE;
> > +                 e2->probability = e4->probability.invert ();
> > +                 e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
> > +                 e2 = find_edge (e2->dest, e3->dest);
> > +               }
> > +             m_gsi = gsi_after_labels (e2->src);
> > +             unsigned tidx = startlimb + (cmp_code == GT_EXPR);
> > +             tree l = arith_overflow_extract_bits (start, end, rhs, tidx,
> > +                                                   check_zero);
> > +             tree this_ovf = make_ssa_name (boolean_type_node);
> > +             if (cmp_code != GT_EXPR && !check_zero)
> > +               {
> > +                 g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                                          PLUS_EXPR, l,
> > +                                          build_int_cst (m_limb_type, 1));
> > +                 insert_before (g);
> > +                 g = gimple_build_assign (this_ovf, GT_EXPR,
> > +                                          gimple_assign_lhs (g),
> > +                                          build_int_cst (m_limb_type, 1));
> > +               }
> > +             else
> > +               g = gimple_build_assign (this_ovf, NE_EXPR, l, cmp);
> > +             insert_before (g);
> > +             if (cmp_code == GT_EXPR)
> > +               {
> > +                 tree t = make_ssa_name (boolean_type_node);
> > +                 g = gimple_build_assign (t, BIT_IOR_EXPR, ovf, this_ovf);
> > +                 insert_before (g);
> > +                 this_ovf = t;
> > +               }
> > +             tree this_ovf2 = NULL_TREE;
> > +             if (!single_comparison)
> > +               {
> > +                 m_gsi = gsi_after_labels (e4->src);
> > +                 tree t = make_ssa_name (boolean_type_node);
> > +                 g = gimple_build_assign (t, NE_EXPR, rhs, cmp);
> > +                 insert_before (g);
> > +                 this_ovf2 = make_ssa_name (boolean_type_node);
> > +                 g = gimple_build_assign (this_ovf2, BIT_IOR_EXPR,
> > +                                          ovf, t);
> > +                 insert_before (g);
> > +               }
> > +             m_gsi = gsi_after_labels (e2->dest);
> > +             tree t;
> > +             if (i == 1 && ovf_out)
> > +               t = ovf_out;
> > +             else
> > +               t = make_ssa_name (boolean_type_node);
> > +             gphi *phi = create_phi_node (t, e2->dest);
> > +             add_phi_arg (phi, this_ovf, e2, UNKNOWN_LOCATION);
> > +             add_phi_arg (phi, ovf ? ovf
> > +                               : boolean_false_node, e3,
> > +                          UNKNOWN_LOCATION);
> > +             if (e4)
> > +               add_phi_arg (phi, this_ovf2, e4, UNKNOWN_LOCATION);
> > +             ovf = t;
> > +             if (!check_zero && cmp_code != GT_EXPR)
> > +               {
> > +                 t = cmp_out ? cmp_out : make_ssa_name (m_limb_type);
> > +                 phi = create_phi_node (t, e2->dest);
> > +                 add_phi_arg (phi, l, e2, UNKNOWN_LOCATION);
> > +                 add_phi_arg (phi, cmp, e3, UNKNOWN_LOCATION);
> > +                 if (e4)
> > +                   add_phi_arg (phi, cmp, e4, UNKNOWN_LOCATION);
> > +                 cmp = t;
> > +               }
> > +           }
> > +       }
> > +   }
> > +
> > +      if (var || obj)
> > +   {
> > +     if (tree_fits_uhwi_p (idx) && tree_to_uhwi (idx) >= prec_limbs)
> > +       ;
> > +     else if (!tree_fits_uhwi_p (idx)
> > +              && (unsigned) prec < (fin - (i == 0)) * limb_prec)
> > +       {
> > +         bool single_comparison
> > +           = (((unsigned) prec % limb_prec) == 0
> > +              || prec_limbs + 1 >= fin
> > +              || (prec_limbs & 1) == (i & 1));
> > +         g = gimple_build_cond (LE_EXPR, idx, size_int (prec_limbs - 1),
> > +                                NULL_TREE, NULL_TREE);
> > +         insert_before (g);
> > +         edge e1 = split_block (gsi_bb (m_gsi), g);
> > +         edge e2 = split_block (e1->dest, (gimple *) NULL);
> > +         edge e3 = make_edge (e1->src, e2->dest, EDGE_FALSE_VALUE);
> > +         edge e4 = NULL;
> > +         e3->probability = profile_probability::unlikely ();
> > +         e1->flags = EDGE_TRUE_VALUE;
> > +         e1->probability = e3->probability.invert ();
> > +         set_immediate_dominator (CDI_DOMINATORS, e2->dest, e1->src);
> > +         if (!single_comparison)
> > +           {
> > +             m_gsi = gsi_after_labels (e1->dest);
> > +             g = gimple_build_cond (LT_EXPR, idx,
> > +                                    size_int (prec_limbs - 1),
> > +                                    NULL_TREE, NULL_TREE);
> > +             insert_before (g);
> > +             e2 = split_block (gsi_bb (m_gsi), g);
> > +             basic_block bb = create_empty_bb (e2->dest);
> > +             add_bb_to_loop (bb, e2->dest->loop_father);
> > +             e4 = make_edge (e2->src, bb, EDGE_TRUE_VALUE);
> > +             set_immediate_dominator (CDI_DOMINATORS, bb, e2->src);
> > +             e4->probability = profile_probability::unlikely ();
> > +             e2->flags = EDGE_FALSE_VALUE;
> > +             e2->probability = e4->probability.invert ();
> > +             e4 = make_edge (bb, e3->dest, EDGE_FALLTHRU);
> > +             e2 = find_edge (e2->dest, e3->dest);
> > +           }
> > +         m_gsi = gsi_after_labels (e2->src);
> > +         tree l = limb_access (type, var ? var : obj, idx, true);
> > +         g = gimple_build_assign (l, rhs);
> > +         insert_before (g);
> > +         if (!single_comparison)
> > +           {
> > +             m_gsi = gsi_after_labels (e4->src);
> > +             l = limb_access (type, var ? var : obj,
> > +                              size_int (prec_limbs - 1), true);
> > +             if (!useless_type_conversion_p (TREE_TYPE (l),
> > +                                             TREE_TYPE (rhs)))
> > +               rhs = add_cast (TREE_TYPE (l), rhs);
> > +             g = gimple_build_assign (l, rhs);
> > +             insert_before (g);
> > +           }
> > +         m_gsi = gsi_after_labels (e2->dest);
> > +       }
> > +     else
> > +       {
> > +         tree l = limb_access (type, var ? var : obj, idx, true);
> > +         if (!useless_type_conversion_p (TREE_TYPE (l), TREE_TYPE (rhs)))
> > +           rhs = add_cast (TREE_TYPE (l), rhs);
> > +         g = gimple_build_assign (l, rhs);
> > +         insert_before (g);
> > +       }
> > +   }
> > +      m_first = false;
> > +      if (kind == bitint_prec_huge && i <= 1)
> > +   {
> > +     if (i == 0)
> > +       {
> > +         idx = make_ssa_name (sizetype);
> > +         g = gimple_build_assign (idx, PLUS_EXPR, idx_first,
> > +                                  size_one_node);
> > +         insert_before (g);
> > +       }
> > +     else
> > +       {
> > +         g = gimple_build_assign (idx_next, PLUS_EXPR, idx_first,
> > +                                  size_int (2));
> > +         insert_before (g);
> > +         g = gimple_build_cond (NE_EXPR, idx_next, size_int (fin),
> > +                                NULL_TREE, NULL_TREE);
> > +         insert_before (g);
> > +         m_gsi = gsi_for_stmt (final_stmt);
> > +       }
> > +   }
> > +    }
> > +
> > +  finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, code);
> > +}
> > +
> > +/* Lower a .MUL_OVERFLOW call with at least one large/huge _BitInt
> > +   argument or return type _Complex large/huge _BitInt.  */
> > +
> > +void
> > +bitint_large_huge::lower_mul_overflow (tree obj, gimple *stmt)
> > +{
> > +  tree arg0 = gimple_call_arg (stmt, 0);
> > +  tree arg1 = gimple_call_arg (stmt, 1);
> > +  tree lhs = gimple_call_lhs (stmt);
> > +  if (!lhs)
> > +    {
> > +      gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > +      gsi_remove (&gsi, true);
> > +      return;
> > +    }
> > +  gimple *final_stmt = gsi_stmt (m_gsi);
> > +  tree type = TREE_TYPE (lhs);
> > +  if (TREE_CODE (type) == COMPLEX_TYPE)
> > +    type = TREE_TYPE (type);
> > +  int prec = TYPE_PRECISION (type), prec0, prec1;
> > +  arg0 = handle_operand_addr (arg0, stmt, NULL, &prec0);
> > +  arg1 = handle_operand_addr (arg1, stmt, NULL, &prec1);
> > +  int prec2 = ((prec0 < 0 ? -prec0 : prec0)
> > +          + (prec1 < 0 ? -prec1 : prec1)
> > +          + ((prec0 < 0) != (prec1 < 0)));
> > +  tree var = NULL_TREE;
> > +  tree orig_obj = obj;
> > +  bool force_var = false;
> > +  if (obj == NULL_TREE
> > +      && TREE_CODE (type) == BITINT_TYPE
> > +      && bitint_precision_kind (type) >= bitint_prec_large
> > +      && m_names
> > +      && bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
> > +    {
> > +      int part = var_to_partition (m_map, lhs);
> > +      gcc_assert (m_vars[part] != NULL_TREE);
> > +      obj = m_vars[part];
> > +      if (TREE_TYPE (lhs) == type)
> > +   orig_obj = obj;
> > +    }
> > +  else if (obj != NULL_TREE && DECL_P (obj))
> > +    {
> > +      for (int i = 0; i < 2; ++i)
> > +   {
> > +     tree arg = i ? arg1 : arg0;
> > +     if (TREE_CODE (arg) == ADDR_EXPR)
> > +       arg = TREE_OPERAND (arg, 0);
> > +     if (get_base_address (arg) == obj)
> > +       {
> > +         force_var = true;
> > +         break;
> > +       }
> > +   }
> > +    }
> > +  if (obj == NULL_TREE
> > +      || force_var
> > +      || TREE_CODE (type) != BITINT_TYPE
> > +      || bitint_precision_kind (type) < bitint_prec_large
> > +      || prec2 > (CEIL (prec, limb_prec) * limb_prec * (orig_obj ? 1 : 2)))
> > +    {
> > +      unsigned HOST_WIDE_INT nelts = CEIL (MAX (prec, prec2), limb_prec);
> > +      tree atype = build_array_type_nelts (m_limb_type, nelts);
> > +      var = create_tmp_var (atype);
> > +    }
> > +  tree addr = build_fold_addr_expr (var ? var : obj);
> > +  addr = force_gimple_operand_gsi (&m_gsi, addr, true,
> > +                              NULL_TREE, true, GSI_SAME_STMT);
> > +  tree sitype = lang_hooks.types.type_for_mode (SImode, 0);
> > +  gimple *g
> > +    = gimple_build_call_internal (IFN_MULBITINT, 6,
> > +                             addr, build_int_cst (sitype,
> > +                                                  MAX (prec2, prec)),
> > +                             arg0, build_int_cst (sitype, prec0),
> > +                             arg1, build_int_cst (sitype, prec1));
> > +  insert_before (g);
> > +
> > +  unsigned start, end;
> > +  bool check_zero;
> > +  tree ovf = arith_overflow (MULT_EXPR, type, prec, prec0, prec1, prec2,
> > +                        &start, &end, &check_zero);
> > +  if (ovf == NULL_TREE)
> > +    {
> > +      unsigned startlimb = start / limb_prec;
> > +      unsigned endlimb = (end - 1) / limb_prec;
> > +      unsigned cnt;
> > +      bool use_loop = false;
> > +      if (startlimb == endlimb)
> > +   cnt = 1;
> > +      else if (startlimb + 1 == endlimb)
> > +   cnt = 2;
> > +      else if ((end % limb_prec) == 0)
> > +   {
> > +     cnt = 2;
> > +     use_loop = true;
> > +   }
> > +      else
> > +   {
> > +     cnt = 3;
> > +     use_loop = startlimb + 2 < endlimb;
> > +   }
> > +      if (cnt == 1)
> > +   {
> > +     tree l = limb_access (NULL_TREE, var ? var : obj,
> > +                           size_int (startlimb), true);
> > +     g = gimple_build_assign (make_ssa_name (m_limb_type), l);
> > +     insert_before (g);
> > +     l = arith_overflow_extract_bits (start, end, gimple_assign_lhs (g),
> > +                                      startlimb, check_zero);
> > +     ovf = make_ssa_name (boolean_type_node);
> > +     if (check_zero)
> > +       g = gimple_build_assign (ovf, NE_EXPR, l,
> > +                                build_zero_cst (m_limb_type));
> > +     else
> > +       {
> > +         g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                                  PLUS_EXPR, l,
> > +                                  build_int_cst (m_limb_type, 1));
> > +         insert_before (g);
> > +         g = gimple_build_assign (ovf, GT_EXPR, gimple_assign_lhs (g),
> > +                                  build_int_cst (m_limb_type, 1));
> > +       }
> > +     insert_before (g);
> > +   }
> > +      else
> > +   {
> > +     basic_block edge_bb = NULL;
> > +     gimple_stmt_iterator gsi = m_gsi;
> > +     gsi_prev (&gsi);
> > +     edge e = split_block (gsi_bb (gsi), gsi_stmt (gsi));
> > +     edge_bb = e->src;
> > +     m_gsi = gsi_last_bb (edge_bb);
> > +     if (!gsi_end_p (m_gsi))
> > +       gsi_next (&m_gsi);
> > +
> > +     tree cmp = build_zero_cst (m_limb_type);
> > +     for (unsigned i = 0; i < cnt; i++)
> > +       {
> > +         tree idx, idx_next = NULL_TREE;
> > +         if (i == 0)
> > +           idx = size_int (startlimb);
> > +         else if (i == 2)
> > +           idx = size_int (endlimb);
> > +         else if (use_loop)
> > +           idx = create_loop (size_int (startlimb + 1), &idx_next);
> > +         else
> > +           idx = size_int (startlimb + 1);
> > +         tree l = limb_access (NULL_TREE, var ? var : obj, idx, true);
> > +         g = gimple_build_assign (make_ssa_name (m_limb_type), l);
> > +         insert_before (g);
> > +         l = gimple_assign_lhs (g);
> > +         if (i == 0 || i == 2)
> > +           l = arith_overflow_extract_bits (start, end, l,
> > +                                            tree_to_uhwi (idx),
> > +                                            check_zero);
> > +         if (i == 0 && !check_zero)
> > +           {
> > +             cmp = l;
> > +             g = gimple_build_assign (make_ssa_name (m_limb_type),
> > +                                      PLUS_EXPR, l,
> > +                                      build_int_cst (m_limb_type, 1));
> > +             insert_before (g);
> > +             g = gimple_build_cond (GT_EXPR, gimple_assign_lhs (g),
> > +                                    build_int_cst (m_limb_type, 1),
> > +                                    NULL_TREE, NULL_TREE);
> > +           }
> > +         else
> > +           g = gimple_build_cond (NE_EXPR, l, cmp, NULL_TREE, NULL_TREE);
> > +         insert_before (g);
> > +         edge e1 = split_block (gsi_bb (m_gsi), g);
> > +         e1->flags = EDGE_FALSE_VALUE;
> > +         edge e2 = make_edge (e1->src, gimple_bb (final_stmt),
> > +                              EDGE_TRUE_VALUE);
> > +         e1->probability = profile_probability::likely ();
> > +         e2->probability = e1->probability.invert ();
> > +         if (i == 0)
> > +           set_immediate_dominator (CDI_DOMINATORS, e2->dest, e2->src);
> > +         m_gsi = gsi_after_labels (e1->dest);
> > +         if (i == 1 && use_loop)
> > +           {
> > +             g = gimple_build_assign (idx_next, PLUS_EXPR, idx,
> > +                                      size_one_node);
> > +             insert_before (g);
> > +             g = gimple_build_cond (NE_EXPR, idx_next,
> > +                                    size_int (endlimb + (cnt == 1)),
> > +                                    NULL_TREE, NULL_TREE);
> > +             insert_before (g);
> > +             edge true_edge, false_edge;
> > +             extract_true_false_edges_from_block (gsi_bb (m_gsi),
> > +                                                  &true_edge,
> > +                                                  &false_edge);
> > +             m_gsi = gsi_after_labels (false_edge->dest);
> > +           }
> > +       }
> > +
> > +     ovf = make_ssa_name (boolean_type_node);
> > +     basic_block bb = gimple_bb (final_stmt);
> > +     gphi *phi = create_phi_node (ovf, bb);
> > +     edge e1 = find_edge (gsi_bb (m_gsi), bb);
> > +     edge_iterator ei;
> > +     FOR_EACH_EDGE (e, ei, bb->preds)
> > +       {
> > +         tree val = e == e1 ? boolean_false_node : boolean_true_node;
> > +         add_phi_arg (phi, val, e, UNKNOWN_LOCATION);
> > +       }
> > +     m_gsi = gsi_for_stmt (final_stmt);
> > +   }
> > +    }
> > +
> > +  finish_arith_overflow (var, obj, type, ovf, lhs, orig_obj, stmt, MULT_EXPR);
> > +}
> > +
> > +/* Lower REALPART_EXPR or IMAGPART_EXPR stmt extracting part of result from
> > +   .{ADD,SUB,MUL}_OVERFLOW call.  */
> > +
> > +void
> > +bitint_large_huge::lower_cplxpart_stmt (tree obj, gimple *stmt)
> > +{
> > +  tree rhs1 = gimple_assign_rhs1 (stmt);
> > +  rhs1 = TREE_OPERAND (rhs1, 0);
> > +  if (obj == NULL_TREE)
> > +    {
> > +      int part = var_to_partition (m_map, gimple_assign_lhs (stmt));
> > +      gcc_assert (m_vars[part] != NULL_TREE);
> > +      obj = m_vars[part];
> > +    }
> > +  if (TREE_CODE (rhs1) == SSA_NAME
> > +      && (m_names == NULL
> > +     || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> > +    {
> > +      lower_call (obj, SSA_NAME_DEF_STMT (rhs1));
> > +      return;
> > +    }
> > +  int part = var_to_partition (m_map, rhs1);
> > +  gcc_assert (m_vars[part] != NULL_TREE);
> > +  tree var = m_vars[part];
> > +  unsigned HOST_WIDE_INT nelts
> > +    = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (obj))) / limb_prec;
> > +  tree atype = build_array_type_nelts (m_limb_type, nelts);
> > +  if (!useless_type_conversion_p (atype, TREE_TYPE (obj)))
> > +    obj = build1 (VIEW_CONVERT_EXPR, atype, obj);
> > +  tree off = build_int_cst (build_pointer_type (TREE_TYPE (var)),
> > +                       gimple_assign_rhs_code (stmt) == REALPART_EXPR
> > +                       ? 0 : nelts * m_limb_size);
> > +  tree v2 = build2 (MEM_REF, atype, build_fold_addr_expr (var), off);
> > +  gimple *g = gimple_build_assign (obj, v2);
> > +  insert_before (g);
> > +}
> > +
> > +/* Lower COMPLEX_EXPR stmt.  */
> > +
> > +void
> > +bitint_large_huge::lower_complexexpr_stmt (gimple *stmt)
> > +{
> > +  tree lhs = gimple_assign_lhs (stmt);
> > +  tree rhs1 = gimple_assign_rhs1 (stmt);
> > +  tree rhs2 = gimple_assign_rhs2 (stmt);
> > +  int part = var_to_partition (m_map, lhs);
> > +  gcc_assert (m_vars[part] != NULL_TREE);
> > +  lhs = m_vars[part];
> > +  unsigned HOST_WIDE_INT nelts
> > +    = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (rhs1))) / limb_prec;
> > +  tree atype = build_array_type_nelts (m_limb_type, nelts);
> > +  tree zero = build_zero_cst (build_pointer_type (TREE_TYPE (lhs)));
> > +  tree v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), zero);
> > +  tree v2;
> > +  if (TREE_CODE (rhs1) == SSA_NAME)
> > +    {
> > +      part = var_to_partition (m_map, rhs1);
> > +      gcc_assert (m_vars[part] != NULL_TREE);
> > +      v2 = m_vars[part];
> > +    }
> > +  else if (integer_zerop (rhs1))
> > +    v2 = build_zero_cst (atype);
> > +  else
> > +    v2 = tree_output_constant_def (rhs1);
> > +  if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
> > +    v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
> > +  gimple *g = gimple_build_assign (v1, v2);
> > +  insert_before (g);
> > +  tree off = fold_convert (build_pointer_type (TREE_TYPE (lhs)),
> > +                      TYPE_SIZE_UNIT (atype));
> > +  v1 = build2 (MEM_REF, atype, build_fold_addr_expr (lhs), off);
> > +  if (TREE_CODE (rhs2) == SSA_NAME)
> > +    {
> > +      part = var_to_partition (m_map, rhs2);
> > +      gcc_assert (m_vars[part] != NULL_TREE);
> > +      v2 = m_vars[part];
> > +    }
> > +  else if (integer_zerop (rhs2))
> > +    v2 = build_zero_cst (atype);
> > +  else
> > +    v2 = tree_output_constant_def (rhs2);
> > +  if (!useless_type_conversion_p (atype, TREE_TYPE (v2)))
> > +    v2 = build1 (VIEW_CONVERT_EXPR, atype, v2);
> > +  g = gimple_build_assign (v1, v2);
> > +  insert_before (g);
> > +}
> > +
> > +/* Lower a call statement with one or more large/huge _BitInt
> > +   arguments or large/huge _BitInt return value.  */
> > +
> > +void
> > +bitint_large_huge::lower_call (tree obj, gimple *stmt)
> > +{
> > +  gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> > +  unsigned int nargs = gimple_call_num_args (stmt);
> > +  if (gimple_call_internal_p (stmt))
> > +    switch (gimple_call_internal_fn (stmt))
> > +      {
> > +      case IFN_ADD_OVERFLOW:
> > +      case IFN_SUB_OVERFLOW:
> > +      case IFN_UBSAN_CHECK_ADD:
> > +      case IFN_UBSAN_CHECK_SUB:
> > +   lower_addsub_overflow (obj, stmt);
> > +   return;
> > +      case IFN_MUL_OVERFLOW:
> > +      case IFN_UBSAN_CHECK_MUL:
> > +   lower_mul_overflow (obj, stmt);
> > +   return;
> > +      default:
> > +   break;
> > +      }
> > +  for (unsigned int i = 0; i < nargs; ++i)
> > +    {
> > +      tree arg = gimple_call_arg (stmt, i);
> > +      if (TREE_CODE (arg) != SSA_NAME
> > +     || TREE_CODE (TREE_TYPE (arg)) != BITINT_TYPE
> > +     || bitint_precision_kind (TREE_TYPE (arg)) <= bitint_prec_middle)
> > +   continue;
> > +      int p = var_to_partition (m_map, arg);
> > +      tree v = m_vars[p];
> > +      gcc_assert (v != NULL_TREE);
> > +      if (!types_compatible_p (TREE_TYPE (arg), TREE_TYPE (v)))
> > +   v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (arg), v);
> > +      arg = make_ssa_name (TREE_TYPE (arg));
> > +      gimple *g = gimple_build_assign (arg, v);
> > +      gsi_insert_before (&gsi, g, GSI_SAME_STMT);
> > +      gimple_call_set_arg (stmt, i, arg);
> > +      if (m_preserved == NULL)
> > +   m_preserved = BITMAP_ALLOC (NULL);
> > +      bitmap_set_bit (m_preserved, SSA_NAME_VERSION (arg));
> > +    }
> > +  tree lhs = gimple_call_lhs (stmt);
> > +  if (lhs
> > +      && TREE_CODE (lhs) == SSA_NAME
> > +      && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > +      && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
> > +    {
> > +      int p = var_to_partition (m_map, lhs);
> > +      tree v = m_vars[p];
> > +      gcc_assert (v != NULL_TREE);
> > +      if (!types_compatible_p (TREE_TYPE (lhs), TREE_TYPE (v)))
> > +   v = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (lhs), v);
> > +      gimple_call_set_lhs (stmt, v);
> > +      SSA_NAME_DEF_STMT (lhs) = gimple_build_nop ();
> > +    }
> > +  update_stmt (stmt);
> > +}
> > +
> > +/* Lower __asm STMT which involves large/huge _BitInt values.  */
> > +
> > +void
> > +bitint_large_huge::lower_asm (gimple *stmt)
> > +{
> > +  gasm *g = as_a <gasm *> (stmt);
> > +  unsigned noutputs = gimple_asm_noutputs (g);
> > +  unsigned ninputs = gimple_asm_ninputs (g);
> > +
> > +  for (unsigned i = 0; i < noutputs; ++i)
> > +    {
> > +      tree t = gimple_asm_output_op (g, i);
> > +      tree s = TREE_VALUE (t);
> > +      if (TREE_CODE (s) == SSA_NAME
> > +     && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
> > +     && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
> > +   {
> > +     int part = var_to_partition (m_map, s);
> > +     gcc_assert (m_vars[part] != NULL_TREE);
> > +     TREE_VALUE (t) = m_vars[part];
> > +   }
> > +    }
> > +  for (unsigned i = 0; i < ninputs; ++i)
> > +    {
> > +      tree t = gimple_asm_input_op (g, i);
> > +      tree s = TREE_VALUE (t);
> > +      if (TREE_CODE (s) == SSA_NAME
> > +     && TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
> > +     && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
> > +   {
> > +     int part = var_to_partition (m_map, s);
> > +     gcc_assert (m_vars[part] != NULL_TREE);
> > +     TREE_VALUE (t) = m_vars[part];
> > +   }
> > +    }
> > +  update_stmt (stmt);
> > +}
> > +
> > +/* Lower statement STMT which involves large/huge _BitInt values
> > +   into code accessing individual limbs.  */
> > +
> > +void
> > +bitint_large_huge::lower_stmt (gimple *stmt)
> > +{
> > +  m_first = true;
> > +  m_lhs = NULL_TREE;
> > +  m_data.truncate (0);
> > +  m_data_cnt = 0;
> > +  m_gsi = gsi_for_stmt (stmt);
> > +  m_after_stmt = NULL;
> > +  m_bb = NULL;
> > +  m_init_gsi = m_gsi;
> > +  gsi_prev (&m_init_gsi);
> > +  m_preheader_bb = NULL;
> > +  m_upwards_2limb = 0;
> > +  m_var_msb = false;
> > +  m_loc = gimple_location (stmt);
> > +  if (is_gimple_call (stmt))
> > +    {
> > +      lower_call (NULL_TREE, stmt);
> > +      return;
> > +    }
> > +  if (gimple_code (stmt) == GIMPLE_ASM)
> > +    {
> > +      lower_asm (stmt);
> > +      return;
> > +    }
> > +  tree lhs = NULL_TREE, cmp_op1 = NULL_TREE, cmp_op2 = NULL_TREE;
> > +  tree_code cmp_code = comparison_op (stmt, &cmp_op1, &cmp_op2);
> > +  bool eq_p = (cmp_code == EQ_EXPR || cmp_code == NE_EXPR);
> > +  bool mergeable_cast_p = false;
> > +  bool final_cast_p = false;
> > +  if (gimple_assign_cast_p (stmt))
> > +    {
> > +      lhs = gimple_assign_lhs (stmt);
> > +      tree rhs1 = gimple_assign_rhs1 (stmt);
> > +      if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > +     && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
> > +     && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
> > +   mergeable_cast_p = true;
> > +      else if (TREE_CODE (TREE_TYPE (rhs1)) == BITINT_TYPE
> > +          && bitint_precision_kind (TREE_TYPE (rhs1)) >= bitint_prec_large
> > +          && INTEGRAL_TYPE_P (TREE_TYPE (lhs)))
> > +   {
> > +     final_cast_p = true;
> > +     if (TREE_CODE (rhs1) == SSA_NAME
> > +         && (m_names == NULL
> > +             || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> > +       {
> > +         gimple *g = SSA_NAME_DEF_STMT (rhs1);
> > +         if (is_gimple_assign (g)
> > +             && gimple_assign_rhs_code (g) == IMAGPART_EXPR)
> > +           {
> > +             tree rhs2 = TREE_OPERAND (gimple_assign_rhs1 (g), 0);
> > +             if (TREE_CODE (rhs2) == SSA_NAME
> > +                 && (m_names == NULL
> > +                     || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs2))))
> > +               {
> > +                 g = SSA_NAME_DEF_STMT (rhs2);
> > +                 int ovf = optimizable_arith_overflow (g);
> > +                 if (ovf == 2)
> > +                   /* If .{ADD,SUB,MUL}_OVERFLOW has both REALPART_EXPR
> > +                      and IMAGPART_EXPR uses, where the latter is cast to
> > +                      non-_BitInt, it will be optimized when handling
> > +                      the REALPART_EXPR.  */
> > +                   return;
> > +                 if (ovf == 1)
> > +                   {
> > +                     lower_call (NULL_TREE, g);
> > +                     return;
> > +                   }
> > +               }
> > +           }
> > +       }
> > +   }
> > +    }
> > +  if (gimple_store_p (stmt))
> > +    {
> > +      tree rhs1 = gimple_assign_rhs1 (stmt);
> > +      if (TREE_CODE (rhs1) == SSA_NAME
> > +     && (m_names == NULL
> > +         || !bitmap_bit_p (m_names, SSA_NAME_VERSION (rhs1))))
> > +   {
> > +     gimple *g = SSA_NAME_DEF_STMT (rhs1);
> > +     m_loc = gimple_location (g);
> > +     lhs = gimple_assign_lhs (stmt);
> > +     if (is_gimple_assign (g) && !mergeable_op (g))
> > +       switch (gimple_assign_rhs_code (g))
> > +         {
> > +         case LSHIFT_EXPR:
> > +         case RSHIFT_EXPR:
> > +           lower_shift_stmt (lhs, g);
> > +         handled:
> > +           m_gsi = gsi_for_stmt (stmt);
> > +           unlink_stmt_vdef (stmt);
> > +           release_ssa_name (gimple_vdef (stmt));
> > +           gsi_remove (&m_gsi, true);
> > +           return;
> > +         case MULT_EXPR:
> > +         case TRUNC_DIV_EXPR:
> > +         case TRUNC_MOD_EXPR:
> > +           lower_muldiv_stmt (lhs, g);
> > +           goto handled;
> > +         case FIX_TRUNC_EXPR:
> > +           lower_float_conv_stmt (lhs, g);
> > +           goto handled;
> > +         case REALPART_EXPR:
> > +         case IMAGPART_EXPR:
> > +           lower_cplxpart_stmt (lhs, g);
> > +           goto handled;
> > +         default:
> > +           break;
> > +         }
> > +     else if (optimizable_arith_overflow (g) == 3)
> > +       {
> > +         lower_call (lhs, g);
> > +         goto handled;
> > +       }
> > +     m_loc = gimple_location (stmt);
> > +   }
> > +    }
> > +  if (mergeable_op (stmt)
> > +      || gimple_store_p (stmt)
> > +      || gimple_assign_load_p (stmt)
> > +      || eq_p
> > +      || mergeable_cast_p)
> > +    {
> > +      lhs = lower_mergeable_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
> > +      if (!eq_p)
> > +   return;
> > +    }
> > +  else if (cmp_code != ERROR_MARK)
> > +    lhs = lower_comparison_stmt (stmt, cmp_code, cmp_op1, cmp_op2);
> > +  if (cmp_code != ERROR_MARK)
> > +    {
> > +      if (gimple_code (stmt) == GIMPLE_COND)
> > +   {
> > +     gcond *cstmt = as_a <gcond *> (stmt);
> > +     gimple_cond_set_lhs (cstmt, lhs);
> > +     gimple_cond_set_rhs (cstmt, boolean_false_node);
> > +     gimple_cond_set_code (cstmt, cmp_code);
> > +     update_stmt (stmt);
> > +     return;
> > +   }
> > +      if (gimple_assign_rhs_code (stmt) == COND_EXPR)
> > +   {
> > +     tree cond = build2 (cmp_code, boolean_type_node, lhs,
> > +                         boolean_false_node);
> > +     gimple_assign_set_rhs1 (stmt, cond);
> > +     lhs = gimple_assign_lhs (stmt);
> > +     gcc_assert (TREE_CODE (TREE_TYPE (lhs)) != BITINT_TYPE
> > +                 || (bitint_precision_kind (TREE_TYPE (lhs))
> > +                     <= bitint_prec_middle));
> > +     update_stmt (stmt);
> > +     return;
> > +   }
> > +      gimple_assign_set_rhs1 (stmt, lhs);
> > +      gimple_assign_set_rhs2 (stmt, boolean_false_node);
> > +      gimple_assign_set_rhs_code (stmt, cmp_code);
> > +      update_stmt (stmt);
> > +      return;
> > +    }
> > +  if (final_cast_p)
> > +    {
> > +      tree lhs_type = TREE_TYPE (lhs);
> > +      /* Add support for 3 or more limbs filled in from normal integral
> > +    type if this assert fails.  If no target chooses limb mode smaller
> > +    than half of largest supported normal integral type, this will not
> > +    be needed.  */
> > +      gcc_assert (TYPE_PRECISION (lhs_type) <= 2 * limb_prec);
> > +      gimple *g;
> > +      if (TREE_CODE (lhs_type) == BITINT_TYPE
> > +     && bitint_precision_kind (lhs_type) == bitint_prec_middle)
> > +   lhs_type = build_nonstandard_integer_type (TYPE_PRECISION (lhs_type),
> > +                                              TYPE_UNSIGNED (lhs_type));
> > +      m_data_cnt = 0;
> > +      tree rhs1 = gimple_assign_rhs1 (stmt);
> > +      tree r1 = handle_operand (rhs1, size_int (0));
> > +      if (!useless_type_conversion_p (lhs_type, TREE_TYPE (r1)))
> > +   r1 = add_cast (lhs_type, r1);
> > +      if (TYPE_PRECISION (lhs_type) > limb_prec)
> > +   {
> > +     m_data_cnt = 0;
> > +     m_first = false;
> > +     tree r2 = handle_operand (rhs1, size_int (1));
> > +     r2 = add_cast (lhs_type, r2);
> > +     g = gimple_build_assign (make_ssa_name (lhs_type), LSHIFT_EXPR, r2,
> > +                              build_int_cst (unsigned_type_node,
> > +                                             limb_prec));
> > +     insert_before (g);
> > +     g = gimple_build_assign (make_ssa_name (lhs_type), BIT_IOR_EXPR, r1,
> > +                              gimple_assign_lhs (g));
> > +     insert_before (g);
> > +     r1 = gimple_assign_lhs (g);
> > +   }
> > +      if (lhs_type != TREE_TYPE (lhs))
> > +   g = gimple_build_assign (lhs, NOP_EXPR, r1);
> > +      else
> > +   g = gimple_build_assign (lhs, r1);
> > +      gsi_replace (&m_gsi, g, true);
> > +      return;
> > +    }
> > +  if (is_gimple_assign (stmt))
> > +    switch (gimple_assign_rhs_code (stmt))
> > +      {
> > +      case LSHIFT_EXPR:
> > +      case RSHIFT_EXPR:
> > +   lower_shift_stmt (NULL_TREE, stmt);
> > +   return;
> > +      case MULT_EXPR:
> > +      case TRUNC_DIV_EXPR:
> > +      case TRUNC_MOD_EXPR:
> > +   lower_muldiv_stmt (NULL_TREE, stmt);
> > +   return;
> > +      case FIX_TRUNC_EXPR:
> > +      case FLOAT_EXPR:
> > +   lower_float_conv_stmt (NULL_TREE, stmt);
> > +   return;
> > +      case REALPART_EXPR:
> > +      case IMAGPART_EXPR:
> > +   lower_cplxpart_stmt (NULL_TREE, stmt);
> > +   return;
> > +      case COMPLEX_EXPR:
> > +   lower_complexexpr_stmt (stmt);
> > +   return;
> > +      default:
> > +   break;
> > +      }
> > +  gcc_unreachable ();
> > +}
> > +
> > +/* Helper for walk_non_aliased_vuses.  Determine if we arrived at
> > +   the desired memory state.  */
> > +
> > +void *
> > +vuse_eq (ao_ref *, tree vuse1, void *data)
> > +{
> > +  tree vuse2 = (tree) data;
> > +  if (vuse1 == vuse2)
> > +    return data;
> > +
> > +  return NULL;
> > +}
> > +
> > +/* Dominator walker used to discover which large/huge _BitInt
> > +   loads could be sunk into all their uses.  */
> > +
> > +class bitint_dom_walker : public dom_walker
> > +{
> > +public:
> > +  bitint_dom_walker (bitmap names, bitmap loads)
> > +    : dom_walker (CDI_DOMINATORS), m_names (names), m_loads (loads) {}
> > +
> > +  edge before_dom_children (basic_block) final override;
> > +
> > +private:
> > +  bitmap m_names, m_loads;
> > +};
> > +
> > +edge
> > +bitint_dom_walker::before_dom_children (basic_block bb)
> > +{
> > +  gphi *phi = get_virtual_phi (bb);
> > +  tree vop;
> > +  if (phi)
> > +    vop = gimple_phi_result (phi);
> > +  else if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
> > +    vop = NULL_TREE;
> > +  else
> > +    vop = (tree) get_immediate_dominator (CDI_DOMINATORS, bb)->aux;
> > +
> > +  auto_vec<tree, 16> worklist;
> > +  for (gimple_stmt_iterator gsi = gsi_start_bb (bb);
> > +       !gsi_end_p (gsi); gsi_next (&gsi))
> > +    {
> > +      gimple *stmt = gsi_stmt (gsi);
> > +      if (is_gimple_debug (stmt))
> > +   continue;
> > +
> > +      if (!vop && gimple_vuse (stmt))
> > +   vop = gimple_vuse (stmt);
> > +
> > +      tree cvop = vop;
> > +      if (gimple_vdef (stmt))
> > +   vop = gimple_vdef (stmt);
> > +
> > +      tree lhs = gimple_get_lhs (stmt);
> > +      if (lhs
> > +     && TREE_CODE (lhs) == SSA_NAME
> > +     && TREE_CODE (TREE_TYPE (lhs)) != BITINT_TYPE
> > +     && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large
> > +     && !bitmap_bit_p (m_names, SSA_NAME_VERSION (lhs)))
> > +   /* If lhs of stmt is large/huge _BitInt SSA_NAME not in m_names,
> > +      it means it will be handled in a loop or straight line code
> > +      at the location of its (ultimate) immediate use, so for
> > +      vop checking purposes check these only at the ultimate
> > +      immediate use.  */
> > +   continue;
> > +
> > +      ssa_op_iter oi;
> > +      use_operand_p use_p;
> > +      FOR_EACH_SSA_USE_OPERAND (use_p, stmt, oi, SSA_OP_USE)
> > +   {
> > +     tree s = USE_FROM_PTR (use_p);
> > +     if (TREE_CODE (TREE_TYPE (s)) == BITINT_TYPE
> > +         && bitint_precision_kind (TREE_TYPE (s)) >= bitint_prec_large)
> > +       worklist.safe_push (s);
> > +   }
> > +
> > +      while (worklist.length () > 0)
> > +   {
> > +     tree s = worklist.pop ();
> > +
> > +     if (!bitmap_bit_p (m_names, SSA_NAME_VERSION (s)))
> > +       {
> > +         FOR_EACH_SSA_USE_OPERAND (use_p, SSA_NAME_DEF_STMT (s),
> > +                                   oi, SSA_OP_USE)
> > +           {
> > +             tree s2 = USE_FROM_PTR (use_p);
> > +             if (TREE_CODE (TREE_TYPE (s2)) == BITINT_TYPE
> > +                 && (bitint_precision_kind (TREE_TYPE (s2))
> > +                     >= bitint_prec_large))
> > +               worklist.safe_push (s2);
> > +           }
> > +         continue;
> > +       }
> > +     if (!SSA_NAME_OCCURS_IN_ABNORMAL_PHI (s)
> > +         && gimple_assign_cast_p (SSA_NAME_DEF_STMT (s)))
> > +       {
> > +         tree rhs = gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s));
> > +         if (TREE_CODE (rhs) == SSA_NAME
> > +             && bitmap_bit_p (m_loads, SSA_NAME_VERSION (rhs)))
> > +           s = rhs;
> > +         else
> > +           continue;
> > +       }
> > +     else if (!bitmap_bit_p (m_loads, SSA_NAME_VERSION (s)))
> > +       continue;
> > +
> > +     ao_ref ref;
> > +     ao_ref_init (&ref, gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)));
> > +     tree lvop = gimple_vuse (SSA_NAME_DEF_STMT (s));
> > +     unsigned limit = 64;
> > +     tree vuse = cvop;
> > +     if (vop != cvop
> > +         && is_gimple_assign (stmt)
> > +         && gimple_store_p (stmt)
> > +         && !operand_equal_p (lhs,
> > +                              gimple_assign_rhs1 (SSA_NAME_DEF_STMT (s)),
> > +                              0))
> > +       vuse = vop;
> > +     if (vuse != lvop
> > +         && walk_non_aliased_vuses (&ref, vuse, false, vuse_eq,
> > +                                    NULL, NULL, limit, lvop) == NULL)
> > +       bitmap_clear_bit (m_loads, SSA_NAME_VERSION (s));
> > +   }
> > +    }
> > +
> > +  bb->aux = (void *) vop;
> > +  return NULL;
> > +}
> > +
> > +}
> > +
> > +/* Replacement for normal processing of STMT in tree-ssa-coalesce.cc
> > +   build_ssa_conflict_graph.
> > +   The differences are:
> > +   1) don't process assignments with large/huge _BitInt lhs not in NAMES
> > +   2) for large/huge _BitInt multiplication/division/modulo process def
> > +      only after processing uses rather than before to make uses conflict
> > +      with the definition
> > +   3) for large/huge _BitInt uses not in NAMES mark the uses of their
> > +      SSA_NAME_DEF_STMT (recursively), because those uses will be sunk into
> > +      the final statement.  */
> > +
> > +void
> > +build_bitint_stmt_ssa_conflicts (gimple *stmt, live_track *live,
> > +                            ssa_conflicts *graph, bitmap names,
> > +                            void (*def) (live_track *, tree,
> > +                                         ssa_conflicts *),
> > +                            void (*use) (live_track *, tree))
> > +{
> > +  bool muldiv_p = false;
> > +  tree lhs = NULL_TREE;
> > +  if (is_gimple_assign (stmt))
> > +    {
> > +      lhs = gimple_assign_lhs (stmt);
> > +      if (TREE_CODE (lhs) == SSA_NAME
> > +     && TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > +     && bitint_precision_kind (TREE_TYPE (lhs)) >= bitint_prec_large)
> > +   {
> > +     if (!bitmap_bit_p (names, SSA_NAME_VERSION (lhs)))
> > +       return;
> > +     switch (gimple_assign_rhs_code (stmt))
> > +       {
> > +       case MULT_EXPR:
> > +       case TRUNC_DIV_EXPR:
> > +       case TRUNC_MOD_EXPR:
> > +         muldiv_p = true;
> > +       default:
> > +         break;
> > +       }
> > +   }
> > +    }
> > +
> > +  ssa_op_iter iter;
> > +  tree var;
> > +  if (!muldiv_p)
> > +    {
> > +      /* For stmts with more than one SSA_NAME definition pretend all the
> > +    SSA_NAME outputs but the first one are live at this point, so
> > +    that conflicts are added in between all those even when they are
> > +    actually not really live after the asm, because expansion might
> > +    copy those into pseudos after the asm and if multiple outputs
> > +    share the same partition, it might overwrite those that should
> > +    be live.  E.g.
> > +    asm volatile (".." : "=r" (a) : "=r" (b) : "0" (a), "1" (a));
> > +    return a;
> > +    See PR70593.  */
> > +      bool first = true;
> > +      FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
> > +   if (first)
> > +     first = false;
> > +   else
> > +     use (live, var);
> > +
> > +      FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
> > +   def (live, var, graph);
> > +    }
> > +
> > +  auto_vec<tree, 16> worklist;
> > +  FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_USE)
> > +    if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
> > +   && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
> > +      {
> > +   if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
> > +     use (live, var);
> > +   else
> > +     worklist.safe_push (var);
> > +      }
> > +
> > +  while (worklist.length () > 0)
> > +    {
> > +      tree s = worklist.pop ();
> > +      FOR_EACH_SSA_TREE_OPERAND (var, SSA_NAME_DEF_STMT (s), iter, SSA_OP_USE)
> > +   if (TREE_CODE (TREE_TYPE (var)) == BITINT_TYPE
> > +       && bitint_precision_kind (TREE_TYPE (var)) >= bitint_prec_large)
> > +     {
> > +       if (bitmap_bit_p (names, SSA_NAME_VERSION (var)))
> > +         use (live, var);
> > +       else
> > +         worklist.safe_push (var);
> > +     }
> > +    }
> > +
> > +  if (muldiv_p)
> > +    def (live, lhs, graph);
> > +}
> > +
> > +/* Entry point for _BitInt(N) operation lowering during optimization.  */
> > +
> > +static unsigned int
> > +gimple_lower_bitint (void)
> > +{
> > +  small_max_prec = mid_min_prec = large_min_prec = huge_min_prec = 0;
> > +  limb_prec = 0;
> > +
> > +  unsigned int i;
> > +  tree vop = gimple_vop (cfun);
> > +  for (i = 0; i < num_ssa_names; ++i)
> > +    {
> > +      tree s = ssa_name (i);
> > +      if (s == NULL)
> > +   continue;
> > +      tree type = TREE_TYPE (s);
> > +      if (TREE_CODE (type) == COMPLEX_TYPE)
> > +   type = TREE_TYPE (type);
> > +      if (TREE_CODE (type) == BITINT_TYPE
> > +     && bitint_precision_kind (type) != bitint_prec_small)
> > +   break;
> > +      /* We need to also rewrite stores of large/huge _BitInt INTEGER_CSTs
> > +    into memory.  Such functions could have no large/huge SSA_NAMEs.  */
> > +      if (vop && SSA_NAME_VAR (s) == vop)
> 
> SSA_NAME_IS_VIRTUAL_OPERAND (s)

Ok.
> 
> > +   {
> > +     gimple *g = SSA_NAME_DEF_STMT (s);
> > +     if (is_gimple_assign (g) && gimple_store_p (g))
> > +       {
> 
> what about functions returning large _BitInt<N> where the ABI
> specifies it doesn't return by invisible reference?

Once we have such a target with _BitInt support we'd see it in testsuite
coverage, and I guess checking GIMPLE_RETURN stmts in a function shouldn't
be that hard: first check whether the function returns large/huge _BitInt,
and if it does, look at the preds of the EXIT block (or simply say all such
functions do have large/huge _BitInt if they return one).
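
For the simpler variant something like the following would be enough (just
a sketch, not part of the patch; has_large_huge is the flag the pass
already computes):

  tree restype = TREE_TYPE (TREE_TYPE (current_function_decl));
  if (TREE_CODE (restype) == BITINT_TYPE
      && bitint_precision_kind (restype) >= bitint_prec_large)
    has_large_huge = true;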

> The other def not handled are ASMs ...

Indeed, asms are what I realized I won't be able to find as cheaply as the
constant stores into memory.  I think it is more important to keep the
pass cheap for non-_BitInt sources, so for asms with large/huge _BitInt
INTEGER_CST inputs I've dealt with it during expansion (and intentionally
not in a very optimized way, by just forcing them into memory, because I
don't think doing anything smarter is worth it for inline asm).

> > +      i = 0;
           ^^^^^^ here

> > +      FOR_EACH_VEC_ELT (switch_statements, j, stmt)
> > +   {
> > +     gswitch *swtch = as_a<gswitch *> (stmt);
> > +     tree_switch_conversion::switch_decision_tree dt (swtch);
> > +     expanded |= dt.analyze_switch_statement ();
> > +   }
> > +
> > +      if (expanded)
> > +   {
> > +     free_dominance_info (CDI_DOMINATORS);
> > +     free_dominance_info (CDI_POST_DOMINATORS);
> > +     mark_virtual_operands_for_renaming (cfun);
> > +     cleanup_tree_cfg (TODO_update_ssa);
> > +   }
> > +    }
> > +
> > +  struct bitint_large_huge large_huge;
> > +  bool has_large_huge_parm_result = false;
> > +  bool has_large_huge = false;
> > +  unsigned int ret = 0, first_large_huge = ~0U;
> > +  bool edge_insertions = false;
> > +  for (; i < num_ssa_names; ++i)
> 
> the above SSA update could end up re-using a smaller SSA name number,
> so I wonder if you can really avoid starting at 1 again.

I do that above.  And similarly if I try to "deoptimize" ABS/ABSU/MIN/MAX
or rotates etc., I reset first_large_huge to 0 so the loop after that starts
at 0.

> > +  FOR_EACH_BB_REVERSE_FN (bb, cfun)
> 
> is reverse in any way important?  (not visiting newly created blocks?)

Yeah, that was so that I don't visit the newly created blocks.
The loop continues to iterate with prev, which is computed before the
lowering, so if the lowering splits blocks etc., it will continue in the
original block, before the code added during the lowering.

> > --- gcc/lto-streamer-in.cc.jj       2023-07-17 09:07:42.078283882 +0200
> > +++ gcc/lto-streamer-in.cc  2023-07-27 15:03:24.255234159 +0200
> > @@ -1888,7 +1888,7 @@ lto_input_tree_1 (class lto_input_block
> >  
> >        for (i = 0; i < len; i++)
> >     a[i] = streamer_read_hwi (ib);
> > -      gcc_assert (TYPE_PRECISION (type) <= MAX_BITSIZE_MODE_ANY_INT);
> > +      gcc_assert (TYPE_PRECISION (type) <= WIDE_INT_MAX_PRECISION);
> 
> OK to push separately.

Ok.

> > +   else
> > +     {
> > +       SET_TYPE_MODE (type, BLKmode);
> > +       cnt = CEIL (TYPE_PRECISION (type), GET_MODE_PRECISION (limb_mode));
> > +     }
> > +   TYPE_SIZE (type) = bitsize_int (cnt * GET_MODE_BITSIZE (limb_mode));
> > +   TYPE_SIZE_UNIT (type) = size_int (cnt * GET_MODE_SIZE (limb_mode));
> > +   SET_TYPE_ALIGN (type, GET_MODE_ALIGNMENT (limb_mode));
> 
> so when a target allows say TImode we don't align to that larger mode?
> Might be worth documenting in the target hook that the alignment
> which I think is part of the ABI is specified by the limb mode.

Right now only the x86-64 psABI is finalized, which says roughly that
whatever fits into {,un}signed {char,short,int,long,long long} is
passed/laid out like that, and everything else is handled like a structure
containing n unsigned long long limbs, so indeed
alignof (__int128) > alignof (_BitInt(128)) there.
Now, e.g. the ARM people don't really like that and are contemplating
saying that limb_mode is TImode for 64-bit code; that would mean that even
_BitInt(128) would be bitint_small_prec there, there would be no
bitint_middle_prec, and _BitInt(129) and above would have 128-bit alignment.
The problem with that is that the double-word support in GCC isn't very
good, as you know; tons of operations need libgcc, and an implementation
using 128-bit limbs in libgcc would be terrible.  So, maybe we'll want to
split info.limb_mode into info.abi_limb_mode and info.limb_mode, where the
former would be used just in a few spots for ABI purposes (e.g. the
alignment and sizing), while a smaller info.limb_mode would be what is used
internally for the loops and semi-internally (GCC ABI) in the libgcc APIs.
Of course, a precondition would be that the _BitInt endianity matches the
target endianity, otherwise there is no way to do that.
So, AArch64 could then say _BitInt(256) is 128-bit aligned and _BitInt(257)
has the same size as _BitInt(384), but still handle them internally using
64-bit limbs and expect the libgcc APIs to be passed arrays of 64-bit limbs
(with 64-bit alignment).
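
A hypothetical shape of that split, just to illustrate the idea (the patch
as posted only has limb_mode, big_endian and extended in bitint_info):

  struct bitint_info
  {
    /* Mode that determines the ABI size and alignment of _BitInt(N).  */
    machine_mode abi_limb_mode;
    /* Possibly smaller mode the lowering pass and the libgcc entry points
       would use internally.  */
    machine_mode limb_mode;
    bool big_endian;
    bool extended;
  };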

> Are arrays of _BitInt a thing?  _BitInt<8>[10] would have quite some
> padding then which might be unexpected?

Sure, _BitInt(8)[10] is a thing; after all, the testsuite contains tons
of examples of that.  In the x86-64 psABI, _BitInt(8) has the same
alignment/size as signed char, so there is no padding, but sure,
_BitInt(9)[10] does have padding, it is like an array of 10 unsigned shorts
with 7 bits of padding in each of them.  Similarly,
_BitInt(575)[10] is an array of 72-byte elements with 1 padding bit
in each.
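
To spell that out with the x86-64 psABI numbers above (the sizes and
alignments here are just the ones described in this mail, not a general
guarantee for other targets):

  _BitInt(8) a8[10];     /* like signed char[10]: 10 bytes, no padding  */
  _BitInt(9) a9[10];     /* like unsigned short[10]: 20 bytes,
			    7 padding bits per element  */
  _BitInt(575) a575[10]; /* 9 64-bit limbs per element: 72-byte elements,
			    720 bytes total, 1 padding bit in each  */

  /* On x86-64: sizeof (a9) == 20, _Alignof (_BitInt(9)) == 2,
     sizeof (a575) == 720, _Alignof (_BitInt(575)) == 8.  */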

> > +/* Target properties of _BitInt(N) type.  _BitInt(N) is to be represented
> > +   as series of limb_mode CEIL (N, GET_MODE_PRECISION (limb_mode)) limbs,
> > +   ordered from least significant to most significant if !big_endian,
> > +   otherwise from most significant to least significant.  If extended is
> > +   false, the bits above or equal to N are undefined when stored in a 
> > register
> > +   or memory, otherwise they are zero or sign extended depending on if
> > +   it is unsigned _BitInt(N) or _BitInt(N) / signed _BitInt(N).  */
> > +
> 
> I think this belongs to tm.texi (or duplicated there)

Ok.

> > @@ -6969,8 +6970,14 @@ eliminate_dom_walker::eliminate_stmt (ba
> >           || !DECL_BIT_FIELD_TYPE (TREE_OPERAND (lhs, 1)))
> >       && !type_has_mode_precision_p (TREE_TYPE (lhs)))
> >     {
> > -     if (TREE_CODE (lhs) == COMPONENT_REF
> > -         || TREE_CODE (lhs) == MEM_REF)
> > +     if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
> > +         && (TYPE_PRECISION (TREE_TYPE (lhs))
> > +             > (targetm.scalar_mode_supported_p (TImode)
> > +                ? GET_MODE_PRECISION (TImode)
> > +                : GET_MODE_PRECISION (DImode))))
> > +       lookup_lhs = NULL_TREE;
> 
> What's the reason for this?  You allow non-mode precision
> stores, if you wanted to disallow BLKmode I think the better
> way would be to add != BLKmode above or alternatively
> build a limb-size _BitInt type (instead of 
> build_nonstandard_integer_type)?

This was just a quick hack to fix some ICEs.  I'm afraid once some people
try csmith on _BitInt we'll get more such spots, and sure, it might be
possible to deal with it better; I'm just not familiar enough with this
code to know what that would be.
> > +      this_low = const_unop (NEGATE_EXPR, TREE_TYPE (this_low), this_low);
> > +      g = gimple_build_assign (make_ssa_name (TREE_TYPE (index_expr)),
> > +                          PLUS_EXPR, index_expr, this_low);
> > +      gimple_set_location (g, loc);
> > +      gsi_insert_after (&gsi, g, GSI_NEW_STMT);
> > +      index_expr = gimple_assign_lhs (g);
> 
> I suppose using gimple_convert/gimple_build with a sequence would be
> easier to follow.

I guess I could try to use them here, but as I said earlier, changing the
lowering pass to use those everywhere would mean rewriting half of those
6000 lines.
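
For this particular hunk it would look roughly like (untested sketch,
assuming the gimple_build overload that takes a location):

  gimple_seq seq = NULL;
  this_low = const_unop (NEGATE_EXPR, TREE_TYPE (this_low), this_low);
  index_expr = gimple_build (&seq, loc, PLUS_EXPR, TREE_TYPE (index_expr),
			     index_expr, this_low);
  gsi_insert_seq_after (&gsi, seq, GSI_NEW_STMT);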
> > --- gcc/ubsan.cc.jj 2023-05-20 15:31:09.240660915 +0200
> > +++ gcc/ubsan.cc    2023-07-27 15:03:24.260234089 +0200
> > @@ -50,6 +50,8 @@ along with GCC; see the file COPYING3.
> >  #include "gimple-fold.h"
> >  #include "varasm.h"
> >  #include "realmpfr.h"
> > +#include "target.h"
> > +#include "langhooks.h"
> 
> Sanitizer support into a separate patch?

Ok.

> > @@ -1717,12 +1717,11 @@ simplify_using_ranges::simplify_internal
> >      g = gimple_build_assign (gimple_call_lhs (stmt), subcode, op0, op1);
> >    else
> >      {
> > -      int prec = TYPE_PRECISION (type);
> >        tree utype = type;
> >        if (ovf
> >       || !useless_type_conversion_p (type, TREE_TYPE (op0))
> >       || !useless_type_conversion_p (type, TREE_TYPE (op1)))
> > -   utype = build_nonstandard_integer_type (prec, 1);
> > +   utype = unsigned_type_for (type);
> >        if (TREE_CODE (op0) == INTEGER_CST)
> >     op0 = fold_convert (utype, op0);
> >        else if (!useless_type_conversion_p (utype, TREE_TYPE (op0)))
> 
> Phew.  That was big.

Sorry, I hoped it wouldn't take me almost 3 months and would be much shorter
as well, but clearly I'm not good at estimating stuff...

> A lot of it looks OK (I guess nearly all of it).  For the overall
> picture I'm unsure esp. how/if we need to keep the distinction for
> small _BitInt<>s and if we maybe want to lower them earlier even?

The reason for the current location was to have a few cleanup passes after
IPA, so that e.g. value ranges can be propagated and computed (something
that helps a lot e.g. for multiplications/divisions and
__builtin_*_overflow).  Once lowered, ranger is out of luck with these.
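
A small user-level illustration of the kind of case this is about (how
much the lowering/libgcc can actually exploit the range is up to the
implementation, so take it as a hypothetical example):

  unsigned _BitInt(512)
  f (unsigned _BitInt(512) a, unsigned _BitInt(512) b)
  {
    if (a > 0xffffffffuwb || b > 0xffffffffuwb)
      __builtin_unreachable ();
    /* Before lowering, ranger still knows a and b fit into 32 bits, so
       the multiplication doesn't have to be treated as a full
       512x512-bit one.  */
    return a * b;
  }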

        Jakub

