On Tue, Nov 05, 2024 at 01:43:53PM +0100, Richard Biener wrote:
> The following moves the two virtual operands into the gimple_ops
> array, thereby allowing those to be allocated on-demand.  In
> particular the patch goes the simple route never allocating those
> for GIMPLE_UNARY_RHS, GIMPLE_BINARY_RHS or GIMPLE_TERNARY_RHS.

Nice.

> I did not clean up the class hierarchy, I did not look for adjustments
> needed for gcall or gasm, gimple_size is now wrong for those and
> I didn't quickly find a "nice" solution here.
> 
> Various passes need adjustment to where they do gimple_assign_set_rhs_*
> and not handle the (previously impossible) case where the stmt gets
> re-allocated.  For example _1 = _2 + 1 requires 3 ops but
> _1 = _3 requires 4 as it is GIMPLE_SINGLE_RHS.  I didn't try to
> optimize further by changing allocation based on subcode
> (we also have those GENERIC tcc_reference non-memory operations
> like REALPART_EXPR or VIEW_CONVERT_EXPR).

That wouldn't work, because the subcode only describes the rhs; a stmt
with subcode SSA_NAME or INTEGER_CST etc. can still be a store (memory
lhs), which needs vdef/vuse.
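E.g. (illustrative GIMPLE, not from the patch; both stmts below have
rhs_class GIMPLE_SINGLE_RHS and subcode SSA_NAME, yet only the second
one needs virtual operands):

  _1 = _2;      /* register copy, no vdef/vuse */
  *p_3 = _2;    /* store through a pointer, needs vdef/vuse */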

> @@ -141,13 +141,13 @@ inline use_operand_p
>  gimple_vuse_op (const gimple *g)
>  {
>    struct use_optype_d *ops;
> -  const gimple_statement_with_memory_ops *mem_ops_stmt =
> -     dyn_cast <const gimple_statement_with_memory_ops *> (g);
> -  if (!mem_ops_stmt)
> +  const gimple_statement_with_ops_base *mem_ops_stmt =
> +     dyn_cast <const gimple_statement_with_ops_base *> (g);

While touching it, I'd move the trailing =s to the start of the following
lines.

>  /* Return true if GIMPLE statement G has memory operands.  */
>  
>  inline bool
>  gimple_has_mem_ops (const gimple *g)
>  {
> -  return gimple_code (g) >= GIMPLE_ASSIGN && gimple_code (g) <= GIMPLE_RETURN;
> +  return ((gimple_code (g) >= GIMPLE_ASSIGN && gimple_code (g) <= GIMPLE_RETURN)
> +       && (gimple_code (g) != GIMPLE_ASSIGN
> +           || gimple_assign_rhs_class (g) == GIMPLE_SINGLE_RHS));

Wouldn't it be better to say
  (gimple_code (g) == GIMPLE_ASSIGN
   && gimple_assign_rhs_class (g) == GIMPLE_SINGLE_RHS)
  || (gimple_code (g) > GIMPLE_ASSIGN && gimple_code (g) <= GIMPLE_RETURN);
?
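Spelled out as a complete function, that would be (just a sketch, same
semantics as the patch):

  inline bool
  gimple_has_mem_ops (const gimple *g)
  {
    /* Only GIMPLE_SINGLE_RHS assignments keep memory operands; all
       the other codes in the (GIMPLE_ASSIGN, GIMPLE_RETURN] range do.  */
    return ((gimple_code (g) == GIMPLE_ASSIGN
             && gimple_assign_rhs_class (g) == GIMPLE_SINGLE_RHS)
            || (gimple_code (g) > GIMPLE_ASSIGN
                && gimple_code (g) <= GIMPLE_RETURN));
  }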

Anyway, I think the other option would be not to add the vuse/vdef at the
end of ops, but to keep them where they were, before the ops; keep the
structures as is and just remove the implication that GIMPLE_ASSIGN always
implies GSS_WITH_MEM_OPS: let it be GSS_WITH_MEM_OPS if it is
GIMPLE_SINGLE_RHS and GSS_WITH_OPS otherwise.
As we have hundreds of tests for GIMPLE_ASSIGN or case GIMPLE_ASSIGN,
introducing a separate GIMPLE_ASSIGN_WITH_MEMOPS code or similar would be
quite a large change.  But perhaps we could just change
gimple_statement_structure (handling the GIMPLE_ASSIGN GIMPLE_SINGLE_RHS
special case there); the other gss_for_code user is gimple_size, where we
really need to pass extra info, because code and num_ops alone aren't
enough anymore.  We could simply keep GIMPLE_ASSIGN mapped to
GSS_WITH_MEM_OPS as the conservatively larger case, turn
non-GIMPLE_SINGLE_RHS assignments into GSS_WITH_OPS in
gimple_statement_structure, and add a gimple_size overload with 3
arguments which would supply the subcode.  And gimple_alloc too.
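Something like this untested sketch (the 3-argument gimple_size is
hypothetical, and today the num_ops part of the sizing is done in
gimple_alloc rather than gimple_size):

  /* Handle the GIMPLE_ASSIGN GIMPLE_SINGLE_RHS special case here: only
     such assignments keep the larger GSS_WITH_MEM_OPS layout, all other
     assignments use GSS_WITH_OPS.  */
  static inline enum gimple_statement_structure_enum
  gimple_statement_structure (gimple *gs)
  {
    if (gimple_code (gs) == GIMPLE_ASSIGN
        && gimple_assign_rhs_class (gs) != GIMPLE_SINGLE_RHS)
      return GSS_WITH_OPS;
    return gss_for_code (gimple_code (gs));
  }

  /* Hypothetical 3-argument gimple_size: the subcode supplies the extra
     information needed to pick the right structure for GIMPLE_ASSIGN.  */
  static size_t
  gimple_size (enum gimple_code code, unsigned num_ops, enum tree_code subcode)
  {
    enum gimple_statement_structure_enum gss = gss_for_code (code);
    if (code == GIMPLE_ASSIGN
        && get_gimple_rhs_class (subcode) != GIMPLE_SINGLE_RHS)
      gss = GSS_WITH_OPS;
    size_t size = gsstruct_code_size[gss];
    if (num_ops > 0)
      size += (num_ops - 1) * sizeof (tree);
    return size;
  }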

        Jakub
