VAX has interlocked branch instructions used for atomic operations and
we want to have them wrapped in UNSPEC_VOLATILE so as not to have code
carried across.  This however breaks jump optimization and leads
to an ICE in the build of libbacktrace like:

.../libbacktrace/mmap.c:190:1: internal compiler error: in fixup_reorder_chain, 
at cfgrtl.c:3934
  190 | }
      | ^
0x1087d46b fixup_reorder_chain
        .../gcc/cfgrtl.c:3934
0x1087f29f cfg_layout_finalize()
        .../gcc/cfgrtl.c:4447
0x1087c74f execute
        .../gcc/cfgrtl.c:3662

on RTL like:

(jump_insn 18 17 150 4 (unspec_volatile [
            (set (pc)
                (if_then_else (eq (zero_extract:SI (mem/v:SI (reg/f:SI 23 [ _2 
]) [-1  S4 A32])
                            (const_int 1 [0x1])
                            (const_int 0 [0]))
                        (const_int 1 [0x1]))
                    (label_ref 20)
                    (pc)))
            (set (zero_extract:SI (mem/v:SI (reg/f:SI 23 [ _2 ]) [-1  S4 A32])
                    (const_int 1 [0x1])
                    (const_int 0 [0]))
                (const_int 1 [0x1]))
        ] 101) ".../libbacktrace/mmap.c":135:14 158 {jbbssisi}
     (nil)
 -> 20)

when those branches are enabled with a follow-up change.  The problem
also shows with:

FAIL: gcc.dg/pr61756.c (internal compiler error)

Handle branches wrapped in UNSPEC_VOLATILE then and, for consistency,
also in UNSPEC.  The presence of UNSPEC_VOLATILE will prevent such
branches from being removed, as they won't be accepted by `onlyjump_p';
we just need to let them through.

        gcc/
        * jump.c (pc_set): Also accept a jump wrapped in UNSPEC or
        UNSPEC_VOLATILE.
        (any_uncondjump_p, any_condjump_p): Update comment accordingly.
---
On Fri, 20 Nov 2020, Jeff Law wrote:

> > Handle branches wrapped in UNSPEC_VOLATILE then and, for consistency,
> > also in UNSPEC.  The presence of UNSPEC_VOLATILE will prevent such
> > branches from being removed as they won't be accepted by `onlyjump_p',
> > we just need to let them through.
> >
> >     gcc/
> >     * jump.c (pc_set): Also accept a jump wrapped in UNSPEC or
> >     UNSPEC_VOLATILE.
> >     (any_uncondjump_p, any_condjump_p): Update comment accordingly.
> I've got some concerns that there may be users of pc_set where handling
> UNSPECs would be undesirable.  For example the uses in cfgcleanup.

 I've now gone through the uses of `pc_set' and `any_condjump_p' in
detail.  Mind that I wouldn't claim expertise with this stuff, just
common sense backed by the documentation and source code available.

 It appears safe to me though, as the really dangerous cases where a
jump is to be removed, in `thread_jump' and `outgoing_edges_match', are
guarded with calls to `onlyjump_p' (the reference to which from the
description of `any_condjump_p', promising it will guard the relevant
cases, is what made me pretty confident my change is technically
correct).  `onlyjump_p' will reject any jump wrapped into
UNSPEC_VOLATILE or even UNSPEC (though arguably the latter restriction
might be overkill and could be loosened), as such a jump fails
`single_set', which only accepts a SET, possibly wrapped into a
PARALLEL.
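 For reference, the `single_set'/`onlyjump_p' interaction can be
modelled in a self-contained way.  This is a simplified standalone
sketch rather than actual GCC code: the enum, structure and function
names are made up for illustration, and the real `single_set' has
further refinements (e.g. REG_UNUSED note handling) not shown here.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for GCC's rtx codes; illustration only.  */
enum code { SET, PARALLEL, UNSPEC, UNSPEC_VOLATILE, CLOBBER };

struct pattern
{
  enum code c;                /* code of this rtx */
  struct pattern *elems[2];   /* body of a PARALLEL/UNSPEC, if any */
  int nelems;
};

/* Model of `single_set': accept a lone SET, or a PARALLEL whose only
   other entries are CLOBBERs.  Anything wrapped in an UNSPEC or
   UNSPEC_VOLATILE fails, as do arbitrary extra side effects.  */
static struct pattern *
model_single_set (struct pattern *pat)
{
  if (pat->c == SET)
    return pat;
  if (pat->c == PARALLEL)
    {
      struct pattern *set = NULL;
      for (int i = 0; i < pat->nelems; i++)
	{
	  if (pat->elems[i]->c == SET)
	    {
	      if (set)
		return NULL;	/* More than one SET.  */
	      set = pat->elems[i];
	    }
	  else if (pat->elems[i]->c != CLOBBER)
	    return NULL;	/* Arbitrary side effect.  */
	}
      return set;
    }
  return NULL;	/* UNSPEC, UNSPEC_VOLATILE and anything else fail.  */
}

/* Model of the `onlyjump_p' guard: a jump qualifies for deletion only
   if `single_set' accepts its pattern.  */
static int
model_onlyjump_p (struct pattern *pat)
{
  return model_single_set (pat) != NULL;
}
```

So a jump whose whole pattern is an UNSPEC_VOLATILE is never treated as
a bare jump, and the deletion paths guarded by `onlyjump_p' leave it
alone.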

 Similarly with branch deletion in `cfg_layout_redirect_edge_and_branch' 
and attempted transformations using `cond_exec_get_condition'.

 Those `pc_set' calls that you mention are then only reached once the
containing code has been qualified with `onlyjump_p', so they will
never be called with branches wrapped into UNSPEC_VOLATILE.  Likewise
the `pc_set' calls in `cprop_jump' and `bypass_block'.

 The calls in `try_optimize_cfg' that try to convert a conditional
branch into a corresponding conditional return (necessary to analyse of
course, though not relevant for the VAX; oh, how I long for the Z80)
both rely on `redirect_jump' and possibly `invert_jump', and those are
supposed to fail if a suitably modified original rtx cannot be matched
with any RTL pattern (though AFAICS `redirect_exp_1' will simply fail
due to the lack of explicit UNSPEC_VOLATILE/UNSPEC support, and so will
`invert_jump' as it relies on it too).

 Except for one case the uses in ifcvt appear safe to me: all legs
except for the ones calling into `dead_or_predicable' check
`onlyjump_p'.  Then in `dead_or_predicable' we have two cases, NCE and
CE.  The NCE one is safe owing to `can_move_insns_across' rejecting the
move if there is any UNSPEC_VOLATILE insn in either block.  The CE one
isn't, because ultimately it will delete the jump without validating it
with `onlyjump_p' AFAICT.

 This will not affect VAX as it doesn't have conditional execution, and is 
not a fault in my change.  I think it needs fixing though, and I will post 
a patch separately, along with a minor clean-up in this area.

 In `force_nonfallthru_and_redirect' we have `gcc_assert (redirected)'
and this may well trigger at some point.  Perhaps we need to make
`redirect_jump'
understand UNSPEC_VOLATILE/UNSPEC?  WDYT?

 Finally I feel we ought to avoid loop unrolling with the relevant
jumps: pasting the body of a loop multiple times over is semantically
equivalent to jump deletion (between the iterations unrolled) even
though no RTL insn is physically deleted, so unrolling needs to be
guarded with `onlyjump_p', and to me it does not appear to be.  I'll
post a proposed change along with the other ones in a small series for
easier reference.

 I suspect (though I'm not strongly confident) this change may not affect 
any actual code produced across our targets.  If it does, then I think we 
need to analyse individual cases and see how to proceed.  Otherwise I 
think we want the change for the sake of internal consistency.

 I won't be getting into further details of each case I have analysed.  I 
have ended up with five patches that add missing `onlyjump_p' calls, also 
for `any_uncondjump_p', which I think needs to be protected likewise, and 
actually already is in several places, making me feel pretty confident the 
remaining two are merely oversights.  I left cases across backend code
intact on the assumption that their authors knew what they were doing,
e.g. because the respective backends define no jumps with side effects.

 With these extra patches I think we can safely proceed with my
original change, save perhaps for the update to `redirect_jump'
mentioned above.  The more I think about it the more convinced I am
that it would be a reasonable update to make.  I'll experiment with it,
but I don't think it should ultimately be a showstopper for the change
being reviewed here.

> Would it make sense to have the handling of UNSPECs be conditional so
> that we don't get unexpected jump threading, cross jumping, etc?

 So for the sake of this consideration I actually made such a change
(for `pc_set' and `any_condjump_p'), applying the extra condition
mechanically throughout our code and making it `true' for the
equivalent of the original change right away wherever it was obvious to
me it did not matter.  Then I went through the remaining ~50 cases in
detail, flipping the condition to `true' as I ticked off cases found
harmless, and finally came to the conclusion above.  If any new
dangerous cases arise, then I think they need to be guarded as
appropriate, with `onlyjump_p' for deletions and
`can_move_insns_across' for reordering.
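 The opt-in variant I experimented with can be sketched along these
lines; again a simplified standalone model rather than actual GCC code,
with made-up names, showing only how the extra flag would gate the
unwrapping:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for rtx codes; illustration only.  */
enum code { SET, PARALLEL, UNSPEC, UNSPEC_VOLATILE, PC_DEST, OTHER };

struct pattern
{
  enum code c;                /* code of this rtx */
  enum code dest;             /* destination of a SET, if any */
  struct pattern *elems[2];   /* body of a PARALLEL/UNSPEC, if any */
  int nelems;
};

/* Model of `pc_set' with an opt-in flag: unwrap an UNSPEC or
   UNSPEC_VOLATILE only when the caller explicitly allows it, so
   passes such as jump threading and cross jumping could keep the
   old, conservative behaviour.  */
static struct pattern *
model_pc_set (struct pattern *pat, int allow_unspec)
{
  switch (pat->c)
    {
    case PARALLEL:
      pat = pat->elems[0];
      break;
    case UNSPEC:
    case UNSPEC_VOLATILE:
      if (!allow_unspec)
	return NULL;
      pat = pat->elems[0];
      break;
    default:
      break;
    }
  if (pat->c == SET && pat->dest == PC_DEST)
    return pat;
  return NULL;
}
```

In the end I flipped the flag to `true' everywhere it proved harmless,
which is why the patch below does the unwrapping unconditionally.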

 My overall view is that with jumps wrapped into plain UNSPECs the
middle end ought to be allowed to do whatever it wishes: it's really
just an annotation, provided that the rtx within can be usefully
interpreted.  With UNSPEC_VOLATILEs, by contrast, the usual rules for a
barrier obviously apply, i.e. the jump must not be deleted nor code
carried across it.  I think however that the target label may be
redirected if we can prove (which I believe we do) that the resulting
code is semantically equivalent, e.g. a conditional jump to an
unconditional one.  And the condition may be reversed as long as,
obviously, the backend has provided a definition for the corresponding
reverse UNSPEC_VOLATILE branch.  Physical duplication
(cf. `rotate_loop') is allowed too.

 I'll post the six patches mentioned shortly once I have written their 
descriptions.  Here's an update to 08/31 with a clarification WRT 
`onlyjump_p' for `any_uncondjump_p'; no functional change.

  Maciej

Changes from v1:

- Update `any_uncondjump_p' description.
---
 gcc/jump.c |   24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

Index: gcc/gcc/jump.c
===================================================================
--- gcc.orig/gcc/jump.c
+++ gcc/gcc/jump.c
@@ -850,9 +850,17 @@ pc_set (const rtx_insn *insn)
   pat = PATTERN (insn);
 
   /* The set is allowed to appear either as the insn pattern or
-     the first set in a PARALLEL.  */
-  if (GET_CODE (pat) == PARALLEL)
-    pat = XVECEXP (pat, 0, 0);
+     the first set in a PARALLEL, UNSPEC or UNSPEC_VOLATILE.  */
+  switch (GET_CODE (pat))
+    {
+    case PARALLEL:
+    case UNSPEC:
+    case UNSPEC_VOLATILE:
+      pat = XVECEXP (pat, 0, 0);
+      break;
+    default:
+      break;
+    }
   if (GET_CODE (pat) == SET && GET_CODE (SET_DEST (pat)) == PC)
     return pat;
 
@@ -860,7 +868,9 @@ pc_set (const rtx_insn *insn)
 }
 
 /* Return true when insn is an unconditional direct jump,
-   possibly bundled inside a PARALLEL.  */
+   possibly bundled inside a PARALLEL, UNSPEC or UNSPEC_VOLATILE.
+   The instruction may have various other effects so before removing the jump
+   you must verify onlyjump_p.  */
 
 int
 any_uncondjump_p (const rtx_insn *insn)
@@ -876,9 +886,9 @@ any_uncondjump_p (const rtx_insn *insn)
 }
 
 /* Return true when insn is a conditional jump.  This function works for
-   instructions containing PC sets in PARALLELs.  The instruction may have
-   various other effects so before removing the jump you must verify
-   onlyjump_p.
+   instructions containing PC sets in PARALLELs, UNSPECs or UNSPEC_VOLATILEs.
+   The instruction may have various other effects so before removing the jump
+   you must verify onlyjump_p.
 
    Note that unlike condjump_p it returns false for unconditional jumps.  */
 
