Apologies for the prolonged silence, Richard. It is a bit of an obscure topic,
and I was unsure I'd be able to handle any complications in a timely manner.
I'm ready to revisit it now; please see below.
On Mon, 17 Jan 2022, Richard Biener wrote:

> On Fri, Jan 14, 2022 at 7:21 PM Alexander Monakov <amona...@ispras.ru> wrote:
> >
> > A returns_twice call may have associated abnormal edges that correspond
> > to the "second return" from the call. If the call is duplicated, the
> > copies of those edges also need to be abnormal, but e.g. tracer does not
> > enforce that. Just prohibit the (unlikely to be useful) duplication.
>
> The general CFG copying routines properly duplicate those edges, no?

No (in fact you say so in the next paragraph). In general I think they
cannot: abnormal edges are a special case, so it should be the
responsibility of the caller.

> Tracer uses duplicate_block so it should also get copies of all successor
> edges of that block.  It also only traces along normal edges.  What it might
> miss is abnormal incoming edges - is that what you are referring to?

Yes (I think its entire point is to build a "trace" of duplicated blocks
that does not have incoming edges in the middle, abnormal or not).

> That would be a thing we don't handle in duplicate_block on its own but
> that callers are expected to do (though I don't see copy_bbs doing that
> either).  I wonder if we can trigger this issue for some testcase?

Oh yes (in fact my desire to find a testcase delayed this quite a bit).
When compiling the following testcase with -O2 -ftracer:

__attribute__((returns_twice)) int rtwice_a(int), rtwice_b(int);

int f(int *x)
{
  volatile unsigned k, i = (*x);

  for (k = 1; (i = rtwice_a(i)) * k; k = 2);

  for (; (i = rtwice_b(i)) * k; k = 4);

  return k;
}

tracer manages to eliminate the ABNORMAL_DISPATCHER block completely, so
the possibility of transferring control back to rtwice_a from rtwice_b
is no longer modeled in the IR. I could spend some time "upgrading" this
to an end-to-end miscompilation, but I hope you agree this is quite
broken already.

> The thing to check would be incoming abnormal edges in
> can_duplicate_block_p, not (only) returns twice functions?

Unfortunately not: abnormal edges are also used for computed gotos,
which are less magic than returns_twice edges and should not block
tracer, I think (a small computed-goto sketch is at the end of this
mail for illustration). This also implies that patch 1/3 [1]
unnecessarily blocks sinking to computed goto targets.

[1] https://gcc.gnu.org/pipermail/gcc-patches/2022-January/588498.html

How would you like to proceed here? Is my initial patch ok?

Alexander

>
> Richard.
>
> > gcc/ChangeLog:
> >
> >         * tree-cfg.c (gimple_can_duplicate_bb_p): Reject blocks with
> >         calls that may return twice.
> > ---
> >  gcc/tree-cfg.c | 7 +++++--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
> > index b7fe313b7..a99f1acb4 100644
> > --- a/gcc/tree-cfg.c
> > +++ b/gcc/tree-cfg.c
> > @@ -6304,12 +6304,15 @@ gimple_can_duplicate_bb_p (const_basic_block bb)
> >      {
> >        gimple *g = gsi_stmt (gsi);
> >
> > -      /* An IFN_GOMP_SIMT_ENTER_ALLOC/IFN_GOMP_SIMT_EXIT call must be
> > +      /* Prohibit duplication of returns_twice calls, otherwise associated
> > +        abnormal edges also need to be duplicated properly.
> > +        An IFN_GOMP_SIMT_ENTER_ALLOC/IFN_GOMP_SIMT_EXIT call must be
> >          duplicated as part of its group, or not at all.
> >          The IFN_GOMP_SIMT_VOTE_ANY and IFN_GOMP_SIMT_XCHG_* are part of such a
> >          group, so the same holds there.  */
> >        if (is_gimple_call (g)
> > -         && (gimple_call_internal_p (g, IFN_GOMP_SIMT_ENTER_ALLOC)
> > +         && (gimple_call_flags (g) & ECF_RETURNS_TWICE
> > +             || gimple_call_internal_p (g, IFN_GOMP_SIMT_ENTER_ALLOC)
> >               || gimple_call_internal_p (g, IFN_GOMP_SIMT_EXIT)
> >               || gimple_call_internal_p (g, IFN_GOMP_SIMT_VOTE_ANY)
> >               || gimple_call_internal_p (g, IFN_GOMP_SIMT_XCHG_BFLY)
> > --
> > 2.33.1
> >
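
P.S. To illustrate the computed-goto situation mentioned above, here is
a rough sketch of mine (not a testcase from this series; the function
and label names are invented). The labels below have their address
taken, so the blocks they start receive incoming abnormal edges from
the computed gotos, and yet duplicating them, e.g. to grow traces
through the dispatch sequence, is harmless:

/* Toy bytecode interpreter using the GNU labels-as-values extension.
   Each byte in OP selects the label to jump to next; the opcode
   stream is assumed to end with opcode 2 (do_ret).  */
void *
interp (const unsigned char *op, void *arg)
{
  static void *dispatch[] = { &&do_inc, &&do_dec, &&do_ret };

  goto *dispatch[*op++];   /* Computed goto.  */

do_inc:
  arg = (char *) arg + 1;
  goto *dispatch[*op++];

do_dec:
  arg = (char *) arg - 1;
  goto *dispatch[*op++];

do_ret:
  return arg;
}

Rejecting any block with incoming abnormal edges in
can_duplicate_block_p would forbid duplicating do_inc/do_dec/do_ret
here, even though no returns_twice call is involved.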