https://gcc.gnu.org/bugzilla/show_bug.cgi?id=124406
Richard Biener <rguenth at gcc dot gnu.org> changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|UNCONFIRMED |NEW
Ever confirmed|0 |1
Last reconfirmed| |2026-03-11
Summary|[16 Regression] LTO |[16 Regression] LTO
|profiledbootstrap fails on |profiledbootstrap with
|s390x-linux due to coverage |bootstrap-lto-lean fails on
|mismatch |s390x-linux due to coverage
| |mismatch
--- Comment #11 from Richard Biener <rguenth at gcc dot gnu.org> ---
So the differences start to appear in 056t.local-pure-const1 where basic block
indices start to diverge (those are used to compute the CFG hash):
ipa_icf::sem_item_optimizer::fixup_points_to_sets (this_218(D));
-;; succ:       106 [always]  count:12581363 (estimated locally, freq 0.3500) (FALLTHRU,EXECUTABLE)
+;; succ:       108 [always]  count:12581363 (estimated locally, freq 0.3500) (FALLTHRU,EXECUTABLE)
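The sensitivity to block renumbering can be illustrated with a toy sketch. This is not GCC's actual checksum code, just an illustration assuming the hash folds each block's index and successor indices into a CRC, so a single 106-vs-108 renumbering changes the whole hash:

```python
import zlib

def toy_cfg_checksum(cfg):
    """Fold basic-block indices and their successor indices into a CRC32.
    cfg maps a block index to the list of its successor block indices."""
    crc = 0
    for bb, succs in sorted(cfg.items()):
        crc = zlib.crc32(bb.to_bytes(4, "little"), crc)
        for s in succs:
            crc = zlib.crc32(s.to_bytes(4, "little"), crc)
    return crc

# Structurally identical CFGs whose middle block is numbered 106 in the
# training compile but 108 in the feedback compile, as in the
# 056t.local-pure-const1 dump difference above:
train    = {105: [106], 106: [107]}
feedback = {105: [108], 108: [107]}

print(toy_cfg_checksum(train) != toy_cfg_checksum(feedback))
```

The checksums differ, so the feedback compile rejects the trained counters for the function even though the CFG shape is the same.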
where the difference might be exposed because:
;; Function ipa_icf::sem_item_optimizer::merge_classes
(_ZN7ipa_icf18sem_item_optimizer13merge_classesEjj, funcdef_no=7612,
decl_uid=199459, cgraph_uid=5252, symbol_order=5251)
-Function is interposable; not analyzing.
+ neither
+
+
+ local analysis of ipa_icf::sem_item_optimizer::merge_classes(unsigned int,
unsigned int)/5252
...
+Function is locally looping.
+Function can locally free.
+Function found to be nothrow: ipa_icf::sem_item_optimizer::merge_classes
+fix_loop_structure: fixing up loops for function
which means we do not analyze it without -flto but do analyze it with
-flto, and the CFG changes as a result. Likely the execute_fixup_cfg ()
we do at the end causes the divergence; possibly fixing up loops is
solely responsible - maybe we should do that unconditionally, at least.
The following modref1 pass might have similar issues, and the different
visibility might also lead to different early-inlining decisions.
In the end it's a side-effect of using bootstrap-lto-lean.mk, which,
unlike bootstrap-lto, does not enable LTO for stagetrain.
So maybe this shows that mixed -flto/-fno-lto training/feedback does not
work without -fprofile-correction? (which we could add to said config,
I guess)
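A sketch of what that config change could look like. The stage variable name below is an assumption about the stagefeedback hook in config/bootstrap-lto-lean.mk and is untested:

```make
# Sketch only, untested: let the feedback stage tolerate profile
# inconsistencies caused by training without LTO but compiling the
# feedback stage with it. STAGEfeedback_CFLAGS is assumed to be the
# right hook for stagefeedback flags.
STAGEfeedback_CFLAGS += -fprofile-correction
```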
I did not do further analysis yet; IMO the fact that availability
differs is already enough indication of possible issues.