Hi,

the patch below fixes various issues in the function update_specialized_profile.  The main one is the removal of an assert which is bogus in the case of recursive cloning.  The division of unexplained counts is guesswork; it leads to updates of the counts of recursive edges, which can then be redirected to the new clone and their counts subtracted from the count of the original node, and there simply may not be enough left there - especially when we clone a lot because of using --param ipa-cp-eval-threshold=1.
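To illustrate the arithmetic that used to trip the assert, here is a tiny standalone sketch with made-up numbers (plain integers rather than profile_count and no cgraph machinery, so it only demonstrates why redirected_sum can end up larger than the count of the original node, not how ipa-cp actually computes it):

#include <algorithm>
#include <cstdio>

int main ()
{
  /* Count of the original node as read from the profile.  */
  long orig_node_count = 1000;

  /* The guessed portion of the count attributed to recursive edges;
     with --param ipa-cp-eval-threshold=1 we clone very aggressively
     and the guess can be scaled up further when the edges are
     redirected to the new clone.  */
  long redirected_sum = 1100;

  /* The removed assert demanded orig_node_count >= redirected_sum,
     which does not hold here.  Instead of asserting, keep whatever
     non-negative remainder is left; this clamp is a simplification,
     the real patch passes the remainder through
     lenient_count_portion_handling.  */
  long remainder = std::max (orig_node_count - redirected_sum, 0L);

  printf ("redirected sum %ld, remainder on the original node %ld\n",
          redirected_sum, remainder);
  return 0;
}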
The other issue was the omission to drop the count of the original node to its IPA count.  And when calculating the remainder, we should use lenient_count_portion_handling to account for partial train runs.  Finally, the patch adds dumping of the original count, which I think is useful.

Profiled-LTO-bootstrapped on its own and also normally bootstrapped and tested together with the subsequent patch on an x86_64-linux.  OK for master and the 12 branch - assuming it is also affected?

Thanks,

Martin


gcc/ChangeLog:

2023-02-17  Martin Jambor  <mjam...@suse.cz>

	PR ipa/107925
	* ipa-cp.cc (update_specialized_profile): Drop orig_node_count to
	ipa count, remove assert, use lenient_count_portion_handling, dump
	also orig_node_count.
---
 gcc/ipa-cp.cc | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/gcc/ipa-cp.cc b/gcc/ipa-cp.cc
index 4b8dedc0c51..5a6b41cf2d6 100644
--- a/gcc/ipa-cp.cc
+++ b/gcc/ipa-cp.cc
@@ -5093,22 +5093,24 @@ update_specialized_profile (struct cgraph_node *new_node,
 			    profile_count redirected_sum)
 {
   struct cgraph_edge *cs;
-  profile_count new_node_count, orig_node_count = orig_node->count;
+  profile_count new_node_count, orig_node_count = orig_node->count.ipa ();
 
   if (dump_file)
     {
       fprintf (dump_file, "    the sum of counts of redirected edges is ");
       redirected_sum.dump (dump_file);
+      fprintf (dump_file, "\n    old ipa count of the original node is ");
+      orig_node_count.dump (dump_file);
       fprintf (dump_file, "\n");
     }
   if (!(orig_node_count > profile_count::zero ()))
     return;
 
-  gcc_assert (orig_node_count >= redirected_sum);
-
   new_node_count = new_node->count;
   new_node->count += redirected_sum;
-  orig_node->count -= redirected_sum;
+  orig_node->count
+    = lenient_count_portion_handling (orig_node->count - redirected_sum,
+				      orig_node);
 
   for (cs = new_node->callees; cs; cs = cs->next_callee)
     cs->count += cs->count.apply_scale (redirected_sum, new_node_count);
-- 
2.39.1