The testcase in this PR shows very slow IDF compute:

 tree SSA rewrite                   :  76.99 ( 31%)

 24.78%  243663  cc1plus  cc1plus  [.] compute_idf

which can be mitigated to some extent by refactoring the bitmap
operations to simpler variants.  With the patch below this becomes

 tree SSA rewrite                   :  15.23 (  8%)

when not optimizing; when optimizing,

 tree SSA incremental               : 181.52 ( 30%)

additionally improves to

 tree SSA incremental               :  24.09 (  6%)

Bootstrap and regtest running on x86_64-unknown-linux-gnu.  OK if that
succeeds?

Thanks,
Richard.

	PR middle-end/114480
	* cfganal.cc (compute_idf): Use simpler bitmap iteration,
	touch work_set only when phi_insertion_points changed.
---
 gcc/cfganal.cc | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/gcc/cfganal.cc b/gcc/cfganal.cc
index 432775decf1..5ef629f677e 100644
--- a/gcc/cfganal.cc
+++ b/gcc/cfganal.cc
@@ -1701,8 +1701,7 @@ compute_idf (bitmap def_blocks, bitmap_head *dfs)
 	 on earlier blocks first is better.
 	 ???  Basic blocks are by no means guaranteed to be ordered in
 	 optimal order for this iteration.  */
-      bb_index = bitmap_first_set_bit (work_set);
-      bitmap_clear_bit (work_set, bb_index);
+      bb_index = bitmap_clear_first_set_bit (work_set);
 
       /* Since the registration of NEW -> OLD name mappings is done
 	 separately from the call to update_ssa, when updating the SSA
@@ -1712,12 +1711,9 @@ compute_idf (bitmap def_blocks, bitmap_head *dfs)
       gcc_checking_assert (bb_index
 			   < (unsigned) last_basic_block_for_fn (cfun));
 
-      EXECUTE_IF_AND_COMPL_IN_BITMAP (&dfs[bb_index], phi_insertion_points,
-				      0, i, bi)
-	{
+      EXECUTE_IF_SET_IN_BITMAP (&dfs[bb_index], 0, i, bi)
+	if (bitmap_set_bit (phi_insertion_points, i))
 	  bitmap_set_bit (work_set, i);
-	  bitmap_set_bit (phi_insertion_points, i);
-	}
     }
 
   return phi_insertion_points;
-- 
2.35.3
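
P.S. The key property the patch exploits is visible in the diff:
bitmap_set_bit returns whether the bit was previously clear, so the
worklist only needs to be touched for blocks that newly became PHI
insertion points, and the plain one-bitmap EXECUTE_IF_SET_IN_BITMAP
walk replaces the more expensive two-bitmap AND-COMPL iteration.
Below is a minimal standalone sketch of that worklist pattern; the toy
set type and the CFG data are invented for illustration and are not
GCC's bitmap implementation.

/* Standalone sketch of the compute_idf worklist pattern; builds with
   any C++11 compiler.  All names here are made up for the example.  */
#include <cstdio>
#include <vector>

/* Toy bit set whose set() mirrors the contract the patch relies on in
   GCC's bitmap_set_bit: return true iff the bit was newly set.  */
struct toy_bitmap
{
  std::vector<bool> bits;
  explicit toy_bitmap (size_t n) : bits (n, false) {}
  bool set (size_t i)
  {
    if (bits[i])
      return false;
    bits[i] = true;
    return true;
  }
};

int
main ()
{
  /* dfs[b] lists the dominance-frontier blocks of block b (made-up
     three-block CFG).  */
  std::vector<std::vector<size_t> > dfs = { { 1, 2 }, { 2 }, { 1 } };

  toy_bitmap phi_insertion_points (3);
  std::vector<size_t> work_set = { 0 };  /* blocks containing defs */

  while (!work_set.empty ())
    {
      /* Take one block off the worklist in a single step; the patch
	 does the analogous thing with bitmap_clear_first_set_bit
	 instead of bitmap_first_set_bit plus bitmap_clear_bit.  */
      size_t bb_index = work_set.back ();
      work_set.pop_back ();

      for (size_t i : dfs[bb_index])
	/* Re-queue I only when it newly became a PHI insertion point,
	   which is exactly what gating on set()'s return value gives
	   us; no separate filtered iteration is needed.  */
	if (phi_insertion_points.set (i))
	  work_set.push_back (i);
    }

  for (size_t i = 0; i < phi_insertion_points.bits.size (); ++i)
    if (phi_insertion_points.bits[i])
      printf ("PHI needed in block %zu\n", i);
  return 0;
}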
which can be mitigated to some extent by refactoring the bitmap operations to simpler variants. With the patch below this becomes tree SSA rewrite : 15.23 ( 8%) when not optimizing and in addition to that tree SSA incremental : 181.52 ( 30%) to tree SSA incremental : 24.09 ( 6%) when optimizing. Bootstrap and regtest running on x86_64-unknown-linux-gnu. OK if that succeeds? Thanks, Richard. PR middle-end/114480 * cfganal.cc (compute_idf): Use simpler bitmap iteration, touch work_set only when phi_insertion_points changed. --- gcc/cfganal.cc | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/gcc/cfganal.cc b/gcc/cfganal.cc index 432775decf1..5ef629f677e 100644 --- a/gcc/cfganal.cc +++ b/gcc/cfganal.cc @@ -1701,8 +1701,7 @@ compute_idf (bitmap def_blocks, bitmap_head *dfs) on earlier blocks first is better. ??? Basic blocks are by no means guaranteed to be ordered in optimal order for this iteration. */ - bb_index = bitmap_first_set_bit (work_set); - bitmap_clear_bit (work_set, bb_index); + bb_index = bitmap_clear_first_set_bit (work_set); /* Since the registration of NEW -> OLD name mappings is done separately from the call to update_ssa, when updating the SSA @@ -1712,12 +1711,9 @@ compute_idf (bitmap def_blocks, bitmap_head *dfs) gcc_checking_assert (bb_index < (unsigned) last_basic_block_for_fn (cfun)); - EXECUTE_IF_AND_COMPL_IN_BITMAP (&dfs[bb_index], phi_insertion_points, - 0, i, bi) - { + EXECUTE_IF_SET_IN_BITMAP (&dfs[bb_index], 0, i, bi) + if (bitmap_set_bit (phi_insertion_points, i)) bitmap_set_bit (work_set, i); - bitmap_set_bit (phi_insertion_points, i); - } } return phi_insertion_points; -- 2.35.3