https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102943

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Last reconfirmed|2022-01-18 00:00:00         |2022-03-10

--- Comment #35 from Richard Biener <rguenth at gcc dot gnu.org> ---
So I've re-measured -Ofast -march=znver2 -flto on today's trunk with release
checking (built with GCC 7, not bootstrapped), and the largest LTRANS unit
(ltrans22 at the moment) still has

 tree VRP                           :  15.52 ( 20%)   0.03 (  5%)  15.57 ( 20%)    28M (  4%)
 backwards jump threading           :  16.17 ( 21%)   0.00 (  0%)  16.15 ( 21%)  1475k (  0%)
 TOTAL                              :  77.29          0.59         77.92          744M

and the 2nd largest (ltrans86 at the moment)

 alias stmt walking                 :   7.70 ( 16%)   0.03 (  8%)   7.70 ( 16%)   703k (  0%)
 tree VRP                           :   8.25 ( 18%)   0.01 (  3%)   8.27 ( 17%)    14M (  3%)
 backwards jump threading           :   8.79 ( 19%)   0.00 (  0%)   8.82 ( 19%)  1645k (  0%)
 TOTAL                              :  46.97          0.38         47.38          438M

so jump threading/VRP still dominate compile time by far (I wonder
if we should separate the "old" and "new" [E]VRP timevars).  Given that VRP
shows up as well, the underlying ranger infrastructure is more likely the
real culprit.
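
As an aside, a minimal sketch of what splitting the timevars might look
like, assuming the usual gcc/timevar.def mechanism; TV_TREE_VRP_RANGER and
its description string are hypothetical names made up for illustration,
not an actual change:

/* Hypothetical new entry in gcc/timevar.def; name and description
   string are placeholders.  */
DEFTIMEVAR (TV_TREE_VRP_RANGER       , "tree VRP (ranger)")

The ranger-based pass would then presumably use the new timevar for its
pass_data tv_id (or a timevar_push/timevar_pop pair) so that -ftime-report
accounts its time separately from the legacy TV_TREE_VRP.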

Running perf on ltrans22 shows

Samples: 302K of event 'cycles', Event count (approx.): 331301505627
Overhead       Samples  Command      Shared Object     Symbol
  10.34%         31299  lto1-ltrans  lto1              [.] bitmap_get_aligned_chunk
   7.44%         22540  lto1-ltrans  lto1              [.] bitmap_bit_p
   3.17%          9593  lto1-ltrans  lto1              [.] get_immediate_dominator
   2.87%          8668  lto1-ltrans  lto1              [.] determine_value_range
   2.36%          7143  lto1-ltrans  lto1              [.] ranger_cache::propagate_cache
   2.32%          7031  lto1-ltrans  lto1              [.] bitmap_set_bit
   2.20%          6664  lto1-ltrans  lto1              [.] operand_compare::operand_equal_p
   1.88%          5692  lto1-ltrans  lto1              [.] bitmap_set_aligned_chunk
   1.79%          5390  lto1-ltrans  lto1              [.] number_of_iterations_exit_assumptions
   1.66%          5048  lto1-ltrans  lto1              [.] get_continuation_for_phi

callgraph info in perf is a mixed bag, but maybe it helps to pinpoint things:

-   10.20%    10.18%         30364  lto1-ltrans  lto1              [.] bitmap_get_aligned_chunk
   - 10.18% 0xffffffffffffffff
      + 9.16% ranger_cache::propagate_cache
      + 1.01% ranger_cache::fill_block_cache

-    7.84%     7.83%         23509  lto1-ltrans  lto1              [.] bitmap_bit_p
   - 6.20% 0xffffffffffffffff
      + 1.85% fold_using_range::range_of_range_op
      + 1.64% ranger_cache::range_on_edge
      + 1.29% gimple_ranger::range_of_expr

and the most prominent get_immediate_dominator calls are from
back_propagate_equivalences, which does

  FOR_EACH_IMM_USE_FAST (use_p, iter, lhs)
...
      /* Profiling has shown the domination tests here can be fairly
         expensive.  We get significant improvements by building the
         set of blocks that dominate BB.  We can then just test
         for set membership below.

         We also initialize the set lazily since often the only uses
         are going to be in the same block as DEST.  */
      if (!domby)
        {
          domby = BITMAP_ALLOC (NULL);
          basic_block bb = get_immediate_dominator (CDI_DOMINATORS, dest);
          while (bb)
            {
              bitmap_set_bit (domby, bb->index);
              bb = get_immediate_dominator (CDI_DOMINATORS, bb);
            }
        }

      /* This tests if USE_STMT does not dominate DEST.  */
      if (!bitmap_bit_p (domby, gimple_bb (use_stmt)->index))
        continue;

I think that "optimization" is flawed - a dominance check is cheap if
the DFS numbers are up-to-date:

bool
dominated_by_p (enum cdi_direction dir, const_basic_block bb1,
                const_basic_block bb2)
{
  unsigned int dir_index = dom_convert_dir_to_idx (dir);
  struct et_node *n1 = bb1->dom[dir_index], *n2 = bb2->dom[dir_index];

  gcc_checking_assert (dom_computed[dir_index]);

  if (dom_computed[dir_index] == DOM_OK)
    return (n1->dfs_num_in >= n2->dfs_num_in
            && n1->dfs_num_out <= n2->dfs_num_out);

  return et_below (n1, n2);
}
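
To illustrate why the DOM_OK path boils down to two integer comparisons,
here is a tiny self-contained example (not GCC code; the names and the toy
tree are made up): label each node of the dominator tree with DFS
entry/exit numbers, and "bb1 is dominated by bb2" becomes containment of
bb1's [in, out] interval inside bb2's.

/* Standalone illustration of the DFS-number dominance test; not GCC code.  */
#include <cstdio>
#include <vector>

struct node { int in = 0, out = 0; std::vector<int> kids; };

/* Assign entry/exit DFS numbers; a node's [in, out] interval nests
   inside the interval of every ancestor in the tree.  */
static void
dfs (std::vector<node> &t, int n, int &clock)
{
  t[n].in = clock++;
  for (int k : t[n].kids)
    dfs (t, k, clock);
  t[n].out = clock++;
}

/* Same test as the DOM_OK path above: bb1 is dominated by bb2 iff
   bb2's interval contains bb1's.  */
static bool
dominated_by (const std::vector<node> &t, int bb1, int bb2)
{
  return t[bb1].in >= t[bb2].in && t[bb1].out <= t[bb2].out;
}

int
main ()
{
  /* Toy dominator tree: 0 dominates 1 and 2, 2 dominates 3.  */
  std::vector<node> t (4);
  t[0].kids = {1, 2};
  t[2].kids = {3};
  int clock = 0;
  dfs (t, 0, clock);

  printf ("3 dominated by 2: %d\n", dominated_by (t, 3, 2));  /* prints 1 */
  printf ("3 dominated by 1: %d\n", dominated_by (t, 3, 1));  /* prints 0 */
}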

It's only the et_below fallback that is not cheap.  Also, recording _all_
dominators of 'dest' is expensive for a large CFG, but you'll only ever need
dominators up to the definition of 'lhs', which we know will dominate all
use_stmts; so if that does _not_ dominate e->dest, no use will
(but I think that's always the case in the current code).  Note
the caller iterates over simple equivalences on an edge, so this
bitmap is populated multiple times (but if we cache it we cannot
prune from the top).  For FP we usually have multiple equivalences,
so for WRF caching pays off more than pruning.  Note this is only
a minor part of the slowness; I'm testing a patch for this part.
Note that for WRF, always going the "slow" dominated_by_p way is as fast
as caching - a sketch of that variant follows below.
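
For reference, a minimal sketch of the "always use dominated_by_p" variant
of the loop above - my reading of what that change amounts to, not the
actual patch under test; the filtering of uses in DEST itself is assumed to
stay in the elided part of the excerpt:

  FOR_EACH_IMM_USE_FAST (use_p, iter, lhs)
...
      /* This tests if USE_STMT does not dominate DEST.  With dominators
         in DOM_OK state this is just the two DFS-number comparisons shown
         above, so no domby bitmap has to be allocated and re-populated
         for every equivalence on the edge.  */
      if (!dominated_by_p (CDI_DOMINATORS, dest, gimple_bb (use_stmt)))
        continue;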
