https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90078

--- Comment #7 from Martin Liška <marxin at gcc dot gnu.org> ---
(In reply to bin cheng from comment #6)
> (In reply to Martin Liška from comment #5)
> > (In reply to bin cheng from comment #4)
> > > In get_scaled_computation_cost_at, we have very big ratio between
> > > bb_count/loop_count:
> > > 
> > > (gdb) p data->current_loop->latch->count                   
> > > $50 = {static n_bits = 61, static max_count = 2305843009213693950, static
> > > uninitialized_count = 2305843009213693951, m_val = 158483, m_quality =
> > > profile_guessed_local}
> > > (gdb) p gimple_bb(at)->count
> > > $51 = {static n_bits = 61, static max_count = 2305843009213693950, static
> > > uninitialized_count = 2305843009213693951, m_val = 1569139790, m_quality =
> > > profile_guessed_local}
> > > (gdb) p 1569139790 / 158483
> > > $52 = 9900
> > > (gdb) p cost
> > > $53 = {cost = 20, complexity = 2, scratch = 1}
> > > (gdb) p 19 * 9900
> > > $54 = 188100
> > > 
> > > as a result, sum_cost soon overflows to infinite_cost.  Shall we cap the
> > > ratio so that it doesn't grow too quickly?  Of course, some benchmark
> > > data is needed for this heuristic tuning.
> > 
> > I would implement the capping in the comp_cost struct, where each
> > individual operator can cap to infinite_cost. What do you think, Bin?
> Implementing the capping in comp_cost::operators to infinite_cost is less
> invasive.  OTOH, capping bb_freq/loop_freq has its own advantages: once
> cost reaches infinite, it becomes meaningless for comparison as well as
> for candidate choosing, while capping bb_freq/loop_freq can still express
> the hotness of code to some extent.
> Let's fix the issue by capping in comp_cost::operators first for this
> stage 4, and revisit the idea of capping bb_freq/loop_freq with more
> benchmark data in the next stage 1.  How about that?
> 
> Thanks.

Sounds good. Can you please work on that, Bin?
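For illustration, the agreed-on direction (capping in comp_cost's operators rather than capping the bb_freq/loop_freq ratio) could look roughly like the following. This is a simplified sketch, not the actual GCC tree-ssa-loop-ivopts code: the struct layout, the INFTY value, and the helper names are assumptions made for the example. The key point is that operator+ saturates at infinite_cost, so repeated scaled additions can never overflow.

```cpp
#include <cassert>
#include <cstdint>

/* Hypothetical, simplified comp_cost whose addition saturates at INFTY
   instead of overflowing.  Mirrors the idea discussed above, not the
   real GCC implementation.  */
struct comp_cost
{
  static constexpr int64_t INFTY = 1000000000;

  int64_t cost;
  unsigned complexity;

  bool infinite_cost_p () const { return cost >= INFTY; }

  comp_cost operator+ (comp_cost other) const
  {
    comp_cost res;
    /* Cap the sum to INFTY so that a large bb_count/loop_count ratio
       (e.g. the 9900x scaling seen in comment #4) cannot overflow.  */
    if (infinite_cost_p () || other.infinite_cost_p ()
	|| cost + other.cost >= INFTY)
      res.cost = INFTY;
    else
      res.cost = cost + other.cost;
    res.complexity = complexity + other.complexity;
    return res;
  }
};
```

Once a cost saturates at INFTY, every further addition leaves it at INFTY, which is exactly the behavior Bin notes makes the cost meaningless for candidate comparison; hence the plan to revisit ratio capping later with benchmark data.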
