http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55674
--- Comment #20 from Teresa Johnson <tejohnson at google dot com> 2012-12-21 16:26:17 UTC ---
On Fri, Dec 21, 2012 at 8:15 AM, hubicka at ucw dot cz <gcc-bugzi...@gcc.gnu.org> wrote:
>
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55674
>
> --- Comment #19 from Jan Hubicka <hubicka at ucw dot cz> 2012-12-21 16:15:34 UTC ---
>> As another data point, in our internal benchmarks I had tried a few
>> values and 99.9% gave the best performance. Just going down to 99.0%
>> reduced the inlining too much, even compared to the old static cutoff
>> count, missing some key inlines and reducing performance.
> This really should not happen too much. I still think something along
> the following lines is desirable. Does it help to set a more reasonable
> threshold?

I'll give this patch a try and let you know how it affects the performance
I see. But unrolling shouldn't affect inlining, since all unrolling happens
after inlining, right?

Thanks,
Teresa

> Honza
>
> Index: predict.c
> ===================================================================
> *** predict.c   (revision 194655)
> --- predict.c   (working copy)
> *************** maybe_hot_count_p (struct function *fun,
> *** 145,151 ****
>     {
>       ws = find_working_set (PARAM_VALUE (HOT_BB_COUNT_WS_PERMILLE));
>       gcc_assert (ws);
> !     min_count = ws->min_counter;
>     }
>   return (count >= min_count);
> }
> --- 145,156 ----
>     {
>       ws = find_working_set (PARAM_VALUE (HOT_BB_COUNT_WS_PERMILLE));
>       gcc_assert (ws);
> !
> !     /* We want all counters above ws->min_counter * profile_info->runs
> !        to be safely identified as hot regions.  This may be spoiled
> !        by optimizations such as unrolling that reduce counts of the
> !        body, thus divide by 32.  */
> !     min_count = ws->min_counter / 32;
>     }
>   return (count >= min_count);
> }
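
For readers following the thread, here is a minimal standalone sketch of the
working-set idea behind HOT_BB_COUNT_WS_PERMILLE: the hottest profile
counters that together cover the given permille of total profiled execution
form the working set, and the smallest counter in that set becomes the
hotness cutoff (ws->min_counter above). This is illustrative only; the
function min_hot_counter and the flat counts[] array are assumptions made
for this example, not GCC's actual implementation, which derives the working
set from a bucketed counter histogram in the profile summary.

  /* Illustrative sketch, not GCC code.  */
  #include <stdio.h>
  #include <stdlib.h>

  static int
  cmp_desc (const void *a, const void *b)
  {
    long long x = *(const long long *) a;
    long long y = *(const long long *) b;
    return (x < y) - (x > y);   /* sort in descending order */
  }

  /* Return the smallest counter among the hottest counters that together
     account for PERMILLE thousandths of total profiled execution.  */
  static long long
  min_hot_counter (long long *counts, size_t n, int permille)
  {
    long long total = 0, covered = 0;
    size_t i;

    for (i = 0; i < n; i++)
      total += counts[i];

    qsort (counts, n, sizeof *counts, cmp_desc);

    for (i = 0; i < n; i++)
      {
        covered += counts[i];
        /* Test covered / total >= permille / 1000 without division.  */
        if (covered * 1000 >= total * permille)
          return counts[i];
      }
    return 0;
  }

  int
  main (void)
  {
    long long counts[] = { 100000, 90000, 500, 400, 50, 10, 5, 1 };
    size_t n = sizeof counts / sizeof counts[0];

    /* At 99.9% the cutoff is 400, so nearly everything stays hot; at
       99.0% it jumps to 90000 and borderline blocks go cold, mirroring
       the inlining regression described above.  */
    printf ("cutoff @ 999 permille: %lld\n", min_hot_counter (counts, n, 999));
    printf ("cutoff @ 990 permille: %lld\n", min_hot_counter (counts, n, 990));
    return 0;
  }

With these sample counts the cutoff rises from 400 to 90000 when the
parameter drops from 999 to 990 permille, which is why a small change in the
percentage can exclude many borderline blocks from inlining. Honza's patch
additionally divides the resulting cutoff by 32 so that counts deflated by
later transformations such as unrolling still compare as hot.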