Kenneth Zadeck wrote on 11/04/06 15:17:

> 1) defining the set of optimizations that need to be skipped.
> 2) defining the set of functions that trigger the special processing.

 This seems too simplistic.  The number of variables/blocks/statements
is a factor, but they may interact in ways that are difficult or
impossible to compute until after the optimization has started (it may
depend on how many blocks have this or that property, in/out degree,
number of variables referenced in statements, grouping of something or
other, etc.).

So, in my view, each pass should be responsible for throttling itself.
The pass gate functions already give us the mechanism for on/off.  I
agree that we need more graceful throttles.  And then we have components
of the pipeline that cannot really be turned on/off (like alias
analysis) but could throttle themselves based on size (working on that).
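The on/off mechanism could look roughly like this.  This is only a
hypothetical sketch in the style of GCC's gate functions; the summary
struct, field names, and cutoff are illustrative, not real GCC
interfaces:

```c
#include <stdbool.h>

/* Illustrative per-function summary; not a real GCC structure.  */
struct fn_summary {
  int n_blocks;   /* basic blocks in the function's CFG */
  int opt_level;  /* effective -O level for this function */
};

/* Static on/off decision: cheap, made before the pass runs.  */
static bool
gate_example_pass (const struct fn_summary *fn)
{
  if (fn->opt_level < 2)
    return false;               /* pass only makes sense at -O2+ */
  return fn->n_blocks < 10000;  /* skip pathologically large CFGs */
}
```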

The compilation manager could then look at the options, in particular
the -O level and perhaps some new options to indicate that this is a
small machine or, at the other extreme, "optimize all functions come
hell or high water!", and skip those passes that would cause
performance problems.

All this information is already available to the gate functions.  There
isn't a lot here that the pass manager needs to do.  We already know
the compilation options, target machine features, and overall
optimization level.

What we do need is for each pass to learn to throttle itself and/or turn
itself off.  Turning the pass off statically and quickly could be done
in the gating function.  A quick analysis of the CFG made by the pass
itself may be enough to decide.
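The quick analysis can itself be kept cheap by bailing out early.  A
hypothetical sketch (the block array and budget are made up for
illustration): the pass pre-scans its input and stops the moment an
estimated work budget is exceeded, so the decision never costs more
than the budget:

```c
#include <stdbool.h>

/* Illustrative stand-in for a basic block; not a real GCC type.  */
struct basic_block {
  int n_statements;
};

/* Return true if the pass should run.  Stop scanning as soon as the
   estimated cost crosses the budget, so the check stays cheap even
   on huge functions.  */
static bool
worth_running_p (const struct basic_block *blocks, int n_blocks,
                 int budget)
{
  int cost = 0;
  for (int i = 0; i < n_blocks; i++)
    {
      cost += blocks[i].n_statements;
      if (cost > budget)
        return false;  /* too expensive; skip without finishing scan */
    }
  return true;
}
```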

We could provide a standard group of heuristics with standard metrics
that lazy passes could use.  Say a 'cfg_too_big_p' or 'cfg_too_jumpy_p'
that passes could call and decide not to run, or set internal flags
that would partially disable parts of the pass (much like DCE can work
with or without control-dependence information).
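To make the idea concrete, here is a hypothetical version of such
helpers, plus a pass that uses them to pick a mode rather than make a
binary run/skip decision (in the spirit of DCE running without
control-dependence information).  All structs and thresholds are
illustrative assumptions, not proposed values:

```c
#include <stdbool.h>

/* Illustrative CFG summary; not a real GCC structure.  */
struct cfg_stats {
  int n_blocks;
  int n_edges;
};

static bool
cfg_too_big_p (const struct cfg_stats *cfg)
{
  return cfg->n_blocks > 20000;  /* illustrative cutoff */
}

/* "Jumpy": far more edges per block than structured code produces.  */
static bool
cfg_too_jumpy_p (const struct cfg_stats *cfg)
{
  return cfg->n_blocks > 0 && cfg->n_edges > 4 * cfg->n_blocks;
}

/* A pass chooses a mode instead of just running or not running.  */
enum pass_mode { FULL, CHEAP_ONLY, SKIP };

static enum pass_mode
choose_mode (const struct cfg_stats *cfg)
{
  if (cfg_too_big_p (cfg))
    return SKIP;
  if (cfg_too_jumpy_p (cfg))
    return CHEAP_ONLY;  /* e.g. run without control-dependence info */
  return FULL;
}
```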
