Joe Buck wrote:
> The problem with using time as a cutoff is that you then get results that
> can't be reproduced reliably.  Better to count something that is a feature
> of the algorithm, e.g. number of executions of some inner loop, number of
> nodes visited, or the like, 

On the other hand, counting such features won't let you put a hard
ceiling on time and space... Guarantees on compile time and memory
use are probably what some users want, rather than yet another bunch
of --param max-foo-nodes.

I'd like to ask GCC users in general: how many of you actually use
these params?

Why not instead have a set of flags that limit the resources allotted
to each "unnecessary" (to be defined...) part of the compiler?  For
example, I'd like a guarantee that any tree-level optimizer stops
after at most 5 seconds and at most 300M of garbage: you'd say
-fbudget-time=5 and -fbudget-space=300M instead of having to deal
with some obscure params.
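To make that concrete, here is a minimal sketch of what the check
inside a pass could look like.  Everything in it is hypothetical:
budget_seconds/budget_bytes stand in for whatever -fbudget-time and
-fbudget-space would set, and more_work_to_do/do_one_transformation
stand in for the real pass internals; this is not existing GCC code.

    #include <time.h>

    /* Values that -fbudget-time=5 and -fbudget-space=300M would set
       (hypothetical flags, sketched here as plain globals).  */
    static unsigned long budget_seconds = 5;
    static unsigned long budget_bytes = 300UL << 20;

    static time_t pass_start;
    static unsigned long bytes_allocated;

    /* Stubs standing in for the real pass internals.  */
    static int more_work_to_do (void) { return 1; }
    static void do_one_transformation (void) { bytes_allocated += 1024; }

    static int
    budget_exceeded (void)
    {
      return (unsigned long) (time (NULL) - pass_start) > budget_seconds
             || bytes_allocated > budget_bytes;
    }

    void
    run_tree_optimizer (void)
    {
      pass_start = time (NULL);
      while (more_work_to_do ())
        {
          /* Bail out of the pass, keeping correct but less
             optimized code.  */
          if (budget_exceeded ())
            break;
          do_one_transformation ();
        }
    }

The point is that the check is cheap and local to each pass, so one
pair of flags could cover every optimizer without growing the list of
params.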

> so that all users get the same results.

I see your point: we'll get bug reports that are difficult to
reproduce.  I have not yet thought of a solution for this one, but
there should be some practical way to make bugs deterministic again
(one possibility is sketched below); otherwise we'll just step into a
Schrödinger box, and that's a Bad Thing.
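One way to reconcile a time cutoff with your reproducibility concern
might be to count iterations with a deterministic counter anyway, and
when the clock fires, report the count so the run can be replayed
with the count as the cutoff.  Again purely a sketch; replay_cutoff
and the --param name are made up, not anything GCC has today.

    #include <stdio.h>
    #include <time.h>

    static unsigned long iteration;
    static unsigned long replay_cutoff;  /* 0 = no replay, use the clock */

    static int
    should_stop (time_t start, unsigned long budget_seconds)
    {
      iteration++;
      if (replay_cutoff)
        return iteration >= replay_cutoff;  /* deterministic replay */
      if ((unsigned long) (time (NULL) - start) > budget_seconds)
        {
          /* Tell the user how to reproduce this exact truncation.  */
          fprintf (stderr,
                   "note: time budget hit at iteration %lu; rerun with "
                   "--param replay-cutoff=%lu to reproduce\n",
                   iteration, iteration);
          return 1;
        }
      return 0;
    }

A bug report would then carry the iteration number, and rerunning
with that number as the cutoff gives everyone the same truncated
result, machine speed notwithstanding.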

seb
