Recursive descent is slow, and it produces bad error messages unless you are very careful with your <commit> calls, but it is very flexible and (locally) predictable. Predictive parsing is faster, produces fantastic error messages, and is fairly flexible. Bottom-up parsing is not very flexible, but is very fast.
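To make the error-message trade-off concrete, here is a minimal recursive-descent matcher (a Python sketch, not Perl's actual rule engine; all names are made up for illustration). Without something like <commit>, a failed alternative just backtracks silently, so all the engine can report is that the top-level match failed, not how deep parsing got before it gave up:

```python
def lit(s):
    """Match a literal string at the current position."""
    def parse(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return parse

def seq(*parsers):
    """Match each parser in order, threading the position through."""
    def parse(text, pos):
        for p in parsers:
            pos = p(text, pos)
            if pos is None:
                return None
        return pos
    return parse

def alt(*parsers):
    """Try each alternative at the same position; first success wins."""
    def parse(text, pos):
        for p in parsers:
            r = p(text, pos)
            if r is not None:
                return r
        return None  # silent backtrack: the caller learns nothing about *why*
    return parse

# "ab" | "ac": on input "ax", both branches fail after consuming "a",
# but the engine can only say "no match at position 0", even though
# parsing actually got as far as position 1.
g = alt(seq(lit("a"), lit("b")), seq(lit("a"), lit("c")))
print(g("ab", 0))  # 2
print(g("ax", 0))  # None
```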
These are all trade-offs that someone writing a parser can (but shouldn't have to) make. I'd like to make it possible for someone writing rules in Perl to make these trade-offs, by separating the semantics of rules from their evaluation strategy. I'm not proposing that a dumb rule call (if /foo/ {...}) do anything other than recursive descent.

I propose that a "conforming" rule engine must abide by the following laws (it is perfectly possible to have a non-conforming rule engine, but rules can't be guaranteed to behave "correctly" when using such an engine). I'll define them in terms of the recursive-descent strategy instead of going back to context-free fundamentals, and I'll handwave instead of being formal where being formal is too complicated:

* In the absence of side effects, a rule engine must match a string exactly when a recursive-descent strategy would match.

* It is okay for an engine to match or to fail when the recursive-descent strategy would diverge (infinite loop), but of course the result must be consistent with the defined grammar.

* If the engine matches the rule through a given "path" of alternations, all side effects along that path must have been executed in the proper order, and references to hypotheticals must be correct. (Side effects may be executed multiple times, as long as, if A precedes B in the path, A is executed at least once before B is executed.) If a side effect in a given path calls "fail", then that path must not be a matching path.

Notice that there is no constraint on alternation ordering. This all means that if you don't do any global side effects in your rule processing (i.e. you keep everything hypothetical), it will behave the same as recursive descent.

Luke
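The third law above can be sketched in code. This is a hypothetical Python model (not Perl's rule engine; the `Hypothetical` class and `match_alt` helper are invented for illustration): side effects are journaled while matching, discarded when an alternative backtracks, and applied only once a matching path is found:

```python
class Hypothetical:
    """Journal side effects; apply them only along the matching path."""
    def __init__(self):
        self.journal = []   # pending (hypothetical) effects
        self.applied = []   # effects that became real after a match

    def effect(self, action):
        self.journal.append(action)

    def mark(self):
        return len(self.journal)

    def backtrack(self, mark):
        del self.journal[mark:]  # discard effects from the failed path

    def commit(self):
        self.applied = list(self.journal)

def match_alt(text, pos, alternatives, hyp):
    """Try (literal, effect) alternatives in order, undoing hypothetical
    effects whenever an alternative fails to match."""
    for literal, action in alternatives:
        mark = hyp.mark()
        if text.startswith(literal, pos):
            hyp.effect(action)
            return pos + len(literal)
        hyp.backtrack(mark)
    return None

hyp = Hypothetical()
# Grammar sketch: "fox" | "foo", each tagged with a side-effect label.
end = match_alt("foobar", 0, [("fox", "saw fox"), ("foo", "saw foo")], hyp)
if end is not None:
    hyp.commit()        # only now do the side effects become real
print(hyp.applied)      # ['saw foo'] -- effects from the matching path only
```

Because every effect stays hypothetical until commit, any strategy that reaches the same matching path produces the same observable side effects as recursive descent, which is exactly the guarantee the laws are after.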