On 10/30/22 3:25 PM, Martin D Kealey wrote:
>> How much faster do you think it can be made?
> I don't know; it's irrelevant, though.
>
> The problem is not that individual steps are slow, but rather that it
> takes at least a higher-order-polynomial number of steps, possibly
> more (such as exponential or factorial).
>
> Speeding up the individual steps will make no practical difference,
> and peephole optimisations may dramatically speed up some common
> cases but still leave the most general cases catastrophically slow.
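A concrete illustration of the kind of blow-up being described may help; the pattern and input below are illustrative assumptions on my part, not examples taken from this thread:

```shell
# A classic ambiguous pattern: a run of N 'a's can be split between the
# 'a' and 'aa' branches in Fibonacci-many ways, and a naive backtracking
# matcher may try every split before reporting failure, since the
# trailing 'b' never matches.
shopt -s extglob

input=aaaaaaaaaaaaaaaaaaaa   # 20 'a's and no 'b'
if [[ $input == +(a|aa)b ]]; then
  echo match
else
  echo "no match"
fi
```

The input contains no `b` at all, so this prints "no match"; the point is that a backtracking matcher only reaches that answer after exhausting every way of splitting the run of `a`s, whereas a match that succeeds early costs almost nothing.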
These are technical details; no user cares about them.
> The purpose of my suggestions was to /minimize/ the complexity that
> becomes part of Bash's codebase, while leaving as few pathological
> cases as possible - preferably none.
I meant complexity of the language, not the codebase.
> In my opinion, "make the existing extglob code faster" is wasted
> effort if it doesn't get us to "run in at most quadratic time" (and
> preferably "run in regular, i.e. linear, time"), and that basically
> amounts to "write our own regex state-machine compiler and regex
> engine". This is a non-trivial task, and would fairly obviously add
> *more* complex code to Bash's codebase than any of my suggested
> alternatives.
extglobs are already a part of the bash language. All of your suggested
alternatives involve expanding the language in question. That's why I
disagree with all of them.
> (Even my options of "postprocess the codebase" or "modify an existing
> regex compiler" would leave their execution components untouched; only
> the compilation phase would be modified, and a modified regex compiler
> would at least stand a chance of existing as a stand-alone library
> project.)
If you mean bash should start shipping a huge library like pcre for
solving an edge case, I don't think that's reasonable at all; why take
on such a burden when you already have something that works fine in
practice?