On Mon, Nov 27, 2000 at 05:03:05PM -0500, Dan Sugalski wrote:
> At 04:50 PM 11/27/00 -0500, Kurt D. Starsinic wrote:
> >On Mon, Nov 27, 2000 at 04:41:34PM -0500, Dan Sugalski wrote:
> > In current perl, we do something _like_ that to disambiguate certain
> >situations. Grep the sources for `expectation'. I wouldn't be surprised
> >if something like this also goes on with, e.g., multi-line regexen.
> >
> > Oh, you said `reasonable'.
>
> :)
Making
%hash = map { "\L$_", 1 } @array
not a syntax error. Currently the guesser gets to the ", and goes "hmm, the {}
is an anon hash, EXPR", and then the parser gets upset that there's no ,
after }. A backtracking parser could get to the "missing" , and realise that
it guessed wrong, return to the matched opening { and re-parse it as a block.
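(For the record, the workarounds perlfunc suggests show just how little it
takes to tip the one-token guess the other way - roughly, from memory:

    %hash = map {  "\L$_", 1  } @array;  # guessed as an anon hash: syntax error
    %hash = map { +"\L$_", 1  } @array;  # unary + tips the guess to BLOCK
    %hash = map { ("\L$_", 1) } @array;  # so does a leading (
    %hash = map {; "\L$_", 1  } @array;  # and a leading ;
    %hash = map +( "\L$_", 1 ), @array;  # or skip the block: the EXPR form

A backtracking parser would simply make the first spelling work.)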
> The big reason I'm thinking about this is I'm trying to decide how much
> text we need to buffer, and at what point we can throw down a marker and
> declare "I don't care about anything before this--it's done". Mainly
> because I'm contemplating how to deal with both perl-as-shell and perl
> source with no determinate length (like if we're reading in perl code from
> a socket or something, where we might not want to wait until it's all in
> before we start chewing on it)
Currently we don't - we chew by line, and break lines out from blocks.
However, are you envisaging mmap or async IO as a pleasant scenario where we
can "pretend" to slurp the whole file - and so stop worrying about nibbling a
bit more when needed - without the inefficiency of actually having to wait
for it all to arrive in memory before the parser starts?
[I'm assuming that you're implying that regular files (determinate length,
seekable) are easy so we don't worry about optimising them until we make the
harder stuff work. So we forget I ever sent that last paragraph for some
months]
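[Purely to illustrate the "pretend to slurp" shape anyway - this is at the
Perl level, with the CPAN File::Map module standing in for whatever the
tokenizer would really do in C, so a sketch of the idea rather than a
proposal:

    use File::Map 'map_file';

    # The OS maps the file in; $source reads like a string holding the whole
    # file, but pages are only faulted in as they are actually touched.
    map_file my $source, 'some_script.pl';

    # The consumer can still nibble a line at a time, without anyone having
    # waited for the whole file to land in memory first.
    my $pos = 0;
    while ($pos < length $source) {
        my $nl   = index $source, "\n", $pos;
        my $line = $nl < 0 ? substr($source, $pos)
                           : substr($source, $pos, $nl - $pos + 1);
        $pos += length $line;
        # ... hand $line to the tokenizer here
    }
]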
Nicholas Clark