It is my impression that most practical GLR grammars have parses that
collapse pretty quickly after they split.
As for the info I would find useful, this would be an important part,
i.e. something like:
glrinfo: split level 1, split at state 216 (somename: someothername, …)
Bison doesn't know how much
seq0: '0' seq0 | '0' ;
seq1: '1' seq1 | '1' ;
There is a conflict involving '0' and '1', and you could resolve it in favor of the shift with:
%right '0' '1'
but that would mask any other conflicts involving '0' and '1', so I'd use
%expect instead, so that you still see any other unintended conflicts.
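To make the trade-off concrete, here is a toy grammar I made up (not the one
discussed above) that gives Bison four shift/reduce conflicts on '0' and '1'.
You can either resolve them statically with %right, which also silences any
conflict on those tokens you didn't anticipate, or leave them in place and
declare the expected count with %expect, so a new unintended conflict still
gets reported:

/* hypothetical toy grammar: four shift/reduce conflicts on '0' and '1' */

/* Option 1: decide every conflict on these tokens in favor of the shift.
   This also hides any future, unintended conflict on '0' or '1'. */
/* %right '0' '1' */

/* Option 2: leave the conflicts alone but state how many are expected;
   Bison stays quiet for exactly this count and complains if it changes. */
%expect 4

%%
expr: expr '0' expr | expr '1' expr | 'x' ;
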
If you have a classic flex/bison setup and want th
You might have to resolve that using GLR, or with the hack that a
reduce/reduce conflict is resolved in favor of the rule that appears earlier
in the Bison grammar:
whilekwd: 'w' 'h' 'i' 'l' 'e' ;
ifkwd: 'i' 'f' ;
thenkwd: 't' 'h' 'e' 'n' ;
elsekwd: 'e' 'l' 's' 'e' ;
identifier: letter | identifier letter ;
letter: 'a' | 'b' /* ... */ | 'z' ;
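A rough sketch of what the GLR route might look like here (my guess at the
shape, not code from the original post), assuming a hypothetical 'word' start
rule to tie the alternatives together. %glr-parser lets the parser split
instead of committing at the conflict, and %dprec settles the genuine
ambiguity -- "if" matches both ifkwd and identifier -- in favor of the
keyword rules:

%glr-parser

%%
/* hypothetical start rule; when a GLR merge finds the same input parsed
   both ways, the alternative with the higher %dprec wins, so the keyword
   reading beats the identifier reading */
word: whilekwd   %dprec 2
    | ifkwd      %dprec 2
    | thenkwd    %dprec 2
    | elsekwd    %dprec 2
    | identifier %dprec 1
    ;

whilekwd: 'w' 'h' 'i' 'l' 'e' ;
ifkwd:    'i' 'f' ;
thenkwd:  't' 'h' 'e' 'n' ;
elsekwd:  'e' 'l' 's' 'e' ;

identifier: letter | identifier letter ;
letter: 'a' | 'b' /* ... all letters ... */ | 'z' ;
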
A more important point is that the time spent in the parser is never
significant. If your compiler is simple, the bulk of the time is
in the lexer since it has to touch each character in the input. If
your compiler is sophisticated, it'll spend most of its time in analysis and
optimization.
For scri
I'm wondering how much of that is the parser and how much is the lexer.
Disclaimer: I'm totally new at lexers and parsers; all my other work
is in other areas of the Ruby VM.
I just tried uninlining the yylex function in Ruby but roughly 1/2 to
2/3 of that time is still in yyparse (aka ruby_yyparse).