Simon Cozens <[EMAIL PROTECTED]> wrote:
> > Simon (?) brought up the problem that we might end up with a
> > monolithic beastie
>
> I don't recall saying anything about it being a problem. :)
Ok, it scared somebody. That much I remember.
> > Reading what you say, "parser/lexer/tokenizer" (multiple things)
> > "part" (one thing). That's got to be a stumbling block of some kind.
>
> Why?
Because what is the parser/lexer/tokenizer parsing? Perl? Pythonic?
Javanese? All of them? Thinking of the parser as a single entity seems
to me to be heading for trouble unless we can define in advance what
role these dialects will play in the language, at what point they merge
into a single entity, and how. Certainly, we won't want to try to make
one huge parser/lexer/tokenizer parse everything. There is either a
prefilter of some sort, or multiple parsers (or worse, multiple
"parser/lexer/tokenizer single-entity parts"... meaning redundant
duplication of extra effort over and over again repeatedly).
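To make the "prefilter or multiple parsers" distinction concrete, here is a minimal sketch (not from this thread; all names and the dialect tags are hypothetical) of the multiple-parsers shape: dialect-specific front ends that all emit one shared intermediate form, with a thin dispatcher choosing among them instead of one monolithic parser.

```python
# Hypothetical sketch: per-dialect front ends targeting a common
# intermediate form, selected by a thin dispatcher. The dialect names
# and the tuple-based "AST" are placeholders, not a real design.

def parse_perlish(src):
    # Stand-in for a real Perl-dialect parser; emits the common form.
    return ("common_ast", "perlish", src.strip())

def parse_pythonic(src):
    # Stand-in for a Python-flavored dialect parser.
    return ("common_ast", "pythonic", src.strip())

FRONT_ENDS = {
    "perlish": parse_perlish,
    "pythonic": parse_pythonic,
}

def parse(src, dialect):
    # The "prefilter" here is reduced to an explicit dialect tag;
    # a real system might sniff a pragma or file suffix instead.
    try:
        front_end = FRONT_ENDS[dialect]
    except KeyError:
        raise ValueError("no front end for dialect %r" % dialect)
    return front_end(src)

print(parse("my $x = 1;", "perlish"))
# -> ('common_ast', 'perlish', 'my $x = 1;')
```

The duplication worry above shows up here as everything the front ends would share in practice (symbol handling, error reporting, the common-form builders); the merge point is wherever that shared form is defined.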
I'll go over my reading list again to make sure my words are in place, but
apprentice or not, I see what I believe to be a potential for something to
sneak out and nab us if we don't plan for it in advance at this level.
Or, perhaps a more direct question: has anyone given any thought to how
this multiple-input-style thingy is going to work? Can work? Should work?
p