Gabriel Petrovay wrote:
> Hi Tom,
>
> So in your solution I would have to do something like this:
>
> name1: Name1 | KEYWORD1 | KEYWORD2 | ... KEYWORDM;
> name2: Name2 | KEYWORD1 | KEYWORD2 | ... KEYWORDM;
> ...
> nameN: NameN | KEYWORD1 | KEYWORD2 | ... KEYWORDM;
>
> This is what I meant when I said it does not scale. If you have 50
> keywords, I think the production for each name rule is simply too
> large. Doesn't that make parsing very expensive?
>
> Regards,
> Gabriel
Why would you need n times m entries? What do you mean by name1 ... nameN anyway? If you want to model a variable context of names seen so far, you cannot do it like this. By the way, your Name lexer rule should disallow empty strings.

Don't be scared by the number of keywords. I have a parser that recognizes 800 keywords, and it works. You may encounter strange things like time-outs when ANTLR tries to compute the prediction DFA, though, if the language is ambiguous (which seems to be the case with XQuery). But if you are careful, it should work.

Andreas

List: http://www.antlr.org/mailman/listinfo/antlr-interest
Unsubscribe: http://www.antlr.org/mailman/options/antlr-interest/your-email-address
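To illustrate Andreas's point, here is a minimal ANTLR-style sketch (rule and token names invented for illustration, not from either poster's grammar): the keyword alternatives live in one shared name rule, and every context that accepts a name references that single rule, so the keyword list is written once rather than N times:

```antlr
grammar NameSketch;

// One shared rule: a "name" is either an ordinary identifier
// or any keyword, so keywords remain usable as names.
name : Name | KEYWORD1 | KEYWORD2 | KEYWORDM ;

// Contexts reference the single rule; adding a keyword later
// means editing one rule, not one rule per context.
funcDecl : DECLARE FUNCTION name LPAREN RPAREN ;

DECLARE  : 'declare' ;
FUNCTION : 'function' ;
KEYWORD1 : 'for' ;
KEYWORD2 : 'let' ;
KEYWORDM : 'return' ;
LPAREN   : '(' ;
RPAREN   : ')' ;

// '+' (one or more), not '*', so Name can never match the empty string
Name : ('a'..'z' | 'A'..'Z')+ ;
WS   : (' ' | '\t' | '\r' | '\n')+ { skip(); } ;
```

With this shape, a grammar with 50 keywords has one 51-alternative name rule, not 50 copies of it, which is why the per-rule size Gabriel worries about does not multiply across contexts.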
