On 05/20/2010 10:18 AM, Paolo Bonzini wrote:
> On 05/19/2010 11:43 PM, Anthony Liguori wrote:
>>> 4. Lexer expects a 'terminal' char to process a token
>>>
>>> Which means clients must send a sort of end of line char, so that we
>>> process their input.
>>>
>>> Maybe I'm missing something here, but I thought that the whole
>>> point of writing our own parser was to avoid this.
>>
>> If the lexer gets:
>>
>> "abc"
>>
>> it has no way of knowing whether that's a complete token or whether
>> we're going to get:
>>
>> "abcd"
>
> Only } and ] are valid characters at the end of a JSON object, and
> neither requires lookahead.

Having lookahead operate differently for different states really complicates the lexer. I don't see this as a big problem in practice.
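
To make the ambiguity concrete, here is a minimal sketch of a push-style lexer fed one byte at a time. This is an illustration only, not the actual QEMU QMP lexer; the names (lexer_feed, emit, the two-state enum) are invented for this example. A multi-character token such as a number can only be emitted once a character arrives that cannot extend it, whereas a single-character token such as '}' or ']' is complete on its own:

/* Illustrative sketch only -- not the QEMU JSON lexer. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

enum state { IDLE, IN_NUMBER };

struct lexer {
    enum state state;
    char buf[64];
    size_t len;
};

static void emit(const char *kind, const char *text, size_t len)
{
    printf("token %-6s '%.*s'\n", kind, (int)len, text);
}

/* Feed one character; may emit zero, one, or two tokens. */
static void lexer_feed(struct lexer *lx, char ch)
{
    if (lx->state == IN_NUMBER) {
        if (isdigit((unsigned char)ch) && lx->len < sizeof(lx->buf)) {
            lx->buf[lx->len++] = ch;   /* the token may still grow */
            return;
        }
        /* The first char that cannot extend the number terminates it. */
        emit("number", lx->buf, lx->len);
        lx->state = IDLE;
        lx->len = 0;
    }

    if (isdigit((unsigned char)ch)) {
        /* Cannot emit yet: "12" might turn out to be "123". */
        lx->state = IN_NUMBER;
        lx->buf[lx->len++] = ch;
    } else if (ch == '{' || ch == '}' || ch == '[' || ch == ']') {
        /* Complete on its own, no lookahead needed. */
        emit("punct", &ch, 1);
    }
    /* Everything else is ignored to keep the sketch short. */
}

int main(void)
{
    struct lexer lx = { IDLE, "", 0 };
    const char *input = "[12, 345]";
    size_t i;

    for (i = 0; i < strlen(input); i++) {
        lexer_feed(&lx, input[i]);
    }
    return 0;
}

Note that if the stream stops in the middle of a number, the buffered token is never emitted until another character (or an explicit flush) arrives, which is exactly the "terminal char" behaviour being discussed above.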

Regards,

Anthony Liguori

> Paolo

