On Dec 14, 8:02 am, [EMAIL PROTECTED] wrote:
>
> Lex is very crude. I've found that it takes about half a day to
> organize your token definitions and another half day to write a
> tokenizer by hand. What's the point of the second half-day's work?
>
As someone who has earned a BS in Physics, I hav
[EMAIL PROTECTED] wrote:
>
> Bruno Desthuilliers wrote:
>
>>Then the first move is to carefully eval existing solutions:
>>http://wiki.python.org/moin/LanguageParsing
>
>
> Always good advice, Bruno. How did you come by that list address?
> Google, or is there something special known to Python experts?
Bruno Desthuilliers wrote:
> Then the first move is to carefully eval existing solutions:
> http://wiki.python.org/moin/LanguageParsing
Always good advice, Bruno. How did you come by that list address?
Google, or is there something special known to Python experts?
[EMAIL PROTECTED] wrote:
> Most unclear. My apologies.
>
> I'm trying to structure a tokenizer. The stupid concatenations are
> just placeholders for the actual tokenizing work. By rebuilding the
> input they demonstrate that the framework correctly processes all the
> input.
>
> I'm currently
[EMAIL PROTECTED] wrote:
> Jonathan Gardner said:
>
>> Well, if using something like PLY ( http://www.dabeaz.com/ply/ ) is
>> considered more Pythonic than writing your own parser and lexer...
>
> Lex is very crude.
Possibly. Anyway, there are quite a few other parser generators:
http://wiki.python.org/moin/LanguageParsing
Jonathan Gardner said:
> Well, if using something like PLY ( http://www.dabeaz.com/ply/ ) is
> considered more Pythonic than writing your own parser and lexer...
Lex is very crude. I've found that it takes about half a day to
organize your token definitions and another half day to write a
tokenizer by hand. What's the point of the second half-day's work?
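
For concreteness, here is a minimal sketch of the kind of hand-written
tokenizer being described; the token set and names are invented for
illustration, not taken from the thread:

import re

# Invented token classes, purely for illustration.
TOKEN_SPEC = [
    ('NUMBER', re.compile(r'\d+')),
    ('NAME',   re.compile(r'[A-Za-z_]\w*')),
    ('OP',     re.compile(r'[+\-*/=()]')),
    ('SKIP',   re.compile(r'\s+')),
]

def tokenize(text):
    pos = 0
    while pos < len(text):
        for name, pattern in TOKEN_SPEC:
            m = pattern.match(text, pos)
            if m:
                if name != 'SKIP':        # drop whitespace tokens
                    yield name, m.group()
                pos = m.end()
                break
        else:
            raise SyntaxError('bad character %r at %d' % (text[pos], pos))

for token in tokenize('x = 40 + 2'):
    print token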
Most unclear. My apologies.
I'm trying to structure a tokenizer. The stupid concatenations are
just placeholders for the actual tokenizing work. By rebuilding the
input they demonstrate that the framework correctly processes all the
input.
I'm currently using a C-style design (my own pointers int
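
Reading between the lines of that truncated description, the C-style shape
is presumably something like the following sketch, with integer indexes
standing in for the pointers (all names here are invented):

def scan(line):
    # 'start' and 'cur' are integer indexes playing the role of
    # C pointers into the input buffer.
    tokens = []
    cur = 0
    while cur < len(line):
        if line[cur].isspace():
            cur += 1
            continue
        start = cur
        if line[cur].isdigit():
            while cur < len(line) and line[cur].isdigit():
                cur += 1
            tokens.append(('NUMBER', line[start:cur]))
        else:
            cur += 1
            tokens.append(('CHAR', line[start:cur]))
    return tokens

print scan('12 + 34')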
On Dec 14, 6:32 am, [EMAIL PROTECTED] wrote:
> Thanks to a lot of help, I've got the outer framework for my tokenizer
> down to this:
>
> for line_number, line in enumerate(text):
>     output = ''
>
>     for char_number, char in enumerate(line):
>         output += char
>
>
On Dec 13, 11:32 am, [EMAIL PROTECTED] wrote:
> Is there a pythonic design I'm overlooking?
Well, if using something like PLY ( http://www.dabeaz.com/ply/ ) is
considered more Pythonic than writing your own parser and lexer...
Python doesn't have all of life's answers unfortunately.
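
For anyone who hasn't seen it, a PLY lexer looks roughly like this; the
token set below is made up, but the module-level conventions (tokens,
t_* rules, t_error) are PLY's documented API:

import ply.lex as lex

# Made-up token set; PLY collects these module-level definitions.
tokens = ('NUMBER', 'PLUS', 'NAME')

t_PLUS = r'\+'
t_NAME = r'[A-Za-z_]\w*'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    print "Illegal character %r" % t.value[0]
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input('x + 42')
for tok in lexer:
    print tok.type, tok.value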
Thanks to a lot of help, I've got the outer framework for my tokenizer
down to this:
for line_number, line in enumerate(text):
    output = ''

    for char_number, char in enumerate(line):
        output += char

    print 'At ' + str(line_number) + ', ' + str(char_number) + ': ' + output
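
One Pythonic reshaping of that outer loop (a sketch, not something proposed
in the thread) is to let a regular expression do the character walking and
turn the framework into a generator:

import re

# Hypothetical token pattern; the named groups become token types.
TOKEN_RE = re.compile(r'(?P<NUMBER>\d+)|(?P<NAME>[A-Za-z_]\w*)|(?P<OP>\S)')

def tokens(text):
    for line_number, line in enumerate(text):
        for m in TOKEN_RE.finditer(line):
            yield line_number, m.start(), m.lastgroup, m.group()

for line_number, char_number, kind, value in tokens(['x = 42\n']):
    print 'At %d, %d: %s %r' % (line_number, char_number, kind, value)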