John Machin wrote:

> Single-character tokens like "<" may be more efficiently handled by
> doing a dict lookup after failing to find a match in the list of
> (name, regex) tuples.

Yes, I will keep that in mind. For the time being I will use only regexes to keep the code simpler. Later, or when the need for a speedup arises, I can apply your suggestion to optimize the code. Depending on the tokens, it might even be better to do the dict lookup right away and fall back to the regexes only for the more complex tokens.
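Roughly what I have in mind, as a sketch only -- the token names and patterns below are placeholders, not my real grammar:

import re

# Placeholder token definitions; the real grammar will differ.
SINGLE_CHAR_TOKENS = {'<': 'LT', '>': 'GT', '(': 'LPAREN', ')': 'RPAREN'}
REGEX_TOKENS = [
    ('NUMBER', re.compile(r'\d+')),
    ('NAME',   re.compile(r'[A-Za-z_]\w*')),
]

def next_token(text, pos):
    # Assumes pos < len(text); end-of-input handling is left out.
    # Cheap dict lookup first for one-character tokens ...
    name = SINGLE_CHAR_TOKENS.get(text[pos])
    if name is not None:
        return name, text[pos], pos + 1
    # ... then the (name, regex) list for everything else.
    for name, regex in REGEX_TOKENS:
        m = regex.match(text, pos)
        if m:
            return name, m.group(), m.end()
    raise SyntaxError('no token matches at position %d' % pos)

Whether the dict-first variant actually pays off is something I would only decide after profiling.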

> [Mutually exclusive tokens = no ambiguities = same input -> same output]
> So what? That is useless knowledge.

For the lexer, perhaps. Not for the user. An ambiguous lexer will be of no use.

> It is the ambiguous cases that you
> need to be concerned with.

Exactly. In what way does that contradict what I said?
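Maybe an example helps to make sure we mean the same thing. An ambiguity like the one below (with invented LE/LT tokens) is exactly what I rule out by keeping the tokens mutually exclusive; otherwise the order of the (name, regex) list silently decides the outcome:

import re

# Made-up tokens: "<=" and "<" overlap on the input "<=".
tokens = [
    ('LE', re.compile(r'<=')),
    ('LT', re.compile(r'<')),
]

def first_match(text, pos=0):
    # The first pattern in the list that matches wins.
    for name, regex in tokens:
        m = regex.match(text, pos)
        if m:
            return name, m.group()
    return None

print(first_match('<='))   # ('LE', '<=') with this ordering; swap the two
                           # entries and it becomes ('LT', '<') instead.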

> [Using dict]
> No, not at all. The point is that you were not *using* any of the
> mapping functionality of the dict object, only ancillary methods like
> iteritems -- hence, you should not have been using a dict at all.

I /could/ have done it with a list of tuples. I used no functionality that /only/ a dict provides. So using a dict here is like using a truck to transport a single sheet of paper?
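To make that concrete (with invented token names), this is all the dict was doing for me, and a list of tuples does the same job:

# What I was doing: only iterating over the dict, never looking anything up.
token_defs = {'NUMBER': r'\d+', 'NAME': r'[A-Za-z_]\w*'}
for name, pattern in token_defs.items():   # what I used iteritems() for
    pass  # compile pattern, build the token list, ...

# The same with a list of (name, pattern) tuples; it also keeps the
# definition order, which the dict did not.
token_defs = [('NUMBER', r'\d+'), ('NAME', r'[A-Za-z_]\w*')]
for name, pattern in token_defs:
    pass

The ordering point also ties in with the ambiguity question above: with the list I control which pattern is tried first.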

Greetings,
Thomas

--
Just because there are many of them who are wrong does not mean they are right!
(Coluche)
