On Sun, 18 Nov 2007 21:46:49 -0800, MonkeeSage wrote:
> As I see it, just as a matter of common sense, there will be no way to
> match the performance of the backend eval() with any interpreted code.
> At best, performance-wise, a preprocessor for the built-in eval() would
> be in order, filtering out the "unsafe" cases and passing the rest
> through.
On Sun, 18 Nov 2007 22:24:39 -0300, greg <[EMAIL PROTECTED]> wrote:
> Importing the names from tokenize that you use repeatedly
> should save some time, too.
> from tokenize import STRING, NUMBER
>
> If you were willing to indulge in some default-argument abuse, you
> could also do
>
> def ...
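For illustration, here is a minimal sketch of the default-argument idiom greg is hinting at; the parse_atom helper and its body are assumptions, since his actual definition is cut off above. Binding the module-level constants as default arguments makes them local variables inside the function, so each comparison skips a global name lookup.

    from tokenize import STRING, NUMBER

    def parse_atom(toktype, tokstring, STRING=STRING, NUMBER=NUMBER):
        # STRING and NUMBER are now locals, looked up with LOAD_FAST
        # instead of a global lookup on every call.
        if toktype == STRING:
            return tokstring[1:-1]   # crude: just strip the quote characters
        elif toktype == NUMBER:
            try:
                return int(tokstring)
            except ValueError:
                return float(tokstring)
        raise ValueError("unexpected token: %r" % (tokstring,))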
As I see it, just as a matter of common sense, there will be no way to
match the performance of the backend eval() with any interpreted code.
At best, performance-wise, a preprocessor for the built-in eval()
would be in order, filtering out the "unsafe" cases and passing the
rest through. But what ...
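A rough sketch of that "pre-filter, then hand off to the built-in eval()" idea follows; the whitelist of token types and the function name safe_literal_eval are assumptions, not MonkeeSage's code, and a real filter would need a tighter check than this.

    import tokenize
    from io import StringIO

    # Token types considered harmless for literal data; NAME is deliberately
    # excluded, so no identifiers (and hence no calls) reach eval().
    ALLOWED = {tokenize.STRING, tokenize.NUMBER, tokenize.OP,
               tokenize.NEWLINE, tokenize.NL, tokenize.ENDMARKER}

    def safe_literal_eval(source):
        for tok in tokenize.generate_tokens(StringIO(source).readline):
            if tok.type not in ALLOWED:
                raise ValueError("disallowed token: %r" % (tok.string,))
        return eval(source, {"__builtins__": {}}, {})

With that filter, safe_literal_eval("{'a': [1, 2.5]}") evaluates normally, while something like "__import__('os')" is rejected because the NAME tokens never make it to eval().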
On Nov 18, 8:24 pm, greg <[EMAIL PROTECTED]> wrote:
> Tor Erik Sønvisen wrote:
> > Comments, speedups, improvements in general, etc are appreciated.
>
> You're doing a lot of repeated indexing of token[0]
> and token[1] in your elif branches. You might gain some
> speed by fetching these into locals before entering the
> elif chain.
Tor Erik Sønvisen wrote:
> Comments, speedups, improvements in general, etc are appreciated.
You're doing a lot of repeated indexing of token[0]
and token[1] in your elif branches. You might gain some
speed by fetching these into locals before entering the
elif chain.
Also you could try ordering the elif branches so that the most
frequently occurring token types are tested first.
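A sketch of both suggestions applied to a token loop; Tor Erik's original function isn't shown here, so the surrounding structure (and the assumed frequency ordering) is illustrative only.

    from tokenize import STRING, NUMBER, OP, NAME, generate_tokens
    from io import StringIO

    def count_tokens(source):
        counts = {"ops": 0, "strings": 0, "numbers": 0, "names": 0}
        for token in generate_tokens(StringIO(source).readline):
            # Fetch the indexed fields into locals once, instead of
            # re-indexing token[0] / token[1] in every elif test.
            toktype, tokstring = token[0], token[1]
            # Branches ordered with the (assumed) most frequent cases first.
            if toktype == OP:
                counts["ops"] += 1
            elif toktype == STRING:
                counts["strings"] += 1
            elif toktype == NUMBER:
                counts["numbers"] += 1
            elif toktype == NAME:
                counts["names"] += 1
        return counts

The locals help because tuple indexing costs a subscript operation each time it appears, while reading toktype and tokstring after the one-time unpacking is a plain local-variable load.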