embedding jython in CPython...

2005-01-22 Thread Jim Hargrave
I've read that it is possible to compile jython to native code using 
GCJ. PyLucene uses this approach; they then use SWIG to create a Python 
wrapper around the natively compiled (Java) Lucene. Has this been done 
before with jython?

Another approach would be to use JPype to call the jython jar directly.
My goal is to be able to script Java code using Jython - but with the 
twist of using CPython as a glue layer. This would allow mixing of Java 
and non-Java resources - but still do it all in Python (Jython and CPython).

I'd appreciate any pointers to this topic and pros/cons of the various 
methods.

--
http://mail.python.org/mailman/listinfo/python-list


Comments in configuration files

2005-01-22 Thread Pierre Quentel
Bonjour,
I am developing an application and I have a configuration file with a 
lot of comments to help the application users understand what the 
options mean

I would like it to be editable, through a web browser or a GUI 
application. With ConfigParser I can read the configuration file and 
edit the options, but when I write the result all the comments are lost
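
(A minimal sketch of the problem, with a made-up app.ini that has a [main]
section and some "#" comment lines -- ConfigParser keeps the values but
silently drops every comment on write:)

import ConfigParser                 # "configparser" in Python 3

cfg = ConfigParser.ConfigParser()
cfg.read('app.ini')                 # app.ini contains helpful "#" comments
cfg.set('main', 'debug', 'yes')
cfg.write(open('app.ini', 'w'))     # values survive, comments are gone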

Are there modules that work on the same kind of ini files (for the needs 
of my application, I prefer this format to XML or YAML) and don't remove 
the comments ?

TIA,
Pierre
--
http://mail.python.org/mailman/listinfo/python-list


Re: default value in a list

2005-01-22 Thread Alex Martelli
TB <[EMAIL PROTECTED]> wrote:

> Is there an elegant way to assign to a list from a list of unknown
> size?  For example, how could you do something like:
> 
> >>>  a, b, c = (line.split(':'))
> if line could have less than three fields?

import itertools as it

a, b, c = it.islice(
    it.chain(
        line.split(':'),
        it.repeat(some_default),
    ),
    3)

I find itertools-based solutions to be generally quite elegant. 

This one assumes you want to assign some_default to variables in the LHS
target beyond the length of the RHS list.  If what you want is to repeat
the RHS list over and over instead, this simplifies the first argument
of islice:

a, b, c = it.islice(it.cycle(line.split(':')), 3)

Of course, you can always choose to write your own generator instead of
building it up with itertools.  itertools solutions tend to be faster,
and I think it's good to get familiar with that precious module, but
without such familiarity many readers may find a specially coded
generator easier to follow.  E.g.:

def pad_with_default(N, iterable, default=None):
    it = iter(iterable)
    for x in it:
        if N <= 0: break
        yield x
        N -= 1
    while N > 0:
        yield default
        N -= 1

a, b, c = pad_with_default(3, line.split(':'))


The itertools-based solution hinges on a higher level of abstraction,
glueing and tweaking iterators as "atoms"; the innards of a custom coded
generator tend to be programmed at a lower level of abstraction,
reasoning in item-by-item mode.  There are pluses and minuses to each
approach; I think in the long range higher abstraction pays off, so it's
worth the investment to train yourself to use itertools.


In the Python Cookbook new 2nd edition, due out in a couple months,
we've added a whole new chapter about iterators and generators, since
it's such a major subfield in today's Python (as evidenced by the wealth
of recipes submitted to Activestate's online cookbook sites on the
subject).  A couple of recipes have to do with multiple unpacking
assignment -- one of them, in particular, is an evil hack which peers
into the caller's bytecode to find out how many items are on the LHS, so
you don't have to pass that '3' explicitly.  I guess that might be
considered "elegant", for suitably contorted meanings of "elegant"...
it's on the site, too, but I don't have the URL at hand.  It's
instructive, anyway, but I wouldn't suggest actually using it in any
kind of production code...


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Alex Martelli
Dave Benjamin <[EMAIL PROTECTED]> wrote:

> Can we get a show of hands for all of those who have written or are 
> currently maintaining code that uses the leaky listcomp "feature"?

"Have written": guilty -- basically to show how NOT to do things.
"Currently maintaining": you _gotta_ be kidding!-)


> I guess I've been peripherally aware of it, but I almost always use 
> names like "x" for my loop variables, and never refer to them 
> afterwards. If Python were to change in this regard, I don't think it
> would break any Python code that I've ever written or maintained...

If it changed the semantics of for-loops in general, that would be quite
inconvenient to me -- once in a while I do rely on Python's semantics
(maintaining the loop control variable after a break; I don't recall if
I ever used the fact that the variable is also maintained upon normal
termination).

(musing...): I think the reason there's no real use case for using a
listcomp's control variable afterwards is connected to this distinction:
listcomps have no `break'...


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: list unpack trick?

2005-01-22 Thread Alex Martelli
Fredrik Lundh <[EMAIL PROTECTED]> wrote:
   ...
> or (readable):
> 
> if len(list) < n:
> list.extend((n - len(list)) * [item])

I find it just as readable without the redundant if guard -- just:

alist.extend((n - len(alist)) * [item])

of course, this guard-less version depends on N*[x] being the empty list
when N<=0, but AFAIK that's always been the case in Python (and has
always struck me as a nicely intuitive semantics for that * operator).

itertools-lovers may prefer:

alist.extend(itertools.repeat(item, n-len(alist)))

a bit less concise but nice in its own way (itertools.repeat gives an
empty iterator when its 2nd argument is <=0, of course).
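
For example, a quick interactive check of both behaviors:

>>> -2 * ['item']
[]
>>> import itertools
>>> list(itertools.repeat('item', -2))
[]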


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reload Tricks

2005-01-22 Thread Alex Martelli
Kamilche <[EMAIL PROTECTED]> wrote:

> I want my program to be able to reload its code dynamically. I have a
> large hierarchy of objects in memory. The inheritance hierarchy of
> these objects are scattered over several files.

Michael Hudson has a nice custom metaclass for that in Activestate's
online cookbook -- I made some enhancements to it as I edited it for the
forthcoming 2nd edition of the cookbook (due out in a couple of months),
but the key ideas are in the online version too (sorry, no URL at hand).


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default value in a list

2005-01-22 Thread Peter Otten
Paul McGuire wrote:

>> Is there an elegant way to assign to a list from a list of unknown
>> size?  For example, how could you do something like:
>>
>> >>>  a, b, c = (line.split(':'))
>> if line could have less than three fields?

> I asked a very similar question a few weeks ago, and from the various
> suggestions, I came up with this:
> 
> line = ":BBB"
> expand = lambda lst,default,minlen : (lst + [default]*minlen)[0:minlen]
> a,b,c = expand( line.split(":"), "", 3 )

Here is an expand() variant that is not restricted to lists but works with
arbitrary iterables:

from itertools import chain, repeat, islice

def expand(iterable, length, default=None):
    return islice(chain(iterable, repeat(default)), length)

Peter




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Alex Martelli
Francis Girard <[EMAIL PROTECTED]> wrote:
   ...
> But besides the fact that generators are either produced with the new "yield"
> reserved word or by defining the __new__ method in a class definition, I
> don't know much about them.

Having __new__ in a class definition has nothing much to do with
generators; it has to do with how the class is instantiated when you
call it.  Perhaps you mean 'next' (and __iter__)?  That makes instances
of the class iterators, just like iterators are what you get when you
call a generator.

> In particular, I don't know what Python constructs does generate a generator.

A 'def' of a function whose body uses 'yield', and in 2.4 the new genexp
construct.

> I know this is now the case for reading lines in a file or with the new
> "iterator" package.

Nope, besides the fact that the module you're thinking of is named
'itertools': itertools uses a lot of C-coded special types, which are
iterators but not generators.  Similarly, a file object is an iterator
but not a generator.

> But what else ?

Since you appear to conflate generators and iterators, I guess the iter
built-in function is the main one you missed.  iter(x), for any x,
either raises an exception (if x's type is not iterable) or else returns
an iterator.
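
A tiny interactive sketch of just that:

>>> it = iter([1, 2, 3])     # any iterable will do
>>> it.next()                # iterators support .next() (Python 2)
1
>>> iter(it) is it           # iter() of an iterator is the iterator itself
True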

> Does Craig Ringer answer mean that list 
> comprehensions are lazy ?

Nope, those were generator expressions.

> Where can I find a comprehensive list of all the 
> lazy constructions built in Python ?

That's yet a different question -- at least one needs to add the
built-in xrange, which is neither an iterator nor a generator but IS
lazy (a historical artefact, admittedly).

But fortunately Python's built-ins are not all THAT many, so that's
about it.

> (I think that to easily distinguish lazy 
> from strict constructs is an absolute programmer need -- otherwise you always
> end up wondering when is it that code is actually executed like in Haskell).

Encapsulation doesn't let you "easily distinguish" issues of
implementation.  For example, the fact that a file is an iterator (its
items being its lines) doesn't tell you if that's internally implemented
in a lazy or eager way -- it tells you that you can code afile.next() to
get the next line, or "for line in afile:" to loop over them, but does
not tell you whether the code for the file object is reading each line
just when you ask for it, or whether it reads all lines before and just
keeps some state about the next one, or somewhere in between.

The answer for the current implementation, BTW, is "in between" -- some
buffering, but bounded consumption of memory -- but whether that tidbit
of pragmatics is part of the file specs, heh, that's anything but clear
(just as for other important tidbits of Python pragmatics, such as the
facts that list.sort is wickedly fast, 'x in alist' isn't, 'x in adict'
IS...).
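
For instance, a rough way to see the alist-vs-adict membership difference
for yourself (sizes and numbers here are only illustrative):

$ python -mtimeit -s "xs = range(10000)" "9999 in xs"
$ python -mtimeit -s "d = dict.fromkeys(range(10000))" "9999 in d"

The second command reports a per-loop time orders of magnitude smaller,
because dict membership is a hash lookup rather than a linear scan.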


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


introducing a newbie to newsgroups

2005-01-22 Thread Reed L. O'Brien
Super Sorry for the extra traffic. ;-)
--
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Alex Martelli
Nick Coghlan <[EMAIL PROTECTED]> wrote:

> 5. Several builtin functions return iterators rather than lists, specifically
> xrange(), enumerate() and reversed(). Other builtins that yield sequences
> (range(), sorted(), zip()) return lists.

Yes for enumerate and reversed, no for xrange:

>>> xx=xrange(7)
>>> xx.next()
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: 'xrange' object has no attribute 'next'
>>> 

it SHOULD return an iterator, no doubt, but it doesn't (can't, for
backwards compatibility reasons).  Neither does it return a list: it
returns "an `xrange' object", a specialized type that's not an iterator,
though it's iterable.  It's a type, btw:

>>> xrange
<type 'xrange'>
>>> 

so it's not surprising that calling it returns instances of it
(enumerate and reversed are also types, but *WITH* 'next'...).
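
A quick sketch of that distinction -- iterable, but not itself an iterator:

>>> xx = xrange(3)
>>> it = iter(xx)       # iter() hands back a real iterator over it
>>> it.next()
0
>>> it.next()
1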


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default value in a list

2005-01-22 Thread Peter Otten
Peter Otten wrote:

> Paul McGuire wrote:
> 
>>> Is there an elegant way to assign to a list from a list of unknown
>>> size?  For example, how could you do something like:
>>>
>>> >>>  a, b, c = (line.split(':'))
>>> if line could have less than three fields?
> 
>> I asked a very similar question a few weeks ago, and from the various
>> suggestions, I came up with this:
>> 
>> line = ":BBB"
>> expand = lambda lst,default,minlen : (lst + [default]*minlen)[0:minlen]
>> a,b,c = expand( line.split(":"), "", 3 )
> 
> Here is an expand() variant that is not restricted to lists but works with
> arbitrary iterables:
> 
> from itertools import chain, repeat, islice
> 
> def expand(iterable, length, default=None):
> return islice(chain(iterable, repeat(default)), length)

Also nice, IMHO, is allowing individual defaults for different positions in
the tuple:

>>> def expand(items, defaults):
...     if len(items) >= len(defaults):
...         return items[:len(defaults)]
...     return items + defaults[len(items):]
...
>>> expand((1, 2, 3), (10, 20, 30, 40))
(1, 2, 3, 40)
>>> expand((1, 2, 3), (10, 20))
(1, 2)

Peter

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Nick Craig-Wood
Paul Rubin  wrote:
>  Here's the message I had in mind:
> 
>   http://groups-beta.google.com/group/comp.lang.python/msg/adfbec9f4d7300cc
> 
>  It came from someone who follows Python crypto issues as closely as
>  anyone, and refers to a consensus on python-dev.  I'm not on python-dev
>  myself but I feel that the author of that message is credible and is
>  not just "anyone".

And here is the relevant part...

"A.M. Kuchling" wrote:
> On Fri, 27 Feb 2004 11:01:08 -0800 Trevor Perrin wrote:
> > Are you and Paul still looking at adding ciphers to stdlib? That would
> > make me really, really happy :-)
> 
> No, unfortunately; the python-dev consensus was that encryption raised
> export control issues, and the existing rotor module is now on its way to
> being removed.

I'm sure that's wrong nowadays.  Here are some examples of open
source software with strong crypto:

  Linux kernel: http://www.kernel.org/
  GNU crypto project: http://www.gnu.org/software/gnu-crypto/
  TrueCrypt: http://truecrypt.sourceforge.net/
  OpenSSL: http://www.openssl.org/
  AEScrypt: http://aescrypt.sourceforge.net/
  

Note that some of these are being worked on at sourceforge just like
python.

Surely it must be possible to add a few simple crypto modules to
python?

That said
a) IANAL
b) 'apt-get install python-crypto' works for me ;-)

-- 
Nick Craig-Wood <[EMAIL PROTECTED]> -- http://www.craig-wood.com/nick
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Craig Ringer
On Sat, 2005-01-22 at 10:10 +0100, Alex Martelli wrote:

> The answer for the current implementation, BTW, is "in between" -- some
> buffering, but bounded consumption of memory -- but whether that tidbit
> of pragmatics is part of the file specs, heh, that's anything but clear
> (just as for other important tidbits of Python pragmatics, such as the
> facts that list.sort is wickedly fast, 'x in alist' isn't, 'x in adict'
> IS...).

A particularly great example when it comes to unexpected buffering
effects is the file iterator. Take code that reads a header from a file
using an (implicit) iterator, then tries to read() the rest of the file.
Taking the example of reading an RFC822-like message into a list of
headers and a body blob:

.>>> inpath = '/tmp/msg.eml'
.>>> infile = open(inpath)
.>>> headers = []
.>>> for line in infile:
         if not line.strip():
             break
         headers.append(tuple(line.split(':',1)))
.>>> body = infile.read()


(By the way, if you ever implement this yourself for real, you should
probably be hurt - use the 'email' or 'rfc822' modules instead. For one
thing, reinventing the wheel is rarely a good idea. For another, the
above code is horrid - in particular it doesn't handle malformed headers
at all, isn't big on readability/comments, etc.)

If you run the above code on a saved email message, you'd expect 'body'
to contain the body of the message, right? Nope. The iterator created
from the file when you use it in that for loop does internal read-ahead
for efficiency, and has already read in the entire file or at least a
chunk more of it than you've read out of the iterator. It doesn't
attempt to hide this from the programmer, so the file position marker is
further into the file (possibly at the end on a smaller file) than you'd
expect given the data you've actually read in your program.
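
A small sketch of the effect (same hypothetical /tmp/msg.eml as above; the
exact position reported will vary with the buffering):

.>>> f = open('/tmp/msg.eml')
.>>> line = f.next()    # pull a single line through the iterator
.>>> f.tell()           # typically reports a position well past the end of
                        # that line, because the iterator has buffered ahead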

I'd be interested to know if there's a better solution to this than:

.>>> inpath = '/tmp/msg.eml'
.>>> infile = open(inpath)
.>>> initer = iter(infile)
.>>> headers = []
.>>> for line in initer:
         if not line.strip():
             break
         headers.append(tuple(line.split(':',1)))
.>>> data = ''.join(x for x in initer)

because that seems like a pretty ugly hack (and please ignore the
variable names). Perhaps a way to get the file to seek back to the point
last read from the iterator when the iterator is destroyed?

-- 
Craig Ringer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Francis Girard
On Saturday 22 January 2005 at 10:10, Alex Martelli wrote:
> Francis Girard <[EMAIL PROTECTED]> wrote:
>...
>
> > But besides the fact that generators are either produced with the new
> > "yield" reserved word or by defining the __new__ method in a class
> > definition, I don't know much about them.
>
> Having __new__ in a class definition has nothing much to do with
> generators; it has to do with how the class is instantiated when you
> call it.  Perhaps you mean 'next' (and __iter__)?  That makes instances
> of the class iterators, just like iterators are what you get when you
> call a generator.
>

Yes, I meant "next".


> > In particular, I don't know what Python constructs does generate a
> > generator.
>
> A 'def' of a function whose body uses 'yield', and in 2.4 the new genexp
> construct.
>

Ok. I guess I'll have to update to version 2.4 (from 2.3) to follow the 
discussion.

> > I know this is now the case for reading lines in a file or with the new
> > "iterator" package.
>
> Nope, besides the fact that the module you're thinking of is named
> 'itertools': itertools uses a lot of C-coded special types, which are
> iterators but not generators.  Similarly, a file object is an iterator
> but not a generator.
>
> > But what else ?
>
> Since you appear to conflate generators and iterators, I guess the iter
> built-in function is the main one you missed.  iter(x), for any x,
> either raises an exception (if x's type is not iterable) or else returns
> an iterator.
>

You're absolutely right, I take the one for the other and vice versa. If I 
understand correctly, a "generator" produces something over which you can 
iterate with the help of an "iterator". Can you iterate (in the strict sense 
of an "iterator") over something not generated by a "generator"?


> > Does Craig Ringer answer mean that list
> > comprehensions are lazy ?
>
> Nope, those were generator expressions.
>
> > Where can I find a comprehensive list of all the
> > lazy constructions built in Python ?
>
> That's yet a different question -- at least one needs to add the
> built-in xrange, which is neither an iterator nor a generator but IS
> lazy (a historical artefact, admittedly).
>
> But fortunately Python's built-ins are not all THAT many, so that's
> about it.
>
> > (I think that to easily distinguish lazy
> > from strict constructs is an absolute programmer need -- otherwise you
> > always end up wondering when is it that code is actually executed like in
> > Haskell).
>
> Encapsulation doesn't let you "easily distinguish" issues of
> implementation.  For example, the fact that a file is an iterator (its
> items being its lines) doesn't tell you if that's internally implemented
> in a lazy or eager way -- it tells you that you can code afile.next() to
> get the next line, or "for line in afile:" to loop over them, but does
> not tell you whether the code for the file object is reading each line
> just when you ask for it, or whether it reads all lines before and just
> keeps some state about the next one, or somewhere in between.
>

You're right. I was much more talking (mistakenly) about lazy evaluation of 
the arguments to a function (i.e. the function begins execution before its 
arguments get evaluated) -- in such a case I think it should be specified 
which arguments are "strict" and which are "lazy" -- but I don't think 
there's such a thing in Python (... well, not yet, as Python gets more and more 
akin to FP).

> The answer for the current implementation, BTW, is "in between" -- some
> buffering, but bounded consumption of memory -- but whether that tidbit
> of pragmatics is part of the file specs, heh, that's anything but clear
> (just as for other important tidbits of Python pragmatics, such as the
> facts that list.sort is wickedly fast, 'x in alist' isn't, 'x in adict'
> IS...).
>
>
> Alex

Thank you

Francis Girard
FRANCE

--
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Craig Ringer
On Sat, 2005-01-22 at 17:46 +0800, I wrote:

> I'd be interested to know if there's a better solution to this than:
> 
> .>>> inpath = '/tmp/msg.eml'
> .>>> infile = open(inpath)
> .>>> initer = iter(infile)
> .>>> headers = []
> .>>> for line in initer:
>  if not line.strip():
>  break
>  headers.append(tuple(line.split(':',1)))
> .>>> data = ''.join(x for x in initer)
> 
> because that seems like a pretty ugly hack (and please ignore the
> variable names). Perhaps a way to get the file to seek back to the point
> last read from the iterator when the iterator is destroyed?

And now, answering my own question (sorry):

Answer: http://tinyurl.com/6avdt

so we can slightly simplify the above to:

.>>> inpath = '/tmp/msg.eml'
.>>> infile = open(inpath)
.>>> headers = []
.>>> for line in infile:
         if not line.strip():
             break
         headers.append(tuple(line.split(':',1)))
.>>> data = ''.join(x for x in infile)

at the cost of making it less clear what's going on and having someone
later go "duh, why isn't he using read() here instead?" - but I can't seem to
do much more than that.

Might it be worth providing a way to have file objects seek back to the
current position of the iterator when read() etc are called? If not, I
favour the suggestion in the referenced post - file should probably fail
noisily, or at least emit a warning.

What are others thoughts on this?

-- 
Craig Ringer

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: list unpack trick?

2005-01-22 Thread Fredrik Lundh
Alex Martelli wrote:

>> or (readable):
>>
>> if len(list) < n:
>> list.extend((n - len(list)) * [item])
>
> I find it just as readable without the redundant if guard -- just:
>
>alist.extend((n - len(alist)) * [item])

the guard makes it obvious what's going on, also for a reader that doesn't
know exactly how "*" behaves for negative counts.  once you've seen the
"compare length to limit" and "extend", you don't have to parse the rest of
the statement.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: finding name of instances created

2005-01-22 Thread Alex Martelli
André Roberge <[EMAIL PROTECTED]> wrote:

> alex = CreateRobot()
> anna = CreateRobot()
> 
> alex.move()
> anna.move()

Hmmm -- while I've long since been identified as a 'bot, I can assure
you that my wife Anna isn't!


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default value in a list

2005-01-22 Thread Nick Craig-Wood
TB <[EMAIL PROTECTED]> wrote:
>  Is there an elegant way to assign to a list from a list of unknown
>  size?  For example, how could you do something like:
> 
> >>>  a, b, c = (line.split(':'))
>  if line could have less than three fields?

You could use this old trick...

  a, b, c = (line+"::").split(':')[:3]

Or this version if you want something other than "" as the default

  a, b, b = (line.split(':') + 3*[None])[:3]

BTW This is a feature I miss from perl...

-- 
Nick Craig-Wood <[EMAIL PROTECTED]> -- http://www.craig-wood.com/nick
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class introspection and dynamically determining function arguments

2005-01-22 Thread Bengt Richter
On Fri, 21 Jan 2005 20:23:58 -0500, "Mike C. Fletcher" <[EMAIL PROTECTED]> 
wrote:

>
>>On Thu, 20 Jan 2005 11:24:12 -, "Mark English" <[EMAIL PROTECTED]> wrote:
>>
>>  
>>
>>>I'd like to write a Tkinter app which, given a class, pops up a
>>>window(s) with fields for each "attribute" of that class. The user could
>>>enter values for the attributes and on closing the window would be
>>>returned an instance of the class. The actual application I'm interested
>>>in writing would either have simple type attributes (int, string, etc.),
>>>or attributes using types already defined in a c-extension, although I'd
>>>prefer not to restrict the functionality to these requirements.
>>>
>>>
>Hmm, I missed the original post, but I'll jump in anyway:
>
>This sounds a heck of a lot like a property-editing system.  When
>creating a property-modeled system, the best approach is normally to
>use something that actually models the properties, rather than
>trying to guess at the metadata involved by poking around in an
>arbitrarily structured object.
I agree that "poking around in an arbitrarily structured object" is not
a likely road to satisfaction, but sometimes the arbitrary can be taken out
and something pretty simple can be done ;-)

OTOH, I probably should have mentioned that there are ways to view these
kinds of problems from a longer perspective. E.g., I googled for
"model view controller" and found a nice wiki page at

http://wact.sourceforge.net/index.php/ModelViewController

that may be worth reading for the OP, just for ideas. There is interesting 
discussion
of many MVC-related issues, but I don't know anything about the associated 
project.

>
>My BasicProperty system allows for this kind of interaction
>(property-sheets) using wxPython (rather than Tkinter) when using
>wxoo.  You declare classes as having a set of data-properties (which
>can have defaults or not, constraints or not, restricted data-types
>or not, friendly names or not, documentation or not).  Normally you
>create these classes as subclasses of a class that knows how to
>automatically assign __init__ parameters to properties, and knows
>how to tell (e.g.) wxoo about the properties of the class.
Does the BasicProperty base class effectively register itself as an observer
of subclass properties and automatically update widgets etc., a la Delphi
data-driven visual components? I've thought of doing a light-weight form
extension class that would use a text (maybe CSV) definition to control
contruction, and easy programmatic manipulation by python of the definition
parameters, like a stripped-down version of the text view of Delphi forms.
It could also be done via Tkinter, to prototype it. It would be interesting
to allow dragging widgets and edges around in Tkinter and round-trip the 
parameter
changes automatically into the text representation. A little (well, ok, a fair 
amount ;-)
further and you'd have a drag-n-drop GUI design tool. But don't hold your 
breath ;-)

>
>Those same property classes also allow for editing properties of
>database rows in PyTable, but that isn't likely relevant to your
>case.  We've also used them internally to create a rather large
>web-based property-editing mechanism (applied to such databases),
>but again, not really relevant to the case at hand.
Who knows, the OP may only be revealing his concerns about a small part of
his great tapestry ;-)

>
>Anyway, if you aren't interested in BasicProperty for this task; another 
>project on which I work, PyDispatcher provides fairly robust mechanism 
>(called robustApply) for providing a set of possible arguments and using 
>inspect to pick out which names match the parameters for a function in 
>order to pass them in to the function/method/callable object.  That 
>said, doing this for __init__'s with attribute values from an object's 
>dictionary doesn't really seem like the proper way to approach the problem.
Sounds like a workaround for parameter passing that maybe should have been
keyword-based?

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


re Insanity

2005-01-22 Thread Tim Daneliuk
For some reason, I am having the hardest time doing something that should
be obvious.  (Note time of posting ;)
Given an arbitrary string, I want to find each individual instance of
text in the form:  "[PROMPT:optional text]"
I tried this:
y=re.compile(r'\[PROMPT:.*\]')
Which works fine when the text is exactly "[PROMPT:whatever]" but
does not match on:
   "something [PROMPT:foo] something [PROMPT:bar] something ..."
The overall goal is to identify the beginning and end of each [PROMPT...]
string in the line.
Ideas anyone?
--

Tim Daneliuk [EMAIL PROTECTED]
PGP Key: http://www.tundraware.com/PGP/
--
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Alex Martelli
Francis Girard <[EMAIL PROTECTED]> wrote:
   ...
> > A 'def' of a function whose body uses 'yield', and in 2.4 the new genexp
> > construct.
> 
> Ok. I guess I'll have to update to version 2.4 (from 2.3) to follow the
> discussion.

It's worth upgrading even just for the extra speed;-).


> > Since you appear to conflate generators and iterators, I guess the iter
> > built-in function is the main one you missed.  iter(x), for any x,
> > either raises an exception (if x's type is not iterable) or else returns
> > an iterator.
> 
> You're absolutely right, I take the one for the other and vice versa. If I
> understand correctly, a "generator" produces something over which you can
> iterate with the help of an "iterator". Can you iterate (in the strict sense
> of an "iterator") over something not generated by a "generator"?

A generator function (commonly known as a generator), each time you call
it, produces a generator object AKA a generator-iterator.  To wit:

>>> def f(): yield 23
... 
>>> f
<function f at 0x...>
>>> x = f()
>>> x
<generator object at 0x...>
>>> type(x)
<type 'generator'>

A generator expression (genexp) also has a result which is a generator
object:

>>> x = (23 for __ in [0])
>>> type(x)
<type 'generator'>


Iterators need not be generator-iterators, by any means.  Generally, the
way to make sure you have an iterator is to call iter(...) on something;
if the something was already an iterator, NP, then iter's idempotent:

>>> iter(x) is x
True


That's what "an iterator" means: some object x such that x.next is
callable without arguments and iter(x) is x.

Since iter(x) tries calling type(x).__iter__(x) [[slight simplification
here by ignoring custom metaclasses, see recent discussion on python-dev
as to why this is only 99% accurate, not 100% accurate]], one way to
code an iterator is as a class.  For example:

class Repeater(object):
    def __iter__(self): return self
    def next(self): return 23

Any instance of Repeater is an iterator which, as it happens, has just
the same behavior as itertools.repeat(23), which is also the same
behavior you get from iterators obtained by calling:

def generepeat():
    while True: yield 23

In other words, after:

a = Repeater()
b = itertools.repeat(23)
c = generepeat()

the behavior of a, b and c is indistinguishable, though you can easily
tell them apart by introspection -- type(a) != type(b) != type(c).


Python's penchant for duck typing -- behavior matters more, WAY more
than implementation details such as type() -- means we tend to consider
a, b and c fully equivalent.  Focusing on ``generator'' is at this level
an implementation detail.


Most often, iterators (including generator-iterators) are used in a for
statement (or equivalently a for clause of a listcomp or genexp), which
is why one normally doesn't think about built-in ``iter'': it's called
automatically by these ``for'' syntax-forms.  In other words,

for x in <<iterable>>:
    ...body...

is just like:

__tempiter = iter(<<iterable>>)
while True:
    try: x = __tempiter.next()
    except StopIteration: break
    ...body...

((Simplification alert: the ``for'' statement has an optional ``else''
which this allegedly "just like" form doesn't mimic exactly...))


> You're right. I was much more talking (mistakenly) about lazy evaluation of
> the arguments to a function (i.e. the function begins execution before its
> arguments get evaluated) -- in such a case I think it should be specified
> which arguments are "strict" and which are "lazy" -- but I don't think
> there's such a thing in Python (... well, not yet, as Python gets more and more
> akin to FP).

Python's strict that way.  To explicitly make some one argument "lazy",
sorta, you can put a "lambda:" in front of it at call time, but then you
have to "call the argument" to get it evaluated; a bit of a kludge.
There's a PEP out to allow a ``prefix :'' to mean just the same as this
"lambda:", but even though I co-authored it I don't think it lowers the
kludge quotient by all that much.
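
A minimal sketch of that kludge (all the names here are invented for
illustration):

def maybe(condition, thunk):
    # 'thunk' is only evaluated if and when we call it
    if condition:
        return thunk()
    return None

print maybe(False, lambda: 1/0)   # prints None; 1/0 is never evaluated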

Guido, our beloved BDFL, is currently musing about optional typing of
arguments, which might perhaps open a tiny little crack towards letting
some arguments be lazy.  I don't think Guido wants to go there, though.

My prediction is that even Python 3000 will be strict.  At least this
makes some things obvious at each call-site without having to study the
way a function is defined, e.g., upon seeing
f(a+b, c*d)
you don't have to wonder, or study the ``def f'', to find out when the
addition and the multiplication happen -- they happen before f's body
gets a chance to run, and thus, in particular, if either operation
raises an exception, there's nothing f can do about it.

And that's a misunderstanding I _have_ seen repeatedly even in people
with a pretty good overall grasp of Python, evidenced in code such as:
self.assertRaises(SomeError, f(23))
with astonishment that -- if f(23) does indeed raise SomeError -- this
exception propagates, NOT caught by assertRaises; and if mistakenly
f(23) does NOT raise, you typically get a TypeError about None not being
callable.
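
In unittest terms, the difference looks like this (a self-contained sketch,
not part of the original message):

import unittest

class SomeError(Exception):
    pass

def f(x):
    raise SomeError(x)

class Demo(unittest.TestCase):
    def test_right(self):
        # assertRaises calls f(23) itself, inside its own try/except
        self.assertRaises(SomeError, f, 23)
    def test_wrong(self):
        # f(23) runs -- and raises -- before assertRaises gets control,
        # so the test errors out instead of passing
        self.assertRaises(SomeError, f(23))

if __name__ == '__main__':
    unittest.main()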

Re: Comments in configuration files

2005-01-22 Thread Tim Daneliuk
Pierre Quentel wrote:
Bonjour,
I am developing an application and I have a configuration file with a 
lot of comments to help the application users understand what the 
options mean

I would like it to be editable, through a web browser or a GUI 
application. With ConfigParser I can read the configuration file and 
edit the options, but when I write the result all the comments are lost

Are there modules that work on the same kind of ini files (for the needs 
of my application, I prefer this format to XML or YAML) and don't remove 
the comments ?

TIA,
Pierre

The latest incarnation of 'tconfpy' I just released will get you close:
 http://www.tundraware.com/Software/tconfpy
This program can read a configuration either from memory (a list) or a file.
So you could:
1) Read the file into an in-memory list (including the comments).
2) Pass the list to the parser, which would return a populated symbol
   table.
3) Use your application to read the current values of the symbols from
   the symbol table.
4) Modify the values in the symbol table as desired.
5) Map the new values in the symbol table back into the original list where
   the values were set initially (this is the part that would take some work;
   see the sketch below).
6) Write the list back to disk.
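
A minimal sketch of what step 5 might look like in plain Python (this assumes
simple "name = value" lines and is not tied to tconfpy's actual API):

def update_lines(lines, symbols):
    # Rewrite "name = value" lines in place; comments and blank lines
    # pass through untouched, so they survive the round trip.
    out = []
    for line in lines:
        stripped = line.strip()
        if stripped and not stripped.startswith('#') and '=' in stripped:
            name = stripped.split('=', 1)[0].strip()
            if name in symbols:
                line = '%s = %s\n' % (name, symbols[name])
        out.append(line)
    return out
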
Note that the semantics and features of 'tconfpy' are substantially different
from those of 'ConfigParser', and the languages recognized by each are quite
different as well.  It is probably fair to say that 'tconfpy' recognizes a superset
of the language recognized by 'ConfigParser'.  But you have to be
careful because the semantics are somewhat different.

--

Tim Daneliuk [EMAIL PROTECTED]
PGP Key: http://www.tundraware.com/PGP/
--
http://mail.python.org/mailman/listinfo/python-list


Re: list unpack trick?

2005-01-22 Thread Alex Martelli
Fredrik Lundh <[EMAIL PROTECTED]> wrote:

> Alex Martelli wrote:
> 
> >> or (readable):
> >>
> >> if len(list) < n:
> >> list.extend((n - len(list)) * [item])
> >
> > I find it just as readable without the redundant if guard -- just:
> >
> >alist.extend((n - len(alist)) * [item])
> 
> the guard makes it obvious what's going on, also for a reader that doesn't
> know exactly how "*" behaves for negative counts.

It does, but it's still redundant, like, say,
if x < 0:
x = abs(x)
makes things obvious even for a reader not knowing exactly how abs
behaves for positive arguments.  Redundancy in the code to try and
compensate for a reader's lack of Python knowledge is not, IMHO, a
generally very productive strategy.  I understand perfectly well that
you and others may disagree, but I just thought it worth stating my
personal opinion in the matter.

>  once you've seen the
> "compare length to limit" and "extend", you don't have to parse the rest of
> the statement.

Sorry, I don't get this -- it seems to me that I _do_ still have to
"parse the rest of the statement" to understand exactly what alist is
being extended by.


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Insanity

2005-01-22 Thread Fredrik Lundh
Tim Daneliuk wrote:

> Given an arbitrary string, I want to find each individual instance of
> text in the form:  "[PROMPT:optional text]"
>
> I tried this:
>
> y=re.compile(r'\[PROMPT:.*\]')
>
> Which works fine when the text is exactly "[PROMPT:whatever]"

didn't you leave something out here?  "compile" only compiles that pattern;
it doesn't match it against your string...

> but does not match on:
>
>"something [PROMPT:foo] something [PROMPT:bar] something ..."
>
> The overall goal is to identify the beginning and end of each [PROMPT...]
> string in the line.

if the pattern can occur anywhere in the string, you need to use "search",
not "match".  if you want multiple matches, you can use "findall" or, better
in this case, "finditer":

import re

s = "something [PROMPT:foo] something [PROMPT:bar] something"

for m in re.finditer(r'\[PROMPT:[^]]*\]', s):
print m.span(0)

prints

(10, 22)
(33, 45)

which looks reasonably correct.

(note the "[^x]*x" form, which is an efficient way to spell "non-greedy match"
for cases like this)

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: re Insanity

2005-01-22 Thread Duncan Booth
Tim Daneliuk wrote:

> 
> I tried this:
> 
>  y=re.compile(r'\[PROMPT:.*\]')
> 
> Which works fine when the text is exactly "[PROMPT:whatever]" but
> does not match on:
> 
> "something [PROMPT:foo] something [PROMPT:bar] something ..."
> 
> The overall goal is to identify the beginning and end of each [PROMPT...]
> string in the line.
> 

The answer sort of depends on exactly what can be in your optional text:

>>> import re
>>> s =  "something [PROMPT:foo] something [PROMPT:bar] something ..."
>>> y=re.compile(r'\[PROMPT:.*\]')
>>> y.findall(s)
['[PROMPT:foo] something [PROMPT:bar]']
>>> y=re.compile(r'\[PROMPT:.*?\]')
>>> y.findall(s)
['[PROMPT:foo]', '[PROMPT:bar]']
>>> y=re.compile(r'\[PROMPT:[^]]*\]')
>>> y.findall(s)
['[PROMPT:foo]', '[PROMPT:bar]']
>>> 

.* will match as long a string as possible.

.*? will match as short a string as possible. By default this won't match 
any newlines.

[^]]* will match as long a string that doesn't contain ']' as possible. 
This will match newlines.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Alex Martelli
Craig Ringer <[EMAIL PROTECTED]> wrote:

> .>>> data = ''.join(x for x in infile)

Maybe ''.join(infile) is a better way to express this functionality?
Avoids 2.4 dependency and should be faster as well as more concise.


> Might it be worth providing a way to have file objects seek back to the
> current position of the iterator when read() etc are called? If not, I

It's certainly worth doing a patch and see what the python-dev crowd
thinks of it, I think; it might make it into 2.5.

> favour the suggestion in the referenced post - file should probably fail
> noisily, or at least emit a warning.

Under what conditions, exactly, would you want such an exception?


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class introspection and dynamically determining function arguments

2005-01-22 Thread Alex Martelli
Diez B. Roggisch <[EMAIL PROTECTED]> wrote:

> Nick Coghlan wrote:
> > 
> > If this only has to work for classes created for the purpose (rather than
> > for an arbitrary class):
> 
> Certainly a step into the direction I meant - but still missing type
> declarations. And that's what at least I'd like to see - as otherwise you
> don't know what kind of editing widget to use for a property.

Though it may be overkill for your needs, you'll be interested in
Enthought's "Traits", I think; see, for example,
.  Facilitating such
presentation tasks (no doubt including editing) appears to be a major
driving force for Traits.


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default value in a list

2005-01-22 Thread Alex Martelli
Nick Craig-Wood <[EMAIL PROTECTED]> wrote:
   ...
> Or this version if you want something other than "" as the default
> 
>   a, b, b = (line.split(':') + 3*[None])[:3]

Either you mean a, b, c -- or you're being subtler than I'm grasping.


> BTW This is a feature I miss from perl...

Hmmm, I understand missing the ``and all the rest goes here'' feature
(I'd really love it if the rejected
a, b, *c = whatever
suggestion had gone through, ah well), but I'm not sure what exactly
you'd like to borrow instead -- blissfully by now I've forgotten a lot
of the perl I used to know... care to clarify?


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dynamic call of a function

2005-01-22 Thread kishore
Hi Luigi Ballabio,

Thankyou very much for your reply,
it worked well.

Kishore.

Luigi Ballabio wrote:
> At 10:37 AM 10/19/01 +0200, anthony harel wrote:
> >Is it possible to make dynamic call of a function whith python ?
> >
> >I have got a string that contains the name of the function I
> >want to call but I don't want to do something like this :
> >
> >if ch == "foo" :
> > self.foo( )
> >elif ch == "bar"
> > self.bar( )
> >
>
> Anthony,
>  here are two ways to do it---I don't know which is the best, nor
> whether the best is yet another. Also, you might want to put in some error
> checking.
>
> class Test:
>     def foo(self):
>         print 'Norm!'
>     def bar(self):
>         print 'Dum-de-dum'
>     def dynCall1(self,methodName):
>         eval('self.%s()' % methodName)
>     def dynCall2(self,methodName):
>         method = vars(self.__class__)[methodName]
>         method(self)
>
>  >>> t = Test()
>  >>> t.dynCall1('foo')
> Norm!
>  >>> t.dynCall2('bar')
> Dum-de-dum
> 
> Bye,
>  Luigi
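
For reference, the getattr built-in gives a third way that avoids eval and
respects inheritance (sketched against the same Test class):

>>> t = Test()
>>> getattr(t, 'foo')()   # look up the bound method by name, then call it
Norm!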

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Craig Ringer
On Sat, 2005-01-22 at 12:20 +0100, Alex Martelli wrote:
> Craig Ringer <[EMAIL PROTECTED]> wrote:
> 
> > .>>> data = ''.join(x for x in infile)
> 
> Maybe ''.join(infile) is a better way to express this functionality?
> Avoids 2.4 dependency and should be faster as well as more concise.

Thanks - for some reason I hadn't clicked to that. Obvious in hindsight,
but I just completely missed it.

> > Might it be worth providing a way to have file objects seek back to the
> > current position of the iterator when read() etc are called? If not, I
> 
> It's certainly worth doing a patch and see what the python-dev crowd
> thinks of it, I think; it might make it into 2.5.

I'll certainly look into doing so. I'm not dumb enough to say "Sure,
I'll do that" before looking into the code involved and thinking more
about what issues could pop up. Still, it's been added to my
increasingly frightening TODO list.

> > favour the suggestion in the referenced post - file should probably fail
> > noisily, or at least emit a warning.
> 
> Under what conditions, exactly, would you want such an exception?

When read() or other methods suffering from the same issue are called
after next() without an intervening seek(). It'd mean another state flag
for the file to keep track of - but I doubt it'd make any detectable
difference in performance given that there's disk I/O involved.

I'd be happier to change the behaviour so that a warning isn't
necessary, though, and I suspect it can be done without introducing
backward compatibility issues. Well, so long as nobody is relying on the
undefined file position after using an iterator - and I'm not dumb
enough to say nobody would ever do that.

I've really got myself into hot water now though - I'm going to have to
read over the `file' source code before impulsively saying anything
REALLY stupid.

-- 
Craig Ringer

-- 
http://mail.python.org/mailman/listinfo/python-list


[perl-python] 20050121 file reading & writing

2005-01-22 Thread Xah Lee
# -*- coding: utf-8 -*-
# Python

# to open a file and write to file
# do

f=open('xfile.txt','w')
# this creates a file "object" and name it f.

# the second argument of open can be
# 'w' for write (overwrite exsiting file)
# 'a' for append (ditto)
# 'r' or read only


# to actually print to file or read from
# file, one uses methods of file
# objects. e.g.

# reading entire file
# text = f.read()

# reading the one line
# line = f.realine()

# reading entire file as a list, of lines
# mylist = f.readlines()

# to write to file, do
f.write('yay, first line!\n')

# when you are done, close the file
f.close()

# closing files saves memory and is
# proper in large programs.

# see
# http://python.org/doc/2.3.4/tut/node9.html

# or in Python terminal,
# type help() then topic FILES

# try to write a program that read in a
# file and print it to a new file.
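
# a possible solution sketch for that exercise
# (the output file name below is just an example):
fin = open('xfile.txt', 'r')
fout = open('xfile_copy.txt', 'w')
fout.write(fin.read())
fin.close()
fout.close()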


# in perl, similar functionality exists.
# their construct is quite varied.

# example of reading in file
# and print it out
# (first, save this file as x.pl)
open(f,") {print $line}
close(f) or die "error: $!";
print "am printing myself\n";

# the above is a so called "idiom"
# meaning that it is the way such is
# done in a particular language, as in
# English.

# note, the f really should be F in Perl
# by some references, but can also be
# lower case f or even "f". All are not
# uncommon. There is no clear reason for
# why or what should be or what
# is the difference. Usually it's
# not worthwhile to question in
# Perl. ">x.pl" would be for write to
# file. The <f> tells perl the file
# object, and when Perl sees t=<> it
# reads a line. (usually, but technically
# depending on some predefined
# variables...) The f they call "file handle".
# ... see
# perldoc -tf open
# to begin understanding.


Note: this post is from the Perl-Python a-day mailing list at
http://groups.yahoo.com/group/perl-python/
to subscribe, send an email to [EMAIL PROTECTED]
if you are reading it on a web page, program examples may not run
because html conversion often breaks the code.
Xah
  [EMAIL PROTECTED]
  http://xahlee.org/PageTwo_dir/more.html

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: circular iteration

2005-01-22 Thread Alex Martelli
Simon Brunning <[EMAIL PROTECTED]> wrote:
   ...
> > is there a faster way to build a circular iterator in python that by
doing this:
> > 
> > c=['r','g','b','c','m','y','k']
> > 
> > for i in range(30):
> > print c[i%len(c)]
> 
> I don''t know if it's faster, but:
> 
> >>> import itertools
> >>> c=['r','g','b','c','m','y','k']
> >>> for i in itertools.islice(itertools.cycle(c), 30):
> ...   print i

Whenever you're using itertools, the smart money's on "yes, it's
faster";-).

E.g., on a slow, old iBook...:

kallisti:~ alex$ python -mtimeit -s'c="rgbcmyk"' 'for i in range(30):
c[i%len(c)]'
1 loops, best of 3: 47 usec per loop

kallisti:~ alex$ python -mtimeit -s'c="rgbcmyk"; import itertools as it'
'for i in it.islice(it.cycle(c),30): i'
1 loops, best of 3: 26.4 usec per loop

Of course, if you do add back the print statements they'll take orders
of magnitude more time than the cyclic access, so /F's point on
premature optimization may well be appropriate.  But, if you're doing
something VERY speedy with each item you access, maybe roughly halving
the overhead for the cyclic access itself MIGHT be measurable (maybe
not; it IS but a few microseconds, after all).

I like itertools' approach because it's higher-abstraction and more
direct.  Its blazing speed is just a trick to sell it to conservative
curmudgeons who don't see abstraction as an intrinsic good -- some of
those are swayed by microseconds;-)


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unbinding multiple variables

2005-01-22 Thread Bengt Richter
On 21 Jan 2005 11:13:20 -0800, "Johnny Lin" <[EMAIL PROTECTED]> wrote:

>thanks everyone for the replies!
>
>John Hunter, yep, this is Johnny Lin in geosci :).
>
>re using return:  the problem i have is somewhere in my code there's a
>memory leak.  i realize return is supposed to unbind all the local
>variables, but since the memory leak is happening despite return, i
>thought it might help me track down the leak if i unbound everything
>explicitly that i had defined in local scope before i returned.  or if
>anyone has recomm. on plugging leaks, would be thankful for any
>pointers there too.

It helps to clue people into what your real goal is ;-) (Your initial post
said nothing about memory leaks).

Step 1: How do you know you have a memory leak? Python retains some memory
in internal free pools rather than returning it to the OS, so you might not
have a memory leak at all, in the true sense.

If you are having a real memory leak, look first at any C extensions you've
written yourself, then at other's alpha/beta stuff you may be using. Core 
CPython
is probably the last place to look ;-)

If you are creating reference loops, I think some may be uncollectable.

I'm not sure how you are detecting "memory leaks," but whatever the method,
if you can write a test harness that will create a zillion of each suspect
thing and delete them in turn, and print out your detection data -- even if
you have to run separate processes to do it, that might narrow down your search.
E.g., if you write a little test.py that takes a command line argument to choose
which object to create zillions of, and print out leak evidence, then you could
run that systematically via popen etc. Or just run test.py by hand if you don't
have that many suspects (hopefully the case ;-)

>
>my understanding about locals() from the nutshell book was that i
>should treat that dictionary as read-only.  is it safe to use it to
>delete entries?
Well, it's not read-only, but it doesn't write through to the actual locals.
Think of it as a temp dict object with copies of the local name:value bindings,
but changing anything in it only changes the temp dict object in the usual way. 
UIAM ;-)

(OTOH, deletions of actual local bindings do seem to propagate back into a
previously bound value of locals on exit, and a new call to locals() seems to
return the same identical object as before, so I'm not sure I believe the
<temp copy> theory, unless it has a special slot and it is automatically
updated at exit. But a local bare name assignment or deletion doesn't
immediately propagate. But it does on exit. So the <dict> returned by
locals() has a special relationship to the function it reflects, if it is
otherwise a normal dict:

 >>> def foo(x):
 ...     d = locals()
 ...     print '1:',id(d), type(d), d
 ...     del x
 ...     print '2:',id(d), type(d), d
 ...     del d['x']
 ...     print '3:',id(d), type(d), d
 ...     y = 123
 ...     print '4:',id(d), type(d), d
 ...     d['z'] = 'zee'
 ...     print '5:',id(d), type(d), d
 ...     return d, locals()
 ...
 >>> dret, endinglocals = foo('arg passed to foo')
 1: 49234780 <type 'dict'> {'x': 'arg passed to foo'}
 2: 49234780 <type 'dict'> {'x': 'arg passed to foo'}
 3: 49234780 <type 'dict'> {}
 4: 49234780 <type 'dict'> {}
 5: 49234780 <type 'dict'> {'z': 'zee'}
 >>> dret
 {'y': 123, 'z': 'zee', 'd': {...}}
 >>> endinglocals
 {'y': 123, 'z': 'zee', 'd': {...}}
 >>> dret is endinglocals
 True
 >>> dret['d'] is dret
 True
(the {...} is an indication of the recursive reference)

Note that at 2: the del x was not reflected, nor did the y = 123 show at 4:
But the d['z'] showed up immediately, as you might expect ... but d['z'] also
in that last returned locals(), which you might not expect, since there was
no assignment to bare z. But they are apparently the same dict object, so you
would expect it. So maybe there is some kind of finalization at exit like 
closure
building. Anyway, d = dict(locals()) would probably behave differently, but I'm
going to leave to someone else ;-)

Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Paul Rubin
Dave Benjamin <[EMAIL PROTECTED]> writes:
> Can we get a show of hands for all of those who have written or are
> currently maintaining code that uses the leaky listcomp "feature"?

It's really irrelevant whether anyone is using a feature or not.  If
the feature is documented as being available, it means that removing
it is an incompatible change that can break existing code which
currently conforms to the spec.  If the "feature" is described as the
bug that it is, anything that relies on it is nonconformant.

I do remember seeing some cute tricks (i.e. capable of becoming
idioms) that depend on the leakage.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: finding name of instances created

2005-01-22 Thread André

Jeremy Bowers wrote:
> On Fri, 21 Jan 2005 21:01:00 -0400, André Roberge wrote:
> > etc. Since I want the user to learn Python's syntax, I don't want to
> > require him/her to write
> > alex = CreateRobot(name = 'alex')
> > to then be able to do
> > alex.move()
>
> This is just my opinion, but I've been involved with teaching new
> programmers before so I think it is an informed one. I don't think you
> teach a language by hiding how the language works, no matter what the
> intentions.
>
> You should be *minimizing* the magic. You're going to train your students
> that Python objects have names (they don't) and that's going to mess them
> up later. Actually, it's going to mess them up almost right away, because
> how can they have a list of robots like this:
>
> for robot in robots:
>   robot.turn_left()

They will not be able to do that.

>
> That can't work, right, the command inside the loop can only affect the
> robot named "robot", right?
>
> You can't teach Python if what you're actually teaching them is a variant
> that you have created that is used nowhere else on Earth, and is
> internally inconsistent to boot (see loop above, the *real* Python
> variable semantics conflict with the semantics you are teaching).
>

I think you misunderstood my intentions, possibly because I explain
things too superficially.

> Considering that not a month goes by where someone doesn't post a question
> related to this, and it has been a FAQ entry for as long as I've used
> Python, I think you are doing a major disservice to your "users" by
> training them that objects magically gets the name when assigned. I
> strongly urge you to do absolutely no pre-processing of any kind to the
> programs they generate. (By which I mean changes to the code; running
> pychecker on it would be OK; and I'd urge you to resist the temptation to
> process its output, either. Provide a "translation table" if you need to,
> but they need to learn to read the real output, too.)
>
> Programming is hard enough with burdening people with "pleasant
> falsehoods". Trust your students to handle the truth (and of course
> rationally minimize the truth they have to handle, and by using Python
> you're off to a great start there). If they can't handle the truth, with
> time, effort, and support, they *sure* as hell can't handle lies!

The environment in which students (my kids first, others later :-) will
learn is based on Richard Pattis's  "Karel the Robot"   (adapted for
Python in "Guido van Robot").  They are presented with a robot that can
do four basic actions, as I described in a previous post.  It's been
used successfully in many places.

The students learn first the procedural aspect of python. Here's a
quick example of a program that they can write:

.def move_and_turn():
.    move()
.    turn_left()
.
.def draw_square():
.    for i in range(4):
.        move_and_turn()
.
.draw_square()
==
At this point, they don't know anything about objects and methods; but
they will have learned about functions and variables.  This is where
'Guido van Robot (GvR)', which has been used successfully to teach
programming using a syntax somewhat similar to python, but not quite,
stops.  (Actually, you can't use variables in GvR).

I want to move beyond that and introduce objects and classes.

So, students will be able to write:
pete = CreateRobot(2, 3)
pete.move()

learning about objects and methods.

As for things like
for robot in robots:
do stuff

that will be for my use only: drawing robots on the screen, updating
the 'world' when robots pick stuff up, etc.  My intention is that the
students will use the EXACT python syntax, so that they don't know that
*I* have given a *name* to their robot(s) behind the scene.
I have to cut this short; I hope it clarifies my intentions.

André

--
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Paul Rubin
[EMAIL PROTECTED] (Alex Martelli) writes:
> If it changed the semantics of for-loops in general, that would be quite
> inconvenient to me -- once in a while I do rely on Python's semantics
> (maintaining the loop control variable after a break; I don't recall if
> I ever used the fact that the variable is also maintained upon normal
> termination).

Some languages let you say things like:
   for (var x = 0; x < 10; x++) 
  do_something(x);
and that limits the scope of x to the for loop.  That seems like a
reasonable way to offer for-loops that don't leak.

> (musing...): I think the reason there's no real use case for using a
> listcomp's control variable afterwards is connected to this distinction:
> listcomps have no `break'...

Of course you can still break out of listcomps:

class oops: pass
def f(x):
   if x*x % 11 == 3: raise oops
   return x*x
try:
  lcomp = [f(x) for x in range(10)]
except oops: pass
print x

prints "5"
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Fredrik Lundh
Paul Rubin wrote:

> Some languages let you say things like:
>   for (var x = 0; x < 10; x++)
>  do_something(x);
> and that limits the scope of x to the for loop.

depending on the compiler version, compiler switches, IDE settings, etc.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Paul Rubin
Nick Craig-Wood <[EMAIL PROTECTED]> writes:
> > No, unfortunately; the python-dev consensus was that encryption raised
> > export control issues, and the existing rotor module is now on its way to
> > being removed.
> 
> I'm sure thats wrong now-a-days.  Here are some examples of open
> source software with strong crypto

There's tons of such examples, but python-dev apparently reached
consensus that the Python maintainers were less willing than the
maintainers of those other packages to deal with those issues.

You're right that this specifically says export control.  I'm now
thinking I saw some other messages, again from knowledgeable posters,
saying that there was a bigger concern that including crypto in the
distribution could make trouble for users in countries where having
crypto at all was restricted.  I'll see if I can find those.

Martin, do you know more about this?  I remember being disappointed
about the decisions since I had done some work on a new block cipher
API and I had wanted to submit an implementation to the distro.  But
when I heard there was no hope of including it, I stopped working on
it.  If there's an interest in it again, I can do some more with it.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Paul Rubin
"Fredrik Lundh" <[EMAIL PROTECTED]> writes:
> > Some languages let you say things like:
> >   for (var x = 0; x < 10; x++)
> >  do_something(x);
> > and that limits the scope of x to the for loop.
> 
> depending on the compiler version, compiler switches, IDE settings, etc.

Huh?  I'm not sure what you're talking about.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [perl-python] 20050121 file reading & writing

2005-01-22 Thread Gian Mario Tagliaretti
Xah Lee wrote:

> # the second argument of open can be
> # 'w' for write (overwrite exsiting file)
> # 'a' for append (ditto)
> # 'r' or read only

are you sure you didn't forget something?

> # reading the one line
> # line = f.realine()

wrong

> [...]

Maybe you didn't get the fact that you won't see a flame starting between
python people and perl friends?

throw yourself somewhere and... Xah.flush()
-- 
Gian Mario Tagliaretti
PyGTK GUI programming
http://www.parafernalia.org/pygtk/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Alex Martelli
Paul Rubin  wrote:

> [EMAIL PROTECTED] (Alex Martelli) writes:
> > If it changed the semantics of for-loops in general, that would be quite
> > inconvenient to me -- once in a while I do rely on Python's semantics
> > (maintaining the loop control variable after a break; I don't recall if
> > I ever used the fact that the variable is also maintained upon normal
> > termination).
> 
> Some languages let you say things like:
>for (var x = 0; x < 10; x++) 
>   do_something(x);
> and that limits the scope of x to the for loop.  That seems like a
> reasonable way to offer for-loops that don't leak.

Yes, that's how C++ introduced it, for example.  But note that, after
waffling quite a bit in various betas of VC++, Microsoft ended up having
this form *not* limit the scope, for years, in two major releases; I'm
not privy to their reasons for accepting the syntax but rejecting its
key semantic point, and I think they've finally broken with that in the
current VC++ (don't know for sure, haven't used MS products for a
while).  But it sure made for quite a long transition period.

It's far from clear to me that it's worth complicating Python by
introducing a third form of loop, next to normal while and for ones, to
mean "a for loop whose control variables are hyperlocalized" (plural, in
general, of course -- ``for n, v in d.iteritems():'' etc).


> > (musing...): I think the reason there's no real use case for using a
> > listcomp's control variable afterwards is connected to this distinction:
> > listcomps have no `break'...
> 
> Of course you can still break out of listcomps:

You can abort them by having an exception raised -- that's quite a
different issue.

> class oops: pass
> def f(x):
>if x*x % 11 == 3: raise oops
>return x*x
> try:
>   lcomp = [f(x) for x in range(10)]
> except oops: pass
> print x
> 
> prints "5"

This way, you don't get anything assigned to lcomp.  break is quite
different from raise, which aborts the whole caboodle up to the handler.


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Alex Martelli
Paul Rubin  wrote:

> Dave Benjamin <[EMAIL PROTECTED]> writes:
> > Can we get a show of hands for all of those who have written or are
> > currently maintaining code that uses the leaky listcomp "feature"?
> 
> It's really irrelevant whether anyone is using a feature or not.  If
> the feature is documented as being available, it means that removing
> it is an incompatible change that can break existing code which
> currently conforms to the spec.  If the "feature" is described as the
> bug that it is, anything that relies on it is nonconformant.
> 
> I do remember seeing some cute tricks (i.e. capable of becoming
> idioms) that depend on the leakage.

Sure, ``if [mo for mo in [myre.search(line)] if mo]: use(mo)`` and the
like, used to simulate assign-and-test.  No doubt maintaining backwards
compat with this kind of horrors means listcomps need to keep leaking
until Python 3.0, alas.  But hopefully no farther...
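
Spelled out, the trick leans on the comprehension's control variable
surviving after the expression -- a small illustration of the idiom under
the current (2.x) semantics:

    import re

    myre = re.compile(r'(\d+)')
    line = 'abc 123'

    # simulate assign-and-test: 'mo' leaks out of the list comprehension
    if [mo for mo in [myre.search(line)] if mo]:
        print mo.group(1)    # works only because the loop variable leaks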


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Fredrik Lundh
Paul Rubin wrote:

>> > Some languages let you say things like:
>> >   for (var x = 0; x < 10; x++)
>> >  do_something(x);
>> > and that limits the scope of x to the for loop.
>>
>> depending on the compiler version, compiler switches, IDE settings, etc.
>
> Huh?  I'm not sure what you're talking about.

guess you haven't used some languages that do this long enough.

in some early C++ compilers, the scope for "x" was limited to the scope
containing the for loop, not the for loop itself.  some commercial compilers
still default to that behaviour.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: default value in a list

2005-01-22 Thread Bengt Richter
On Fri, 21 Jan 2005 17:04:11 -0800, Jeff Shannon <[EMAIL PROTECTED]> wrote:

>TB wrote:
>
>> Hi,
>> 
>> Is there an elegant way to assign to a list from a list of unknown
>> size?  For example, how could you do something like:
>> 
>> 
> a, b, c = (line.split(':'))
>> 
>> if line could have less than three fields?
>
>(Note that you're actually assigning to a group of local variables, 
>via tuple unpacking, not assigning to a list...)
>
>One could also do something like this:
>
> >>> l = "a:b:c".split(':')
> >>> a, b, c, d, e = l + ([None] * (5 - len(l)))
> >>> print (a, b, c, d, e)
>('a', 'b', 'c', None, None)
> >>>

Or
 >>> a, b, c, d, e = ('a:b:c'.split(':')+[None]*4)[:5]
 >>> print (a, b, c, d, e)
 ('a', 'b', 'c', None, None)

You could even be profligate and use *5 in place of that *4,
if that makes an easier idiom ;-)

Works if there's too many too:

 >>> a, b = ('a:b:c'.split(':')+[None]*2)[:2]
 >>> print (a, b)
 ('a', 'b')



>
>Personally, though, I can't help but think that, if you're not certain 
>how many fields are in a string, then splitting it into independent 
>variables (rather than, say, a list or dict) *cannot* be an elegant 
>solution.  If the fields deserve independent names, then they must 
>have a definite (and distinct) meaning; if they have a distinct 
>meaning (as opposed to being a series of similar items, in which case 
>you should keep them in a list), then which field is it that's 
>missing?  Are you sure it's *always* the last fields?  This feels to 
>me like the wrong solution to any problem.
>
>Hm, speaking of fields makes me think of classes.
>
> >>> class LineObj:
>... def __init__(self, a=None, b=None, c=None, d=None, e=None):
>... self.a = a
>... self.b = b
>... self.c = c
>... self.d = d
>... self.e = e
>...
> >>> l = "a:b:c".split(':')
> >>> o = LineObj(*l)
> >>> o.__dict__
>{'a': 'a', 'c': 'c', 'b': 'b', 'e': None, 'd': None}
> >>>
>
>This is a bit more likely to be meaningful, in that there's almost 
>certainly some logical connection between the fields of the line 
>you're splitting and keeping them as a class demonstrates that 
>connection, but it still seems a bit smelly to me.
>
That gives me an idea:

 >>> def foo(a=None, b=None, c=None, d=None, e=None, *ignore):
 ... return a, b, c, d, e
 ...
 >>> a, b, c, d, e = foo(*'a:b:c'.split(':'))
 >>> print (a, b, c, d, e)
 ('a', 'b', 'c', None, None)

But then, might as well do:

 >>> def bar(nreq, *args):
 ... if nreq <= len(args): return args[:nreq]
 ... return args+ (nreq-len(args))*(None,)
 ...
 >>> a, b, c, d, e = bar(5, *'a:b:c'.split(':'))
 >>> print (a, b, c, d, e)
 ('a', 'b', 'c', None, None)
 >>> a, b = bar(2, *'a:b:c'.split(':'))
 >>> print (a, b)
 ('a', 'b')
 >>> a, b, c = bar(3, *'a:b:c'.split(':'))
 >>> print (a, b, c)
 ('a', 'b', 'c')

But usually, I would like n + tail, where I know n is a safe bet, e.g.,

 >>> def ntail(n, *args):
 ... return args[:n]+(args[n:],)
 ...
 >>> a, b, t = ntail(2, *'a:b:c:d:e'.split(':'))
 >>> print (a, b, t)
 ('a', 'b', ('c', 'd', 'e'))
 >>> a, b, t = ntail(2, *'a:b'.split(':'))
 >>> print (a, b, t)
 ('a', 'b', ())


People have asked to be able to spell that as

 a, b, *t = 'a:b:c:d:e'.split(':')



Regards,
Bengt Richter
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Fredrik Lundh
Alex Martelli wrote:

>> I do remember seeing some cute tricks (i.e. capable of becoming
>> idioms) that depend on the leakage.
>
> Sure, ``if [mo for mo in [myre.search(line)] if mo]: use(mo)`` and the
> like, used to simulate assign-and-test.

here's an old favourite:

lambda x: ([d for d in [{}]], [d.setdefault(k.text or "", unmarshal(v)) for 
(k, v) in x], d)[2]

which, when all keys are unique, is a 2.1-compatible version of

lambda x: dict([(k.text or "", unmarshal(v)) for k, v in x])

which illustrates why some lambdas are better written as functions.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Fredrik Lundh
Reinhold Birkenfeld wrote:

> Agreed. If you just want to use it, you don't need the spec anyway.

but the guy who wrote the parser you're using had to read it, and understand it.
judging from the number of crash reports you see in this thread, chances are
that he didn't.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread A.M. Kuchling
On 22 Jan 2005 04:50:30 -0800, 
Paul Rubin  wrote:
> Martin, do you know more about this?  I remember being disappointed
> about the decisions since I had done some work on a new block cipher

It was discussed in this thread:
http://mail.python.org/pipermail/python-dev/2003-April/034959.html

Guido and M.-A. Lemburg were leaning against including crypto; everyone else
was positive.  But Guido's the BDFL, so I interpreted his vote as being the
critical one.

--amk
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Fredrik Lundh
Paul Rubin wrote:

> Martin, do you know more about this?  I remember being disappointed
> about the decisions since I had done some work on a new block cipher
> API and I had wanted to submit an implementation to the distro.  But
> when I heard there was no hope of including it, I stopped working on
> it.

"I'll only work on stuff if I'm sure it's going right into the core" isn't 
exactly
a great way to develop good Python software.  I recommend the "would
anyone except me have any use for this?" approach.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Arthur
On 21 Jan 2005 20:32:46 -0800, Paul Rubin
 wrote:

>Of course in that case, since the absence of lexical scope was a wart
>in its own right, fixing it had to have been on the radar.  So turning
>the persistent listcomp loop var into a documented feature, instead of
>describing it in the docs as a wart that shouldn't be relied on,
>wasn't such a hot idea.  Adding lexical scope and listcomps at the
>same time might have also been a good way to solve the issue.


Most of us go about our business without a fine-grained reading of the
documentation.  I don't think I would be unusual in having learned
about listcomp leakage by tracing a hard-to-fathom bug to its source.

I can't say I understood at that point that for loops *do* leak.  But
apparently I *did* understand intuitively that I shouldn't rely on them not
doing so, because I chose my local variable names in a way that I
couldn't get hurt in any case.  My intuition apparently felt safe in
doing otherwise in the vicinity of a list comp, which is how I got
burnt.  The fact is there was carelessness in the code that surrounded
the bug.  But it was carelessness of the kind that would, in the
absence of listcomp leakage, have led to an easy-to-recognize error
message and a quick fix.

Nobody likes to hear coulda-shouldas for done deals.

But the issue is a basic one, in some sense, it seems to me.

The question around decorators did not seem too dissimilar.  There were
those who thought that a better solution to the problem being
addressed would most likely present itself naturally further down the
road as Python evolved, and that inaction at this time was the best
action.

Or that a solution should be found that addressed that issue,
together with related issues, in a way that was less of a "best that can
be done under the circumstances as of now", before a trigger was pulled.

Woulda, coulda, shoulda.

Art  


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Steve Holden
Bengt Richter wrote:
On Fri, 21 Jan 2005 12:04:10 -0600, "A.M. Kuchling" <[EMAIL PROTECTED]> wrote:

On Fri, 21 Jan 2005 18:30:47 +0100, 
	rm <[EMAIL PROTECTED]> wrote:

Nowadays, people are trying to create binary XML, XML databases, 
graphics in XML (btw, I'm quite impressed by SVG), you have XSLT, you 
have XSL-FO, ... .
Which is an argument in favor of XML -- it's where the activity is, so it's
quite likely you'll encounter the need to know XML. Few projects use YAML,
so the chance of having to know its syntactic details is small.  


I thought XML was a good idea, but IMO requiring quotes around
even integer attribute values was an unfortunate decision. I don't buy
their rationale of keeping parsing simple -- as if extracting a string
with no embedded space from between an equal sign and terminating white
space were that much harder than extracting the same delimited by double quotes.
It isn't that much harder, but if there are two ways to do the same 
thing then effectively one of them has to become a special case, thereby 
complicating the code that has to handle it (in this case the parser).

"There should be one (and preferably only one) ..." should be a familiar 
mantra around here :-)

The result is cluttering SVG with needless cruft around numerical graphics 
parameters.

It seems to me the misunderstanding here is that XML was ever intended 
to be generated directly by typing in a text editor. It was rather 
intended (unless I'm mistaken) as a process-to-process data interchange 
metalanguage that would be *human_readable*.

Tools that *create* XML are perfectly at liberty not to require quotes 
around integer values.

OTOH, I think the XML spec is very readable, and nicely designed.
At least the version 1.0 spec I snagged from W3C a long time ago.
I see the third edition at http://www.w3.org/TR/REC-xml/ is differently styled
(I guess new style sheets), but still pretty readable (glancing at it now).
Regards,
Bengt Richter
regards
 Steve
--
Steve Holden   http://www.holdenweb.com/
Python Web Programming  http://pydish.holdenweb.com/
Holden Web LLC  +1 703 861 4237  +1 800 494 3119
--
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Paul Rubin
"Fredrik Lundh" <[EMAIL PROTECTED]> writes:
> "I'll only work on stuff if I'm sure it's going right into the core"
> isn't exactly a great way to develop good Python software.  I
> recommend the "would anyone except me have any use for this?"
> approach.

1. Crypto is an important "battery" for many security applications.
As a crypto activist I like to spread crypto, and I therefore think it
would be useful if crypto were in the core.  That is the reason I was
willing to do the work of writing a suitable module.  To have it go
into the core and further my goal of spreading crypto.  That's as good
a reason as any to write a crypto module.

2. "Would anyone except me have any use for this?" shows a lack of
understanding of how Python is used.  Some users (call them
"application users" or AU's) use Python to run Python applications for
whatever purpose.  Some other users (call them "developers") use
Python to develop applications that are intended to be run by AU's.

Now we're talking about an extension module written in C.  There is no
way to write AES for Python any other way and still have reasonable
perfomance.

Modules written in C and distributed separately from the core are a
pain in the neck to download and install.  You need compilers, which
not everyone has access to.  AU's often use Windows, which doesn't
come with any compilers, so many AU's have no compilers.  Developers
generally have access to compilers for the platforms they develop on,
but usually won't have compilers for every target platform that every
AU in their audience might want to run their app on.  Even AU's with
compilers need to be able to install extension modules before they can
run them, which isn't always possible, for example if they're using
Python at a web hosting service.

What I'm getting at here is that C modules are considerably more
useful to AU's if they're in the core than if they're outside it, and
the effect is even larger for developers.  For developers, extension
modules are practically useless unless they're in the core.  Depending
on extension modules that have to be installed by the AU severely
limits the audience for the developer's app.

The module we're discussing was intended for developers.  "Would
anyone except me have any use for this, [even if it doesn't go in the
core]?" is a bizarre question.  The whole purpose of the module was to
let developers ship Python crypto apps that don't make the AU load
external C modules.  If it's not in the core, it doesn't meet its
usefulness criterion.  Your proposed question amounts to asking "is
this worth doing even if its usefulness is severely limited?".  I
aleady asked myself that question and the answer was no.  I was only
interested in the higher-usefulness case, which means putting the
module in the core.  I don't see anything unreasonable about that.  I
can only work on a limited number of things, so I pick the most useful
ones.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Paul Rubin
"A.M. Kuchling" <[EMAIL PROTECTED]> writes:
> It was discussed in this thread:
> http://mail.python.org/pipermail/python-dev/2003-April/034959.html
> 
> Guido and M.-A. Lemburg were leaning against including crypto; everyone else
> was positive.  But Guido's the BDFL, so I interpreted his vote as being the
> critical one.

That's interesting, so it's an export issue after all.  But export
from the US is handled by sending an email to the DOC, and Martin
mentions that's already been done for some Python modules.  I had been
under the impression that the concern was over causing possible
problems for users in some destination countries, and possibly having
to maintain separate distros for the sake of users like that.  But
maybe I was wrong about that.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Andrew Koenig
"Fredrik Lundh" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]

> in some early C++ compilers, the scope for "x" was limited to the scope
> containing the for loop, not the for loop itself.  some commercial 
> compilers
> still default to that behaviour.

Indeed--and the standards committee dithered far too long before correcting 
it.

The argument that finally swayed them was this:

If you change it, you will be ridiculed for a few years.

If you do not change it, you will be ridiculed for the rest of your 
careers.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Andrew Koenig
"Paul Rubin"  wrote in message 
news:[EMAIL PROTECTED]

> It's really irrelevant whether anyone is using a feature or not.  If
> the feature is documented as being available, it means that removing
> it is an incompatible change that can break existing code which
> currently conforms to the spec.  If the "feature" is described as the
> bug that it is, anything that relies on it is nonconformant.

In this case, I think the right solution to the problem is two-fold:

1) from __future__ import lexical_comprehensions

2) If you don't import the feature, and you write a program that depends 
on
a list-comprehension variable remaining in scope, the compiler should
issue a diagnostic along the lines of

Warning: This program should be taken out and shot.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Paul Rubin
"Andrew Koenig" <[EMAIL PROTECTED]> writes:
> In this case, I think the right solution to the problem is two-fold:
> 
> 1) from __future__ import lexical_comprehensions
> 
> 2) If you don't import the feature, and you write a program that depends 
> on a list-comprehension variable remaining in scope, the compiler
> should issue a diagnostic along the lines of
> 
> Warning: This program should be taken out and shot.

It's not obvious to me how the compiler can tell.  Consider:

x = 3
if frob():
   frobbed = True
   squares = [x*x for x in range(9)]
if blob():
   z = x

Should the compiler issue a warning saying the program should be taken
out and shot?  With lexical comprehensions, the program is perfectly
valid and sets z to 3 if blob() is true.  The whole point of lexical
comprhensions is to make Python safe for such programs.

Without lexical comprehensions, the program still doesn't depend on
the listcomp leakage if frob() and blob() aren't simultaneously true
(envision "assert not frobbed" before the "z = x").  So "should be
taken out and shot" is maybe a little bit extreme.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Andrew Koenig
"Paul Rubin"  wrote in message 
news:[EMAIL PROTECTED]

> It's not obvious to me how the compiler can tell.  Consider:
>
>x = 3
>if frob():
>   frobbed = True
>   squares = [x*x for x in range(9)]
>if blob():
>   z = x
>
> Should the compiler issue a warning saying the program should be taken
> out and shot?  With lexical comprehensions, the program is perfectly
> valid and sets z to 3 if blob() is true.  The whole point of lexical
> comprhensions is to make Python safe for such programs.
>
> Without lexical comprehensions, the program still doesn't depend on
> the listcomp leakage if frob() and blob() aren't simultaneously true
> (envision "assert not frobbed" before the "z = x").  So "should be
> taken out and shot" is maybe a little bit extreme.

Actually, I don't think so.  If you intend for it to be impossible for "z = 
x" to refer to the x in the list comprehension, you shouldn't mind putting 
in "from __future__ import lexical_comprehensions."  If you don't intend for 
it to be impossible, then the program *should* be taken out and shot.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Reload Tricks

2005-01-22 Thread Michael Spencer
Kamilche wrote:
I want my program to be able to reload its code dynamically. I have a
large hierarchy of objects in memory. The inheritance hierarchy of
these objects is scattered over several files.
Michael Spencer wrote:
An alternative approach (with some pros and cons) is to modify the class in 
place, using something like:
 >>> def reclass(cls, to_cls):
 ... """Updates attributes of cls to match those of to_cls"""
 ...
 ... DONOTCOPY = ("__name__","__bases__","__base__",
 ... "__dict__", "__doc__","__weakref__") 
etc...
Kamilche wrote:
Would it be possible to just not copy any attribute that starts and
ends with '__'? Or are there some important attributes being copied?

Possible?  of course, it's Python ;-)
But there are many 'magic' attributes for behavior that you probably do want to 
copy:

e.g., __getitem__, __setitem__ etc...
See: http://docs.python.org/ref/specialnames.html
Michael Hudson's recipe: 	 
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/160164
does auto-reloading "automatically", at the price of changing the type of the 
classes you want to manage.  It's a very convenient approach for interactive 
development (which is the recipe's stated purpose).  It works by tracking 
instances and automatically updating their class.  If your program relies on 
class identity, you may run into problems.
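
For reference, a bare-bones in-place update along those lines might read
something like this (a sketch only, not the recipe above nor the code
elided from the earlier post):

    DONOTCOPY = ("__name__", "__bases__", "__base__",
                 "__dict__", "__doc__", "__weakref__")

    def reclass(cls, to_cls):
        """Update attributes of cls in place to match those of to_cls."""
        for name, value in to_cls.__dict__.items():
            if name not in DONOTCOPY:
                setattr(cls, name, value)
        # optionally prune attributes that no longer exist on to_cls
        for name in list(cls.__dict__.keys()):
            if name not in DONOTCOPY and name not in to_cls.__dict__:
                delattr(cls, name)

Existing instances keep their identity and pick up the new behaviour,
which is the point of updating the class rather than rebinding the name.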

Michael
--
http://mail.python.org/mailman/listinfo/python-list


Re: Zen of Python

2005-01-22 Thread Paul Rubin
"Andrew Koenig" <[EMAIL PROTECTED]> writes:
> Actually, I don't think so.  If you intend for it to be impossible for "z = 
> x" to refer to the x in the list comprehension, you shouldn't mind putting 
> in "from __future__ import lexical_comprehensions."  If you don't intend for 
> it to be impossible, then the program *should* be taken out and shot.

I shouldn't have to say "from __future__ import "
unless I actually intend to use the feature.   The program I posted
neither uses lexical comprehensions nor depends on their absence.  So
it shouldn't have to import anything from __future__.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: finding name of instances created

2005-01-22 Thread Scott David Daniels
André Roberge wrote:
Craig Ringer wrote:
On Fri, 2005-01-21 at 16:13 -0800, André wrote:
Short version of what I am looking for:
Given a class "public_class" which is instantiated a few times e.g.
a = public_class()
b = public_class()
c = public_class()
I would like to find out the name of the instances so that I could
create a list of them e.g.
['a', 'b', 'c']
...
Behind the scene, I have something like:
robot_dict = { 'robot': CreateRobot( ..., name = 'robot') }
and have mapped move() to correspond to robot_dict['robot'].move()
(which does lots of stuff behind the scene.)
...[good explanation]...
> Does this clarify what I am trying to do and why?
Yup.  Would something like this help?
parts = globals().copy()
parts.update(locals())
names = [name for name, value in parts.iteritems()
 if isinstance(value, Robot)] # actual class name here
Note, however, that
a = b = CreateRobot()
will give two different names to the same robot.
And even:
Karl = CreateRobot()
Freidrich = CreateRobot()
for robot in (Karl, Freidrich):
robot.move()
Will have two names for "Freidrich" -- Freidrich and robot
--Scott David Daniels
[EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list


Re: [OT] Good C++ book for a Python programmer

2005-01-22 Thread Ville Vainio
> "Rick" == rick [EMAIL PROTECTED] com <[EMAIL PROTECTED]> writes:

Rick> I was wondering whether anyone could recommend a good C++
Rick> book, with "good" being defined from the perspective of a
Rick> Python programmer. I

A good C++ book from the perspective of a Python programmer would be
one proclaiming that C++ is deprecated as a language, and it has
become illegal to develop software with it.

Rick> realize that there isn't a book titled "C++ for Python
Rick> Programmers", but has anyone found one that they think goes
Rick> particularly well with the Python way?

I don't think that's possible, considering the nature of the
language. Templates are closest to the Python way as far as C++
technologies go, but they are very unpythonic in their complexity.


Rick> I'm asking this because evidently the C++ standard has
Rick> changed a bit since 1994, when I bought my books. Who knew
Rick> that fstream was deprecated?

The Stroustrup book, already mentioned by others, is the one if you just
need to "refresh" your knowledge. "Effective C++" and "More effective
C++" are also great to learn about all the nasty gotchas that your
Python experience might make you neglect. They are also certain to
deepen your appreciation of Python ;-).

-- 
Ville Vainio   http://tinyurl.com/2prnb
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: list unpack trick?

2005-01-22 Thread aurora
Thanks. I'm just trying to see if there is some concise syntax available  
without getting into obscurity. As for my purpose Siegmund's suggestion  
works quite well.

The few forms you have suggested work. But as they refer to the list multiple
times, they need a separate assignment statement like

  list = s.split('=',1)

I am thinking more along the lines of string.ljust(). So if we had a
list.ljust(length, filler), we could do something like

  name, value = s.split('=',1).ljust(2,'')

I can always break it down into multiple lines. The good thing about list
unpacking is it's a really compact and obvious syntax.
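
Lacking such a method, a throwaway helper gets close to that spelling
(the names here are invented for illustration):

    def lpad(items, n, filler=''):
        """Return items as a list, padded with filler up to length n."""
        items = list(items)
        return items + [filler] * (n - len(items))

    s = 'just_a_name'                          # no '=' in sight
    name, value = lpad(s.split('=', 1), 2)     # value becomes ''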


On Sat, 22 Jan 2005 08:34:27 +0100, Fredrik Lundh <[EMAIL PROTECTED]>  
wrote:
...
So more generally, is there an easy way to pad a list into length of n   
with filler items appended
at the end?
some variants (with varying semantics):
list = (list + n*[item])[:n]
or
list += (n - len(list)) * [item]
or (readable):
if len(list) < n:
list.extend((n - len(list)) * [item])
etc.

--
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Fredrik Lundh
Paul Rubin wrote:

> 2. "Would anyone except me have any use for this?" shows a lack of
> understanding of how Python is used.  Some users (call them
> "application users" or AU's) use Python to run Python applications for
> whatever purpose.  Some other users (call them "developers") use
> Python to develop applications that are intended to be run by AU's.

"lack of understanding of how Python is used"

wonderful.  I'm going to make a poster of your post, and put it on my
office wall.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class introspection and dynamically determining function arguments

2005-01-22 Thread Mike C. Fletcher
Bengt Richter wrote:
On Fri, 21 Jan 2005 20:23:58 -0500, "Mike C. Fletcher" <[EMAIL PROTECTED]> 
wrote:
 

On Thu, 20 Jan 2005 11:24:12 -, "Mark English" <[EMAIL PROTECTED]> wrote:
 

...
Does the BasicProperty base class effectively register itself as an observer
of subclass properties and automatically update widgets etc., a la Delphi
data-driven visual components? I've thought of doing a light-weight form
extension class that would use a text (maybe CSV) definition to control
construction, and easy programmatic manipulation by python of the definition
parameters, like a stripped-down version of the text view of Delphi forms.
It could also be done via Tkinter, to prototype it. It would be interesting
to allow dragging widgets and edges around in Tkinter and round-trip the parameter
changes automatically into the text representation. A little (well, ok, a fair amount ;-)
further and you'd have a drag-n-drop GUI design tool. But don't hold your breath ;-)
 

BasicProperty itself doesn't register as an observable/observer, 
BasicProperty is the lowest-level of the software stack, so it allows 
you to override and provide notification (e.g. using PyDispatcher) on 
property-setting.  ConflictSolver (old project to create a room 
scheduler) used that to do automatic updating of widgets in the wxPython 
UI based on Model changes (though I don't remember if it was 
per-property or per-object).  My goal for the wxoo project was to 
provide hooks in the wxPython GUI designers for dropping in property 
sheets and/or property-aware controls such that you would have the 
equivalent of "data aware" controls in VB or Access (keeping in mind 
that BasicProperty properties can also represent fields in database rows).

Aside:
   The VRML97 field class in OpenGLContext does notifications for every
   set (using PyDispatcher), btw.  It's a little more limited in its
   scope (focus on 3D data-types), but the effect is what allows the
   scenegraph to cache and then rebuild its internal rendering
   structures with very low overhead.
...
Anyway, if you aren't interested in BasicProperty for this task; another 
project on which I work, PyDispatcher provides fairly robust mechanism 
(called robustApply) for providing a set of possible arguments and using 
inspect to pick out which names match the parameters for a function in 
order to pass them in to the function/method/callable object.  That 
said, doing this for __init__'s with attribute values from an object's 
dictionary doesn't really seem like the proper way to approach the problem.
   

Sounds like a workaround for parameter passing that maybe should have been
keyword-based?
 

Not as such, that is, not a workaround, and it shouldn't be keyword 
based ;) .

The problem with using keyword-based passing is that every method needs 
to be written with this awareness of the keyword-handling structures.  
You spread pointless implementation details throughout your codebase.  
PyDispatcher lets you write very natural functions for dealing with 
events without having every function use **named parameters.

I've now written quite a few such systems, and I'm currently balanced 
between two approaches; that taken in PyDispatcher (define only natural 
parameters, have the system figure out how to deliver them to you), and 
that taken in OpenGLContext (define an event-class hierarchy which 
encapsulates all information about the events).

The PyDispatcher approach is nice in that it makes simple things very 
simple.  You want access to the "lastValue" parameter in the 
"environment" of the event and nothing else, you define your function 
like so:

   def handler( lastValue ):
       print 'got last value', lastValue
which works very nicely when you're early in the development of a 
system, or are linking multiple systems.  There's no need to do all 
sorts of extra work defining event hierarchies, you can often leave 
given handlers entirely alone during refactoring if they aren't dealing 
with the changed properties in the event-environment.
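
The argument-matching machinery itself is only a few lines of inspect
work -- roughly along these lines (a sketch of the idea, not PyDispatcher's
actual robustApply):

    import inspect

    def robust_apply(receiver, **environment):
        """Call receiver with only the keyword arguments it actually names."""
        args, varargs, varkw, defaults = inspect.getargspec(receiver)
        if varkw is not None:
            return receiver(**environment)   # receiver takes **named anyway
        accepted = dict((name, environment[name])
                        for name in args if name in environment)
        return receiver(**accepted)

    def handler(lastValue):
        print 'got last value', lastValue

    robust_apply(handler, lastValue=42, sender='ignored', signal='also ignored')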

The OpenGLContext approach is more appropriate when you have a large 
system (such as OpenGLContext), where defining an event class is a 
trivial task compared to the total system expenditure.  It allows for 
such things as putting methods on the event objects to make debugging 
easy, and providing common functionality.  It starts to show its worth 
when you start needing to reason about the phenomena of events 
themselves, rather than just about the phenomena the events represent 
(e.g. when you need to cache, delay, or reorder events).

The named argument passing approach has the disadvantage that every 
function must be written with knowledge of that use of named arguments:

   def handler( lastValue, **named ):
       print 'got last value', lastValue

When using such systems in the past I've often wound up with errors deep
in an application where some seldom-called callback didn't have a
**named parameter, so the system would abort.

Re: embedding jython in CPython...

2005-01-22 Thread Steve Menard
Jim Hargrave wrote:
I've read that it is possible to compile jython to native code using 
GCJ. PyLucene uses this approach, they then use SWIG to create a Python 
wrapper around the natively compiled (java) Lucene. Has this been done 
before for with jython?

Another approach would be to use JPype to call the jython jar directly.
My goal is to be able to script Java code using Jython - but with the 
twist of using Cpython as a glue layer. This would allow mixing of Java 
and non-Java resources - but stil do it all in Python (Jython and Cpython).

I'd appreciate any pointers to this topic and pros/cons of the various 
methods.


Well now that IS getting kinda complicated ...
As far as natively compiling Jython scripts goes ... well, if you natively
compile them, it'll be hard to "script" your Java code afterward (I assume
by scripting you mean loading scripts at runtime that were not known at
compile time).

As for using JPype ... well, it depends on what you want to script. If
your Java code is the main app, I'd eschew CPython completely and use
Jython to script. If your main app is in Python, and the Java code is
"simply" libraries you wish to use, then I'd go with CPython + JPype. It
is very easy to manipulate Java objects that way, even to receive callbacks.
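
For that second case, the JPype side can stay quite small -- something
like the following, written from memory of the JPype API and with a
made-up jar name, so treat it as a sketch rather than working code:

    from jpype import startJVM, shutdownJVM, getDefaultJVMPath, JPackage

    # point the classpath at your own jar(s)
    startJVM(getDefaultJVMPath(), "-Djava.class.path=mylib.jar")

    ArrayList = JPackage("java").util.ArrayList
    names = ArrayList()
    names.add("called from CPython")
    print names.size()            # a live Java object, driven from CPython

    shutdownJVM()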

I guess it all comes down to what you mean by scripting, and exactly what
the structure of your application is (as far as what is Java and non-Java).
If you care to explain your situation a bit more, we'll be better able
to help you.

Steve Menard
Maintainer of http://jpype.sourceforge.net
--
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Paul Rubin
"Fredrik Lundh" <[EMAIL PROTECTED]> writes:
> "lack of understanding of how Python is used"
> 
> wonderful.  I'm going to make a poster of your post, and put it on my
> office wall.

Excellent.  I hope you will re-read it several times a day.  Doing
that might improve your attitude.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Fredrik Lundh
Paul Rubin wrote:

> Excellent.  I hope you will re-read it several times a day.  Doing
> that might improve your attitude.

you really don't have a fucking clue about anything, do you?

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Doug Holton
Fredrik Lundh wrote:
A.M. Kuchling wrote:

IMHO that's a bit extreme.  Specifications are written to be detailed, so
consequently they're torture to read.  Seen the ReStructured Text spec
lately?

I've read many specs; YAML (both the spec and the format) is easily
among the worst ten-or-so specs I've ever seen.
What do you expect?  YAML is designed for humans to use, XML is not. 
YAML also hasn't had the backing and huge community behind it like XML.
XML sucks for people to have to write in, but is straightforward to 
parse.  The consequence is hordes of invalid XML files, leading to 
necessary hacks like Mark Pilgrim's universal RSS parser.  YAML 
flips the problem around, making it harder perhaps to implement a 
universal parser, but better for the end-user who has to actually use 
it.  More people need to work on improving the YAML spec and 
implementing better YAML parsers.  We've got too many XML parsers as it is.
--
http://mail.python.org/mailman/listinfo/python-list


Re: default value in a list

2005-01-22 Thread Nick Craig-Wood
Alex Martelli <[EMAIL PROTECTED]> wrote:
>  Nick Craig-Wood <[EMAIL PROTECTED]> wrote:
> ...
> > Or this version if you want something other than "" as the default
> > 
> >   a, b, b = (line.split(':') + 3*[None])[:3]
> 
>  Either you mean a, b, c -- or you're being subtler than I'm
>  grasping.

Just a typo - I meant c!

> > BTW This is a feature I miss from perl...
> 
>  Hmmm, I understand missing the ``and all the rest goes here'' feature
>  (I'd really love it if the rejected
>  a, b, *c = whatever
>  suggestion had gone through, ah well), but I'm not sure what exactly
>  you'd like to borrow instead -- blissfully by now I've forgotten a lot
>  of the perl I used to know... care to clarify?

I presume your construct above is equivalent to

  my ($a, $b, @c) = split /.../;

which I do indeed miss.

Sometimes I miss the fact that in the below any unused items are set
to undef, rather than an exception being raised

  my ($a, $b, $c) = @array;

However, I do appreciate the fact (for code reliability) that the
python equivalent

  a, b, c = array

will blow up if there aren't exactly 3 elements in array.

So since I obviously can't have my cake an eat it here, I'd leave
python how it is for the second case, and put one of the suggestions
in this thread into my toolbox / the standard library.

BTW I've converted a lot of perl programs to python so I've come
across a lot of little things like this!

-- 
Nick Craig-Wood <[EMAIL PROTECTED]> -- http://www.craig-wood.com/nick
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread rm
Doug Holton wrote:
What do you expect?  YAML is designed for humans to use, XML is not. 
YAML also hasn't had the backing and huge community behind it like XML.
XML sucks for people to have to write in, but is straightforward to 
parse.  The consequence is hordes of invalid XML files, leading to 
necessary hacks like Mark Pilgrim's universal RSS parser.  YAML 
flips the problem around, making it harder perhaps to implement a 
universal parser, but better for the end-user who has to actually use 
it.  More people need to work on improving the YAML spec and 
implementing better YAML parsers.  We've got too many XML parsers as it is.
100% right on, stuff (like this)? should be easy on the users, and if 
possible, on the developers, not the other way around. But developers 
come second. Now, I didn't check the specs, they might be difficult, 
they might be incorrect, maybe their stated goal is not reached with 
this implementation of their idea. But I'd love to see a generic, 
pythonic data format.

bye,
rm
--
http://mail.python.org/mailman/listinfo/python-list


Re: default value in a list

2005-01-22 Thread Michael Spencer
Alex Martelli wrote:
[explanation and the following code:]
 >>> a, b, c = it.islice(
 ...   it.chain(
 ...   line.split(':'), 
 ...   it.repeat(some_default),
 ...   ), 
 ...   3)
 ... 
 ...   
 >>> def pad_with_default(N, iterable, default=None):
 ... it = iter(iterable)
 ... for x in it:
 ... if N<=0: break
 ... yield x
 ... N -= 1
 ... while N>0:
 ... yield default
 ... N -= 1
Why not put these together and put it in itertools, since the requirement seems 
to crop up every other week?

 >>> line = "A:B:C".split(":")
 ...
 >>> def ipad(N,iterable, default = None):
 ... return it.islice(it.chain(iterable, it.repeat(default)), N)
 ...
 >>> a,b,c,d = ipad(4,line)
 >>> a,b,c,d
('A', 'B', 'C', None)
Michael
--
http://mail.python.org/mailman/listinfo/python-list


Re: default value in a list

2005-01-22 Thread Reinhold Birkenfeld
Michael Spencer wrote:
> Alex Martelli wrote:
> [explanation and the following code:]
> 
>>  >>> a, b, c = it.islice(
>>  ...   it.chain(
>>  ...   line.split(':'), 
>>  ...   it.repeat(some_default),
>>  ...   ), 
>>  ...   3)
>>  ... 
>>  ...   
>>  >>> def pad_with_default(N, iterable, default=None):
>>  ... it = iter(iterable)
>>  ... for x in it:
>>  ... if N<=0: break
>>  ... yield x
>>  ... N -= 1
>>  ... while N>0:
>>  ... yield default
>>  ... N -= 1
> 
> Why not put these together and put it in itertools, since the requirement 
> seems 
> to crop up every other week?
> 
>   >>> line = "A:B:C".split(":")
>   ...
>   >>> def ipad(N,iterable, default = None):
>   ... return it.islice(it.chain(iterable, it.repeat(default)), N)
>   ...
>   >>> a,b,c,d = ipad(4,line)
>   >>> a,b,c,d
> ('A', 'B', 'C', None)

Good idea!

(+1 if this was posted on python-dev!)

Reinhold
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Fredrik Lundh
"rm" <[EMAIL PROTECTED]> wrote:

> 100% right on, stuff (like this)? should be easy on the users, and if 
> possible, on the developers, 
> not the other way around.

I guess you both stopped reading before you got to the second paragraph
in my post.  YAML (at least the version described in that spec) isn't easy on
users; it may look that way at a first glance, and as long as you stick to a
small subset, but it really isn't.  that's not just bad design, that's plain 
evil.

and trust me, when things are hard to get right for developers, users will
suffer too.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread rm
Fredrik Lundh wrote:
"rm" <[EMAIL PROTECTED]> wrote:

100% right on, stuff (like this)? should be easy on the users, and if possible, on the developers, 
not the other way around.

I guess you both stopped reading before you got to the second paragraph
in my post.  YAML (at least the version described in that spec) isn't easy on
users; it may look that way at a first glance, and as long as you stick to a
small subset, but it really isn't.  that's not just bad design, that's plain 
evil.
and trust me, when things are hard to get right for developers, users will
suffer too.
 


you stopped reading too early as well, I guess:
"maybe their stated goal is not reached with this implementation of 
their idea"

and the implementation being the spec,
furthermore, "users will suffer too", I'm suffering if I have to use 
C++, with all its exceptions and special cases.

BTW, I pickpocketed the idea that if there is a choice where to put the 
complexity, you never put it with the user. "pickpocket" is strong, I've 
learned it from an analyst who was 30 years in the business, and I 
really respect the guy, basically he was always right and right on. On 
the other hand, the management did not always like what he thought :-)

bye,
rm
--
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Doug Holton
Fredrik Lundh wrote:
and trust me, when things are hard to get right for developers, users will
suffer too.
That is exactly why YAML can be improved.  But XML proves that getting 
it "right" for developers has little to do with getting it right for 
users (or for saving bandwidth).  What's right for developers is what 
requires the least amount of work.  The problem is, that's what is right 
for end-users, too.
--
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Doug Holton
rm wrote:
this implementation of their idea. But I'd love to see a generic, 
pythonic data format.
That's a good idea.  But really Python is already close to that.  A lot 
of times it is easier to just write out a python dictionary than using a 
DB or XML or whatever.  Python is already close to YAML in some ways. 
Maybe even better than YAML, especially if Fredrik's claims of YAML's 
inherent unreliability are to be believed.  Of course he develops a 
competing XML product, so who knows.
--
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Paul Rubin
"Fredrik Lundh" <[EMAIL PROTECTED]> writes:
> > Excellent.  I hope you will re-read it several times a day.  Doing
> > that might improve your attitude.
> 
> you really don't have a fucking clue about anything, do you?

You're not making any bloody sense.  I explained to you why I wasn't
interested in writing that particular piece of code unless it was
going in the core.  That was in response to your suggestion that I
write the code without regard to whether it was going in the core or
not.

If you didn't understand the explanation, I suggest you read it again,
perhaps by putting it on your wall like you said.  If you have any
questions after that, feel free to post them.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Fredrik Lundh
"rm" <[EMAIL PROTECTED]> wrote:

> furthermore, "users will suffer too", I'm suffering if I have to use C++, 
> with all its exceptions 
> and special cases.

and when you suffer, your users will suffer.  in the C++ case, they're likely to
suffer from spurious program crashes, massively delayed development projects,
obscure security holes, etc.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Daniel Bickett
Doug Holton wrote:
> What do you expect?  YAML is designed for humans to use, XML is not.
> YAML also hasn't had the backing and huge community behind it like XML.
> XML sucks for people to have to write in, but is straightforward to
> parse.  The consequence is hordes of invalid XML files, leading to
> necessary hacks like Mark Pilgrim's universal RSS parser.  YAML
> flips the problem around, making it harder perhaps to implement a
> universal parser, but better for the end-user who has to actually use
> it.  More people need to work on improving the YAML spec and
> implementing better YAML parsers.  We've got too many XML parsers as it is.

However, one of the main reasons that XML is so successful is because
its roots are shared by (or, perhaps, in) a markup language that a
vast majority of the Internet community knows: HTML.

In its most basic form, I don't care what anyone says, XML is VERY
straightforward. Throughout the entire concept of XML (again, in its
most basic form) the idea of opening and closing tags (with the
exception of the standalone tags, however still very simple) is
constant, for all different data types.

In my (brief) experience with YAML, it seemed like there were several
different ways of doing things, and I saw this as one of its failures
(since we're all comparing it to XML). However I maintain, in spite of
all of that, that it can easily boil down to the fact that, for
someone who knows the most minuscule amount of HTML (a very easy thing
to do, not to mention most people have a tiny bit of experience to
boot), the transition to XML is painless. YAML, however, is a brand
new format with brand new semantics.

As for the human read-and-write-ability, I don't know about you, but I
have no trouble whatsoever reading and writing XML. But alas, I don't
need to. Long live elementtree (once again) :-)

Daniel Bickett
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: IDLE Problem in Windows XP

2005-01-22 Thread Josiah Carlson

Branden Smith <[EMAIL PROTECTED]> wrote:
> 
> Hi,
> 
> I am a teaching assistant for an introductory course at Georgia Tech
> which uses Python, and I have a student who has been unable to start
> IDLE on her Windows XP Home Edition machine. Clicking on the shortcut
> (or the program executable) causes the hourglass to appear momentarily
> (and the process to momentarily appear in the process monitor), but
> nothing happens thereafter.
...

> Does anyone have any ideas as to what might cause this problem? It shows
> up with both Python 2.4 and 2.3. Version 2.2 works as it should.

It is probably the socket issue.  To get past the socket issue,
according to the idle docs:

Running without a subprocess:

If IDLE is started with the -n command line switch it will run in a
single process and will not create the subprocess which runs the RPC
Python execution server.  This can be useful if Python cannot create
the subprocess or the RPC socket interface on your platform.  However,
in this mode user code is not isolated from IDLE itself.  Also, the
environment is not restarted when Run/Run Module (F5) is selected.  If
your code has been modified, you must reload() the affected modules and
re-import any specific items (e.g. from foo import baz) if the changes
are to take effect.  For these reasons, it is preferable to run IDLE
with the default subprocess if at all possible.



That is, have the student modify the shortcut to pass a '-n' argument 
(without the quotes) to the command.  If it works, great, if it doesn't,
a traceback would be helpful.
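
For example, on a default 2.4 install the shortcut's Target field would go
from something roughly like

    C:\Python24\pythonw.exe "C:\Python24\Lib\idlelib\idle.pyw"

to

    C:\Python24\pythonw.exe "C:\Python24\Lib\idlelib\idle.pyw" -n

(the paths are only illustrative -- keep whatever the existing shortcut
points at and just append the -n).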

 - Josiah

--
http://mail.python.org/mailman/listinfo/python-list


Re: getting file size

2005-01-22 Thread Marc 'BlackJack' Rintsch
In <[EMAIL PROTECTED]>, Bob Smith wrote:

> Are these the same:
> 
> 1. f_size = os.path.getsize(file_name)
> 
> 2. fp1 = file(file_name, 'r')
> data = fp1.readlines()
> last_byte = fp1.tell()
> 
> I always get the same value when doing 1. or 2. Is there a reason I 
> should do both? When reading to the end of a file, won't tell() be just 
> as accurate as os.path.getsize()?

You don't always get the same value, even on systems where `tell()`
returns a byte position.  You need the rights to read the file in case 2.

>>> import os
>>> os.path.getsize('/etc/shadow')
612L
>>> f = open('/etc/shadow', 'r')
Traceback (most recent call last):
  File "", line 1, in ?
IOError: [Errno 13] Permission denied: '/etc/shadow'

Ciao,
Marc 'BlackJack' Rintsch
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Paul Rubin
Daniel Bickett <[EMAIL PROTECTED]> writes:
> In my (brief) experience with YAML, it seemed like there were several
> different ways of doing things, and I saw this as one of it's failures
> (since we're all comparing it to XML).

YAML looks to me to be completely insane, even compared to Python
lists.  I think it would be great if the Python library exposed an
interface for parsing constant list and dict expressions, e.g.:

   [1, 2, 'Joe Smith', 8237972883334L,   # comment
  {'Favorite fruits': ['apple', 'banana', 'pear']},  # another comment
  'xyzzy', [3, 5, [3.14159, 2.71828, [

I don't see what YAML accomplishes that something like the above wouldn't.

Note that all the values in the above have to be constant literals.
Don't suggest using eval.  That would be a huge security hole.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Stephen Waterbury
Steve Holden wrote:
It seems to me the misunderstanding here is that XML was ever intended 
to be generated directly by typing in a text editor. It was rather 
intended (unless I'm mistaken) as a process-to-process data interchange 
metalanguage that would be *human_readable*.
The premise that XML had a coherent design intent
stretches my credulity beyond its elastic limit.
--
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread rm
Doug Holton wrote:
rm wrote:
this implementation of their idea. But I'd love to see a generic, 
pythonic data format.

That's a good idea.  But really Python is already close to that.  A lot 
of times it is easier to just write out a python dictionary than using a 
DB or XML or whatever.  Python is already close to YAML in some ways. 
Maybe even better than YAML, especially if Fredrik's claims of YAML's 
inherent unreliability are to be believed.  Of course he develops a 
competing XML product, so who knows.
true, it's easy enough to separate the data from the functionality in 
python by putting the data in a dictionary/list/tuple, but it stays 
source code.

rm
--
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Fredrik Lundh
Stephen Waterbury wrote:

> The premise that XML had a coherent design intent
> stetches my credulity beyond its elastic limit.

the design goals are listed in section 1.1 of the specification.

see tim bray's annotated spec for additional comments by one
of the team members:

http://www.xml.com/axml/testaxml.htm

(make sure to click on all (H)'s and (U)'s in that section for the
full story).

if you think that the XML 1.0 team didn't know what they were
doing, you're seriously mistaken.  it's the post-1.0 standards that
are problematic...

 



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Alex Martelli
Paul Rubin  wrote:
   ...
> lists.  I think it would be great if the Python library exposed an
> interface for parsing constant list and dict expressions, e.g.:
> 
>[1, 2, 'Joe Smith', 8237972883334L,   # comment
>   {'Favorite fruits': ['apple', 'banana', 'pear']},  # another comment
>   'xyzzy', [3, 5, [3.14159, 2.71828, []]]]
> 
> I don't see what YAML accomplishes that something like the above wouldn't.
> 
> Note that all the values in the above have to be constant literals.
> Don't suggest using eval.  That would be a huge security hole.

I do like the idea of a parser that's restricted to "safe expressions"
in this way.  Once the AST branch merge is done, it seems to me that
implementing it should be a reasonably simple exercise, at least at a
"toy level".

I wonder, however, if, as an even "toyer" exercise, one might not
already do it easily -- by first checking each token (as generated by
tokenize.generate_tokens) to ensure it's safe, and THEN eval _iff_ no
unsafe tokens were found in the check.  Accepting just square brackets,
braces, commas, constant strings and numbers, and comments, should be
pretty safe -- we'd no doubt want to also accept minus (for unary
minus), plus (to make complex numbers), and specifically None, True,
False -- but that, it appears to me, still leaves little margin for an
attacker to prepare an evil string that does bad things when eval'd...
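
Just to make that concrete, here's the sort of thing I mean -- an untested
toy, with the whitelists and the function name invented on the spot:

import token, tokenize
from StringIO import StringIO

SAFE_OPS = set('[]{},:+-')
SAFE_NAMES = set(['None', 'True', 'False'])

def token_checked_eval(text):
    # walk the token stream first; eval only if nothing suspicious shows up
    readline = StringIO(text + '\n').readline   # tokenize likes a trailing newline
    for tok_type, tok_str, _, _, _ in tokenize.generate_tokens(readline):
        if tok_type in (token.NUMBER, token.STRING, token.NEWLINE,
                        token.INDENT, token.DEDENT, token.ENDMARKER,
                        tokenize.COMMENT, tokenize.NL):
            continue
        if tok_type == token.OP and tok_str in SAFE_OPS:
            continue
        if tok_type == token.NAME and tok_str in SAFE_NAMES:
            continue
        raise ValueError('unsafe token %r' % tok_str)
    return eval(text, {'__builtins__': {},
                       'None': None, 'True': True, 'False': False})

Paul's example list goes through fine; anything containing other names or
attribute access (a dot is not in the OP whitelist) gets rejected before
eval ever sees it.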


Alex


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Michael Spencer
Paul Rubin wrote:
YAML looks to me to be completely insane, even compared to Python
lists.  I think it would be great if the Python library exposed an
interface for parsing constant list and dict expressions, e.g.:
   [1, 2, 'Joe Smith', 8237972883334L,   # comment
  {'Favorite fruits': ['apple', 'banana', 'pear']},  # another comment
  'xyzzy', [3, 5, [3.14159, 2.71828, []]]]
I don't see what YAML accomplishes that something like the above wouldn't.
Note that all the values in the above have to be constant literals.
Don't suggest using eval.  That would be a huge security hole.
Not hard at all, thanks to compiler.ast:
>>> import compiler
 >>> class AbstractVisitor(object):
 ...     def __init__(self):
 ...         self._cache = {}   # dispatch table
 ...
 ...     def visit(self, node, **kw):
 ...         cls = node.__class__
 ...         meth = self._cache.setdefault(cls,
 ...             getattr(self, 'visit' + cls.__name__, self.default))
 ...         return meth(node, **kw)
 ...
 ...     def default(self, node, **kw):
 ...         for child in node.getChildNodes():
 ...             return self.visit(child, **kw)
 ...
 >>> class ConstEval(AbstractVisitor):
 ...     def visitConst(self, node, **kw):
 ...         return node.value
 ...
 ...     def visitName(self, node, **kw):
 ...         raise NameError, "Names are not resolved"
 ...
 ...     def visitDict(self, node, **kw):
 ...         return dict([(self.visit(k), self.visit(v)) for k, v in node.items])
 ...
 ...     def visitTuple(self, node, **kw):
 ...         return tuple(self.visit(i) for i in node.nodes)
 ...
 ...     def visitList(self, node, **kw):
 ...         return [self.visit(i) for i in node.nodes]
 ...
 >>> source = """[1, 2, 'Joe Smith', 8237972883334L,   # comment
 ...    {'Favorite fruits': ['apple', 'banana', 'pear']},  # another comment
 ...    'xyzzy', [3, 5, [3.14159, 2.71828, []]]]"""
 >>> ast = compiler.parse(source, "eval")
 >>> walker = ConstEval()
 >>> walker.visit(ast)
[1, 2, 'Joe Smith', 8237972883334L, {'Favorite fruits': ['apple', 'banana',
'pear']}, 'xyzzy', [3, 5, [3.1415899999999999, 2.71828, []]]]

Add sugar to taste
Regards
Michael
--
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Fredrik Lundh
Alex Martelli wrote:

>>[1, 2, 'Joe Smith', 8237972883334L,   # comment
>>   {'Favorite fruits': ['apple', 'banana', 'pear']},  # another comment
>>   'xyzzy', [3, 5, [3.14159, 2.71828, []]]]
>>
>> I don't see what YAML accomplishes that something like the above wouldn't.
>>
>> Note that all the values in the above have to be constant literals.
>> Don't suggest using eval.  That would be a huge security hole.
>
> I do like the idea of a parser that's restricted to "safe expressions"
> in this way.  Once the AST branch merge is done, it seems to me that
> implementing it should be a reasonably simple exercise, at least at a
> "toy level".

for slightly more interop, you could plug in a modified tokenizer, and use
JSON:

http://www.crockford.com/JSON/xml.html

> I wonder, however, if, as an even "toyer" exercise, one might not
> already do it easily -- by first checking each token (as generated by
> tokenize.generate_tokens) to ensure it's safe, and THEN eval _iff_ no
> unsafe tokens were found in the check.  Accepting just square brackets,
> braces, commas, constant strings and numbers, and comments, should be
> pretty safe -- we'd no doubt want to also accept minus (for unary
> minus), plus (to make complex numbers), and specifically None, True,
> False

or you could use a RE to make sure the string only contains safe literals,
and pass the result to eval.
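
something along these lines, say (a toy only -- the pattern and the helper
name are made up here, and it errs on the side of rejecting input rather
than accepting too much):

import re

# accept whitespace, comments, simple strings, numbers, None/True/False
# and literal punctuation; everything else (in particular any other name)
# is rejected before eval gets a look at it.
LITERALS = re.compile(r"""
      \s+                                    # whitespace
    | \#[^\n]*                               # comment
    | '(?:[^'\\\n]|\\.)*'                    # 'string'
    | "(?:[^"\\\n]|\\.)*"                    # "string"
    | \d+(?:\.\d*)?(?:[eE][-+]?\d+)?[lLjJ]?  # number
    | None\b | True\b | False\b
    | [\[\]{},:+-]                           # brackets, commas, colons, signs
""", re.VERBOSE)

def re_checked_eval(text):
    pos = 0
    while pos < len(text):
        m = LITERALS.match(text, pos)
        if m is None:
            raise ValueError("unsafe content at offset %d" % pos)
        pos = m.end()
    return eval(text, {'__builtins__': {},
                       'None': None, 'True': True, 'False': False})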

> but that, it appears to me, still leaves little margin for an attacker to 
> prepare
> an evil string that does bad things when eval'd...

besides running out of parsing time or object memory, of course.  unless
you check the size before/during the parse.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


debugging process

2005-01-22 Thread jimbo
Hi,
I am trying to create a separate process that will launch python and 
then can be used to step through a script programmatically.

I have tried something like:
(input, output) = os.popen2(cmd="python")
Then I expected I could select over the two handles input and output, 
make sure they aren't going to block, and then be able to write python 
code to the interpreter and read it back. I intend to import a module, 
run it in the debugger with pdb.run() and the start passing debug 
commands in and read the output.

I hope that makes sense, what I am finding is that whenever I try to 
read from the output handle it blocks. My understanding was that if it 
is returned by select that it is ready for reading and won't block.

I think that this must have something to do with python expecting 
itself to be in a TTY? Can anyone give any idea of where I should be 
going with this?

Thanks,
jms.

[EMAIL PROTECTED]   http://www.cordiner.com
--
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread John J. Lee
Paul Rubin <"http://phr.cx"@NOSPAM.invalid> writes:
[...]
> Building larger ones seems to
> have complexity exponential in the number of bits, which is not too
[...]

Why?


> It's not even known in theory whether quantum computing is
> possible on a significant scale.

Discuss. 

(I don't mean I'm requesting a discussion -- it just reads like a
physics / philosophy exam essay question, which traditionally end with
that word :)


John

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Paul Rubin
[EMAIL PROTECTED] (Alex Martelli) writes:
> I wonder, however, if, as an even "toyer" exercise, one might not
> already do it easily -- by first checking each token (as generated by
> tokenize.generate_tokens) to ensure it's safe, and THEN eval _iff_ no
> unsafe tokens were found in the check.

I don't trust that for one minute.  It's like checking a gun to make
sure that it has no bullets, then putting it to your head and pulling
the trigger.  Or worse, it's like checking the gun once, then putting
it to your head and pulling the trigger every day for the next N years
without checking again to see if someone has inserted some bullets
(this is what you basically do if you write your program to check if
the tokens are safe, and then let users keep running it without
re-auditing it, as newer versions of Python get released).

See the history of the pickle module to see how that kind of change
has already screwed people (some comments in SF bug #467384).  "Don't
use eval" doesn't mean mean "check if it's safe before using it".  It
means "don't use it".
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Paul Rubin
[EMAIL PROTECTED] (John J. Lee) writes:
> > Building larger ones seems to
> > have complexity exponential in the number of bits, which is not too
> 
> Why?

The way I understand it, that 7-qubit computer was based on embedding
the qubits on atoms in a large molecule, then running the computation
procedure on a bulk solution containing zillions of the molecules,
then shooting RF pulses through the solution and using an NMR
spectrometer to find a peak at the most likely quantum state (i.e. the
state which had the most of the molecules in that state).  To do it
with 8 qubits instead of 7, you'd have to use twice as much solution,
so that particular technique doesn't scale.  What we want is a way to
do calculations on single molecules, not bulk solutions.  But no one so
far has managed to do even 7 qubits that way.

> > It's not even known in theory whether quantum computing is
> > possible on a significant scale.
> 
> Discuss. 

The problem is maintaining enough coherence through the whole
calculation that the results aren't turned into garbage.  In any
physically realizable experiment, a certain amount of decoherence
will creep in at every step.  So you need to add additional qubits for
error correction, but then those qubits complicate the calculation and
add more decoherence, so you need even more error correcting qubits.
So the error correction removes some of your previous decoherence
trouble but adds some of its own.

As I understand it, whether there's a quantum error correcting scheme
that removes decoherence faster than it adds it as the calculation
gets larger, is an open problem in quantum computing theory. 

I'm not any kind of expert in this stuff but have had some
conversations with people who are into it, and the above is what they
told me, as of a few years ago.  I probably have it all somewhat garbled.
-- 
http://mail.python.org/mailman/listinfo/python-list


RFC: Python bindings to Linux i2c-dev

2005-01-22 Thread Mark M. Hoffman
Hi everyone:

I've created a Python extension in C for the Linux i2c-dev interface.
As this is my first attempt at extending Python, I would appreciate any
comments or suggestions.  I'm especially interested to know if (and
where) I got any of the ref-counting wrong.  But I'm also interested
in comments about the style, the interface, or whatever else.  You can
find my source code here [1].

If you actually want to build/install/use this thing, you better read
the caveats here [2].

And in case you're interested, here is a page full of links to info on
I2C/SMBus [3].

[1] http://members.dca.net/mhoffman/sensors/python/20050122/

[2] http://archives.andrew.net.au/lm-sensors/msg28792.html

[3] http://www2.lm-sensors.nu/~lm78/cvs/lm_sensors2/doc/useful_addresses.html

Thanks and regards,

-- 
Mark M. Hoffman
[EMAIL PROTECTED]

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [perl-python] 20050121 file reading & writing

2005-01-22 Thread Bob Smith
Xah Lee wrote:
# reading entire file as a list, of lines
# mylist = f.readlines()
	
To do this efficiently on a large file (dozens or hundreds of megs), you 
should pass a nonzero 'sizehint' and call readlines() repeatedly, so that 
each call returns only about that many bytes' worth of lines instead of 
the whole file at once:

sizehint = 2**20   # roughly one megabyte of lines per call
mylist = f.readlines(sizehint)
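
Wrapped in a loop, that looks something like this (just a sketch; process()
stands in for whatever you do with each line):

while True:
    lines = f.readlines(sizehint)
    if not lines:              # an empty list means end of file
        break
    for line in lines:
        process(line)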

--
http://mail.python.org/mailman/listinfo/python-list


Re: debugging process

2005-01-22 Thread Alex Martelli
<[EMAIL PROTECTED]> wrote:
   ...
> I think that this must have something to do with python expecting 
> itself to be in a TTY? Can anyone give any idea of where I should be 
> going with this?

http://pexpect.sourceforge.net/
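
The pattern there looks roughly like this (an untested sketch; 'mymodule'
and its main() are placeholders, and the prompts are just regexes):

import pexpect

child = pexpect.spawn('python')          # gets a pty, so python acts interactive
child.expect('>>> ')                     # wait for the interpreter prompt
child.sendline('import pdb, mymodule')
child.expect('>>> ')
child.sendline('pdb.run("mymodule.main()")')
child.expect(r'\(Pdb\) ')                # pdb's prompt
child.sendline('step')
child.expect(r'\(Pdb\) ')
print child.before                       # whatever pdb printed for that step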


Alex
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: need help on need help on generator...

2005-01-22 Thread Terry Reedy

"Francis Girard" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]

>If I understand correctly,

Almost...

> a "generator" produce something over which you can
> iterate with the help of an "iterator".

To be exact, the producer is a generator function, a function whose body 
contains 'yield'.  In CPython, the difference after executing the def is 
that a generator function has a particular flag set.  People sometimes 
shorten 'generator function' to 'generator' as you did, but calling both a 
factory and its products by the same name is confusing.  (For instance, try 
calling an automobile factory an automobile).

>>> def genf(): yield 1
...
>>> genf
<function genf at 0x...>

The result of calling a generator function is a generator, which is one but 
only one type of iterator.

>>> gen = genf()
>>> gen
<generator object at 0x...>
>>> dir(gen)
[..., '__iter__', ..., 'gi_frame', 'gi_running', 'next']

The .__iter__ and .next methods make this an iterator.  The two data 
attributes are for internal use.

> Can you iterate (in the strict sense
>of an "iterator") over something not generated by a "generator" ?

Of course.  Again, a generator is one specific type of iterator, where an 
iterator is anything with the appropriate .__iter__ and .next methods.
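
For instance, this little class is an iterator but involves no generator at 
all (a made-up toy, of course):

>>> class Countdown(object):
...     def __init__(self, n):
...         self.n = n
...     def __iter__(self):
...         return self
...     def next(self):
...         if self.n <= 0:
...             raise StopIteration
...         self.n -= 1
...         return self.n + 1
...
>>> list(Countdown(3))
[3, 2, 1]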

Terry J. Reedy





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What YAML engine do you use?

2005-01-22 Thread Tim Parkin
Doug Holton wrote:
> That is exactly why YAML can be improved.  But XML proves that getting 
> it "right" for developers has little to do with getting it right for 
> users (or for saving bandwidth).  What's right for developers is what 
> requires the least amount of work.  The problem is, that's what is right 
> for end-users, too.

Having spent some time with YAML and its implementations (at least
pyyaml and the ruby/python versions of syck), I thought I should
comment. The only problems with syck we've encountered have been to do
with the python wrapper rather than syck itself. Syck seems to be used
widely without problems within the Ruby community and if anybody has
evidence of issues with it I'd really like to know about them. PyYAML is
a little inactive and doesn't conform to the spec in many ways and, as
such, we prefer the syck implementation.

In my opinion there have been some bad decisions made whilst creating
YAML, but for me they are acceptable given the advantages of a data
format that is simple to read and write. Perhaps judging the utility of
a project on its documentation is one of the problems, as most people
who have 'just used it' seem to be happy enough. These people include
non-technical clients of ours who manage some of their websites by
editing YAML files directly. That said, I don't think it would be the
best way to enter data for a life support machine, but I wouldn't like
to do that with XML either ;-) 

One thing that should be pointed out is that there are no parsers
available that are built directly on the YAML pseudo BNF. Such work is
in progress in two different forms but don't expect anything soon. As I
understand it, Syck has been built to pass tests rather than conform to
a constantly changing BNF and it seems to have few warts.

Tim






-- 
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Paul Rubin
"A.M. Kuchling" <[EMAIL PROTECTED]> writes:
> It was discussed in this thread:
> http://mail.python.org/pipermail/python-dev/2003-April/034959.html

In that thread, you wrote:

> Rubin wanted to come up with a nice interface for the module, and
> has posted some notes toward it.  I have an existing implementation
> that's 2212 lines of code; I like the interface, but opinions may
> vary. :)

Does that mean you have a 2212-line C implementation of the interface
that I proposed?  Do you plan to release it?

BTW, I just looked at the other messages in that thread and I realize
that I've looked at them before, and that's where I saw the concern
about importing crypto into some countries including Holland.  Again,
I think the reasoning is bizarre.  I'm sure there are tons of Firefox
users in Holland, and Firefox definitely contains an SSL stack that
doesn't have to be downloaded separately.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: finding name of instances created

2005-01-22 Thread André

Steven Bethard wrote:
> If you have access to the user module's text, something like this
might
> be a nicer solution:
>
> py> class Robot(object):
> ... def __init__(self):
> ... self.name = None
> ... def move(self):
> ... print "robot %r moved" % self.name
> ...
> py> class RobotDict(dict):
> ... def __setitem__(self, name, value):
> ... if isinstance(value, Robot):
> ... value.name = name
> ... super(RobotDict, self).__setitem__(name, value)
> ...
> py> user_code = """\
> ... alex = Robot()
> ... anna = Robot()
> ... alex.move()
> ... anna.move()"""
> py> robot_dict = RobotDict()
> py> robot_dict['Robot'] = Robot
> py> exec user_code in robot_dict
> robot 'alex' moved
> robot 'anna' moved
>
> Note that I provide a specialized dict in which to exec the user code --
> this allows me to override __setitem__ to add the appropriate attribute
> to the Robot as necessary.
>

I have tried this exact example (using Python 2.3 if it makes any
difference) and what I got was:
robot None moved
robot None moved

I checked what I wrote, used cut & paste on your code, removing the
leading "junk", tried it again ... to no avail. :-(

André

--
http://mail.python.org/mailman/listinfo/python-list


[OT] XML design intent [was Re: What YAML engine do you use?]

2005-01-22 Thread Stephen Waterbury
Fredrik Lundh wrote:
Stephen Waterbury wrote:
The premise that XML had a coherent design intent
stretches my credulity beyond its elastic limit.
the design goals are listed in section 1.1 of the specification.
see tim bray's annotated spec for additional comments by one
of the team members:
http://www.xml.com/axml/testaxml.htm
(make sure to click on all (H)'s and (U)'s in that section for the
full story).
Thanks, Fredrik, I hadn't seen that.  My credulity has been restored
to its original shape.  Whatever that was.  :)
However, now that I have direct access to the documented design
goals (intent) of XML, it's interesting to note that the intent
Steve Holden imputed to it earlier is not explicitly among them:
Steve Holden wrote:
It seems to me the misunderstanding here is that XML was ever intended 
to be generated directly by typing in a text editor. It was rather 
intended (unless I'm mistaken) as a process-to-process data interchange 
metalanguage that would be *human_readable*.
Not unless you interpret "XML shall support a wide variety of applications"
as "XML shall provide a process-to-process data interchange metalanguage".
It might have been a hidden agenda, but it certainly was not an
explicit design goal.
(The "human-readable" part is definitely there:
"6. XML documents should be human-legible and reasonably clear",
and Steve was also correct that generating XML directly by typing
in a text editor was definitely *not* a design intent.  ;)
if you think that the XML 1.0 team didn't know what they were
doing, you're seriously mistaken.  it's the post-1.0 standards that
are problematic...
Agreed.  And many XML-based standards.
- Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: rotor replacement

2005-01-22 Thread Fredrik Lundh
Paul Rubin wrote:

>> you really don't have a fucking clue about anything, do you?
>
> You're not making any bloody sense.

oh, I make perfect sense, and I think most people here understand why
I found your little "lecture" so funny.  if you still don't get it, maybe some-
one can explain it to you.

 



-- 
http://mail.python.org/mailman/listinfo/python-list


  1   2   >