Re: C#3.0 and lambdas

2005-09-22 Thread Steven Bethard
Reinhold Birkenfeld wrote:
> 
> This is Open Source. If you want an initiative, start one.

+1 QOTW.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C#3.0 and lambdas

2005-09-23 Thread Steven Bethard
Erik Wilsher wrote:
 > And I think the discussion that followed proved your point perfectly
 > Fredrik. Big discussion over fairly minor things, but no "big
 > picture". Where are the initiatives on the "big stuff" (common
 > documentation format, improved build system, improved web modules,
 > reworking the standard library to mention a few)  Hey, even Ruby is
 > passing us here.

Reinhold Birkenfeld wrote:
 > This is Open Source. If you want an initiative, start one.

Fredrik Lundh wrote:
> you know, this "you have opinions? fuck off!" attitude isn't really helping.

While I should know better than to reply ;) I have to say that I don't 
think "you have opinions? fuck off!" was the intent at all.  I don't 
know many people who'd argue that we don't need:

* more complete and better organized documentation
* a simpler build/install system
* etc.

But they'll never get done if no one volunteers to work on them. 
Recently, I saw a volunteer on python-dev looking to help make the docs 
more complete, and he was redirected to the docs SIG to help out.  This 
is good.  I know that there's been a bunch of work on setuptools[1] 
that's supposed to be a real improvement on distutils.  This is also good.

But there're only so many man-hours available to work on these projects. 
  If you see a problem, and you want it fixed, the right thing to do is 
to  donate some of your time to a project that needs it.  This, I 
believe, is the essence of Reinhold Birkenfeld's comment.

STeVe


[1]http://peak.telecommunity.com/DevCenter/setuptools
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: subprocess considered harmfull?

2005-09-25 Thread Steven Bethard
Uri Nix wrote:
>  Using the following snippet:
>   p = subprocess.Popen(nmake, stderr=subprocess.PIPE, stdout=subprocess.PIPE,
>                        universal_newlines=True, bufsize=1)
>   os.sys.stdout.writelines(p.stdout)
>   os.sys.stdout.writelines(p.stderr)
>  Works fine on the command line, but fails when called from within
> Visual Studio, with the following error:
>   File "C:\Python24\lib\subprocess.py", line 549, in __init__
> (p2cread, p2cwrite,
>   File "C:\Python24\lib\subprocess.py", line 609, in _get_handles
> p2cread = self._make_inheritable(p2cread)
>   File "C:\Python24\lib\subprocess.py", line 650, in _make_inheritable
> DUPLICATE_SAME_ACCESS)
> TypeError: an integer is required

This looks like these known bugs:
 http://python.org/sf/1124861
 http://python.org/sf/1126208

Try setting stdin to subprocess.PIPE.  I think that was what worked for 
me.  (You might also try setting shell=True.  That's what I currently 
have in my code that didn't work before.)
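
For reference, here's roughly the shape of the call I ended up with -- just a
sketch, untested from inside Visual Studio, and the nmake command string is a
placeholder:

    import subprocess

    nmake = 'nmake /nologo'  # placeholder for whatever command you're running

    # give the child explicit handles for all three standard streams, so that
    # _make_inheritable() never sees an invalid console handle
    p = subprocess.Popen(nmake,
                         shell=True,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         universal_newlines=True,
                         bufsize=1)
    out, err = p.communicate()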

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dynamically adding and removing methods

2005-09-25 Thread Steven Bethard
Steven D'Aprano wrote:
> py> class Klass:
> ... pass
> ...
> py> def eggs(self, x):
> ... print "eggs * %s" % x
> ...
> py> inst = Klass()  # Create a class instance.
> py> inst.eggs = eggs  # Dynamically add a function/method.
> py> inst.eggs(1)
> Traceback (most recent call last):
>   File "", line 1, in ?
> TypeError: eggs() takes exactly 2 arguments (1 given)
> 
> From this, I can conclude that when you assign the function to the
> instance attribute, it gets modified to take two arguments instead of one.

No.  Look at your eggs function.  It takes two arguments.  So the 
function is not modified at all.  (Perhaps you expected it to be?)

> Can we get the unmodified function back again?
> 
> py> neweggs = inst.eggs
> py> neweggs(1)
> Traceback (most recent call last):
>   File "", line 1, in ?
> TypeError: eggs() takes exactly 2 arguments (1 given)
> 
> Nope. That is a gotcha. Storing a function object as an attribute, then
> retrieving it, doesn't give you back the original object again. 

Again, look at your eggs function.  It takes two arguments.  So you got 
exactly the same object back.  Testing this:

py> class Klass:
... pass
...
py> def eggs(self, x):
... print "eggs * %s" % x
...
py> inst = Klass()
py> inst.eggs = eggs
py> neweggs = inst.eggs
py> eggs is neweggs
True

So you get back exactly what you previously assigned.  Note that it's 
actually with *classes*, not *instances* that you don't get back what 
you set:

py> Klass.eggs = eggs
py> Klass.eggs
<unbound method Klass.eggs>
py> Klass.eggs is eggs
False

> Furthermore, the type of the attribute isn't changed:
> 
> py> type(eggs)
> <type 'function'>
> py> type(inst.eggs)
> <type 'function'>
> 
> But if you assign a class attribute to a function, the type changes, and
> Python knows to pass the instance object:
> 
> py> Klass.eggs = eggs
> py> inst2 = Klass()
> py> type(inst2.eggs)
> <type 'instancemethod'>
> py> inst2.eggs(1)
> eggs * 1
> 
> The different behaviour between adding a function to a class and an
> instance is an inconsistency. The class behaviour is useful, the instance
> behaviour is broken.

With classes, the descriptor machinery is invoked:

py> Klass.eggs
<unbound method Klass.eggs>
py> Klass.eggs.__get__(None, Klass)
<unbound method Klass.eggs>
py> Klass.eggs.__get__(Klass(), Klass)
<bound method Klass.eggs of <__main__.Klass instance at 0x...>>

Because instances do not invoke the descriptor machinery, you get a 
different result:

py> inst.eggs
<function eggs at 0x...>

However, you can manually invoke the descriptor machinery if that's what 
you really want:

py> inst.eggs.__get__(None, Klass)
<unbound method Klass.eggs>
py> inst.eggs.__get__(inst, Klass)
<bound method Klass.eggs of <__main__.Klass instance at 0x...>>
py> inst.eggs.__get__(inst, Klass)(1)
eggs * 1

Yes, the behavior of functions that are attributes of classes is 
different from the behavior of functions that are attributes of 
instances.  But I'm not sure I'd say that it's broken.  It's a direct 
result of the fact that classes are the only things that implicitly 
invoke the descriptor machinery.

Note that if instances invoked the descriptor machinery, setting a 
function as an attribute of an instance would mean you'd always get 
bound methods back.  So code like the following would break:

py> class C(object):
... pass
...
py> def f(x):
... print 'f(%s)' % x
...
py> def g(obj):
... obj.f('g')
...
py> c = C()
py> c.f = f
py> g(c)
f(g)

If instances invoked the descriptor machinery, "obj.f" would return a 
bound method of the "c" instance, where "x" in the "f" function was 
bound to the "c" object.  Thus the call to "obj.f" would result in:

py> g(c)
Traceback (most recent call last):
   File "", line 1, in ?
   File "", line 2, in g
TypeError: f() takes exactly 1 argument (2 given)

Not that I'm claiming I write code like this.  ;)  But I'd be hesitant 
to call it broken.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dynamically adding and removing methods

2005-09-28 Thread Steven Bethard
Terry Reedy wrote:
> "Ron Adam" <[EMAIL PROTECTED]> wrote in message 
> news:[EMAIL PROTECTED]
> 
>>Actually I think I'm getting more confused.  At some point the function
>>is wrapped.  Is it when it's assigned, referenced, or called?
> 
> 
> When it is referenced via the class.
>  If you lookup in class.__dict__, the function is still a function.
> 
> >>> class C(object):
> ...   def meth(self): pass
> ...
> >>> C.__dict__['meth']
> <function meth at 0x...>
> >>> C.meth
> <unbound method C.meth>
> >>> C().meth
> <bound method C.meth of <__main__.C object at 0x...>>
> 
> I am not sure, without looking, how much of this is language definition and 
> how much CPython implementation, but I think mostly the latter

Well, being that the descriptor machinery is defined in the language 
reference[1][2], I'd have to say it's entirely the former.  The 
descriptor machinery says basically that, for classes,
 C.meth
should always be doing the equivalent of:
 C.__dict__['meth'].__get__(None, C)
and for instances,
 c.meth
should always be doing the equivalent of:
 type(c).__dict__['meth'].__get__(c, type(c))
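
Just to make the equivalence concrete, a quick check (a sketch; the methods
compare equal even though each lookup builds a new method object):

    class C(object):
        def meth(self):
            return 'hello'

    c = C()
    # attribute lookup vs. explicit descriptor invocation
    print C.meth == C.__dict__['meth'].__get__(None, C)           # True
    print c.meth == type(c).__dict__['meth'].__get__(c, type(c))  # True
    print c.meth()                                                # hello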

[1] http://docs.python.org/ref/descriptors.html
[2] http://docs.python.org/ref/descriptor-invocation.html

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Parser suggestion

2005-09-29 Thread Steven Bethard
Jorge Godoy wrote:
> From Google I found almost all of those.  But do you have any suggestion on
> which one would be better to parse Fortran code?  Or more productive to use
> for this task? 
> 
[snip]
> 
>>PyParsing
>>  http://pyparsing.sourceforge.net/

Well, I've never had to parse Fortan code, but I've had a lot of success 
writing a variety of recursive grammars in PyParsing.  I'd highly 
recommend at least trying it out.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Feature Proposal: Sequence .join method

2005-09-29 Thread Steven Bethard
David Murmann wrote:
> Hi all!
> 
> I could not find out whether this has been proposed before (there are 
> too many discussion on join as a sequence method with different 
> semantics). So, i propose a generalized .join method on all sequences 
> with these semantics:
> 
> def join(self, seq):
> T = type(self)
> result = T()
> if len(seq):
> result = T(seq[0])
> for item in seq[1:]:
> result = result + self + T(item)
> return result
> 
> This would allow code like the following:
> 
> [0].join([[5], [42, 5], [1, 2, 3], [23]])

I don't like the idea of having to put this on all sequences.  If you 
want this, I'd instead propose it as a function (perhaps builtin, 
perhaps in some other module).

Also, this particular implementation is a bad idea.  The repeated 
concatenation onto result is likely to result in O(N**2) behavior.
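
If it were a function, a sketch that sidesteps the quadratic concatenation
might look something like this ("seqjoin" is just a name I made up, and it's
list-specific for simplicity):

    def seqjoin(sep, sequences):
        """Join a list of sequences, inserting sep between each one."""
        result = []
        for i, seq in enumerate(sequences):
            if i:
                result.extend(sep)
            result.extend(seq)
        return result

    print seqjoin([0], [[5], [42, 5], [1, 2, 3], [23]])
    # [5, 0, 42, 5, 0, 1, 2, 3, 0, 23]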

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pywordnet install problems

2005-09-30 Thread Steven Bethard
vdrab wrote:
> hello pythoneers,
> 
> I recently tried to install wordnet 2.0 and pywordnet on both an ubuntu
> linux running python 2.4 and a winXP running activePython 2.4.1, and I
> get the exact same error on both when I try to "from wordnet import *"
> :
> 
> running install
> error: invalid Python installation: unable to open
> /usr/lib/python2.4/config/Makefile (No such file or directory)
> 
> Adding the directories and files in question (touch Makefile) makes the
> install go through but (obviously) breaks the import of wordnet.py:
> 
import wordnet
> 
> Traceback (most recent call last):
>   File "", line 1, in ?
>   File "wordnet.py", line 1348, in ?
> N = Dictionary(NOUN, 'noun')
>   File "wordnet.py", line 799, in __init__
> self.indexFile = _IndexFile(pos, filenameroot)
>   File "wordnet.py", line 945, in __init__
> self.rewind()
>   File "wordnet.py", line 958, in rewind
> if (line[0] != ' '):
> IndexError: string index out of range
> 
> Is this pywordnet package abandoned, are there weird versioning issues,
> or am I just extremely unlucky for the install to fail on two machines?

Which version of WordNet do you have installed?  I remember that when I 
tried upgrading to the current version of WordNet, pywordnet broke.  I 
don't think the module is maintained very well; I've submitted a number 
of bug reports and RFE patches and not had any of them responded to.  If 
I had the time, I'd pick up the project, but I don't at the moment...

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pywordnet install problems

2005-10-03 Thread Steven Bethard
vdrab wrote:
> I had WordNet 2.0 installed but just now I tried it with 1.7.1 as well
> and the result was the same. It's a shame, glossing over the pywordnet
> page really made me want to give it a try.
> Are there any workarounds you can recommend ?

What's your wordnet setup like?  I have mine installed in:
 C:\Program Files\WordNet\2.0
and I have an environment variable WNHOME set:
 WNHOME=C:\Program Files\WordNet\2.0
That seems to work for me.

> I had a quick look at the wordnet.py file, and it looks readable enough
> to try and fiddle around with it, but if possible I'd like to avoid
> having to mess with the source file.
> Is there any other person / list I can ask for help on this?

Unfortunately, I wasn't able to find anyone else...

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Class property

2005-10-11 Thread Steven Bethard
Laszlo Zsolt Nagy wrote:
> class A(object):
>cnt = 0
>a_cnt = 0
>def __init__(self):
>A.cnt += 1
>if self.__class__ is A:
>A.a_cnt += 1
>   class B(A):
>pass
>   print A.cnt,A.a_cnt # 0,0
> b = B()
> print A.cnt,A.a_cnt # 1,0
> a = A()
> print A.cnt,A.a_cnt # 2,1
> 
> But then, I may want to create read-only class property that returns the 
> cnt/a_cnt ratio.
> This now cannot be implemented with a metaclass, because the metaclass 
> cannot operate on the class attributes:

Huh?  Every function in the metaclass takes the class object as the 
first parameter.  So they can all operate on the class attributes:

py> class A(object):
... cnt = 0
... a_cnt = 0
... def __init__(self):
... A.cnt += 1
... if self.__class__ is A:
... A.a_cnt += 1
... class __metaclass__(type):
... @property
... def ratio(cls):
... return cls.a_cnt/float(cls.cnt)
...
py> class B(A):
... pass
...
py> A.cnt, A.a_cnt
(0, 0)
py> A.ratio
Traceback (most recent call last):
   File "", line 1, in ?
   File "", line 11, in ratio
ZeroDivisionError: float division
py> b = B()
py> A.cnt, A.a_cnt, A.ratio
(1, 0, 0.0)
py> a = A()
py> A.cnt, A.a_cnt, A.ratio
(2, 1, 0.5)

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: override a property

2005-10-18 Thread Steven Bethard
Robin Becker wrote:
> ## my silly example
> class ObserverProperty(property):
> def __init__(self,name,observers=None,validator=None):
> self._name = name
> self._observers = observers or []
> self._validator = validator or (lambda x: x)
> self._pName = '_' + name
> property.__init__(self,
> fset=lambda inst, value: self.__notify_fset(inst,value),
> )
> 
> def __notify_fset(self,inst,value):
> value = self._validator(value)
> for obs in self._observers:
> obs(inst,self._pName,value)
> inst.__dict__[self._pName] = value
> 
> def add(self,obs):
> self._observers.append(obs)
> 
> def obs0(inst,pName,value):
> print 'obs0', inst, pName, value
> 
> def obs1(inst,pName,value):
> print 'obs1', inst, pName, value
> 
> class A(object):
> x = ObserverProperty('x')
> 
> a=A()
> A.x.add(obs0)
> 
> a.x = 3
> 
> b = A()
> b.x = 4
> 
> #I wish I could get b to use obs1 instead of obs0
> #without doing the following
> class B(A):
> x = ObserverProperty('x',observers=[obs1])
> 
> b.__class__ = B
> 
> b.x = 7

Can you add the object to be observed as another parameter to the add 
method?

py> class ObservableProperty(property):
... def __init__(self, *args, **kwargs):
... super(ObservableProperty, self).__init__(*args, **kwargs)
... self._observers = {}
... def __set__(self, obj, value):
... super(ObservableProperty, self).__set__(obj, value)
... for observer in self._observers.get(obj, []):
... observer(obj)
... def add(self, obj, observer):
... self._observers.setdefault(obj, []).append(observer)
...
py> class A(object):
... def _getx(self):
... return self._x
... def _setx(self, value):
... self._x = value
... x = ObservableProperty(_getx, _setx)
...
py> def obs1(obj):
... print 'obs1:', obj.x
...
py> def obs2(obj):
... print 'obs2:', obj.x
...
py> a = A()
py> a.x = 3
py> A.x.add(a, obs1)
py> a.x = 4
obs1: 4
py> A.x.add(a, obs2)
py> a.x = 5
obs1: 5
obs2: 5
py> b = A()
py> b.x = 6
py> A.x.add(b, obs2)
py> b.x = 7
obs2: 7

Probably "self._observers" should use some sort of weakref dict instead 
of a regular dict, but hopefully the idea is clear.
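
For example, keying the observer table on weak references would look roughly
like this (untested sketch):

    import weakref

    class ObservableProperty(property):
        def __init__(self, *args, **kwargs):
            super(ObservableProperty, self).__init__(*args, **kwargs)
            # weak keys, so instances can still be garbage collected even
            # if they have observers registered
            self._observers = weakref.WeakKeyDictionary()
        def __set__(self, obj, value):
            super(ObservableProperty, self).__set__(obj, value)
            for observer in self._observers.get(obj, []):
                observer(obj)
        def add(self, obj, observer):
            self._observers.setdefault(obj, []).append(observer)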

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sqlstring -- a library to build a SELECT statement

2005-10-19 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
> Jason Stitt wrote:
> 
>>Using // for 'in' looks really weird, too. It's too bad you can't
>>overload Python's 'in' operator. (Can you? It seems to be hard-coded
>>to iterate through an iterable and look for the value, rather than
>>calling a private method like some other builtins do.)
> 
[snip]
> 
> Python "in" clause doesn't seem exploitable in any way

Sure it is.  Just override __contains__.
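
A minimal sketch (the Column class is just something I made up); note that the
result of "in" always gets coerced to a bool, so you can hook the test but you
can't hand back an expression object the way you can from __eq__:

    class Column(object):
        def __init__(self, name):
            self.name = name
        def __contains__(self, value):
            print 'checking %r IN %s' % (value, self.name)
            return True

    print 5 in Column('prices')
    # checking 5 IN prices
    # True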

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: classmethods, class variables and subclassing

2005-10-21 Thread Steven Bethard
Andrew Jaffe wrote:
> Hi,
> 
> I have a class with various class-level variables which are used to 
> store global state information for all instances of a class. These are 
> set by a classmethod as in the following (in reality the setcvar method 
> is more complicated than this!):
> 
> class sup(object):
> cvar1 = None
> cvar2 = None
> 
> @classmethod
> def setcvar1(cls, val):
> cls.cvar1 = val
> 
> @classmethod
> def setcvar2(cls, val):
> cls.cvar2 = val
> 
> @classmethod
> def printcvars(cls):
> print cls.cvar1, cls.cvar2
> 
> 
> I can then call setcvar on either instances of the class or the class 
> itself.
> 
> Now, the problem comes when I want to subclass this class. If I override 
> the setcvar1 method to do some new things special to this class, and 
> then call the sup.setcvar1() method, it all works fine:
> 
> class sub(sup):
> cvar1a = None
> 
> @classmethod
> def setcvar1(cls, val, vala):
> cls.cvar1a = vala
> sup.setcvar1(val)
> 
> @classmethod
> def printcvars(cls):
> print cls.cvar1a
> sup.printcvars()
> 
> This works fine, and sets cvar and cvar2 for both classes.
> 
> However, if  I *don't* override the setcvar2 method, but I call 
> sub.setcvar2(val) directly, then only sub.cvar2 gets set; it is no 
> longer identical to sup.cvar1!
> 
> In particular,
> sub.setcvar1(1,10)
> sub.setcvar2(2)
> sub.printcvars()
> prints
>   10
>   1 None
> 
> i.e. sub.cvar1, sub.cvar1a, sub.cvar2= 1 10 2
> but sup.cvar1, cvar2= 1 None

I'm not sure if I understand your goal here, but you can get different 
behavior using super().

py> class sup(object):
... cvar1 = None
... cvar2 = None
... @classmethod
... def setcvar1(cls, val):
... cls.cvar1 = val
... @classmethod
... def setcvar2(cls, val):
... cls.cvar2 = val
... @classmethod
... def printcvars(cls):
... print cls.cvar1, cls.cvar2
...
py> class sub(sup):
... cvar1a = None
... @classmethod
... def setcvar1(cls, val, vala):
... cls.cvar1a = vala
... super(sub, cls).setcvar1(val)
... @classmethod
... def printcvars(cls):
... print cls.cvar1a
... super(sub, cls).printcvars()
...
py> sub.setcvar1(1, 10); sub.setcvar2(2); sub.printcvars()
10
1 2
py> sup.printcvars()
None None

I'm not sure what you want sup.printcvars() to print afterwards. If you 
want it to print out "1 2" instead of "None None", then what you're 
trying to do is to set every cvar in every superclass.  You'll need to 
be explicit about this, perhaps something like:

py> class sup(object):
... cvar1 = None
... cvar2 = None
... @classmethod
... def setcvar1(cls, val):
... for cls in cls.mro()[:-1]: # search through superclasses
... cls.cvar1 = val
... @classmethod
... def setcvar2(cls, val):
... for cls in cls.mro()[:-1]: # search through superclasses
... cls.cvar2 = val
... @classmethod
... def printcvars(cls):
... print cls.cvar1, cls.cvar2
...
py> class sub(sup):
... cvar1a = None
... @classmethod
... def setcvar1(cls, val, vala):
... for cls in cls.mro()[:-2]: # search through superclasses
... cls.cvar1a = vala
... super(sub, cls).setcvar1(val)
... @classmethod
... def printcvars(cls):
... print cls.cvar1a
... super(sub, cls).printcvars()
...
py> sub.setcvar1(1, 10); sub.setcvar2(2); sub.printcvars()
10
1 2
py> sup.printcvars()
1 2

That is, if you want the cvar set on every superclass, you need an 
assignment statement for every superclass.  There's probably a way to 
factor out the for-loop so you don't have to write it every time, but I 
haven't thought about it too much yet.  Perhaps an appropriate 
descriptor in the metaclass...
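
For example, something like this little helper might do it (untested sketch;
_set_everywhere is a name I made up):

    class sup(object):
        cvar1 = None
        cvar2 = None

        @classmethod
        def _set_everywhere(cls, name, value):
            # assign the attribute on this class and every superclass
            # except object
            for klass in cls.mro()[:-1]:
                setattr(klass, name, value)

        @classmethod
        def setcvar1(cls, val):
            cls._set_everywhere('cvar1', val)

        @classmethod
        def setcvar2(cls, val):
            cls._set_everywhere('cvar2', val)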

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tricky Areas in Python

2005-10-24 Thread Steven Bethard
Alex Martelli wrote:
> 
>>>class Base(object)
>>>def getFoo(self): ...
>>>def setFoo(self): ...
>>>foo = property(getFoo, setFoo)
>>>
>>>class Derived(Base):
>>>def getFoo(self): 
>>
[snip]
> the solution, in Python 2.4 and earlier, is to use
> one extra level of indirection:
> def __getFoo(self): return self.getFoo()
> def getFoo(self): ...
> foo = property(__getFoo)
> so the name lookup for 'getFoo' on self happens when you access s.foo
> (for s being an instance of this here-sketched class) and overriding
> works just as expected.

Another solution (for those of you scoring at home) would be to use a 
property-like descriptor that delays the name lookup until the time of 
the method call, e.g.

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/442418
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/408713
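
The basic idea behind those recipes, very roughly, is a descriptor that stores
the *name* of the getter and looks it up on the instance at access time, so
that subclass overrides win.  A simplified sketch (not the recipe code itself):

    class LateBindingProperty(object):
        def __init__(self, getter_name):
            self.getter_name = getter_name
        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            # look the getter up on the instance, not the defining class
            return getattr(obj, self.getter_name)()

    class Base(object):
        def getFoo(self):
            return 'Base foo'
        foo = LateBindingProperty('getFoo')

    class Derived(Base):
        def getFoo(self):
            return 'Derived foo'

    print Derived().foo   # Derived foo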

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [OT] Re: output from external commands

2005-10-24 Thread Steven Bethard
darren kirby wrote:
> quoth the Fredrik Lundh:
> 
>>(using either on the output from glob.glob is just plain silly, of course)
> 
[snip]
> 
> It is things like this that make me wary of posting to this list, either to 
> help another, or with my own q's. All I  usually want is help with a specific 
> problem, not a critique involving  how brain-dead my code is. I'm a beginner, 
> of course my code is going to be brain-dead ;)

I wouldn't fret too much about a sharp remark from Fredrik Lundh. 
They're pretty much all that way. ;) It looks like you already did the 
right thing - read past the insults, and gleaned the useful information 
that he included in between.  It takes a little training to get used to 
him, but if you can look past the nasty bite, he's really a valuable 
resource around here.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Would there be support for a more general cmp/__cmp__

2005-10-26 Thread Steven Bethard
Antoon Pardon wrote:
> Christopher Subich schreef :
> 
>> Antoon Pardon wrote:
>>> >>>from decimal import Decimal
>>> >>>Zero = Decimal(0)
>>> >>>cmp( ( ) , Zero)
>>> -1
>>> >>>cmp(Zero, 1)
>>> -1
>>> >>>cmp(1, ( ) )
>>> -1
>>
>> I'd argue that the wart here is that cmp doesn't throw an exception, not 
>> that it returns inconsistent results.  This is a classic case of 
>> incomparable objects, and saying that 1 < an empty tuple is bordering on 
>> meaningless.
> 
> I wont argue with you here, but it is what python gives as now.
> Changing this behaviour is not going to happen.

FWIW, Guido has said a few times that in Python 3.0 we should "Raise an 
exception when making comparisons (other than equality and inequality) 
between two incongruent types."[1]  But yes, the behavior won't change 
in the Python 2.X series due to backwards compatibility concerns.

STeVe

[1] http://wiki.python.org/moin/Python3%2e0
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: data hiding/namespace pollution

2005-10-31 Thread Steven Bethard
Alex Hunsley wrote:
> The two main versions I've encountered for data pseudo-hiding 
> (encapsulation) in python are:
> 
> method 1:
> 
> _X  - (single underscore) - just cosmetic, a convention to let someone
>   know that this data should be private.
> 
> 
> method 2:
> 
> __X - (double underscore) - mangles the name (in a predictable way).
>   Avoids name pollution.

Method 2 is also (though to a lesser degree) just cosmetic -- it doesn't 
prevent all name clashes even if you're reasonable enough not to name 
anything in the _X__xxx pattern.  I gave an example of this in an 
earlier thread on this topic[1].  The basic problem is that 
double-underscore mangling doesn't include the module name, so two 
classes in different modules with the same class names can easily mess 
with each others' "private" attributes.
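
A contrived sketch of the kind of clash I mean, with two same-named classes
standing in for classes defined in two different modules:

    class Widget(object):
        def __init__(self):
            self.__state = 'first'      # mangled to _Widget__state

    FirstWidget = Widget

    class Widget(FirstWidget):          # same class name, e.g. other module
        def __init__(self):
            FirstWidget.__init__(self)
            self.__state = 'second'     # *also* mangled to _Widget__state

    w = Widget()
    print w._Widget__state   # 'second' -- the base class's "private"
                             # attribute was silently overwritten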

STeVe

[1]http://groups.google.com/group/comp.lang.python/msg/f03183a2c01c8ecf?hl=en&;
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Attributes of builtin/extension objects

2005-11-02 Thread Steven Bethard
George Sakkis wrote:
> - Where do the attributes of a datetime.date instance live if it has
> neither a __dict__ nor __slots__ ?
> - How does dir() determine them ?

py> from datetime import date
py> d = date(2003,1,23)
py> dir(date) == dir(d)
True
py> for attr_name in ['day', 'month', 'year']:
... attr_val = getattr(date, attr_name)
... print attr_name, type(attr_val)
...
day <type 'getset_descriptor'>
month <type 'getset_descriptor'>
year <type 'getset_descriptor'>

So all the instance "attributes" are actually handled by descriptors on 
the type.  So datetime.date objects don't really have any instance 
attributes...

 > - dir() returns the attributes of the instance itself, its class and
 > its ancestor classes. Is there a way to determine the attributes of
 > the instance alone ?

I'm just guessing now, but perhaps if no __dict__ or __slots__ is 
available, all instance "attributes" are managed by descriptors on the type?
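
A quick way to convince yourself there's no per-instance storage here (just a
sketch):

    from datetime import date

    d = date(2003, 1, 23)
    print hasattr(d, '__dict__'), hasattr(d, '__slots__')   # False False
    print set(dir(d)) - set(dir(date))   # set([]) -- nothing instance-only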

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __slots__ and class attributes

2005-11-03 Thread Steven Bethard
Ewald R. de Wit wrote:
> I'm running into a something unexpected  for a new-style class
> that has both a class attribute and __slots__ defined. If the
> name of the class attribute also exists in __slots__, Python
> throws an AttributeError. Is this by design (if so, why)?
> 
> class A( object ):
>   __slots__ = ( 'value', )
>   value = 1
> 
>   def __init__( self, value = None ):
>   self.value = value or A.value
> 
> a = A()
> print a.value
> 
> 
> Traceback (most recent call last):
>   File "t1.py", line 8, in ?
> a = A()
>   File "t1.py", line 6, in __init__
> self.value = value or A.value
> AttributeError: 'A' object attribute 'value' is read-only

Check the documentation on __slots__[1]:

__slots__ are implemented at the class level by creating descriptors 
(3.3.2) for each variable name. As a result, class attributes cannot be 
used to set default values for instance variables defined by __slots__; 
otherwise, the class attribute would overwrite the descriptor assignment.

I agree that the error you get is a bit confusing.  I think this has to 
do with how the descriptor machinery works.  When you write something like
 a.value
where a is a class instance, Python tries to invoke something like:
 type(a).value.__get__(a)
Here's an example of that, working normally:

py> class A(object):
... __slots__ = ['value']
... def __init__(self):
... self.value = 1
...
py> a = A()
py> type(a).value
<member 'value' of 'A' objects>
py> type(a).value.__get__
<method-wrapper object at 0x...>
py> type(a).value.__get__(a)
1

Now when you add a class attribute called 'value', you overwrite the 
descriptor.  So when Python tries to do the same thing (because your 
definition of __slots__ makes it assume that 'value' is a descriptor), 
the descriptor machinery raises an AttributeError:

py> class A(object):
... __slots__ = ['value']
... value = 1
...
py> a = A()
py> type(a).value
1
py> type(a).value.__get__
Traceback (most recent call last):
   File "", line 1, in ?
AttributeError: 'int' object has no attribute '__get__'

This AttributeError must be somehow caught by the __slots__ machinery 
and interpreted to mean that you tried to write to a read-only 
attribute.  The resulting error message is probably not what you want, 
but I don't know the source well enough to figure out whether or not a 
better error message could be given.


But why do you want a class level attribute with the same name as an 
instance level attribute?  I would have written your class as:

class A(object):
 __slots__ = ['value']
 def __init__(self, value=1):
 self.value = value

where the default value you put in the class is simply expressed as a 
default value to the __init__ parameter.

Steve

[1]http://docs.python.org/ref/slots.html
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: __new__

2005-11-08 Thread Steven Bethard
James Stroud wrote:
> Hello All,
> 
> I'm running 2.3.4
> 
> I was reading the documentation for classes & types
>http://www.python.org/2.2.3/descrintro.html
> And stumbled on this paragraph:
> 
> """
> __new__ must return an object. There's nothing that requires that it return a 
> new object that is an instance of its class argument, although that is the 
> convention. If you return an existing object, the constructor call will still 
> call its __init__ method. If you return an object of a different class, its 
> __init__ method will be called.
> """

Any reason why you're looking at 2.2 documentation when you're running 2.3?

Anyway, the current docs corrected this mistake[1]:

"""
If __new__() returns an instance of cls, then the new instance's 
__init__() method will be invoked like "__init__(self[, ...])", where 
self is the new instance and the remaining arguments are the same as 
were passed to __new__().

If __new__() does not return an instance of cls, then the new instance's 
__init__() method will not be invoked.
"""

[1]http://docs.python.org/ref/customization.html
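
A quick demonstration of that second paragraph (a sketch -- the Weird class is
just something I made up):

    class Weird(object):
        def __new__(cls):
            return 42          # not an instance of cls
        def __init__(self):
            print 'never printed'

    print Weird()   # 42 -- __init__ was never invoked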

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: derived / base class name conflicts

2005-11-16 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
> so the following would not result in any conflicts
> 
> class A:
>def __init__(self):
>   self.__i= 0
> 
> class B(A):
>def __init__(self):
>   A.__init__(self)
>   self.__i= 1
> 

Be careful here.  The above won't result in any conflicts, but related 
cases, where you have two classes with the same name in different 
modules, may still result in conflicts.  See my previous posts on this:

http://groups.google.com/group/comp.lang.python/msg/503984abaee1c2b5
http://groups.google.com/group/comp.lang.python/msg/f03183a2c01c8ecf

However, I tend not to use double-underscore name mangling, and I don't 
think I've ever had a shadowing problem, so clearly I wouldn't have 
shadowing problems with double-underscore name mangling either...

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Proposal for adding symbols within Python

2005-11-16 Thread Steven Bethard
Pierre Barbier de Reuille wrote:
> Proposal
> 
> 
> First, I think it would be best to have a syntax to represent symbols.
> Adding some special char before the name is probably a good way to
> achieve that : $open, $close, ... are $ymbols.

How about using the prefix "symbol." instead of "$"?

 >>> symbol.x
symbol.x
 >>> symbol.y
symbol.y
 >>> x = symbol.x
 >>> x == symbol.x
True
 >>> x == symbol.y
False
 >>> symbol.file.opened
symbol.file.opened
 >>> symbol.file.closed
symbol.file.closed
 >>> symbol.spam(symbol.eggs)
symbol.spam(symbol.eggs)

And the definition of symbol that I used:

 >>> class symbol(object):
... class __metaclass__(type):
... def __getattr__(cls, name):
... return symbol(name)
... def __getattr__(self, name):
... return symbol('%s.%s' % (self.name, name))
... def __init__(self, name):
... self.name = name
... def __eq__(self, other):
... return self.name == other.name
... def __repr__(self):
... return '%s.%s' % (type(self).__name__, self.name)
... def __call__(self, *args):
... arg_str = ', '.join(str(arg) for arg in args)
... return symbol('%s(%s)' % (self.name, arg_str))
...

It doesn't work with "is", but otherwise I think it's pretty close to 
your proposal, syntax-wise.  Is there something obvious this won't 
address for you?

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Library Reference - question

2005-11-17 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
> The "Python LIbrary Reference" at
> http://docs.python.org/lib/contents.html seems to be an important
> document. I have two questions
> 
> Q1. How do you search inside "Python LibraryReference" ? Does it exist
> in pdf or chm form?

One other option.  Go to google and use:

 site:docs.python.org inurl:lib

That should work, though I haven't tested it thoroughly.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


how to organize a module that requires a data file

2005-11-17 Thread Steven Bethard
Ok, so I have a module that is basically a Python wrapper around a big 
lookup table stored in a text file[1].  The module needs to provide a 
few functions::

 get_stem(word, pos, default=None)
 stem_exists(word, pos)
 ...

Because there should only ever be one lookup table, I feel like these 
functions ought to be module globals.  That way, you could just do 
something like::

 import morph
 assist = morph.get_stem('assistance', 'N')
 ...

My problem is with the text file.  Where should I keep it?  If I want to 
keep the module simple, I need to be able to identify the location of 
the file at module import time.  That way, I can read all the data into 
the appropriate Python structure, and all my module-level functions will 
work immediately after import.

I can only think of a few obvious places where I could find the text 
file at import time -- in the same directory as the module (e.g. 
lib/site-packages), in the user's home directory, or in a directory 
indicated by an environment variable.  The first seems weird because the 
text file is large (about 10MB) and I don't really see any other 
packages putting data files into lib/site-packages.  The second seems 
weird because it's not a per-user configuration - it's a data file 
shared by all users.  And the the third seems weird because my 
experience with a configuration depending heavily on environment 
variables is that this is difficult to maintain.

If I don't mind complicating the module functions a bit (e.g. by 
starting each function with "if _lookup_table is not None"), I could 
allow users to specify a location for the file after the module is 
imported, e.g.::

 import morph
 morph.setfile(r'C:\resources\morph_english.flat')
 ...

Then all the module-level functions would have to raise Exceptions until 
setfile() was called.  I don't like that the user would have to 
configure the module each time they wanted to use it, but perhaps that's 
unavoidable.
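
In case it helps to see the shape of it, the setfile() flavor would look
roughly like this (just a sketch of the idea; the file format shown is made up):

    # morph.py
    _lookup_table = None

    def setfile(path):
        """Load the stem table from the given flat file."""
        global _lookup_table
        _lookup_table = {}
        for line in open(path):
            # assumes one "word pos stem" triple per line
            word, pos, stem = line.split()
            _lookup_table[word, pos] = stem

    def get_stem(word, pos, default=None):
        if _lookup_table is None:
            raise ValueError('call morph.setfile() before using the module')
        return _lookup_table.get((word, pos), default)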

Any suggestions?  Is there an obvious place to put the text file that 
I'm missing?

Thanks in advance,

STeVe

[1] In case you're curious, the file is a list of words and their 
morphological stems provided by the University of Pennsylvania.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to organize a module that requires a data file

2005-11-17 Thread Steven Bethard
Terry Hancock wrote:
> On Thu, 17 Nov 2005 12:18:51 -0700
> Steven Bethard <[EMAIL PROTECTED]> wrote:
> 
>>My problem is with the text file.  Where should I keep it?
>>
>>I can only think of a few obvious places where I could
>>find the text  file at import time -- in the same
>>directory as the module (e.g.  lib/site-packages), in the
>>user's home directory, or in a directory  indicated by an
>>environment variable.  
> 
> 
> Why don't you search those places in order for it?
> 
> Check ~/.mymod/myfile, then /etc/mymod/myfile, then
> /lib/site-packages/mymod/myfile or whatever. It won't take
> long, just do the existence checks on import of the module.
> If you don't find it after checking those places, *then*
> raise an exception.
> 
> You don't say what this data file is or whether it is
> subject to change or customization. If it is, then there is
> a real justification for this approach, because an
> individual user might want to shadow the system install with
> his own version of the data.

The file is a lookup table of word stems distributed by the University 
of Pennsylvania.  It doesn't really make sense for users to customize 
it, because it's not a configuration file, but it is possible that UPenn 
would distribute a new version at some point.  That's what I meant when 
I said "it's not a per-user configuration - it's a data file shared by 
all users".  So there should be exactly one copy of the file, so I 
shouldn't have to deal with shadowing.

Of course, even with only one copy of the file, that doesn't mean that I 
couldn't search a few places.  Maybe I could by default put it in 
lib/site-packages, but allow an option to setup.py to put it somewhere 
else for anyone who was worried about putting 10MB into 
lib/site-packages.  Those folks would then have to use an environment 
variable, say $MORPH_FLAT, to identify the directory they put it in.  At module 
import I would just check both locations...
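
Roughly what I have in mind (a sketch; the MORPH_FLAT variable and the file
name are placeholders):

    import os

    _FILENAME = 'morph_english.flat'

    def _find_data_file():
        # look next to the installed module first, then in $MORPH_FLAT
        candidates = [os.path.join(os.path.dirname(__file__), _FILENAME)]
        if 'MORPH_FLAT' in os.environ:
            candidates.append(os.path.join(os.environ['MORPH_FLAT'], _FILENAME))
        for path in candidates:
            if os.path.exists(path):
                return path
        raise ImportError('could not find %s' % _FILENAME)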

I'll have to think about this some more...

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to organize a module that requires a data file

2005-11-17 Thread Steven Bethard
Larry Bates wrote:
> Personally I would do this as a class and pass a path to where
> the file is stored as an argument to instantiate it (maybe try
> to help user if they don't pass it).  Something like:
> 
> class morph:
> def __init__(self, pathtodictionary=None):
> if pathtodictionary is None:
> # Insert code here to see if it is in the current
> # directory and/or look in other directories.
> try:  self.fp=open(pathtodictionary, 'r')
>   except:
> print "unable to locate dictionary at: %s" % pathtodictionary
>   else:
> # Insert code here to load data from .txt file
> fp.close()
> return
> 
> def get_stem(self, arg1, arg2):
> # Code for get_stem method

Actually, this is basically what I have right now.  It bothers me a 
little because you can get two instances of "morph", with two separate 
dictionaries loaded.  Since they're all loading the same file, it 
doesn't seem like there should be multiple instances.  I know I could 
use a singleton pattern, but aren't modules basically the singletons of 
Python?

> The other way I've done this is to have a .INI file that always lives
> in the same directory as the class with an entry in it that points me
> to where the .txt file lives.

That's a thought.  Thanks.

Steve
-- 
http://mail.python.org/mailman/listinfo/python-list


textwrap.dedent() drops tabs - bug or feature?

2005-11-17 Thread Steven Bethard
So I've recently been making pretty frequent use of textwrap.dedent() to 
allow me to use triple-quoted strings at indented levels of code without 
getting the extra spaces prefixed to each line.  I discovered today that 
not only does textwrap.dedent() strip any leading spaces, but it also 
substitutes any internal tabs with spaces.  For example::

py> def test():
... x = ('abcd  efgh\n'
...  'ijkl  mnop\n')
... y = textwrap.dedent('''\
... abcdefgh
... ijklmnop
... ''')
... return x, y
...
py> test()
('abcd\tefgh\nijkl\tmnop\n', 'abcdefgh\nijklmnop\n')

Note that even though the tabs are internal, they are still removed by 
textwrap.dedent().  The documentation[1] says:

"""
dedent(text)
 Remove any whitespace that can be uniformly removed from the left 
of every line in text.

 This is typically used to make triple-quoted strings line up with 
the left edge of screen/whatever, while still presenting it in the 
source code in indented form.
"""

So it looks to me like even if this is a "feature" it is undocumented. 
I'm planning on filing a bug report, but I wanted to check here first in 
case I'm just smoking something.

STeVe

[1] http://docs.python.org/lib/module-textwrap.html
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to convert a "long in a string" to a "long"?

2005-11-18 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
> >>> 0xffffffffL
> 4294967295L
> 
> OK, this is what I want, so I tried
> 
> s = long("0xffffffffL")
> ValueError: invalid literal for long(): 0xffffffffL

 >>> int("0xffffffff", 0)
4294967295L

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: textwrap.dedent() drops tabs - bug or feature?

2005-11-19 Thread Steven Bethard
Peter Hansen wrote:
> Steven Bethard wrote:
> 
>> Note that even though the tabs are internal, they are still removed by 
>> textwrap.dedent().  The documentation[1] says:
> 
> ...
> 
>> So it looks to me like even if this is a "feature" it is undocumented. 
>> I'm planning on filing a bug report, but I wanted to check here first 
>> in case I'm just smoking something.
> 
> While I wouldn't say it's obvious, I believe it is (indirectly?) 
> documented and deliberate.
> 
> Search for this in the docs:
> """
> expand_tabs
> (default: True) If true, then all tab characters in text will be 
> expanded to spaces using the expandtabs() method of text.
> """

Thanks for double-checking this for me.  I looked at expand_tabs, and 
it's part of the definition of the TextWrapper class, which is not 
actually used by textwrap.dedent().  So I think the textwrap.dedent() 
expanding-of-tabs behavior is still basically undocumented.

I looked at the source code, and the culprit is the first line of the 
function definition:

 lines = text.expandtabs().split('\n')
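
In the meantime the workaround is easy enough -- something along these lines
(a sketch modeled on dedent(), minus the expandtabs() call; it assumes the
common leading whitespace is literally identical on every line):

    def dedent_keep_tabs(text):
        lines = text.split('\n')          # note: no expandtabs() here
        margin = None
        for line in lines:
            stripped = line.lstrip()
            if not stripped:
                continue                  # ignore whitespace-only lines
            indent = line[:len(line) - len(stripped)]
            if margin is None or len(indent) < len(margin):
                margin = indent
        result = []
        for line in lines:
            if margin and line.startswith(margin):
                line = line[len(margin):]
            result.append(line)
        return '\n'.join(result)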

I filed a bug_ report, but left the Category unassigned so that someone 
else can decide whether it's a doc bug or a code bug.

STeVe

.. _bug: http://python.org/sf/1361643
-- 
http://mail.python.org/mailman/listinfo/python-list


the name of a module in which an instance is created?

2005-11-21 Thread Steven Bethard
The setup: I'm working within a framework (designed by someone else) 
that requires a number of module globals to be set.  In most cases, my 
modules look like:
(1) a class definition
(2) the creation of one instance of that class
(3) binding of the instance methods to the appropriate module globals

I'm trying to hide the complexity of step (3) by putting it in a common 
base class.  That way, when I'm writing a new module, I never have to 
see the step (3) code.  Right now, that code is in the __init__ method 
of the common base class and looks something like::

 setattr(mod, 'creole_%s' % name, self._call)
 setattr(mod, 'creole_%s_Initialize' % name, self._initialize)
 setattr(mod, 'creole_%s_Finish' % name, self._finish)

where 'mod' is the module and 'name' is the name of the module.

In the basic situation, where the instance is created in the same module 
as the class, I can figure out 'mod' and 'name' like::

 cls = type(self)
 name = cls.__module__
 mod = __import__(cls.__module__)

However, this fails whenever the instance is not created in the same 
module as the class was defined (e.g. when I've factored a common base 
class into another module, and only imported this class to do steps (2) 
and (3)).  How can I figure out 'name' if the class was created in a 
different module?

One option, of course, is to pass it explicitly, e.g.::

 import C
 instance = C(__name__, ...)

This isn't a horrible option, but it does mean that I'm not hiding all 
of the step (3) machinery anymore.

Another option would be to declare a dummy class, e.g.::

 import C
 class Dummy(C):
 pass
 instance = Dummy(...)

Again, this isn't horrible, but it also fails to hide some of the step 
(3) machinery.

Any other possibilities?

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: the name of a module in which an instance is created?

2005-11-22 Thread Steven Bethard
Mardy wrote:
> I'm not sure I got your problem correctly, however see if this helps:
> 
> $ cat > test.py
> class myclass:
> name = __module__
> ^D
>
[snip]
> 
> >>> import test
> >>> a = test.myclass()
> >>> a.name
> 'test'
> 
> This works, as we define "name" to be a class attribute. 
> Is this useful to you?

Unfortunately, no, this is basically what I currently have.  Instead of 
a.name printing 'test', it should print '__main__'.  I want the name of 
the module in which the *instance* is created, not the name of the 
module in which the *class* is created.

STeVe

P.S. Note that I already discussed two possible solutions which I didn't 
much like: (1) pass __name__ to the class instance, e.g. ``a = 
test.myclass(__name__)`` or (2) declare an empty subclass of myclass in 
the second module (in your case, at the interactive prompt).
-- 
http://mail.python.org/mailman/listinfo/python-list


defining the behavior of zip(it, it) (WAS: Converting a flat list...)

2005-11-22 Thread Steven Bethard
[Duncan Booth]
 > >>> aList = ['a', 1, 'b', 2, 'c', 3]
 > >>> it = iter(aList)
 > >>> zip(it, it)
 >[('a', 1), ('b', 2), ('c', 3)]

[Alan Isaac]
 > That behavior is currently an accident.
 >http://sourceforge.net/tracker/?group_id=5470&atid=105470&func=detail&aid=1121416

[Bengt Richter]
 > That says
 > """
 > ii. The other problem is easier to explain by example.
 > Let it=iter([1,2,3,4]).
 > What is the result of zip(*[it]*2)?
 > The current answer is: [(1,2),(3,4)],
 > but it is impossible to determine this from the docs,
 > which would allow [(1,3),(2,4)] instead (or indeed
 > other possibilities).
 > """
 > IMO left->right is useful enough to warrant making it defined
 > behaviour

And in fact, it is defined behavior for itertools.izip() [1].

I don't see why it's such a big deal to make it defined behavior for 
zip() too.
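
For example, the pairing idiom from the quote, written with izip(), relies on
exactly that documented left-to-right behavior (a small sketch):

    from itertools import izip

    def pairs(iterable):
        # documented left-to-right consumption makes the grouping reliable
        it = iter(iterable)
        return izip(it, it)

    print list(pairs(['a', 1, 'b', 2, 'c', 3]))
    # [('a', 1), ('b', 2), ('c', 3)]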

STeVe

[1]http://docs.python.org/lib/itertools-functions.html#l2h-1392
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: defining the behavior of zip(it, it) (WAS: Converting a flat list...)

2005-11-22 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
>> > ii. The other problem is easier to explain by example.
>> > Let it=iter([1,2,3,4]).
>> > What is the result of zip(*[it]*2)?
>> > The current answer is: [(1,2),(3,4)],
>> > but it is impossible to determine this from the docs,
>> > which would allow [(1,3),(2,4)] instead (or indeed
>> > other possibilities).
>> > """
>> > IMO left->right is useful enough to warrant making it defined
>> > behaviour
>>
>>And in fact, it is defined behavior for itertools.izip() [1].
>>
>>I don't see why it's such a big deal to make it defined behavior for
>>zip() too.
> 
> 
> IIRC, this was discussed and rejected in an SF bug report.  It should not
> be a defined behavior for severals reasons:
[snip arguments about how confusing zip(it, it) is]
> Overall, I think anyone using zip(it,it) is living in a state of sin,
> drawn to the tempations of one-liners and premature optimization.  They
> are forsaking obvious code in favor of screwy special cases.  The
> behavior has been left undefined for a reason.

Then why document itertools.izip() as it is?  The documentation there is 
explicit enough to know that izip(it, it) will work as intended.  Should 
we make the documentation there less explicit to discourage people from 
using the izip(it, it) idiom?

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: defining the behavior of zip(it, it) (WAS: Converting a flat list...)

2005-11-23 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
> Steven Bethard wrote:
> 
>>[EMAIL PROTECTED] wrote:
>>
>>>>>ii. The other problem is easier to explain by example.
>>>>>Let it=iter([1,2,3,4]).
>>>>>What is the result of zip(*[it]*2)?
>>>>>The current answer is: [(1,2),(3,4)],
>>>>>but it is impossible to determine this from the docs,
>>>>>which would allow [(1,3),(2,4)] instead (or indeed
>>>>>other possibilities).
>>>>>"""
>>>>>IMO left->right is useful enough to warrant making it defined
>>>>>behaviour
>>>>
>>>>And in fact, it is defined behavior for itertools.izip() [1].
>>>>
>>>>I don't see why it's such a big deal to make it defined behavior for
>>>>zip() too.
>>>
>>>
>>>IIRC, this was discussednd rejected in an SF bug report.  It should not
>>>be a defined behavior for severals reasons:
>>
>>[snip arguments about how confusing zip(it, it) is]
>>
>>>Overall, I think anyone using zip(it,it) is living in a state of sin,
>>>drawn to the tempations of one-liners and premature optimization.  They
>>>are forsaking obvious code in favor of screwy special cases.  The
>>>behavior has been left undefined for a reason.
>>
>>Then why document itertools.izip() as it is?  The documentation there is
>>explicit enough to know that izip(it, it) will work as intended.  Should
>>we make the documentation there less explicit to discourage people from
>>using the izip(it, it) idiom?
> 
[snip]
> 
> But technically speaking, you are still referring to the implementation
> detail of izip(), not the functionality of izip().
> 
> I do now agree with another poster that the documentation of both zip
> and izip should state clear that the order of picking from which
> iterable is undefined or can be changed from implementation to
> implementation, to avoid this kind of temptation.
> 

Actually, it's part of the specification.  Read the itertools 
documentation[1]:

"""
izip(*iterables)
 Make an iterator that aggregates elements from each of the 
iterables. Like zip() except that it returns an iterator instead of a 
list. Used for lock-step iteration over several iterables at a time. 
Equivalent to:

  def izip(*iterables):
  iterables = map(iter, iterables)
  while iterables:
  result = [i.next() for i in iterables]
  yield tuple(result)
"""

So technically, since itertools.izip() is "equivalent to" the Python 
code above, it is part of the specification, not the implementation.

But I certainly understand Raymond's point -- the code in the itertools 
documentation there serves a number of purposes other than just 
documenting the behavior.

[1]http://docs.python.org/lib/itertools-functions.html#l2h-1392

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Steven Bethard
Dan Bishop wrote:
> Mike Meyer wrote:
> 
>>Is there any place in the language that still requires tuples instead
>>of sequences, except for use as dictionary keys?
> 
> The % operator for strings.  And in argument lists.
> 
> def __setitem__(self, (row, column), value):
>...

Interesting that both of these two things[1][2] have recently been 
suggested as candidates for removal in Python 3.0.

[1]http://www.python.org/dev/summary/2005-09-01_2005-09-15.html#string-formatting-in-python-3-0
[2]http://www.python.org/dev/summary/2005-09-16_2005-09-30.html#removing-nested-function-parameters

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Death to tuples!

2005-11-28 Thread Steven Bethard
Mike Meyer wrote:
> Steven Bethard <[EMAIL PROTECTED]> writes:
> 
>>Dan Bishop wrote:
>>
>>>Mike Meyer wrote:
>>>
>>>
>>>>Is there any place in the language that still requires tuples instead
>>>>of sequences, except for use as dictionary keys?
>>>
>>>The % operator for strings.  And in argument lists.
>>>def __setitem__(self, (row, column), value):
>>>   ...
>>
>>Interesting that both of these two things[1][2] have recently been
>>suggested as candidates for removal in Python 3.0.
>>[1]http://www.python.org/dev/summary/2005-09-01_2005-09-15.html#string-formatting-in-python-3-0
>>[2]http://www.python.org/dev/summary/2005-09-16_2005-09-30.html#removing-nested-function-parameters
> 
> #2 I actually mentioned in passing, as it's part of the general
> concept of tuple unpacking. When names are bound, you can use a
> "tuple" for an lvalue, and the sequence on the rhs will be "unpacked"
> into the various names in the lvalue:
> 
> for key, value = mydict.iteritems(): ...
> a, (b, c) = (1, 2), (3, 4)
> 
> I think of the parameters of a function as just another case of
> this; any solution that works for the above two should work for
> function paremeters as well.

The difference is that currently, you have to use tuple syntax in 
functions, while you have your choice of syntaxes with normal unpacking::

py> def f(a, (b, c)):
... pass
...
py> def f(a, [b, c]):
... pass
...
Traceback (  File "", line 1
 def f(a, [b, c]):
  ^
SyntaxError: invalid syntax
py> a, (b, c) = (1, 2), (3, 4)
py> a, [b, c] = (1, 2), (3, 4)
py> a, [b, c] = [1, 2], (3, 4)
py> a, [b, c] = [1, 2], [3, 4]

Of course, the result in either case is still a tuple.  So I do agree 
that Python doesn't actually require tuples in function definitions; 
just their syntax.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python speed

2005-11-30 Thread Steven Bethard
David Rasmussen wrote:
> Harald Armin Massa wrote:
> 
>> Dr. Armin Rigo has some mathematical proof, that High Level Languages
>> like esp. Python are able to be faster than low level code like
>> Fortran, C or assembly.
> 
> Faster than assembly? LOL... :)

I think the claim goes something along the lines of "assembly is so hard 
to get right that if you can automatically generate it from a HLL, not 
only will it be more likely to be correct, it will be more likely to be 
fast because the code generator can provide the appropriate optimizations".

OTOH, you can almost certainly take automatically generated assembly 
code and make optimizations the code generator wasn't able to, thanks to 
knowing more about the real semantics of the program.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [newbie] super() and multiple inheritance

2005-12-01 Thread Steven Bethard
hermy wrote:
> As I understand it, using super() is the preferred way to call
> the next method in method-resolution-order. When I have parameterless
> __init__ methods, this works as expected.
> However, how do you solve the following simple multiple inheritance
> situation in python ?
> 
> class A(object):
> def __init__(self,x):
> super(A,self).__init__(x)
> print "A init (x=%s)" % x
> 
> class B(object):
> def __init__(self,y):
> super(B,self).__init__(y)
> print "B init (y=%s)" % y
> 
> class C(A,B):
> def __init__(self,x,y):
> super(C,self).__init__(x,y)  < how to do this ???
> print "C init (x=%s,y=%s)" % (x,y)

Unfortunately, super() doesn't mix too well with hierarchies that change 
the number of arguments to a method.  One possibility:

 class A(object):
 def __init__(self, x, **kwargs):
 super(A, self).__init__(x=x, **kwargs)
 print "A init (x=%s)" % x

 class B(object):
 def __init__(self, y, **kwargs):
 super(B, self).__init__(y=y, **kwargs)
 print "B init (y=%s)" % y

 class C(A,B):
 def __init__(self, x, y):
 super(C, self).__init__(x=x,y=y)
 print "C init (x=%s,y=%s)" % (x,y)

Then you can get::

 py> C(1, 2)
 B init (y=2)
 A init (x=1)
 C init (x=1,y=2)
 <__main__.C object at 0x00B9FA70>

But you have to make sure to always pass the **kwargs around.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


aligning a set of word substrings to sentence

2005-12-01 Thread Steven Bethard
I've got a list of word substrings (the "tokens") which I need to align 
to a string of text (the "sentence").  The sentence is basically the 
concatenation of the token list, with spaces sometimes inserted beetween 
tokens.  I need to determine the start and end offsets of each token in 
the sentence.  For example::

py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
py> text = '''\
... She's gonna write
... a book?'''
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

Here's my current definition of the offsets function::

py> def offsets(tokens, text):
... start = 0
... for token in tokens:
... while text[start].isspace():
... start += 1
... text_token = text[start:start+len(token)]
... assert text_token == token, (text_token, token)
... yield start, start + len(token)
... start += len(token)
...

I feel like there should be a simpler solution (maybe with the re 
module?) but I can't figure one out.  Any suggestions?

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: aligning a set of word substrings to sentence

2005-12-01 Thread Steven Bethard
Fredrik Lundh wrote:
> Steven Bethard wrote:
>> I feel like there should be a simpler solution (maybe with the re
>> module?) but I can't figure one out.  Any suggestions?
> 
> using the finditer pattern I just posted in another thread:
> 
> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
> text = '''\
> She's gonna write
> a book?'''
> 
> import re
> 
> tokens.sort() # lexical order
> tokens.reverse() # look for longest match first
> pattern = "|".join(map(re.escape, tokens))
> pattern = re.compile(pattern)
> 
> I get
> 
> print [m.span() for m in pattern.finditer(text)]
> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
> 
> which seems to match your version pretty well.

That's what I was looking for.  Thanks!

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: aligning a set of word substrings to sentence

2005-12-01 Thread Steven Bethard
Paul McGuire wrote:
> "Steven Bethard" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
> 
>>I've got a list of word substrings (the "tokens") which I need to align
>>to a string of text (the "sentence").  The sentence is basically the
>>concatenation of the token list, with spaces sometimes inserted beetween
>>tokens.  I need to determine the start and end offsets of each token in
>>the sentence.  For example::
>>
>>py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
>>py> text = '''\
>>... She's gonna write
>>... a book?'''
>>py> list(offsets(tokens, text))
>>[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
> 
> ===
> from pyparsing import oneOf
> 
> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
> text = '''\
> She's gonna write
> a book?'''
> 
> tokenlist = oneOf( " ".join(tokens) )
> offsets = [(start,end) for token,start,end in tokenlist.scanString(text) ]
> 
> print offsets
> ===
> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

Now that's a pretty solution. Three cheers for pyparsing! :)

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: aligning a set of word substrings to sentence

2005-12-02 Thread Steven Bethard
Fredrik Lundh wrote:
> Steven Bethard wrote:
> 
> 
>>>>I feel like there should be a simpler solution (maybe with the re
>>>>module?) but I can't figure one out.  Any suggestions?
>>>
>>>using the finditer pattern I just posted in another thread:
>>>
>>>tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
>>>text = '''\
>>>She's gonna write
>>>a book?'''
>>>
>>>import re
>>>
>>>tokens.sort() # lexical order
>>>tokens.reverse() # look for longest match first
>>>pattern = "|".join(map(re.escape, tokens))
>>>pattern = re.compile(pattern)
>>>
>>>I get
>>>
>>>print [m.span() for m in pattern.finditer(text)]
>>>[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
>>>
>>>which seems to match your version pretty well.
>>
>>That's what I was looking for.  Thanks!
> 
> 
> except that I misread your problem statement; the RE solution above allows the
> tokens to be specified in arbitrary order.  if they've always ordered, you 
> can re-
> place the code with something like:
> 
> # match tokens plus optional whitespace between each token
> pattern = "\s*".join("(" + re.escape(token) + ")" for token in tokens)
> m = re.match(pattern, text)
> result = (m.span(i+1) for i in range(len(tokens)))
> 
> which is 6-7 times faster than the previous solution, on my machine.

Ahh yes, that's faster for me too.  Thanks again!

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: aligning a set of word substrings to sentence

2005-12-02 Thread Steven Bethard
Michael Spencer wrote:
> Steven Bethard wrote:
> 
>> I've got a list of word substrings (the "tokens") which I need to 
>> align to a string of text (the "sentence").  The sentence is basically 
>> the concatenation of the token list, with spaces sometimes inserted 
>> beetween tokens.  I need to determine the start and end offsets of 
>> each token in the sentence.  For example::
>>
>> py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
>> py> text = '''\
>> ... She's gonna write
>> ... a book?'''
>> py> list(offsets(tokens, text))
>> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]
>>
>
[snip]
>
> and then, for an entry in the wacky category, a difflib solution:
>
>  >>> def offsets(tokens, text):
>  ... from difflib import SequenceMatcher
>  ... s = SequenceMatcher(None, text, "\t".join(tokens))
>  ... for start, _, length in s.get_matching_blocks():
>  ... if length:
>  ... yield start, start + length
>  ...
>  >>> list(offsets(tokens, text))
>  [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]

That's cool, I've never seen that before.  If you pass in str.isspace, 
you can even drop the "if length:" line::

py> def offsets(tokens, text):
...     s = SequenceMatcher(str.isspace, text, '\t'.join(tokens))
...     for start, _, length in s.get_matching_blocks():
...         yield start, start + length
...
py> list(offsets(tokens, text))
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 
25), (25, 25)]

I think I'm going to have to take a closer look at 
difflib.SequenceMatcher; I have to do things similar to this pretty often...

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: aligning a set of word substrings to sentence

2005-12-02 Thread Steven Bethard
Steven Bethard wrote:
> Michael Spencer wrote:
> 
>> Steven Bethard wrote:
>>
>>> I've got a list of word substrings (the "tokens") which I need to 
>>> align to a string of text (the "sentence").  The sentence is 
>>> basically the concatenation of the token list, with spaces sometimes 
>>> inserted between tokens.  I need to determine the start and end 
>>> offsets of each token in the sentence.  For example::
>>>
>>> py> tokens = ['She', "'s", 'gon', 'na', 'write', 'a', 'book', '?']
>>> py> text = '''\
>>> ... She's gonna write
>>> ... a book?'''
>>> py> list(offsets(tokens, text))
>>> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 
>>> 25)]
>>>
>>
> [snip]
> 
>>
>> and then, for an entry in the wacky category, a difflib solution:
>>
>>  >>> def offsets(tokens, text):
>>  ... from difflib import SequenceMatcher
>>  ... s = SequenceMatcher(None, text, "\t".join(tokens))
>>  ... for start, _, length in s.get_matching_blocks():
>>  ... if length:
>>  ... yield start, start + length
>>  ...
>>  >>> list(offsets(tokens, text))
>>  [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 
>> 25)]
> 
> 
> That's cool, I've never seen that before.  If you pass in str.isspace, 
> you can even drop the "if length:" line::
> 
> py> def offsets(tokens, text):
> ... s = SequenceMatcher(str.isspace, text, '\t'.join(tokens))
> ... for start, _, length in s.get_matching_blocks():
> ... yield start, start + length
> ...
> py> list(offsets(tokens, text))
> [(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 
> 25), (25, 25)]

Sorry, that should have been::
 list(offsets(tokens, text))[:-1]
since the last item is always the zero-length one.  Which means you 
don't really need str.isspace either.
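
Putting those two tweaks together, I guess the whole thing boils down to 
something like this (assuming the tokens and text from earlier in the 
thread):

py> from difflib import SequenceMatcher
py> def offsets(tokens, text):
...     s = SequenceMatcher(None, text, '\t'.join(tokens))
...     return [(start, start + size)
...             for start, _, size in s.get_matching_blocks()[:-1]]
...
py> offsets(tokens, text)
[(0, 3), (3, 5), (6, 9), (9, 11), (12, 17), (18, 19), (20, 24), (24, 25)]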

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: hash()

2005-12-05 Thread Steven Bethard
Tim Peters wrote:
> First, if `st` is a string, `st[::-1]` is a list.

I hate to question the great timbot, but am I missing something?

 >>> 'abcde'[::-1]
'edcba'

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Iterating over test data in unit tests

2005-12-05 Thread Steven Bethard
Ben Finney wrote:
> Maybe I need to factor out the iteration into a generic iteration
> function, taking the actual test as a function object. That way, the
> dataset iterator doesn't need to know about the test function, and
> vice versa.
> 
> def iterate_test(self, test_func, test_params=None):
>     """ Iterate a test function for all the sets """
>     if not test_params:
>         test_params = self.game_params
>     for key, params in test_params.items():
>         dataset = params['dataset']
>         instance = params['instance']
>         test_func(key, dataset, instance)
> 
> def test_score_throws(self):
>     """ Game score should be calculated from throws """
>     def test_func(key, dataset, instance):
>         score = dataset['score']
>         for throw in dataset['throws']:
>             instance.add_throw(throw)
>         self.failUnlessEqual(score, instance.get_score())
> 
>     self.iterate_test(test_func)
> 
> That's somewhat clearer; the test function actually focuses on what
> it's testing. Those layers of indirection are annoying, but they allow
> the data sets to grow without writing more code to handle them.

Don't know if this helps, but I'd be more likely to write this as 
something like (untested)::

    def get_tests(self, test_params=None):
        """ Iterate a test function for all the sets """
        if not test_params:
            test_params = self.game_params
        for key, params in test_params.items():
            dataset = params['dataset']
            instance = params['instance']
            yield key, dataset, instance

    def test_score_throws(self):
        """ Game score should be calculated from throws """
        for key, dataset, instance in self.get_tests():
            score = dataset['score']
            for throw in dataset['throws']:
                instance.add_throw(throw)
            self.failUnlessEqual(score, instance.get_score())

That is, make an iterator over the various test information, and just put 
your "test_func" code inside a for-loop.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: i=2; lst=[i**=2 while i<1000]

2005-12-06 Thread Steven Bethard
Daniel Schüle wrote:
> I am wondering if there were proposals or previous disscussions in this 
> NG considering using 'while' in comprehension lists
> 
> # pseudo code
> i=2
> lst=[i**=2 while i<1000]

I haven't had much need for anything like this.  Can't you rewrite with 
a list comprehension something like::

 >>> [4**(2**i) for i in xrange(math.log(1000, 4))]
[4, 16, 256, 65536]

?
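
Or if the bound really needs to be checked as you go, a little generator 
is probably the closest literal translation (just a sketch):

 >>> def squares_while(i, limit):
 ...     while i < limit:
 ...         i **= 2
 ...         yield i
 ...
 >>> list(squares_while(2, 1000))
[4, 16, 256, 65536]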

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unexpected behavior of read only attributes and super

2005-12-06 Thread Steven Bethard
Samuel M. Smith wrote:
> The dict class has some read only attributes that generate an  exception 
> if I try to assign a value to them.
> I wanted to trap for this exception in a subclass using super but it  
> doesn't happen.
> 
> class SD(dict):
>pass
> 
[snip]
> s = SD()
> super(SD,s).__setattr__('__iter__', True)
> 
> Expecting to get the ReadOnly exception but I don't get the exception.

Note that __iter__ is on the dict *type* not dict instances.  Try this:

py> class SD(dict):
... pass
...
py> super(SD, SD).__init__ = False
Traceback (most recent call last):
   File "", line 1, in ?
AttributeError: 'super' object attribute '__init__' is read-only

You can always shadow class-level attributes in the instance dict. 
(That's what you were doing.)  If you want to (try to) replace an 
attribute in the class dict, you need to use the class object, not an 
instance object.
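
To make the shadowing concrete, this is roughly what was happening with 
your SD instance (a sketch):

py> class SD(dict):
...     pass
...
py> s = SD()
py> s.__iter__ = True              # no error: this lands in the instance
py> '__iter__' in s.__dict__
True
py> '__iter__' in SD.__dict__      # the class itself is untouched
False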

HTH,

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Documentation suggestions

2005-12-07 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
> Iain> I like the Global Module Index in general - it allows quick access
> Iain> to exactly what I want.  I would like a minor change to it though
> Iain> - stop words starting with a given letter rolling over to another
> Iain> column (for example, os.path is at the foot of one column, while
> Iain> ossaudiodev is at the head of the next), and provide links to each
> Iain> initial letter at the top of the page.
> 
> I know it's not what you asked for, but give
> 
> http://staging.musi-cal.com/modindex/
> 
> a try.  See if by dynamically migrating the most frequently requested
> modules to the front of the section it becomes more manageable.

That's pretty cool.  What I don't know is how it would look after 
thousands of people using it.  I know that I probably only have 10 
modules or so that I consistently need to check the docs for.  Your hack 
above would conveniently place those all at the top if I was the only 
user.  But are those 10 modules the same 10 modules that other folks 
need?  I don't know...

Of course, the only way to find out is to try...

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ElementTree - Why not part of the core?

2005-12-07 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
> ElementTree on the other hand provides incredibly easy access to XML
> elements and works in a more Pythonic way.  Why has the API not been
> included in the Python core?

While I fully agree that ElementTree is far more Pythonic than the 
dom-based stuff in the core, this issue has been discussed on 
python-dev[1].  Fredrik Lundh's response:

 shipping stable versions of ElementTree/cElementTree (or PIL, or
 python-doc, or exemaker, or what else you might find useful) with
 official Python releases is perfectly okay.

 moving the main trunk and main development over to the Python CVS is
 another thing, entirely.

I think some people were hoping that instead of adding these things to 
the standard library, we would come up with a better package manager 
that would make adding these things to your local library much simpler.

STeVe

[1]http://www.python.org/dev/summary/2005-06-01_2005-06-15.html#reorganising-the-standard-library-again
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unexpected behavior of read only attributes and super

2005-12-07 Thread Steven Bethard
Samuel M. Smith wrote:
> On 06 Dec, 2005, at 20:53, Steven Bethard wrote:
>> You can always shadow class-level attributes in the instance dict.
>> (That's what you were doing.)  If you want to (try to) replace an
>> attribute in the class dict, you need to use the class object, not an
>> instance object.
> 
> I guess that's where my understanding breaks down. I thought the only  
> way to access class attributes was by
> calling the class directly as your example indicates but __iter__ is  a 
> class attribute that I can access from the instance
> at least to read it. So what determines which class attributes get  
> copied to the instance and which ones don't?

When "reading" an attribute, Python looks through the namespaces in the 
order (instance, type).  So if the attribute exists in the instance, the 
instance-level value is returned.  If the attribute does not exist in 
the instance, but does exist in the class, then the class-level value is 
returned:

 >>> class C(object):
... x = 1
...
 >>> inst = C()
 >>> inst.x
1
 >>> C.x
1
 >>> class C(object):
... x = 1
... def __init__(self):
... self.x = 2
...
 >>> inst = C()
 >>> inst.x
2
 >>> C.x
1

When "writing" an attribute (i.e. using the assignment statement), 
Python does not try to do any namespace searching.  Thus if you use the 
instance in an assignment statement, then it is the instance's 
attributes that get modified, and if you use the class in an assignment 
statement, then it is the class's attributes that get modififed:

 >>> class C(object):
... pass
...
 >>> inst = C()
 >>> inst.x = 1
 >>> C.x
Traceback (most recent call last):
   File "", line 1, in ?
AttributeError: type object 'C' has no attribute 'x'
 >>> inst.x
1
 >>> class C(object):
... pass
...
 >>> inst = C()
 >>> C.x = 1
 >>> inst.x
1
 >>> C.x
1

HTH,

STeVe

P.S. Note that there is an additional complication resulting from the 
fact that functions are descriptors:

 >>> class C(dict):
... pass
...
 >>> C.__iter__
<slot wrapper '__iter__' of 'dict' objects>
 >>> C().__iter__
<method-wrapper object at 0x...>

Even though the C instance is accessing the __iter__ function on the 
class, it gets back a different value because descriptors return 
different values depending on whether they are accessed from a class or 
an instance.  I don't think you need to understand this to solve your 
problem though, so I won't go into any more details unless you think it 
would be helpful.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Documentation suggestions

2005-12-07 Thread Steven Bethard
Aahz wrote:
> In article <[EMAIL PROTECTED]>,
> A.M. Kuchling <[EMAIL PROTECTED]> wrote:
> 
>>So now we're *really* stuck.  The RefGuide doesn't describe the rules;
>>the PEP no longer describes them either; and probably only Guido can
>>write the new text for the RefGuide.  (Or are the semantics the same
>>and only some trivial details are different?)
> 
> Raymond Hettinger (and/or maybe one of the metaclass wizards) can
> probably also write it, with Guido editing after.  That might produce
> even more accuracy in the end.

I'm not a metaclass wizard, but I have submitted a few doc patches 
trying to address some of the inaccuracies in the description of 
new-style classes and related matters:
 http://www.python.org/sf/1123716
 http://www.python.org/sf/1163367
The problem is that they don't seem to get accepted (even when 
accompanied by positive comments).  And unfortunately, I don't currently 
have time to do the 5 reviews for 1 deal offered by some of the 
python-dev folks.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ElementTree - Why not part of the core?

2005-12-08 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
> I think the key here is ElementTree's Pythoninc API.  While it's clearly
> possible to install it as a third-party package, I think there's a clear
> best-of-breed aspect here that suggests it belongs in the standard
> distribution simply to discourage continued use of DOM-based APIs.

I second this.  Guido has said many times that the stdlib is for 
best-of-breed modules that have proven themselves in the wild. 
ElementTree has proven itself in the wild and is clearly best-of-breed. 
  And dramatically better (IMHO) than the APIs currently included in the 
stdlib[1].

I don't have a whole lot of free time, and I'm not sure exactly how I 
could help, but if there's anything I could do that would help get 
ElementTree into the stdlib, let me know.

STeVe

[1] If I had my way, we'd deprecate and then remove the current Python 
xml modules.  But of course then people would complain that Python 
doesn't have a SAX or DOM API.  Of course we could tell them that they 
don't need it and that ElementTree is easier, but I'm not sure people 
really want to fight that battle.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Unexpected behavior of read only attributes and super

2005-12-08 Thread Steven Bethard
Samuel M. Smith wrote:
> If you would care to elaborate on the how the lookup differs with  
> method descriptor it would be most appreciated.

For the more authoritative guide, see:
 http://users.rcn.com/python/download/Descriptor.htm

The basic idea is that a descriptor is an object that sits at the class 
level, and redefines how some attribute accesses work.  Consider a 
simple example:

 >>> class D(object):
...     def __get__(self, obj, objtype=None):
...         if obj is None:
...             return 'called from class %r' % objtype
...         else:
...             return 'called from instance %r' % obj
...
 >>> class C(object):
...     d = D()
...
 >>> C.d
"called from class <class '__main__.C'>"
 >>> C().d
'called from instance <__main__.C object at 0x00E73A30>'

As you can see, instances of the D class, when used as class attributes, 
can tell whether they're being called by the class or the instance. 
This means that descriptors with a __get__ method defined can do just 
about anything on an attribute access.

Note that all functions in Python are descriptors, and they use the 
__get__ method to return either an unbound method or a bound method, 
depending on whether they were called from the type or the instance:

 >>> def f(x):
...     return x*2
...
 >>> class C(object):
...     func = f
...
 >>> f
<function f at 0x...>
 >>> C.func
<unbound method C.f>
 >>> C().func
<bound method C.f of <__main__.C object at 0x...>>

> This might help explain why it is that when I define __slots__, the  
> behavior when writing an attribute is different

Yes.  Defining __slots__ basically tells the class to create descriptors 
for each name in the list.  So, for example:

>  >>> class C(dict):
> ... __slots__ = ['a','b']
> ...

Creates two descriptors that are attributes of class C: one named "a" 
and one named "b".

> Now the behavior is different for class variables and methods when  
> slots defined versus when slots is not defined.
> 
>  >>> c.__iter__ = 4
> Traceback (most recent call last):
>   File "", line 1, in ?
> AttributeError: 'C' object attribute '__iter__' is read-only

Here, Python is trying to set the "__iter__" attribute of the object. 
Since you defined __slots__, it tells you that it can't.  So it never 
even looks at the type.

>  >>> super(C,c).__iter__ = 4
> Traceback (most recent call last):
>   File "", line 1, in ?
> TypeError: 'super' object has only read-only attributes (assign  to 
> .__iter__)

In this case, you explicitly request the superclass, so you get the same 
error as before because you bypass the __slots__, which are defined for 
instances of C, not for instances of the superclass, dict.

> Then why wasn't __class__ added to c.__dict__ ? Looks like namespace  
> searching to me.

No, as you conclude later, __class__ is special, so you can still assign 
to __class__ even when __slots__ is defined because it's not considered 
a normal attribute.  But note that __class__ is an *instance* attribute, 
not a class attribute, so "c.__class__ = C" changes the class of that 
single instance, and makes no change to the type:

 >>> class C(object):
...     pass
...
 >>> class D(C):
...     pass
...
 >>> c1 = C()
 >>> c2 = C()
 >>> C, c1, c2
(<class '__main__.C'>, <__main__.C object at 0x00E73A30>, <__main__.C 
object at 0x00E73210>)
 >>> c1.__class__ = D
 >>> C, c1, c2
(<class '__main__.C'>, <__main__.D object at 0x00E73A30>, <__main__.C 
object at 0x00E73210>)

So no, even with __class__, you're only assigning to the instance, and 
so Python's not searching any additional namespaces.

> now with slots defined
> 
>  >>> class C(dict):
> ... __slots__ = ['b']
> ... a = 0
> ...
>  >>> c = C()
>  >>> c.a
> 0
>  >>> c.a = 4
> Traceback (most recent call last):
>   File "", line 1, in ?
> AttributeError: 'C' object attribute 'a' is read-only
>  >>> C.a = 5
>  >>> c.a
> 5
> 
> So the rule is that when __slots__ is defined class variables become  
> read only.

That's not quite right.  As you show above, class variables are still 
modifiable from the class object.  But yes, defining __slots__ means 
that, from an instance, you can only modify the attributes defined in 
__slots__.

> What if the class variable is included in __slots__
> 
>  >>> class C(dict):
> ... __slots__ = ['b']
> ... b = 1
> ...
>  >>> c = C()
>  >>> c.b
> 1
>  >>> c.b = 2
> Traceback (most recent call last):
>   File "", line 1, in ?
> AttributeError: 'C' object attribute 'b' is read-only
> 
> So even though b is in slots I still can't create an instance  variable 
> by that name and shadow the class variable.

Yes, this behavior is documented:
 http://docs.python.org/ref/slots.html

"""
__slots__ are implemented at the class level by creating descriptors 
(3.3.2) for each variable name. As a result, class attributes cannot be 
used to set default values for instance variables defined by __slots__; 
otherwise, the class attribute would overwrite the descriptor assignment.
"""

The documentation isn't great, I'll agree, but the result is basically 
that if you combine __slots__ with class attributes of the same name, 
you're asking for trouble.

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: How to find the type ...

2005-12-09 Thread Steven Bethard
Lad wrote:
> How can I find out in Python whether the operand is integer or a
> character and change from char to int ?

Python doesn't have a separate character type, but if you want to 
convert a one-character string to it's ASCII number, you can use ord():

 >>> ord('A'), ord('z')
(65, 122)

The answer to your first question is that you probably don't want to. 
You probably want two separate functions, one that takes an integer and 
one that takes a character.  What's your actual function look like?

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: slice notation as values?

2005-12-10 Thread Steven Bethard
Antoon Pardon wrote:
> So lets agree that tree['a':'b'] would produce a subtree. Then
> I still would prefer the possibility to do something like:
> 
>   for key in tree.iterkeys('a':'b')
> 
> Instead of having to write
> 
>   for key in tree['a':'b'].iterkeys()
> 
> Sure I can now do it like this:
> 
>   for key in tree.iterkeys('a','b')
> 
> But the way default arguments work, prevents you from having
> this work in an analague way as a slice.

How so?  Can't you just pass the *args to the slice contstructor?  E.g.::

    def iterkeys(self, *args):
        keyslice = slice(*args)
        ...

Then you can use the slice object just as you would have otherwise.
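
For instance (a rough sketch, with a toy tree that just keeps its keys in 
a sorted list standing in for the real data structure):

class Tree(object):
    def __init__(self, mapping):
        self._keys = sorted(mapping)     # stand-in for the real tree storage
    def iterkeys(self, *args):
        keyslice = slice(*args)          # slice('b'), slice('a', 'c'), ...
        start, stop = keyslice.start, keyslice.stop
        for key in self._keys:
            if start is not None and key < start:
                continue
            if stop is not None and key >= stop:
                break
            yield key

tree = Tree(dict.fromkeys('abcde'))
print list(tree.iterkeys('b', 'd'))      # ['b', 'c']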

STeVe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: what is lambda used for in real code?

2004-12-31 Thread Steven Bethard
Alex Martelli wrote:
Steven Bethard <[EMAIL PROTECTED]> wrote:
(2) lambda a: a.lower()
My first thought here was to use str.lower instead of the lambda, but of
course that doesn't work if 'a' is a unicode object:

Right, but string.lower works (after an 'import string').  More
generally, maybe it would be nice to have a way to say "call a method on
x" without x's type being checked, just like attrgetter says "fetch an
attribute on x" -- say s/thing like:
def methodcaller(method_name, *a, **k):
def callit(x):
return getattr(x, method_name)(*a, **k)
callit.__name__ = method_name
return callit
Yeah, that's exactly the kind of thing I was looking for.  Very nice!
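(For anyone playing along at home, usage would presumably look something 
like this -- repeating the helper so the snippet runs on its own:)

def methodcaller(method_name, *a, **k):
    # Alex's helper from above, repeated verbatim
    def callit(x):
        return getattr(x, method_name)(*a, **k)
    callit.__name__ = method_name
    return callit

lower = methodcaller('lower')
print lower('SPAM'), lower(u'SPAM')      # handles str and unicode alike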
(3)  self.plural = lambda n: int(n != 1)
Note that this is *almost* writable with def syntax.  If only we could do:
def self.plural(n):
int(n != 1)

Not sure about the context, but maybe we could use, at class-level:
@staticmethod
def plural(n):
return int(n != 1)
The context was within the _parse method of GNUTranslations.  Basically, 
this method uses the fp passed in and a bunch of conditionals to 
determine how to define the plural method.  So I don't think it can be 
done at the class level.  Also, doesn't the assignment:
self.plural = lambda n: int(n != 1)
make this more like (at class level):
def plural(self, n):
return int(n != 1)
that is, isn't this an instance method, not a staticmethod?

py> class C(object):
...     def __init__(self):
...         self.plural = lambda n: int(n != 1)
...
py> c = C()
py> c.__class__.plural(1)
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: type object 'C' has no attribute 'plural'
py> c.plural(1)
0
Even though a good number of lambda uses may be avoidable or removable
by such means, I think there's just slightly too much variety -- in some
cases, a def with a name will have to be best
Yup, that was my feeling.  I was only able to rewrite as an expression 
about 50% of the lambdas that I found.  However, I (personally) don't 
have much of a problem with adding a def in most of the other cases. 
The only ones that make me a little nervous are examples like:

inspect.py: def formatargspec(args, varargs=None, varkw=None,
  ...
  formatvarargs=lambda name: '*' + name,
  formatvarkw=lambda name: '**' + name,
  formatvalue=lambda value: '=' + repr(value),
where the lambdas are declaring functions as keyword arguments in a def. 
   I'm not sure how much I like adding to the module multiple function 
defs that are really intended to be accessed only within formatargspec. 
 Still, were lambda to go away in Python 3000, it certainly wouldn't be 
the end of the world. ;-)

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: More baby squeaking - iterators in a class

2004-12-31 Thread Steven Bethard
Bulba! wrote:
Thanks to everyone for their responses, but it still doesn't work re
returning next() method:
class R3:
    def __init__(self, d):
        self.d=d
        self.i=len(d)
    def __iter__(self):
        d,i = self.d, self.i
        while i>0:
            i-=1
            yield d[i]


p=R3('eggs')
p.next()

[snip]
What's strange is that when it comes to function, it does return
the .next method:
def rev(d):
    for i in range (len(d)-1, -1, -1):
        yield d[i]


o=rev('eggs')

[snip]

o.next()
's'
Note the difference here.  When you're using the function, you call the 
iter function (called rev in your example).  When you're using the 
class, you haven't called the iter function, only instantiated the class 
(i.e. called the __init__ function).  Try one of the following:

py> p = R3('eggs')
py> i = p.__iter__()
py> i.next()
's'
or
py> p = R3('eggs')
py> i = iter(p)
py> i.next()
's'
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: what is lambda used for in real code?

2004-12-31 Thread Steven Bethard
Alex Martelli wrote:
Steven Bethard <[EMAIL PROTECTED]> wrote:
py> class C(object):
... def __init__(self):
... self.plural = lambda n: int(n != 1)
...
py> c = C()
py> c.__class__.plural(1)
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: type object 'C' has no attribute 'plural'
py> c.plural(1)
0

This shows that staticmethod has slightly wider applicability, yes, but
I don't see this as a problem.  IOW, I see no real use cases where it's
important that hasattr(C, 'plural') is false while hasattr(C(),
'plural') is true [I could of course be missing something!].
True, true.  I guess I was just wrapped up in reproducing the class 
behavior.  Making it available as a staticmethod of the class would of 
course only add functionality, not remove any.

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2004-12-31 Thread Steven Bethard
Alex Martelli wrote:
Paul L. Du Bois <[EMAIL PROTECTED]> wrote:
def fn(gen):
   """Turns a generator expression into a callable."""
   def anonymous(*args): return gen.next()
   return anonymous
def args():
   """Works with fn(); yields args passed to anonymous()."""
   while True: yield sys._getframe(2).f_locals['args']
args = args()
foo = fn(a + b * c for (a,b,c) in args)
assert foo(3,4,5) == 3+4*5
assert foo(4,5,6) == 4+5*6

Paul, you really SHOULD have posted this BEFORE I had to send in the
files for the 2nd ed's Coobook... this gets my vote for the most
delightful abuse of sys._getframe even (and I've seen quite a few;-).
So, I couldn't figure out why this worked until I started to write an 
email to ask.  Once I understood it, I figured it wouldn't hurt to send 
my thoughts out anyway to (1) verify that I understand it right, and (2) 
help anyone else who was trying to figure this out.

As I understand it sys._getframe(2).f_locals should get the names local 
to the stack frame two above the current one.  So, in the context of:
fn(... for ... in args)
sys._getframe(2).f_locals should be looking at the names local to the 
'anonymous' function in the 'fn' function, because one stack frame up 
from the 'args' function is the generator's 'next' function, and two 
stack frames up is the 'anonymous' function.  That means that:
sys._getframe(2).f_locals['args']
gets whatever object has been bound to 'args' in:
def anonymous(*args):
So then in:
foo = fn(a + b * c for (a,b,c) in args)
foo(3,4,5)
foo(4,5,6)
sys._getframe(2).f_locals['args'] will get (3, 4, 5) in the first foo 
call, (4, 5, 6) in the second foo call, etc.

So basically the way a call like foo(3, 4, 5) works is:
(1) foo(3, 4, 5) calls gen.next() where gen is the generator expression
(2) gen.next() calls args.next()
(3) args.next() returns the (3, 4, 5) argument tuple of foo by looking 
up the stack frames
(4) gen.next() binds (3, 4, 5) to the names a, b, c respectively
(5) gen.next() returns the value of "a + b * c" for these bindings
(6) foo(3, 4, 5) returns the same value (as gen.next() did)

Does that seem about right?
Steve
P.S.  That's so *evilly* cool!
--
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2004-12-31 Thread Steven Bethard
Simo Melenius wrote:
map (def x:
 if foo (x):
 return baz_1 (x)
 elif bar (x):
 return baz_2 (x)
 else:
 global hab
 hab.append (x)
 return baz_3 (hab),
 [1,2,3,4,5,6])
I think this would probably have to be written as:
map (def x:
 if foo(x):
 return baz_1(x)
 elif bar(x):
 return baz_2(x)
 else:
 global hab
 hab.append(x)
 return baz_3(hab)
 , [1,2,3,4,5,6])
or:
map (def x:
 if foo(x):
 return baz_1(x)
 elif bar(x):
 return baz_2(x)
 else:
 global hab
 hab.append(x)
 return baz_3(hab)
 ,
 [1,2,3,4,5,6])
Note the placement of the comma.  As it is,
return baz_3(hab),
returns the tuple containing the result of calling baz_3(hab):
py> def f(x):
...     return float(x),
...
py> f(1)
(1.0,)
It's not horrible to have to put the comma on the next line, but it 
isn't as pretty as your version that doesn't.  Unfortunately, I don't 
think anyone's gonna want to revise the return statement syntax just to 
introduce anonymous functions.

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: what is lambda used for in real code?

2004-12-31 Thread Steven Bethard
Adam DePrince wrote:
Lets not forget the "real reason" for lambda ... the elegance of
orthogonality.   Why treat functions differently than any other object? 

We can operate on every other class without having to involve the
namespace, why should functions be any different?
Yup.  I think in most of the examples that I didn't know how to rewrite, 
this was basically the issue.  On the other hand, I do think that 
lambdas get overused, as indicated by the number of examples I *was* 
able to rewrite.[1]
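
(For the simple ones, the rewrites are mostly along these lines -- a 
made-up example, not one of the actual stdlib lambdas:)

import operator

pairs = [('b', 2), ('a', 1), ('c', 0)]
# instead of: sorted(pairs, key=lambda pair: pair[1])
print sorted(pairs, key=operator.itemgetter(1))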

Still, I have to admit that in some cases (especially those involving 
reduce), I wish the coder had named the function -- it would have given 
me a little bit more documentation as to what the code was trying to do.

On the other hand, in other cases, like when a function is a keyword 
argument to another function (e.g. inspect.py's "def formatargspec..." 
example) using a def statement and naming the function would be redundant.

Steve
[1] Note that this isn't entirely fair to the examples, some of which 
were written before list comprehensions, generator expressions and 
itemgetter/attrgetter.
--
http://mail.python.org/mailman/listinfo/python-list


Re: what is lambda used for in real code?

2004-12-31 Thread Steven Bethard
Hans Nowak wrote:
Adam DePrince wrote:
In short, we must preserve the ability to create an anonymous function
simply because we can do so for every other object type, and functions
are not special enough to permit this special case.

Your reasoning makes sense... lambda enables you to create a function as 
part of an expression, just like other types can be part of an 
expression.  However, by that same reasoning, maybe classes aren't 
special enough either to warrant a special case.  Where's the keyword to 
create an anonymous class? :-)
Well, no keyword, but you can use the type function:
py> d = dict(c=type('C', (object,), dict(spam=42)),
...  d=type('D', (dict,), dict(badger=True)))
py> d['c'].spam
42
py> d['c']()
<__main__.C object at 0x063F2DD0>
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2004-12-31 Thread Steven Bethard
Paul Rubin wrote:
[EMAIL PROTECTED] (Alex Martelli) writes:
We should have an Evilly Cool Hack of the Year, and I nominate Paul du
Bois's one as the winner for 2004.  Do I hear any second...?
The year's not over yet :).
Ok, now that we're past 0:00:00 UTC, I'll second that nomination! ;)
Steve
P.S. Happy New Year all!
--
http://mail.python.org/mailman/listinfo/python-list


Re: Looping using iterators with fractional values

2005-01-01 Thread Steven Bethard
Mark McEahern wrote:
drife wrote:
Hello,
Making the transition from Perl to Python, and have a
question about constructing a loop that uses an iterator
of type float. How does one do this in Python?
 

Use a generator:
 >>> def iterfloat(start, stop, inc):
... f = start
... while f <= stop:
... yield f
... f += inc
...
 >>> for x in iterfloat(0.25, 2.25, 0.25):
... print '%9.2f' % x
...
   0.25
   0.50
   0.75
   1.00
   1.25
   1.50
   1.75
   2.00
   2.25
 >>>
Or use the numarray module:
py> import numarray as na
py> for f in na.arange(0.25, 2.25, 0.25):
...     print '%9.2f' % f
...
 0.25
 0.50
 0.75
 1.00
 1.25
 1.50
 1.75
 2.00
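One difference to watch for: like range, arange stops short of the stop 
value, which is why 2.25 is missing above.  If the endpoint matters, you'd 
presumably nudge the stop up a bit, e.g.:

py> for f in na.arange(0.25, 2.25 + 0.25/2, 0.25):
...     print '%9.2f' % f    # now runs through 2.25
...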
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: I need some advice/help on running my scripts

2005-01-01 Thread Steven Bethard
Sean wrote:
My problem is that many of the example scripts are run on Linux
machines and I am using Win XP Pro.  Here is a specific example of what
is confusing me.  If I want to open a file from the dos prompt in some
script do I just write the name of the file I want to open (assuming it
is in the same directory) after the script name?
such as
c:\some_script.py some_text_file.txt
It's unclear to me what you want to do here.  If your some_script.py 
looks like:

import sys
f = file(sys.argv[1])
then yes, you can call some_script.py as above, and the file will be 
readable from the 'f' file object.


Does piping work the same way in dos as it does on a linux machine?
Mostly:
[D:\Steve]$ type test.py
import sys
for i, line in enumerate(sys.stdin):
    sys.stdout.write("%i:%s" % (i, line))
[D:\Steve]$ type input.txt
A
B
C
D
[D:\Steve]$ python test.py < input.txt
0:A
1:B
2:C
3:D
[D:\Steve]$ python test.py > output.txt
Z
Y
X
^Z
^Z
[D:\Steve]$ type output.txt
0:Z
1:Y
2:X
[D:\Steve]$ python test.py < input.txt > output.txt
[D:\Steve]$ type output.txt
0:A
1:B
2:C
3:D
[D:\Steve]$ type input.txt | python test.py
0:A
1:B
2:C
3:D
Note however, that you may run into problems if you don't explicitly 
call python:

[D:\Steve]$ test.py < input.txt
Traceback (most recent call last):
  File "D:\Steve\test.py", line 2, in ?
for i, line in enumerate(sys.stdin):
IOError: [Errno 9] Bad file descriptor
And last but not least, is there a way to do this all from IDLE?
What exactly do you want to do?  You can certainly type something like:
f = file('input.txt')
in IDLE to get access to the 'input.txt' file...
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: UserDict deprecated

2005-01-01 Thread Steven Bethard
Uwe Mayer wrote:
Saturday 01 January 2005 22:48 pm Hans Nowak wrote:
I am curious, what would you do with a class that derives from both file
and dict?
I was writing a class that read /writes some binary file format. I
implemented the functions from the file interface such that they are
refering to records. However, the file format has some header fields and
I'd wanted to grant access to those via the dict-interface.
If you implemented the file interface functions yourself, why do you 
want to inherit from file?

Another example: working with PyQt I have an instance of a QListView and
wanted to use the list-interface to get and set individual records.
But just inheriting from list won't make this work, will it?  Don't you 
want to do something like:

class C(QListView):
    def __getitem__(self, i):
        return self.getIndividualRecord(i) # or whatever method gives
                                           # you the record
Steve
--
http://mail.python.org/mailman/listinfo/python-list


PEP 288 ponderings

2005-01-01 Thread Steven Bethard
PEP 288 was mentioned in one of the lambda threads and so I ended up 
reading it for the first time recently.  I definitely don't like the 
idea of a magical __self__ variable that isn't declared anywhere.  It 
also seemed to me like generator attributes don't really solve the 
problem very cleanly.  An example from the PEP[1]:

def mygen():
    while True:
        print __self__.data
        yield None
g = mygen()
g.data = 1
g.next()# prints 1
g.data = 2
g.next()# prints 2
I looked in the archives but couldn't find a good discussion of why 
setting an attribute on the generator is preferable to passing the 
argument to next.  Isn't this example basically equivalent to:

class mygen(object):
    def next(self, data):
        print data
        return None
g = mygen()
g.next(1)   # prints 1
g.next(2)   # prints 2
Note that I didn't even define an __iter__ method since it's never used 
in the example.

Another example from the PEP:
def filelike(packagename, appendOrOverwrite):
    data = []
    if appendOrOverwrite == 'w+':
        data.extend(packages[packagename])
    try:
        while True:
            data.append(__self__.dat)
            yield None
    except FlushStream:
        packages[packagename] = data
ostream = filelike('mydest','w')
ostream.dat = firstdat; ostream.next()
ostream.dat = firstdat; ostream.next()
ostream.throw(FlushStream)
This could be rewritten as:
class filelike(object):
    def __init__(self, packagename, appendOrOverwrite):
        self.packagename = packagename
        self.data = []
        if appendOrOverwrite == 'w+':
            self.data.extend(packages[packagename])
    def next(self, dat):
        self.data.append(dat)
        return None
    def close(self):
        packages[self.packagename] = self.data
ostream = filelike('mydest','w')
ostream.next(firstdat)
ostream.next(firstdat)
ostream.close()
So, I guess I have two questions:
(1) What's the benefit of the generator versions of these functions over 
the class-based versions?

(2) Since in all the examples there's a one-to-one correlation between 
setting a generator attribute and calling the generator's next function, 
aren't these generator attribute assignments basically just trying to 
define the 'next' parameter list?

If this is true, I would have expected that a more useful idiom would 
look something like:

def mygen():
    while True:
        data, = nextargs()
        print data
        yield None
g = mygen()
g.next(1)   # prints 1
g.next(2)   # prints 2
where the nextargs function retrieves the arguments of the most recent 
call to the generator's next function.

With a little sys._getframe hack, you can basically get this behavior now:
py> import sys
py> class gen(object):
...     def __init__(self, gen):
...         self.gen = gen
...     def __iter__(self):
...         return self
...     def next(self, *args):
...         return self.gen.next()
...     @staticmethod
...     def nextargs():
...         return sys._getframe(2).f_locals['args']
...
py> def mygen():
...     while True:
...         data, = gen.nextargs()
...         print data
...         yield None
...
py> g = gen(mygen())
py> g.next(1)
1
py> g.next(2)
2
Of course, it's still a little magical, but I think I like it a little 
better because you can see, looking only at 'mygen', when 'data' is 
likely to change value...

Steve
[1] http://www.python.org/peps/pep-0288.html
--
http://mail.python.org/mailman/listinfo/python-list


Re: PEP 288 ponderings

2005-01-02 Thread Steven Bethard
Raymond Hettinger wrote:
[Steven Bethard]
(2) Since in all the examples there's a one-to-one correlation between
setting a generator attribute and calling the generator's next function,
aren't these generator attribute assignments basically just trying to
define the 'next' parameter list?
They are not the same.  The generator needs some way to receive the values. 
 The
function arguments cannot be used because they are needed to create the
generator-iterator.  The yield statements likewise won't work because the first
yield is not encountered until well after the first next() call.
Yeah, I wasn't trying to claim that passing the arguments to .next() is 
equivalent to generator attributes, only that the point at which new 
values for the generator state variables are provided correspond with 
calls to .next().  So if there was a means within a generator of getting 
access to the arguments passed to .next(), generator attributes would be 
unnecessary for the examples provided.

The given examples are minimal and are intended only to demonstrate the idea.
Do you have an example where the generator state isn't updated in 
lock-step with .next() calls?  I'd be interested to look at an example 
of this...

I definitely don't like the
idea of a magical __self__ variable that isn't declared anywhere.
It is no more magical than f.__name__ or f.__doc__ for functions.
I'm not sure this is quite a fair parallel.  The difference here is that 
 f.__name__ and f.__doc__ are accessed as attributes of the f object, 
and the __name__ and __doc__ attributes are created as a result of 
function creation.  The proposed __self__ is (1) not an attribute that 
becomes available, rather, a new binding local to the function, and (2) 
not created as a result of generator object creation but created as a 
result of calling .next() on the generator object.

Also, the __self__ argument is a non-issue because there are other alternate
approaches such as providing a function that retrieves the currently
running generator.
Is there a discussion of some of these alternate suggested approaches 
somewhere you could point me to?

The more important part of the PEP is the idea for generator exceptions.  The
need arises in the context of flushing/closing resources upon generator
termination.
I wonder if maybe it would be worth moving this part to a separate PEP. 
 It seems like it could probably stand on its own merit, and having it 
in with the generator attributes PEP means it isn't likely to be 
accepted separately.

Of course, I would probably declare a class and provide a .close() 
method. =)

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: arbitrary number of arguments in a function declaration

2005-01-02 Thread Steven Bethard
rbt wrote:
How do I set up a function so that it can take an arbitrary number of 
arguments?
If you haven't already, you should check out the Tutorial:
http://docs.python.org/tut/node6.html#SECTION00673
How might I make this dynamic so 
that it can handle any amount of expenses?

def tot_expenses(self, e0, e1, e2, e3):
    pass

py> class C(object):
...     def tot_expenses(self, *expenses):
...         print expenses
...
py> C().tot_expenses(110, 24)
(110, 24)
py> C().tot_expenses(110, 24, 2, 56)
(110, 24, 2, 56)
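
And presumably the method would then total them rather than just printing, 
e.g.:

py> class C(object):
...     def tot_expenses(self, *expenses):
...         return sum(expenses)
...
py> C().tot_expenses(110, 24, 2, 56)
192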
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda as declarative idiom (was RE: what is lambda used for in real code?)

2005-01-03 Thread Steven Bethard
Roman Suzi wrote:
I wish lambdas will not be deprecated in Python but the key to that is
dropping the keyword (lambda). If anybody could think of a better syntax for
lambdas _with_ arguments, we could develop PEP 312 further.
Some suggestions from recent lambda threads (I only considered the ones 
that keep lambda as an expression):

* Args Before Expression *
Nick Coghlan: def-to syntax [1]
(def (a, b, c) to f(a) + o(b) - o(c))
(def (x) to x * x)
(def () to x)
(def (*a, **k) to x.bar(*a, **k))
((def () to x(*a, **k)) for x, a, k in funcs_and_args_list)
Nick Coghlan: def-arrow syntax [1]
(def (a, b, c) -> f(a) + o(b) - o(c))
(def (x) -> x * x)
(def () -> x)
(def (*a, **k) -> x.bar(*a, **k))
((def () -> x(*a, **k)) for x, a, k in funcs_and_args_list)
Alex Martelli: def-as syntax [2]
(def (a, b, c) as f(a) + o(b) - o(c))
(def (x) as x * x)
(def () as x)
(def (*a, **k) as x.bar(*a, **k))
((def () as x(*a, **k)) for x, a, k in funcs_and_args_list)
Dave Benjamin: fun syntax [7]
(fun(a, b, c): f(a) + o(b) - o(c))
(fun(x): x * x)
(fun(): x)
(fun(*a, **k): x.bar(*a, **k))
((fun(): x(*a, **k)) for x, a, k in funcs_and_args_list)
* Expression Before Args *
Robert Brewer: for (no-parens) syntax [3]
(f(a) + o(b) - o(c) for a, b, c)
(x * x for x)
(x for ())
(x.bar(*a, **k) for *a, **k)
((x(*a, **k) for ()) for x, a, k in funcs_and_args_list)
Nick Coghlan: for syntax [6]
(f(a) + o(b) - o(c) for (a, b, c))
(x * x for (x))
(x for ())
(x.bar(*a, **k) for (*a, **k))
((x(*a, **k) for ()) for x, a, k in funcs_and_args_list)
Nick Coghlan: def-from syntax [4]
(def f(a) + o(b) - o(c) from (a, b, c))
(def x * x from (x))
(def x from ())
(def x.bar(*a, **k) from (*a, **k))
((def x(*a, **k) from ()) for x, a, k in funcs_and_args_list)
Michael Spencer: from-args syntax [5]
(f(a) + o(b) - o(c) from args(a, b, c))
(x * x from args(x))
(x from args())
(x.bar(*a, **k) from args(*a, **k))
((x(*a, **k) from args()) for x, a, k in funcs_and_args_list)
Michael Spencer: for-args syntax [5]
(f(a) + o(b) - o(c) for args(a, b, c))
(x * x for args(x))
(x for args())
(x.bar(*a, **k) for args(*a, **k))
((x(*a, **k) for args()) for x, a, k in funcs_and_args_list)
So there's a bunch of ideas out there.  I don't know if any of them 
could be overwhelmingly preferred over lambda.

Personally, I lean slightly towards the def-from syntax because it uses 
the 'def' keyword to bring your attention to the fact that a function is 
being defined, and it gives the expression precedence over the arglist, 
which makes sense to me for an anonymous function, where (IMHO) the 
expression is really the most important part of the declaration.

OTOH, I think Michael Spencer's args() function, if implementable, could 
have a lot of cool uses, like getting the arguments passed to next 
within a generator.  (See the thread about that[8].)

Steve
[1]http://mail.python.org/pipermail/python-list/2004-December/256859.html
[2]http://mail.python.org/pipermail/python-list/2004-December/256881.html
[3]http://mail.python.org/pipermail/python-list/2004-December/257023.html
[4]http://boredomandlaziness.skystorm.net/2004/12/anonymous-functions-in-python.html
[5]http://mail.python.org/pipermail/python-list/2004-December/257893.html
[6]http://mail.python.org/pipermail/python-list/2004-December/257977.html
[7]http://mail.python.org/pipermail/python-list/2005-January/258441.html
[8]http://mail.python.org/pipermail/python-list/2005-January/258238.html
--
http://mail.python.org/mailman/listinfo/python-list


Re: Hlelp clean up clumpsy code

2005-01-04 Thread Steven Bethard
It's me wrote:
Another newbie question.
There must be a cleaner way to do this in Python:
 section of C looking Python code 
a = [[1,5,2], 8, 4]
a_list = {}
i = 0
for x in a:
    if isinstance(x, (int, long)):
        x = [x,]
    for w in [y for y in x]:
        i = i + 1
        a_list[w] = i
print a_list
#
The code prints what I want but it looks so "C-like".  How can I make it
more Python like?
Don't know what version of Python you're using, but if you're using 2.4 
(or with a few slight modifications, with 2.3), you can write:

py> dict((item, i+1)
...  for i, item in enumerate(
...  a_sub_item
...  for a_item in a
...  for a_sub_item
...  in isinstance(a_item, (int, long)) and [a_item] or a_item))
{8: 4, 1: 1, 2: 3, 4: 5, 5: 2}
Basically, I use a generator expression to flatten your list, and then 
use enumerate to count the indices instead of keeping the i variable.
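
If the nested generator expression is too dense for your taste, the same 
idea reads a bit better (to me, anyway) with a small helper generator -- a 
sketch, using your a list from above:

py> def flatten(items):
...     for item in items:
...         if isinstance(item, (int, long)):
...             yield item
...         else:
...             for sub in item:
...                 yield sub
...
py> dict((item, i + 1) for i, item in enumerate(flatten(a)))
{8: 4, 1: 1, 2: 3, 4: 5, 5: 2}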

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: search backward

2005-01-04 Thread Steven Bethard
Robert wrote:
I need to find the location of a short string in a long string. The
problem however is that i need to search backward.
Does anybody know how to search in reverse direction?
How about str.rfind?
py> s = 'abc:def:abc'
py> s.rfind('abc')
8
Steve
--
http://mail.python.org/mailman/listinfo/python-list


why does UserDict.DictMixin use keys instead of __iter__?

2005-01-04 Thread Steven Bethard
Sorry if this is a repost -- it didn't appear for me the first time.
So I was looking at the Language Reference's discussion about emulating
container types[1], and nowhere in it does it mention that .keys() is
part of the container protocol.  Because of this, I would assume that to
use UserDict.DictMixin correctly, a class would only need to define
__getitem__, __setitem__, __delitem__ and __iter__.  So why does
UserDict.DictMixin require keys() to be defined?
py> class D(object, UserDict.DictMixin):
...     """Simple dict wrapper that implements container protocol"""
...     def __init__(self, dict): self.dict = dict
...     def __len__(self): return len(self.dict)
...     def __getitem__(self, key): return self.dict[key]
...     def __setitem__(self, key, value): self.dict[key] = value
...     def __delitem__(self, key): del self.dict[key]
...     def __iter__(self): return iter(self.dict)
...     def __contains__(self, key): return key in self.dict
...
py> d = D(dict(a=1, b=2))
py> d.clear()
Traceback (most recent call last):
  File "", line 1, in ?
  File "C:\Program Files\Python\lib\UserDict.py", line 114, in clear
for key in self.keys():
AttributeError: 'D' object has no attribute 'keys'
py> d.keys()
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: 'D' object has no attribute 'keys'
I thought about submitting a patch, but I couldn't think of a way that
didn't raise backwards compatibility concerns...
Steve
[1]http://docs.python.org/ref/sequence-types.html
--
http://mail.python.org/mailman/listinfo/python-list


Re: why does UserDict.DictMixin use keys instead of __iter__?

2005-01-04 Thread Steven Bethard
Nick Coghlan wrote:
Steven Bethard wrote:
Sorry if this is a repost -- it didn't appear for me the first time.
So I was looking at the Language Reference's discussion about emulating
container types[1], and nowhere in it does it mention that .keys() is
part of the container protocol.  Because of this, I would assume that to
use UserDict.DictMixin correctly, a class would only need to define
__getitem__, __setitem__, __delitem__ and __iter__.  So why does
UserDict.DictMixin require keys() to be defined?

Because it's a DictMixin, not a ContainerMixin?
"Containers usually are sequences (such as lists or tuples) or mappings 
(like dictionaries)".

.keys() is definitely part of the standard dictionary interface, and not 
something the mixin can derive from the generic container methods.
Why is that?  Isn't keys derivable as:
    def keys(self):
        return list(self)
if __iter__ is defined?
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: why does UserDict.DictMixin use keys instead of __iter__?

2005-01-04 Thread Steven Bethard
John Machin wrote:
Steven Bethard wrote:
So I was looking at the Language Reference's discussion about
emulating container types[1], and nowhere in it does it mention that
.keys() is part of the container protocol.
I don't see any reference to a "container protocol".
Sorry, I extrapolated "container protocol" from this statement:
"Containers usually are sequences (such as lists or tuples) or mappings 
(like dictionaries), but can represent other containers as well. The 
first set of methods is used either to emulate a sequence or to emulate 
a mapping"

and the fact that there is a "sequence protocol" and a "mapping protocol".
But all I was really reading from this statement was that the "first set 
of methods" (__len__, __getitem__, __setitem__, __delitem__ and 
__iter__) were more integral than the second set of methods (keys(), 
values(), ...).


What I do see is
(1) """It is also recommended that mappings provide the methods keys(),
..."""
You skipped the remaining 13 methods in this list:
"It is also recommended that mappings provide the methods keys(), 
values(), items(), has_key(), get(), clear(), setdefault(), iterkeys(), 
itervalues(), iteritems(), pop(), popitem(), copy(), and update() 
behaving similar to those for Python's standard dictionary objects."

This is the "second set of methods" I mentioned above.  I don't 
understand why the creators of UserDict.DictMixin decided that keys(), 
from the second list, is more important than __iter__, from the first list.


Because of this, I would assume that to
use UserDict.DictMixin correctly, a class would only need to define
__getitem__, __setitem__, __delitem__ and __iter__.

So I can't see why would you assume that, given that the docs say in
effect "you supply get/set/del + keys as the building blocks, the
DictMixin class will provide the remainder". This message is reinforced
in the docs for UserDict itself.
Sorry, my intent was not to say that I didn't know from the docs that 
UserDict.DictMixin required keys().  Clearly it's documented.  My 
question was *why* does it use keys()?  Why use keys() when keys() can 
be derived from __iter__, and __iter__ IMHO looks to be a more basic 
part of the mapping protocol.

In any case, isn't UserDict past history? Why are you mucking about
with it?
UserDict is past history, but DictMixin isn't.  As you note, DictMixin 
is even mentioned in the section of the Language Reference that we're 
discussing:

"The UserDict module provides a DictMixin class to help create those 
methods from a base set of __getitem__(), __setitem__(), __delitem__(), 
and keys()."

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda as declarative idiom (was RE: what is lambda used for in real code?)

2005-01-04 Thread Steven Bethard
Bengt Richter wrote:
On Mon, 03 Jan 2005 18:54:06 GMT, Steven Bethard <[EMAIL PROTECTED]> wrote:

Roman Suzi wrote:
I wish lambdas will not be deprecated in Python but the key to that is
dropping the keyword (lambda). If anybody could think of a better syntax for
lambdas _with_ arguments, we could develop PEP 312 further.
Some suggestions from recent lambda threads (I only considered the ones 
that keep lambda as an expression):

Just for reference, am I correct in assuming these are the equivalent
uses of lambda?:
 lambda a, b, c:f(a) + o(b) - o(c)
 lambda x: x * x
 lambda : x
 lambda *a, **k: x.bar(*a, **k)
 (lambda : x(*a, **k)) for x, a, k in funcs_and_args_list)
Yeah, I believe that was the intention, though I stole the examples from 
[1].

That last seems like it might need the default-arg-value hack: i.e.,
 (lambda x=x, a=a, k=k: x(*a, **k)) for x, a, k in funcs_and_args_list)
Good point.
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Lambda as declarative idiom (was RE: what is lambda used for in real code?)

2005-01-04 Thread Steven Bethard
Roman Suzi wrote:
On Mon, 3 Jan 2005, Steven Bethard wrote:

Roman Suzi wrote:
I wish lambdas will not be deprecated in Python but the key to that is
dropping the keyword (lambda). If anybody could think of a better syntax for
lambdas _with_ arguments, we could develop PEP 312 further.
Some suggestions from recent lambda threads (I only considered the ones
that keep lambda as an expression):
Wow! Is there any wiki-page these could be put on?
It's now on:
http://www.python.org/moin/AlternateLambdaSyntax
and I added Bengt Richter's and your recent suggestions.
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: why does UserDict.DictMixin use keys instead of __iter__?

2005-01-04 Thread Steven Bethard
John Machin wrote:
OK, I'll rephrase: what is your interest in DictMixin?
My interest: I'm into mappings that provide an approximate match
capability, and have a few different data structures that I'd like to
implement as C types in a unified manner. The plot includes a base type
that, similarly to DictMixin, provides all the non-basic methods.
I was recently trying to prototype a simple mapping type that implements 
the suggestion "Improved default value logic for Dictionaries" from
http://www.python.org/moin/Python3_2e0Suggestions
You can't just inherit from dict and override dict.__getitem__ because 
dict.__getitem__ isn't always called:

py> class D(dict):
...     def __init__(*args, **kwds):
...         self = args[0]
...         self.function, self.args, self.kwds = None, None, None
...         super(D, self).__init__(*args[1:], **kwds)
...     def setdefault(self, function, *args, **kwds):
...         self.function, self.args, self.kwds = function, args, kwds
...     def __getitem__(self, key):
...         if key not in self:
...             super(D, self).__setitem__(
...                 key, self.function(*self.args, **self.kwds))
...         return super(D, self).__getitem__(key)
...
py> d = D()
py> d.setdefault(list)
py> d['c'].append(2)
py> d
{'c': [2]}
py> print d.get('d') # should print []
None
This, of course, is exactly the kind of thing that DictMixin is designed 
for. =)
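
In case it helps anyone following along, here's roughly what a minimal
DictMixin-based mapping looks like (the class name and the dict-backed
storage are just for illustration) -- you write the four basic methods
and the mixin derives get(), setdefault(), items(), iteration and
friends from them:

from UserDict import DictMixin

class SimpleMapping(DictMixin):
    """Supplies only the basics; DictMixin fills in get, setdefault,
    items, iteritems, __contains__, update, etc. on top of them."""
    def __init__(self):
        self._data = {}
    def __getitem__(self, key):
        return self._data[key]
    def __setitem__(self, key, value):
        self._data[key] = value
    def __delitem__(self, key):
        del self._data[key]
    def keys(self):
        return self._data.keys()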

Of course, it's no trouble for me to implement keys().  I was just 
wondering why that design decision was made when it seems like __iter__ 
is more integral to the mapping protocol.  And if you want efficient 
iteration over your mapping type, you're going to have to define 
__iter__ too...

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: what is lambda used for in real code?

2005-01-06 Thread Steven Bethard
I wrote:
* Functions I don't know how to rewrite
Some functions I looked at, I couldn't figure out a way to rewrite them 
without introducing a new name or adding new statements.

[snip]
inspect.py: def formatargspec(args, varargs=None, varkw=None,
  ...
  formatvarargs=lambda name: '*' + name,
  formatvarkw=lambda name: '**' + name,
  formatvalue=lambda value: '=' + repr(value),
inspect.py: def formatargvalues(args, varargs, varkw, locals,
...
formatvarargs=lambda name: '*' + name,
formatvarkw=lambda name: '**' + name,
formatvalue=lambda value: '=' + repr(value),
Realized today that I do know how to rewrite these without a lambda, 
using bound methods:
def formatargspec(args, varargs=None, varkw=None,
  ...
  formatvarargs='*%s'.__mod__,
  formatvarkw='**%s'.__mod__,
  formatvalue='=%r'.__mod__,
I like this rewrite a lot because you can see that the function is 
basically just the given format string. YMMV, of course.
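
If the bound-method trick looks opaque, a quick interactive check
(purely illustrative) shows what those defaults actually do:

py> format_varargs = '*%s'.__mod__
py> format_varargs('args')
'*args'
py> '=%r'.__mod__('spam')
"='spam'"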

Similarly, if DEF_PARAM, DEF_BOUND and glob are all ints (or supply the 
int methods), I can rewrite

symtable.py:   self.__params = self.__idents_matching(lambda x:
  x & DEF_PARAM)
symtable.py:   self.__locals = self.__idents_matching(lambda x:
  x & DEF_BOUND)
symtable.py:   self.__globals = self.__idents_matching(lambda x:
   x & glob)
with the bound methods of the int objects:
self.__params = self.__idents_matching(DEF_PARAM.__rand__)
self.__locals = self.__idents_matching(DEF_BOUND.__rand__)
self.__globals = self.__idents_matching(glob.__rand__)
(Actually, I could probably use __and__ instead of __rand__, but 
__rand__ was the most direct translation.)
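
Just to sanity-check the __rand__ version -- here filter() stands in for
__idents_matching, and the flag value is made up:

py> DEF_PARAM = 2
py> DEF_PARAM.__rand__(6), 6 & DEF_PARAM
(2, 2)
py> filter(DEF_PARAM.__rand__, [1, 2, 3, 4, 6, 7])
[2, 3, 6, 7]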

Ahh, the glory of bound methods... ;)
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2005-01-07 Thread Steven Bethard
Alan Gauld wrote:
On Thu, 06 Jan 2005 21:02:46 -0600, Doug Holton <[EMAIL PROTECTED]> wrote:
used, but there are people who do not like "lambda":
http://lambda-the-ultimate.org/node/view/419#comment-3069
The word "lambda" is meaningless to most people.  Of course so is "def", 
which might be why Guido van Robot changed it to "define": 
http://gvr.sourceforge.net/screen_shots/

The unfamiliar argument doesn't work for me. After all most
people are unfamiliar with complex numbers (or imaginary) numbers
but python still provides a complex number type. Just because the
name is unfamiliar to some doesn't mean we shouldn't use the
term if its the correct one for the concept.
I'm not sure this is really a fair comparison.  What's the odds that if 
you're unfamiliar with complex numbers that you're going to have to read 
or write code that uses complex numbers?  Probably pretty low.  I don't 
think I've ever had to read or write such code, and I *do* understand 
complex numbers.  Lambdas, on the other hand, show up in all kinds of 
code, and even though I hardly ever use them myself, I have to 
understand them because other people do (over-)use them.

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-07 Thread Steven Bethard
Andrey Tatarinov wrote:
Hi.
It would be great to be able to reverse usage/definition parts in 
haskell-way with "where" keyword. Since Python 3 would miss lambda, that 
would be extremly useful for creating readable sources.

Usage could be something like:
 >>> res = [ f(i) for i in objects ] where:
 >>> def f(x):
 >>> #do something
or
 >>> print words[3], words[5] where:
 >>> words = input.split()
- defining variables in "where" block would restrict their visibility to 
one expression
How often is this really necessary?  Could you describe some benefits of 
this?  I think the only time I've ever run into scoping problems is with 
lambda, e.g.

[lambda x: f(x) for x, f in lst]
instead of
[lambda x, f=f: for x, f in lst]
Are there other situations where you run into these kinds of problems?
- it's more easy to read sources when you know which part you can skip, 
compare to

 >>> def f(x):
 >>> #do something
 >>> res = [ f(i) for i in objects ]
in this case you read definition of "f" before you know something about 
it usage.
Hmm...  This seems very heavily a matter of personal preference.  I find 
that your where clause makes me skip the 'res' assignment to read what 
the 'res' block contains.  I had to read it twice before I actually 
looked at the list comprehension.  Of course, I'm sure I could be 
retrained to read it the right way, but until I see some real benefit 
from it, I'd rather not have to.

TOOWTDI-ily-yrs,
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: python3: 'where' keyword

2005-01-07 Thread Steven Bethard
Steven Bethard wrote:
How often is this really necessary?  Could you describe some benefits of 
this?  I think the only time I've ever run into scoping problems is with 
lambda, e.g.

[lambda x: f(x) for x, f in lst]
instead of
[lambda x, f=f: for x, f in lst]
Sorry, bad example, this should have looked something more like:
[lambda y: f(x, y) for x, f in lst]
...
[lambda y, x=x, f=f: f(x, y) for x, f in lst]
where you actually need the lambda.
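
For anyone wondering why the defaults matter, a small illustrative
session (the contents of lst are made up): without the defaults every
lambda sees the *final* values of x and f, with the defaults each lambda
captures the values from its own iteration:

py> lst = [(1, pow), (2, pow)]
py> fs = [lambda y: f(x, y) for x, f in lst]
py> [g(2) for g in fs]
[4, 4]
py> fs = [lambda y, x=x, f=f: f(x, y) for x, f in lst]
py> [g(2) for g in fs]
[1, 4]
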
Steve
--
http://mail.python.org/mailman/listinfo/python-list


switching an instance variable between a property and a normal value

2005-01-07 Thread Steven Bethard
I'd like to be able to have an instance variable that can sometimes be 
accessed as a property, and sometimes as a regular value, e.g. something 
like:

py> class C(object):
...     def usevalue(self, x):
...         self.x = x
...     def usefunc(self, func, *args, **kwds):
...         self._func, self._args, self._kwds = func, args, kwds
...         self.x = C._x
...     def _get(self):
...         return self._func(*self._args, **self._kwds)
...     _x = property(_get)
...
py> c = C()
py> c.usevalue(4)
py> c.x
4
py> c.usefunc(list)
py> c.x # I'd like this to print []

Of course, the code above doesn't do what I want because C._x is a 
property object, so the assignment to self.x in usefunc just adds 
another name for that property object.  If I use self._x (or 
getattr(self, '_x'), etc.) then self._func only gets called that one time:

py> class C(object):
...     def usevalue(self, x):
...         self.x = x
...     def usefunc(self, func, *args, **kwds):
...         self._func, self._args, self._kwds = func, args, kwds
...         self.x = self._x
...     def _get(self):
...         return self._func(*self._args, **self._kwds)
...     _x = property(_get)
...
py> c = C()
py> c.usefunc(list)
py> c.x is c.x # I'd like this to be False
True
Is there any way to get the kind of behavior I'm looking for?  That is, 
is there any way to make self.x use the property magic only some of the 
time?
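
The underlying issue, as far as I can tell, is that property objects only
do their magic when found on the class -- stick one on an instance and
it's just an ordinary stored object.  A minimal illustration (a separate
toy class, not the C above):

py> class P(object):
...     x = property(lambda self: 42)
...
py> p = P()
py> p.x           # found on the class, so the property's getter runs
42
py> p.y = property(lambda self: 42)
py> type(p.y)     # found on the instance, so it's just the property object
<type 'property'>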

Steve
P.S.  Yes, I know I could make both access paths run through the 
property magic, with code that looks something like:

py> class C(object):
...     _undefined = object
...     def __init__(self):
...         self._value, self._func = C._undefined, C._undefined
...     def usevalue(self, x):
...         self._value = x
...         self._func = C._undefined
...     def usefunc(self, func, *args, **kwds):
...         self._func, self._args, self._kwds = func, args, kwds
...         self._value = C._undefined
...     def _get(self):
...         if self._value is not C._undefined:
...             return self._value
...         if self._func is not C._undefined:
...             return self._func(*self._args, **self._kwds)
...         raise AttributeError('x')
...     x = property(_get)
...
py> c = C()
py> c.usevalue(4)
py> c.x
4
py> c.usefunc(list)
py> c.x is c.x
False
This code is kinda complicated though because I have to make sure that 
only one of self._func and self._value is defined at any given time.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Display Function Code Body?

2005-01-07 Thread Steven Bethard
Haibao Tang wrote:
What I would like to do is to write a function like disp(), when typed,
it can give you the code infomation.
Check the docs entitled "Retrieving source code":
http://docs.python.org/lib/inspect-source.html
Depending on what you want, you may be able to use inspect.getsource:
py> import inspect
py> import string
py> print inspect.getsource(string.split)
def split(s, sep=None, maxsplit=-1):
    """split(s [,sep [,maxsplit]]) -> list of strings

    Return a list of the words in the string s, using sep as the
    delimiter string.  If maxsplit is given, splits at no more than
    maxsplit places (resulting in at most maxsplit+1 words).  If sep
    is not specified or is None, any whitespace string is a separator.

    (split and splitfields are synonymous)
    """
    return s.split(sep, maxsplit)
However, this won't work for functions you've defined interactively I 
don't think.  On the other hand, if you defined them interactively, you 
can just scroll up. ;)

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: switching an instance variable between a property and a normal value

2005-01-07 Thread Steven Bethard
Robert Brewer wrote:
Steven Bethard wrote:
I'd like to be able to have an instance variable that can 
sometimes be 
accessed as a property, and sometimes as a regular value, 
e.g. something 
like:
...
py> c.x is c.x # I'd like this to be False

You'd like 'c.x is c.x' to be FALSE? You can't be serious. Must be a
typo.
Hmm... maybe I better give the larger context.  The short answer is no, 
I'm serious.

I'm playing around with a mapping type that uses setdefault as suggested 
in http://www.python.org/moin/Python3_2e0Suggestions.  The default value 
for a missing key is either a simple value, or a value generated from a 
function.  If it's generated from the function, it should be generated 
new each time so that, for example, if the default is an empty list, 
d[1] and d[2] don't access the same list.  This is why 'c.x is c.x' 
should be False if I'm using the function.

Some more context:
Before I added the ability to use a function, my code looked something like:
py> class D(dict):
...     def __init__(self):
...         self._default = None
...     def __getitem__(self, key):
...         if not key in self:
...             self[key] = self._default
...         return dict.__getitem__(self, key)
...     def setdefaultvalue(self, value):
...         self._default = value
...
py> d = D()
py> d[0]
py> d.setdefaultvalue(0)
py> d[1]
0
py> d[2] += 1
py> d
{0: None, 1: 0, 2: 1}
To add the ability to use a function to create the default value, it 
would have been nice to leave basically the same code I already had and 
do something like:

py> class D(dict):
...     def __init__(self):
...         self._default = None
...     def __getitem__(self, key):
...         if not key in self:
...             self[key] = self._default
...         return dict.__getitem__(self, key)
...     def setdefaultvalue(self, value):
...         self._default = value
...     def setdefaultfunction(self, func, *args, **kwds):
...         self._func, self._args, self._kwds = func, args, kwds
...         self._default = D._defaultfunc
...     def _get(self):
...         return self._func(*self._args, **self._kwds)
...     _defaultfunc = property(_get)
...
Of course, this doesn't work for the reasons that I discussed, but the 
idea would be that D would use a regular attribute when a simple value 
was needed, and a property when a value had to be generated by a 
function each time.

The best option I guess is to rewrite this with a _getdefault() function 
instead of a property:

py> class D(dict):
...     _undefined = object()
...     def __init__(self):
...         self._value = None
...         self._func = self._undefined
...     def __getitem__(self, key):
...         if not key in self:
...             self[key] = self.getdefault()
...         return dict.__getitem__(self, key)
...     def getdefault(self):
...         if self._value is not self._undefined:
...             return self._value
...         if self._func is not self._undefined:
...             return self._func(*self._args, **self._kwds)
...     def setdefaultvalue(self, value):
...         self._value = value
...         self._func = self._undefined
...     def setdefaultfunction(self, func, *args, **kwds):
...         self._func, self._args, self._kwds = func, args, kwds
...         self._value = self._undefined
...
But I was hoping to avoid having two separate attributes (self._value 
and self._func) when only one should have a value at any given time.

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Notification of PEP Updates

2005-01-07 Thread Steven Bethard
Bengt Richter wrote:
On Sat, 08 Jan 2005 03:28:34 +1000, Nick Coghlan <[EMAIL PROTECTED]> wrote:

I can't recall which thread this came up in, so I'm starting a new one. . .
Barry Warsaw has kindly added a "peps" topic to the python-checkins mailing 
list. If you want to be notified only when PEP's get updated, then subscribe to 
python-checkins and edit your settings to select just the 'peps' topic.
How does one get to editing one's settings?

Go to the bottom of the page
http://mail.python.org/mailman/listinfo/python-checkins
under "Python-checkins Subscribers", fill in your email address and 
click "Unsubscribe or edit options".  Fill in the "password" field on 
the next page and click "Log in".  The topic option should be near the 
bottom of the page.

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: switching an instance variable between a property and a normal value

2005-01-07 Thread Steven Bethard
Robert Brewer wrote:
Steven Bethard wrote:
I'm playing around with a mapping type that uses setdefault 
as suggested 
in http://www.python.org/moin/Python3_2e0Suggestions.  The 
default value 
for a missing key is either a simple value, or a value 
generated from a 
function.  If it's generated from the function, it should be 
generated 
new each time so that, for example, if the default is an empty list, 
d[1] and d[2] don't access the same list.  This is why 'c.x is c.x' 
should be False if I'm using the function.

The best option I guess is to rewrite this with a 
_getdefault() function instead of a property:

But I was hoping to avoid having two separate attributes (self._value 
and self._func) when only one should have a value at any given time.

It seems to me like you were using the property as a glorified flag.
Just use a flag.
import types

ftypes = (types.BuiltinFunctionType, types.BuiltinMethodType,
          types.FunctionType, types.GeneratorType, types.LambdaType,
          types.MethodType, types.UnboundMethodType)

class D(dict):
    def __init__(self):
        self._default = None
        self._call_default = False
    def __getitem__(self, key):
        if not key in self:
            if self._call_default:
                self[key] = self._default()
            else:
                self[key] = self._default
        return dict.__getitem__(self, key)
    def setdefaultvalue(self, value):
        self._default = value
        self._call_default = isinstance(value, ftypes)

...or:

    def setdefaultvalue(self, value, call_callables=True):
        self._default = value
        self._call_default = callable(value) and call_callables
Well, the right solution using a flag for the particular behavior I was 
looking for would have to look something like:

class D(dict):
    def __init__(self):
        self._default = None
        self._call = False
    def __getitem__(self, key):
        if not key in self:
            if self._call:
                func, args, kwds = self._default
                self[key] = func(*args, **kwds)
            else:
                self[key] = self._default
        return dict.__getitem__(self, key)
    def setdefault(self, value, call=False, *args, **kwds):
        if call:
            self._default = value, args, kwds
        else:
            self._default = value
        self._call = call
where I also accept *args and **kwds when the default value is to be 
called.  It's certainly doable with a flag, but note that I have to 
check the flag every time in both __getitem__ and setdefault.  It'd 
minimize redundancy a bit if I only had to check it in one place.  Guess 
I could do something like:

class D(dict):
    def __init__(self):
        self._default = None
        self._call_default = False
    def __getitem__(self, key):
        if not key in self:
            self[key] = self._default()
        return dict.__getitem__(self, key)
    def setdefault(self, value, call=False, *args, **kwds):
        if call:
            def caller():
                return value(*args, **kwds)
        else:
            def caller():
                return value
        self._default = caller
Then I only have to test call when setdefault is called.  Not sure I 
like this any better though...

Steve
P.S.  The reason I had two functions, setdefaultvalue and 
setdefaultfunction has to do with argument parsing for 
setdefaultfunction.  Note that

def setdefault(self, value, call=False, *args, **kwds):
...
means that you can't call functions with keyword arguments 'value' or 
'call'.  That means I have to rewrite this function as something like

def setdefault(*args, **kwds):
    self = args[0]
    value = args[1]
    call = ???
    ...
The problem is, if 'call' is a keyword argument, I don't know whether it 
was intended as one of the function arguments or as an argument to 
setdefault.

If setdefaultvalue and setdefaultfunction are two separate methods, I 
don't run into this problem.
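
To make the collision concrete, here's the sort of call that goes wrong
(a toy illustration; the 'pass' body is just a placeholder):

py> class D(dict):
...     def setdefault(self, value, call=False, *args, **kwds):
...         pass
...
py> D().setdefault(dict, True, value=3)  # value=3 was meant for dict(), not setdefault()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: setdefault() got multiple values for keyword argument 'value'
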
--
http://mail.python.org/mailman/listinfo/python-list


Re: Securing a future for anonymous functions in Python

2005-01-07 Thread Steven Bethard
Alan Gauld wrote:
On Fri, 07 Jan 2005 08:44:57 -0700, Steven Bethard
<[EMAIL PROTECTED]> wrote:
The unfamiliar argument doesn't work for me. After all most
people are unfamiliar with complex numbers (or imaginary) numbers
complex numbers.  Lambdas, on the other hand, show up in all kinds of 
code, and even though I hardly ever use them myself, I have to 
understand them because other people do (over-)use them.

That's a fair point I suppose but I still don't see much point in
introducing new names and syntaxes when the existing name is a
sensible one, even if unfamiliar to many. After all it works in
Lisp and Haskell - Haskell even uses Lambda as its emblem...
Yeah, I personally expect that if GvR doesn't like lambda now, he won't 
like any of the new syntaxes either.  But I'm in the camp that won't 
miss lambda if it's gone, so I'm not too worried. ;)

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Speed revisited

2005-01-08 Thread Steven Bethard
Bulba! wrote:
Following advice of two posters here (thanks) I have written two
versions of  the same program, and both of them work, but the
difference in speed is drastic, about 6 seconds vs 190 seconds
for about 15000 of processed records, taken from 2 lists of
dictionaries.
I've read "Python Performance Tips" at
http://manatee.mojam.com/~skip/python/fastpython.html
..but still don't understand why the difference is so big. 

[snip]
# snippet 1, this runs in about 6 seconds
!def prepend(l):
!    map = {}
!    for d in l:
!        key = d['English']
!        map[key] = d
!    return map
!
!old_map = prepend(oldl)
!new_map = prepend(newl)
!
!for engphrase in old_map:
!    if engphrase in new_map:
!        o = old_map[engphrase]
!        n = new_map[engphrase]
!        cm.writerow(matchpol(o,n))
# snippet 2, this needs 190 seconds
!while 1:
!    if len(oldl) == 0 or len(newl) == 0:
!        break
!    if oldl[o]['English'] == newl[n]['English']:
!        cm.writerow(matchpol(oldl[o], newl[n]))
!        del oldl[o]
!        del newl[n]
!        o, n = 0, 0
!        continue
!    elif cmp(oldl[o]['English'], newl[n]['English']) < 0:
!        if o == len(oldl):
!            cm.writerow(newl[0])
!            del(newl[0])
!            o, n = 0, 0
!            continue
!        o+=1
!    elif cmp(oldl[o]['English'], newl[n]['English']) > 0:
!        if n == len(newl):
!            cm.writerow(newl[0])
!            del(oldl[0])
!            o, n = 0, 0
!            continue
!        n+=1
I believe you're running into the fact that deleting from anywhere but 
the end of a list in Python is O(n), where n is the number of items in 
the list.  Consider:

-- test.py --
def delfromstart(lst):
    while lst:
        del lst[0]

def delfromend(lst):
    for i in range(len(lst)-1, -1, -1):
        del lst[i]
-
[D:\Steve]$ python -m timeit -s "import test" 
"test.delfromstart(range(1000))"
1000 loops, best of 3: 1.09 msec per loop

[D:\Steve]$ python -m timeit -s "import test" "test.delfromend(range(1000))"
1000 loops, best of 3: 301 usec per loop
Note that Python lists are implemented basically as arrays, which means 
that deleting an item from anywhere but the end of the list is O(n) 
because all items in the list must be moved down to fill the hole.

Repeated deletes from a list are generally not the way to go, as your 
example shows. =)
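
(If you really do need to keep popping items off the front, the new
collections.deque in Python 2.4 is probably a better fit, since its
popleft() is O(1).  A quick illustrative session:

py> from collections import deque
py> d = deque(['a', 'b', 'c'])
py> d.popleft()
'a'
py> d
deque(['b', 'c'])
)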

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3: on removing map, reduce, filter

2005-01-09 Thread Steven Bethard
Robert Kern wrote:
Andrey Tatarinov wrote:
anyway list comprehensions are just syntaxic sugar for
 >>> for var in list:
 >>> smth = ...
 >>> res.append(smth)
(is that correct?)
so there will be no speed gain, while map etc. are C-implemented

It depends.
Try
  def square(x):
      return x*x
  map(square, range(1000))
versus
  [x*x for x in range(1000)]
Hint: function calls are expensive.
Some timings to verify this:
$ python -m timeit -s "def square(x): return x*x" "map(square, range(1000))"
1000 loops, best of 3: 693 usec per loop
$ python -m timeit -s "[x*x for x in range(1000)]"
1000 loops, best of 3: 0.0505 usec per loop
Note that list comprehensions are also C-implemented, AFAIK.
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3: on removing map, reduce, filter

2005-01-09 Thread Steven Bethard
John Machin wrote:
Steven Bethard wrote:
Note that list comprehensions are also C-implemented, AFAIK.
Rather strange meaning attached to "C-implemented". The implementation
generates the code that would have been generated had you written out
the loop yourself, with a speed boost (compared with the fastest DIY
approach) from using a special-purpose opcode LIST_APPEND. See below.
Fair enough. ;)
So you basically replace the SETUP_LOOP, CALL_FUNCTION, POP_TOP and 
POP_BLOCK with a DUP_TOP and a LIST_APPEND.
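
If you want to see it for yourself, the dis module will show the
generated bytecode ('data' below is just a placeholder name):

py> import dis
py> dis.dis(compile("[x*x for x in data]", "<example>", "eval"))

On 2.4, the loop body in that disassembly uses the LIST_APPEND opcode
where a hand-written loop would call the list's append method via
CALL_FUNCTION and then POP_TOP the result.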

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3: on removing map, reduce, filter

2005-01-09 Thread Steven Bethard
[EMAIL PROTECTED] wrote:
Steve Bethard wrote:
Robert Kern wrote: 
  def square(x):
      return x*x
  map(square, range(1000))
versus
  [x*x for x in range(1000)]
Hint: function calls are expensive.
$ python -m timeit -s "def square(x): return x*x" "map(square, range(1000))"
1000 loops, best of 3: 693 usec per loop
$ python -m timeit -s "[x*x for x in range(1000)]"
1000 loops, best of 3: 0.0505 usec per loop
Functions will often be complicated enought that inlining them is not
feasible.
True, true.  However, list comprehensions still seem to be comparable in 
speed (at least in Python 2.4):

$ python -m timeit -s "def f(x): return x*x" "[f(x) for x in xrange(1000)]"
1000 loops, best of 3: 686 usec per loop
$ python -m timeit -s "def f(x): return x*x" "map(f, xrange(1000))"
1000 loops, best of 3: 690 usec per loop
Presumably this is because the C code for the byte codes generated by a 
list comprehension isn't too far off of the C code in map.  I looked at 
bltinmodule.c for a bit, but I'm not ambitious enough to try verify this 
hypothesis. ;)

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: else condition in list comprehension

2005-01-10 Thread Steven Bethard
Luis M. Gonzalez wrote:
It's me wrote:
z = [i + (2, -2)[i % 2] for i in range(10)]
But then why would you want to use such feature?  Wouldn't that make
the code much harder to understand then simply:
z=[]
for i in range(10):
if  i%2:
z.append(i-2)
else:
z.append(i+2)
Or are we trying to write a book on "Puzzles in Python"?
Once you get used to list comprehensions (and it doesn't take long),
they are a more concise and compact way to express these operations.
After looking the two suggestions over a couple of times, I'm still 
undecided as to which one is more readable for me.  The problem is not 
the list comprehensions (which I love and use extensively).  The problem 
is the odd syntax that has to be used for an if/then/else expression in 
Python.  I think I would have less trouble reading something like:

z = [i + (if i % 2 then -2 else 2) for i in range(10)]
but, of course, adding a if/then/else expression to Python is unlikely 
to ever happen -- see the rejected PEP 308[1].
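
For what it's worth, here's a quick check that the tuple-indexing version
and the old and/or idiom really compute the same thing (the and/or trick
only works here because -2 is a true value -- a false middle value silently
gives the wrong answer, which is part of what PEP 308 tried to address):

py> [i + (2, -2)[i % 2] for i in range(6)]
[2, -1, 4, 1, 6, 3]
py> [i + (i % 2 and -2 or 2) for i in range(6)]
[2, -1, 4, 1, 6, 3]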

Steve
[1] http://www.python.org/peps/pep-0308.html
--
http://mail.python.org/mailman/listinfo/python-list


Re: syntax error in eval()

2005-01-10 Thread Steven Bethard
harold fellermann wrote:
Python 2.4 (#1, Dec 30 2004, 08:00:10)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
 >>> class X : pass
...
 >>> attrname = "attr"
 >>> eval("X.%s = val" % attrname , {"X":X, "val":5})
Traceback (most recent call last):
  File "", line 1, in ?
  File "", line 1
X.attr = val
   ^
SyntaxError: invalid syntax
You may want to use exec instead of eval:
py> class X(object):
...     pass
...
py> attrname = "attr"
py> exec "X.%s = val" % attrname in dict(X=X, val=5)
py> X.attr
5
But personally, I'd use setattr, since that's what it's for:
py> class X(object):
...     pass
...
py> attrname = "attr"
py> setattr(X, attrname, 5)
py> X.attr
5
Steve
--
http://mail.python.org/mailman/listinfo/python-list


[OT] Re: Old Paranoia Game in Python

2005-01-10 Thread Steven Bethard
Terry Reedy wrote:
Never saw this specific game.  Some suggestions on additional factoring out 
of duplicate code.


def next_page(this_page):
    print "\n"
    if this_page == 0:
        page = 0
        return

The following elif switch can be replaced by calling a selection from a 
list of functions:

[None, page1, page2, ... page57][this_page]()

    elif this_page == 1:
        page1()
        return
    elif this_page == 2:
        page2()
        return
    ...
    elif this_page == 57:
        page57()
        return

Also, a chose3 function to complement your chose (chose2) function would 
avoid repeating the choose-from-3 code used on multiple pages.

Terry J. Reedy
This is what I love about this list.  Where else is someone going to 
look at 1200+ lines of code and give you useful advice?!  ;)  Very cool. 
 (Thanks Terry!)

While we're making suggestions, you might consider writing dice_roll as:
def dice_roll(num, sides):
    return sum(random.randrange(sides) for _ in range(num)) + num

for Python 2.4 or

def dice_roll(num, sides):
    return sum([random.randrange(sides) for _ in range(num)]) + num

for Python 2.3.
You also might consider writing all the pageX methods in a class, so all 
your globals can be accessed through self, e.g.:

class Game(object):
    def __init__(self):
        ...
        self.page = 1
        self.computer_request = 0
        ...
    def page2(self):
        print ...
        if self.computer_request == 1:
            new_clone(45)
        else:
            new_clone(32)
    ...
You could then have your class implement the iteration protocol:
def __iter__(self):
    try:
        while True:
            self.old_page = self.page
            yield getattr(self, "page%i" % self.page)
    except AttributeError:
        raise StopIteration
And then your main could look something like:
def main(args):
    ...
    instructions()
    more()
    character()
    more()
    for page_func in Game():
        page_func()
        print "-"*79
Anyway, cool stuff.  Thanks for sharing!
Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python3: on removing map, reduce, filter

2005-01-10 Thread Steven Bethard
David M. Cooke wrote:
Steven Bethard <[EMAIL PROTECTED]> writes:
Some timings to verify this:
$ python -m timeit -s "def square(x): return x*x" "map(square, range(1000))"
1000 loops, best of 3: 693 usec per loop
$ python -m timeit -s "[x*x for x in range(1000)]"
1000 loops, best of 3: 0.0505 usec per loop

Maybe you should compare apples with apples, instead of oranges :-)
You're only running the list comprehension in the setup code...
$ python2.4 -m timeit -s "def square(x): return x*x" "map(square, range(1000))"
1000 loops, best of 3: 464 usec per loop
$ python2.4 -m timeit "[x*x for x in range(1000)]"
1000 loops, best of 3: 216 usec per loop
So factor of 2, instead of 13700 ...
Heh heh.  Yeah, that'd be better.  Sorry about that!
Steve
--
http://mail.python.org/mailman/listinfo/python-list


stretching a string over several lines (Re: PyChecker messages)

2005-01-10 Thread Steven Bethard
Frans Englich wrote:
Also, another newbie question: How does one make a string stretch over several 
lines in the source code? Is this the proper way?
(1)
print "asda asda asda asda asda asda " \
"asda asda asda asda asda asda " \
"asda asda asda asda asda asda"
A couple of other options here:
(2)
print """asda asda asda asda asda asda
asda asda asda asda asda asda
asda asda asda asda asda asda"""
(3)
print """\
asda asda asda asda asda asda
asda asda asda asda asda asda
asda asda asda asda asda asda"""
(4)
print ("asda asda asda asda asda asda "
   "asda asda asda asda asda asda "
   "asda asda asda asda asda asda")
Note that backslash continuations (1) are on Guido's list of "Python 
Regrets", so it's likely they'll disappear with Python 3.0 (Of course 
this is 3-5 years off.)

I typically use either (3) or (4), but of course the final choice is up 
to you.

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: Handing a number of methods to the same child class

2005-01-11 Thread Steven Bethard
Dave Merrill wrote:
Somewhat silly example:
I know you've hedged this by calling it a "silly" example, but I would 
like to point out that your set_X methods are unnecessary -- since 
Python allows you to overload attribute access, getters and setters are 
generally unnecessary.

class Address:
    def __init__():
        self.displayed_name = ''
        self.adr = ''
        self.city = ''
        self.state = ''
    def set_name(name):
        self.displayed_name = name
    def set_adr(adr):
        self.adr = adr
    def set_city(city):
        self.city = city
    def set_state(state):
        self.state = state

class Phone:
    def __init__():
        self.displayed_name = ''
        self.number = ''
    def set_name(name):
        self.displayed_name = name
    def set_number(number):
        self.number = number

class Customer:
    def __init__():
        self.last_name = ''
        self.first_name = ''
        self.adr = Adr()
        self.phone = Phone()
    def set_adr_name(name):
        self.adr.set_name(name)
    def set_adr_adr(adr):
        self.adr.set_adr(adr)
    def set_adr_city(city):
        self.adr.set_city(city)
    def set_adr_state(state):
        self.adr.set_state(state)
    def set_phone_name(name):
        self.phone.set_name(name)
    def set_phone_number(number):
        self.phone.set_number(number)
IOW, all the adr methods go to the corresponding method in self.adr, all the
phone methods go to self.phone, theorectically etc for other rich
attributes.
What I'd really like is to say, "the following list of methods pass all
their arguments through to a method of the same name in self.adr, and the
following methods do the same but to self.phone." Is there some sane way to
express that in python?
py> class Address(object):
...     def __init__(self, city, state):
...         self.city = city
...         self.state = state
...
py> class Customer(object):
...     def __init__(self, name, addr):
...         self.name = name
...         self.addr = addr
...     def __getattr__(self, attr):
...         if attr.startswith('adr_'):
...             return getattr(self.addr, attr[4:])
...         raise AttributeError(attr)
...
py> c = Customer("Steve", Address("Tucson", "AZ"))
py> c.adr_city
'Tucson'
py> c.adr_state
'AZ'
I've used a slightly different example from yours, but hopefully you can 
see how to apply it in your case.  The __getattr__ method is called when 
an attribute of an object cannot be found in the normal locations (e.g. 
self.__dict__).  For all attributes that begin with "adr_", I delegate 
the attribute lookup to the self.addr object instead.
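
And if you also want the phone_* methods from your original example, the
same idea generalizes with a small prefix table -- a sketch, with a
made-up class attribute name (_delegate_prefixes):

py> class Customer(object):
...     _delegate_prefixes = {'adr_': 'addr', 'phone_': 'phone'}
...     def __init__(self, name, addr, phone):
...         self.name, self.addr, self.phone = name, addr, phone
...     def __getattr__(self, attr):
...         # fall back to the delegate whose prefix matches the name
...         for prefix, target in self._delegate_prefixes.items():
...             if attr.startswith(prefix):
...                 return getattr(getattr(self, target), attr[len(prefix):])
...         raise AttributeError(attr)
...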

Steve
--
http://mail.python.org/mailman/listinfo/python-list


Re: property () for Java Programmers ?

2005-01-12 Thread Steven Bethard
michael wrote:
Hi there,
I am somewhat confused by the following :
class C(object):
    def getx(self): return self.__x
    def setx(self, value): self.__x = "extended" + value
    def delx(self): del self.__x
    x = property(getx, setx, delx, "I'm the 'x' property.")
So far so good :-) But what to do with this now

c = C
c

dir (c)
['__class__', '__delattr__', '__dict__', '__doc__',
'__getattribute__', '__hash__', '__init__', '__module__', '__new__',
'__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__',
'__weakref__', 'delx', 'getx', 'setx', 'x']
c.x

?? What can I do with this "property object" now.
Well, if you actually want your getx/setx/delx to be called, then you 
need an *instance* of class C:

py> c = C()
py> c.x
Traceback (most recent call last):
  File "", line 1, in ?
  File "", line 2, in getx
AttributeError: 'C' object has no attribute '_C__x'
py> c.x = "42"
py> c.x
'extended42'
py> del c.x
py> c.x
Traceback (most recent call last):
  File "", line 1, in ?
  File "", line 2, in getx
AttributeError: 'C' object has no attribute '_C__x'
Note that I used 'c = C()' instead of 'c = C' as in your code.
STeve
--
http://mail.python.org/mailman/listinfo/python-list

