Problems with emulation of numeric types and coercion rules
""" I am trying to write some classes representing the quaternion number. I wrote a base class, which implements only the numerical interface, and a few subclasses, which provide methods for their specific domain. Since the operator methods will be the same for all these classes, the base class operator methods don't explicitly return an instance of this base class, but rather an instance of the class that called them (ie 'return self.__class__(*args)' not 'return Quaternion(*args)') Documentation at http://docs.python.org/ref/coercion-rules.html says: Below, __op__() and __rop__() are used to signify the generic method names corresponding to an operator; __iop__() is used for the corresponding in-place operator. For example, for the operator `+', __add__() and __radd__() are used for the left and right variant of the binary operator, and __iadd__() for the in-place variant. For objects x and y, first x.__op__(y) is tried. If this is not implemented or returns NotImplemented, y.__rop__(x) is tried. If this is also not implemented or returns NotImplemented, a TypeError exception is raised. But see the following exception: Exception to the previous item: if the left operand is an instance of a built-in type or a new-style class, and the right operand is an instance of a proper subclass of that type or class, the right operand's __rop__() method is tried before the left operand's __op__() method. This is done so that a subclass can completely override binary operators. Otherwise, the left operand's __op__ method would always accept the right operand: when an instance of a given class is expected, an instance of a subclass of that class is always acceptable. So I thought my plan would work. But it shows that even if the right operand is a subclass of left operand, its __rop__() method is called first _only_ when it overwrites the parent's method. If the method is inherited or just copied from its parent, the rule is ignored. Here is a simplified example: """ # file: number.py def convert(obj): if isinstance(obj, Number): return obj._value try: f = float(obj) except (TypeError, ValueError): return NotImplemented if f == obj: return f return NotImplemented class Number(object): def __init__(self, value=0.): value = float(value) self._value = value def __add__(self, other): """ Return sum of two real numbers. Returns an instance of self.__class__ so that subclasses would't have to overwrite this method when just extending the base class' interface. """ other = convert(other) if other is NotImplemented: return NotImplemented return self.__class__(self._value + other) __radd__ = __add__ # other methods class Broken(Number): pass class StillBroken(Number): __add__ = __radd__ = Number.__add__ class Working(Number): def __add__(self, other): return Number.__add__(self, other) __radd__ = __add__ __doc__ = \ """ If I now open the python interpreter:: >python Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> from number import * >>> number = Number() >>> broken1 = Broken() >>> broken2 = StillBroken() >>> working = Working() When the subclass is on the left side of the operator, everything works as intended:: >>> print type(broken1 + number).__name__ Broken >>> print type(broken2 + number).__name__ StillBroken >>> print type(working + number).__name__ Working But when the sublass is on the right side of the operator, only the subclass that has owerwritten the operator method gets called first:: >>> print type(number + broken1).__name__ Number >>> print type(number + broken2).__name__ Number >>> print type(number + working).__name__ Working According to the coercion rule, the subclass should allways be called first. Is this a bug (either in documentation or in python), or should I stop trying to 'upcast' the return value? I did find a solution to this problem, but it isn't pretty and I'm not the only one using this method. Also if this is a bug could this mail be used as a bug report? Thanks in advance. Ziga """ if __name__ == '__main__': import doctest doctest.testmod() -- http://mail.python.org/mailman/listinfo/python-list
Re: Problems with emulation of numeric types and coercion rules
Never mind, I forgot that the class inheritance hierarchy is a tree. The resulting
type of adding Broken and Working from the previous example would also depend on
the order of the operands.

Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Adding methods to instances
You can also just use: t.dynamic = dynamic.__get__(t) -- http://mail.python.org/mailman/listinfo/python-list
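To make the one-liner concrete, here is a small sketch of how it works (the class
T, the instance t and the function dynamic are just illustrative stand-ins):

class T(object):
    pass

def dynamic(self):
    # an ordinary function; functions are descriptors, so calling
    # __get__ on one returns a method bound to the given instance
    return "called on %r" % (self,)

t = T()
t.dynamic = dynamic.__get__(t)   # bind to this one instance only
print t.dynamic()                # other T instances are unaffected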
Re: new-style classes multiplication error message isn't very informative
Jon Guyer wrote: > >>> This is a fake line to confuse the stupid top-posting filter at gmane > > We have a rather complicated class that, under certain circumstances, knows > that it cannot perform various arithmetic operations, and so returns > NotImplemented. As a trivial example: > > >>> class my: > ... def __mul__(self, other): > ... return NotImplemented > ... > >>> my() * my() > Traceback (most recent call last): > File "", line 1, in ? > TypeError: unsupported operand type(s) for *: 'instance' and 'instance' > > This error message isn't hugely meaningful to many of our users (and in > complicated expressions, I'd certainly benefit from knowing exactly which > subclasses of 'my' are involved), but it beats the behavior with new-style > classes: > > >>> class my(object): > ... def __mul__(self, other): > ... return NotImplemented > ... > >>> my() * my() > Traceback (most recent call last): > File "", line 1, in ? > TypeError: can't multiply sequence to non-int > > After a lot of googling and a lot of pouring over abstract.c, I now > understand that object() is defined with a tp_as_sequence, and so the error > message is the result of the last-ditch effort to do sequence concatentation. > > What if I don't want to permit sequence concatenation? > Is there a way to unset tp_as_sequence? > Should I be inheriting from a different class? We started inheriting from > object because we want a __new__ method. > > The "'instance' and 'instance'" message would be OK, but even better is the > result of this completely degenerate class: > > >>> class my(object): > ... pass > ... > >>> class your(my): > ... pass > ... > >>> my() * your() > Traceback (most recent call last): > File "", line 1, in ? > TypeError: unsupported operand type(s) for *: 'my' and 'your' > > That's an error message I can actually do something with. Is there any way > to get this behavior when I do have a __mul__ method and sometimes return > NotImplemented? > > We're doing most of our development in Python 2.3, if it matters. This is a bug in Python. See this thread: http://mail.python.org/pipermail/python-dev/2005-December/059046.html and this patch: http://sourceforge.net/tracker/?group_id=5470&atid=305470&func=detail&aid=1390657 for more details. -- http://mail.python.org/mailman/listinfo/python-list
Re: Detecting Python Installs from the Windows Registry
Fuzzyman wrote: > Does anyone know how to use _winreg to get path information (location > of install) for all versions of Python installed (and also which is the > most recent) ? This should probably work: import _winreg def get_subkey_names(reg_key): index = 0 L = [] while True: try: name = _winreg.EnumKey(reg_key, index) except EnvironmentError: break index += 1 L.append(name) return L def function_in_search_of_a_name(): """ Return a list with info about installed versions of Python. Each version in the list is represented as a tuple with 3 items: 0 A long integer giving when the key for this version was last modified as 100's of nanoseconds since Jan 1, 1600. 1 A string with major and minor version number e.g '2.4'. 2 A string of the absolute path to the installation directory. """ python_path = r'software\python\pythoncore' L = [] for reg_hive in (_winreg.HKEY_LOCAL_MACHINE, _winreg.HKEY_CURRENT_USER): try: python_key = _winreg.OpenKey(reg_hive, python_path) except EnvironmentError: continue for version_name in get_subkey_names(python_key): key = _winreg.OpenKey(python_key, version_name) modification_date = _winreg.QueryInfoKey(key)[2] install_path = _winreg.QueryValue(key, 'installpath') L.append((modification_date, version_name, install_path)) return L -- http://mail.python.org/mailman/listinfo/python-list
Re: Calling foreign functions from Python? ctypes?
Sorry, the previous post is wrong; I mixed up the function names.
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Calling foreign functions from Python? ctypes?
Paul Watson wrote: . . . > I need to call GetVersionInfo() and handle VERSIONINFO information. I > thought that distutils might have something, but I do not see it yet. > Any suggestions? This information is provided with sys.getwindowsversion(). -- http://mail.python.org/mailman/listinfo/python-list
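For example (Windows only; the values printed will of course differ per machine):

import sys

# sys.getwindowsversion() returns a 5-tuple:
# (major, minor, build, platform, service_pack_text)
major, minor, build, platform, text = sys.getwindowsversion()
print "Windows %d.%d build %d (%s)" % (major, minor, build, text)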
Re: Getting better traceback info on exec and execfile - introspection?
R. Bernstein wrote: . . . > which is perhaps is a little more honest since one is not really in a > file called . However the way the debugger gets this *is* > still a little hoaky in that it looks for something in the frame's > f_code.co_filename *called* . And from that it *assumes* this > is an exec, so it can't really tell the difference between an exec > command an execfile command, or a file called . But I suppose > a little more hoakiness *could* be added to look at the outer frame > and parse the source line for "exec" or "execfile". You should check the getFrameInfo function in zope.interface package: http://svn.zope.org/Zope3/trunk/src/zope/interface/advice.py?rev=25177&view=markup > And suppose instead of '' I'd like to give the value or the > leading prefix of the value instead of the unhelpful word ''? > How would one do that? Again, one way is to go into the outer frame > get the source line (if that exists), parse that and interpolate > argument to exec(file). Is there are better way? Py library (http://codespeak.net/py/current/doc/index.html) has some similar functionality in the code subpackage. -- http://mail.python.org/mailman/listinfo/python-list
Re: [Python for .NET] Any plans for supporting CLR2.0?
F. GEIGER wrote: > > Sorry, for not being precise about "Python for .NET": I didn't mean > IronPython, which I'am aware of, I meant > http://www.zope.org/Members/Brian/PythonNet > > Kind regards > Franz GEIGER Python for .NET has a separate list, see: http://mail.python.org/mailman/listinfo/pythondotnet . Ziga Seilnacht -- http://mail.python.org/mailman/listinfo/python-list
Re: .dll and .pyd
[EMAIL PROTECTED] wrote: > Please, confirm me one thing. According to Python documentation for > Windows the objects .pyd and .dll have the same characteristics. I > observed that in Python24 it does not produce errors when importing > xx.dll or xx.pyd, however in python25b2, it only accepts nto import > xx.pyd. > Best regards. Yes, this is intentional, see this bug report: www.python.org/sf/1472566 Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Finding the name of a class
Kirk Strauser wrote:
[snip]
> OK, now for the good stuff. In the code below, how can I find the name of
> the class that 'bar' belongs to:
>
> >>> class Foo(object):
> ...     def bar(self):
> ...         pass
> ...
> >>> b = Foo.bar

>>> print b.im_class.__name__
Foo

But if you are writing a decorator, you can use this code:

import sys

def tracer(func):
    """ A decorator that prints the name of the class from which it was called.

    The name is determined at class creation time. This works only in CPython,
    since it relies on the sys._getframe() function. The assumption is that it
    can only be called from a class statement. The name of the class is deduced
    from the code object name.
    """
    classframe = sys._getframe(1)
    print classframe.f_code.co_name
    return func

Hope this helps,
Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Python Projects Continuous Integration
Dave Potts wrote: > Hi, > > I'm just starting a development project in Python having spent time in > the Java world. I was wondering what tool advice you could give me > about setting up a continuous integration environment for the python > code: get the latest source, run all the tests, package up, produce the > docs, tag the code repository. I'm used to things like Maven and > CruiseControl in the Java world. > > Cheers, > > Dave. Buildbot might be what you are looking for: http://buildbot.sourceforge.net/ Hope this helps, Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: is it possible to dividing up a class in multiple files?
Martin Höfling wrote: > Hi there, > > is it possible to put the methods of a class in different files? I just > want to order them and try to keep the files small. > > Regards > Martin You could use something like this: """ Example usage: >>> class Person(object): ... def __init__(self, first, last): ... self.first = first ... self.last = last ... >>> john = Person('John', 'Smith') >>> jane = Person('Jane', 'Smith') >>> class Person(extend(Person)): ... def fullname(self): ... return self.first + ' ' + self.last ... >>> john.fullname() 'John Smith' >>> jane.fullname() 'Jane Smith' """ def extend(cls): extender = object.__new__(Extender) extender.class_to_extend = cls return extender class Extender(object): def __new__(cls, name, bases, dict): # check that there is only one base base, = bases extended = base.class_to_extend # names have to be equal otherwise name mangling wouldn't work if name != extended.__name__: msg = "class names are not identical: expected %r, got %r" raise ValueError(msg % (extended.__name__, name)) # module is added automatically module = dict.pop('__module__', None) if module is not None: modules = getattr(extended, '__modules__', None) if modules is None: modules = extended.__modules__ = [extended.__module__] modules.append(module) # replace the docstring only if it is not None doc = dict.pop('__doc__', None) if doc is not None: setattr(extended, '__doc__', doc) # now patch the original class with all the new attributes for attrname, value in dict.items(): setattr(extended, attrname, value) return extended Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Class attributes, instances and metaclass __getattribute__
Pedro Werneck wrote: > Hi [snip] > Well... I'm not talking about metaclass attributes... that's perfectly > consistent, agreed. > > I'm saying that when the class implements a custom __getattribute__, > when you try to access the instance attributes from itself, it uses it. > But if the class is a metaclass, instances of its instances have acess > to the attribute anyway, but don't use the custom __getattribute__ you > implemented. Attribute lookup for instances of a class never calls metaclass' __getattribute__() method. This method is called only when you access attributes directly on the class. [snip] > And, I'm curious anyway... is it possible to customize attribute access > in this case in any other way ? What really happens here ? There are two distinct methods involved in your example; attribute lookup for classes is controled by metaclass' __getattribute__() method, while instance attribute lookup is controled by class' __getattribute__() method. They are basicaly the same, but they never use ``type(obj).attr`` to access the class' attributes. The code for these methods would look something like this in Python: class Object(object): """ Emulates object's and type's behaviour in attribute lookup. """ def __getattribute__(self, name): cls = type(self) # you normally access this as self.__dict__ try: dict_descriptor = cls.__dict__['__dict__'] except KeyError: # uses __slots__ without dict mydict = {} else: mydict = dict_descriptor.__get__(self, cls) # Can't use cls.name because we would get descriptors # (methods and similar) that are provided by class' # metaclass and are not meant to be accessible from # instances. classdicts = [c.__dict__ for c in cls.__mro__] # We have to look in class attributes first, since it can # be a data descriptor, in which case we have to ignore # the value in the instance's dict. for d in classdicts: if name in d: classattr = d[name] break else: # None of the classes provides this attribute; perform # the normal lookup in instance's dict. try: return mydict[name] except KeyError: # Finally if everything else failed, look for the # __getattr__ hook. for d in classdicts: if '__getattr__' in d: return d['__getattr__'](self, name) msg = "%r object has no attribute %r" raise AttributeError(msg % (cls.__name__, name)) # Check if class' attribute is a descriptor. if hasattr(classattr, '__get__'): # If it is a non-data descriptor, then the value in # instance's dict takes precedence if not hasattr(classattr, '__set__') and name in mydict: return mydict[name] return classattr.__get__(self, cls) # Finally, look into instance's dict. return mydict.get(name, classattr) As you can see, it completely avoids calling metaclass' __getattribute__() method. If it wouldn't do that, then the metaclass' attributes would 'leak' to instances of its classes. For example, __name__, __mro__ and mro() are some of the descriptors provided by type to every class, but they are not accesible through instances of these classes, and shouldn't be, otherwise they could mask some errors in user's code. Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Error with: pickle.dumps(numpy.float32)
Iljya wrote: > I have reproduced the error with Numpy 1.0b1 > > The output with v.1.0b1 reads: > PicklingError: Can't pickle : it's not found as > __builtin__.float32scalar > > Has anyone else encountered this? > > Thanks, > > Iljya > > Iljya wrote: > > Hello, > > > > I need to pickle the type numpy.float32 but encounter an error when I > > try to do so. I am able to pickle the array itself, it is specifically > > the type that I cannot pickle. > > > > I am using: > > Numpy version: 0.9.4 > > Python version: 2.4.3 > > Windows XP > > > > Here is the code that reproduces the error: > > __ > > import numpy > > import pickle > > > > pickle.dumps(numpy.float32) > > > > Output: > > PicklingError: Can't pickle : it's not found as > > __builtin__.float32_arrtype > > __ > > > > Any suggestions would be much appreciated. > > > > Thanks, > > > > Iljya Kalai This looks like a numpy bug. It seems that float32_arrtype's name is incomplete. Its tp_name field should start with a module name, but since it doesn't, Python assumes that it is a builtin type: >>> import numpy >>> numpy.float32.__module__ '__builtin__' You should report this bug either to the numpy list: https://lists.sourceforge.net/lists/listinfo/numpy-discussion or to their bug tracker: http://projects.scipy.org/scipy/numpy Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: efficient memoize decorator?
[EMAIL PROTECTED] wrote: > im plugging away at the problems at > http://www.mathschallenge.net/index.php?section=project > im trying to use them as a motivator to get into advanced topics in > python. > one thing that Structure And Interpretation Of Computer Programs > teaches is that memoisation is good. > all of the memoize decorators at the python cookbook seem to make my > code slower. > is using a decorator a lazy and inefficient way of doing memoization? > can anyone point me to where would explain how to do it quickly. or is > my function at fault? Your problem is that you are mixing psyco and memoize decorators; psyco cannot accelerate inner functions that use nested scopes (see http://psyco.sourceforge.net/psycoguide/unsupported.html ). You could try using the memoize decorator from: http://wiki.python.org/moin/PythonDecoratorLibrary , which doesn't use functions with closures, or use Fredrik Lundh's solution which puts memoization directly into the function. Ziga -- http://mail.python.org/mailman/listinfo/python-list
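As a rough sketch of the second suggestion, a memoize decorator that avoids
closures entirely can be written as a class; this is only an illustration and
has not been benchmarked together with psyco:

class memoize(object):
    """Cache the results of a function of hashable arguments.

    Implemented as a class so that no nested scopes are involved.
    """
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args):
        try:
            return self.cache[args]
        except KeyError:
            result = self.cache[args] = self.func(*args)
            return result

@memoize
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print fib(100)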
Re: Dumping the state of a deadlocked process
[EMAIL PROTECTED] wrote: > Hi all > > I'm currently having some issues with a process getting deadlocked. The > problem is that the only way I can seem to find information about where > it deadlocks is by making a wild guess, insert a pdb.set_trace() before > this point, and then step until it locks up, hoping that I've guessed > right. > > The frustrating part is that most of the time my guesses are wrong. > > It would be really nice if I could send the python process some signal > which would cause it to print the current stacktrace and exit > immediately. That way I would quickly be able to pinpoint where in the > code the deadlock happens. Java has a somewhat similar feature where > you can send a running VM process a SIGQUIT, to which it will respond > by dumping all current threads and lots of other information on stdout. > > Is this possible somehow? Check out the sys._current_frames() function, new in Python 2.5: http://docs.python.org/lib/module-sys.html#l2h-5122 Hope this helps, Ziga -- http://mail.python.org/mailman/listinfo/python-list
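A minimal sketch of how that function can be hooked up to a signal, similar to
Java's SIGQUIT dump (POSIX only, Python 2.5; the choice of SIGUSR1 is just an
example):

import signal
import sys
import traceback

def dump_stacks(signum, frame):
    # sys._current_frames() maps thread id -> topmost frame
    for thread_id, stack in sys._current_frames().items():
        print >> sys.stderr, "\n--- thread %d ---" % thread_id
        traceback.print_stack(stack, file=sys.stderr)

signal.signal(signal.SIGUSR1, dump_stacks)
# now "kill -USR1 <pid>" makes the process print every thread's stack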
Re: Efficiently iterating over part of a list
Steven D'Aprano wrote: [snip] > The important thing to notice is that alist[1:] makes a copy. What if the > list has millions of items and duplicating it is expensive? What do people > do in that case? > > Are there better or more Pythonic alternatives to this obvious C-like > idiom? > > for i in range(1, len(alist)): > x = alist[i] for x in itertools.islice(alist, 1, len(alist)): HTH Ziga -- http://mail.python.org/mailman/listinfo/python-list
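For example, a quick comparison of the two spellings (the million-element list
is only there to make the difference visible):

import itertools

alist = range(1000000)

# builds a second list with 999999 items before looping
for x in alist[1:]:
    pass

# iterates over the same items without copying; None means "to the end"
for x in itertools.islice(alist, 1, None):
    pass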
Re: Overriding traceback print_exc()?
Bob Greschke wrote:
> I want to cause any traceback output from my applications to show up in one
> of my dialog boxes, instead of in the command or terminal window (between
> running on Solaris, Linux, OSX and Windows systems there might not be any
> command window or terminal window to show the traceback messages in). Do I
> want to do something like override the print_exc (or format_exc?) method of
> traceback to get the text of the message and call my dialog box routine? If
> that is right how do I do that (monkeying with classes is all still a grey
> area to me)?

You can override sys.excepthook() with your own function:

import sys
from traceback import format_exception

def my_excepthook(exctype, value, traceback):
    details = "".join(format_exception(exctype, value, traceback))
    # now show the details in your dialog box

sys.excepthook = my_excepthook

See the documentation for details:
http://docs.python.org/lib/module-sys.html#l2h-5125

Hope this helps,
Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: tp_richcompare
Sreeram Kandallu wrote: > I'm writing an extension type, for which i'd like to implement only == > and !=, but not the other comparison operators like <,<=,>,>=. > What is the right way to do this? > I currently have a tp_richcompare function, which handles Py_EQ, and > Py_NE, but raises a TypeError for the other operations. Is this the > 'right' way? Yes. This is exactly what the builtin complex type does. > Sreeram > Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: __getattribute__ doesn't work on 'type' type for '__class__'
Barry Kelly wrote:
[snipped]
> Yet when I try this with the 'type' type, it doesn't work:
>
> ---8<---
> >>> x.__class__.__class__
> <type 'type'>
> >>> x.__class__.__getattribute__('__class__')
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
> TypeError: descriptor '__getattribute__' requires a 'int' object but
> received a 'str'
> --->8---
>
> Why is this?

The problem is that your class (I would guess that x is an int) and its type
have a method with the same name. As is normal for attribute lookup, the
instance's attribute is first looked up in its __dict__. Since x.__class__ is
a type, this results in __getattribute__ being an unbound method of that type.
What you are doing is similar to:

>>> L = ["spam", "eggs"]
>>> "".__class__.join(L)
Traceback (most recent call last):
  ...
TypeError: descriptor 'join' requires a 'str' object but received a 'list'

which, as you can see, fails with the same error message.

> -- Barry
>
> --
> http://barrkel.blogspot.com/

Hope this helps,
Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
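In other words, the descriptor fetched from x.__class__ is unbound and still
needs an instance as its first argument; a small sketch, assuming (as above)
that x is an int:

x = 42

# x.__class__.__getattribute__ is an unbound method of int, so pass the
# instance explicitly ...
print x.__class__.__getattribute__(x, '__class__')    # <type 'int'>

# ... or simply use the bound method on the instance itself
print x.__getattribute__('__class__')                 # <type 'int'>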
Re: Traversing Inheritance Model
[EMAIL PROTECTED] wrote:
> What's the best way to traverse the web of inheritance? I want to take
> a class and traverse its bases and then the bases' bases etc
> looking for a particular class. What first came to mind was nested for
> loops. However, I want to know if there's some pre-existing method for
> doing this or if this isn't even possible (might send me in circles
> perhaps?). Thanks all.

The __mro__ descriptor does what you want. Example:

>>> class A(object):
...     pass
...
>>> class B(object):
...     pass
...
>>> class C(A, B):
...     pass
...
>>> C.__mro__
(<class '__main__.C'>, <class '__main__.A'>, <class '__main__.B'>, <type 'object'>)

See:
http://www.python.org/download/releases/2.2.3/descrintro/
and
http://www.python.org/download/releases/2.3/mro/
for details.

Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
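If the goal is just to find out whether a particular class occurs among the
bases, membership testing on __mro__ (or plain issubclass()) is usually enough;
a small sketch reusing the classes above:

class A(object):
    pass

class B(object):
    pass

class C(A, B):
    pass

print A in C.__mro__       # True
print issubclass(C, B)     # True

# old-style classes have no __mro__; inspect.getmro() works for both kinds
import inspect
print inspect.getmro(C)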
Re: PyPy and constraints
Paddy wrote: > I followed the recent anouncement of version 0.9 of PyPi and found out > that there was work included on adding constraint satisfaction solvers > to PyPy: > http://codespeak.net/pypy/dist/pypy/doc/howto-logicobjspace-0.9.html > > I was wondering if this was a possibiity for "mainstream" python, and > wether the the algorithms used could handle the kind of use mentioned > here: > > http://groups.google.com/group/comp.lang.python/browse_frm/thread/d297170cfbf1bb34/d4773320e3417d9c?q=constraints+paddy3118&rnum=3#d4773320e3417d9c > > Thanks, Paddy. See: http://www.logilab.org/projects/constraint Hope this helps, Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Leaks in subprocess.Popen
zloster wrote: > I'm using Python 2.4.3 for Win32. > I was trying to run a few child processes simultaneously in separate > threads and get their STDOUT, but the program was leaking memory and I > found that it was because of subprocess operating in another thread. > The following code works fine, but I get a leaking handle every second. > You can see it in the task manager if you choose to see the count> column. Does anybody have a solution? Please help! > This bug is fixed in the 2.5 version of Python and will be fixed in the next 2.4 maintainance release (2.4.4). See: http://www.python.org/sf/1500293 for the bug report. You can find the relevant changes here: http://mail.python.org/pipermail/python-checkins/2006-June/053417.html http://mail.python.org/pipermail/python-checkins/2006-June/053418.html If you have Python Win32 Extensions installed you can try using that instead of the extension in the standard library; you have to change a single line in the subprocess module: --- subprocess_modified.py 2006-09-20 22:04:29.734375000 +0200 +++ subprocess.py 2006-09-20 22:01:52.296875000 +0200 @@ -350,7 +350,7 @@ if mswindows: import threading import msvcrt -if 0: # <-- change this to use pywin32 instead of the _subprocess driver +if 1: # <-- change this to use pywin32 instead of the _subprocess driver import pywintypes from win32api import GetStdHandle, STD_INPUT_HANDLE, \ STD_OUTPUT_HANDLE, STD_ERROR_HANDLE Hope this helps, Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: PyOpenGL pour python 2.5 ???
Sébastien Ramage wrote: > oh! > sorry, I made some search on comp.lang.python and fr.comp.lang.python > and finally I forgot where I was... > > My question is : > how use pyopengl with python 2.5 ?? > it seems that pyopengl was stop on 2005 PyOpenGL is still maintained, but most of the development is focused on porting PyOpenGL to ctypes, which should eliminate the need for compilation. > I'm on windows and I've not tools to recompile pyopengl for python 2.5 > (thinking recompilation is the only things) > > Somebody can help me? The binary installer for Python 2.5 should be available soon. You can track its progress through Mike Fletcher's blog: http://blog.vrplumber.com/1633 http://blog.vrplumber.com/1639 http://blog.vrplumber.com/1640 > Seb Hope this helps, Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Has comparison of instancemethods changed between python 2.5 and 2.4?
Frank Niessink wrote: > I tried to lookup the python source code where the actual comparison > happens. I think it is in methodobject.c (I'm not familiar with the > python source so please correct me if I'm wrong), meth_compare. That > function did not change between python 2.4.4 and 2.5. Moreover, the > implementation suggests that the only way for two methods to be equal is > that their instances point to the same object and their method > definitions are the same. Am I interpreting that correctly? No, the comparison happens in function instancemethod_compare in file classobject.c: http://svn.python.org/view/python/trunk/Objects/classobject.c?view=markup This method was changed in Python 2.5. Previously, two instancemethods compared equal if their im_self attributes were *identical* and their im_func attributes were equal. Now, they compare equal if their im_self attributes are *equal* and their im_func attributes are equal. See this change: http://svn.python.org/view?rev=46739&view=rev A small example: >type cmpmethod.py class Test(object): def meth(self): pass def __eq__(self, other): return True >python24 Python 2.4.4 (#71, Oct 18 2006, 08:34:43) ... >>> from cmpmethod import Test >>> a, b = Test(), Test() >>> a.meth == b.meth False >>> ^Z >python25 Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) ... >>> from cmpmethod import Test >>> a, b = Test(), Test() >>> a.meth == b.meth True >>> ^Z If you think this is a bug, you should report it to the bugtracker: http://sourceforge.net/bugs/?group_id=5470 Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Python embedded interpreter: how to initialize the interpreter ?
[EMAIL PROTECTED] wrote: > Hello, > > I've written a C embedded application. I want to open a python gui > application in my C program but when I do : > > PyRun_String( "import gui.py", file_input, pDictionary, pDictionary ); > > the interpreter emits an error: tkinter module not defined > > What script must I load to initialize the embedded python interpreter > so as I have the same modules in the python command line and in the > python embedded interpreter ? /usr/lib/python2.4/*.py ?? > > Yann COLLETTE Did you call the Py_Initialize() function before trying to execute that statement? Note also that you might have to Py_SetProgramName(somepath) before calling Py_Initialize(). See the documentation for details: http://docs.python.org/ext/embedding.html http://docs.python.org/api/embedding.html Hope this helps, Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Bizarre floating-point output
Nick Maclaren wrote: > I think that you should. Where does it say that tuple's __str__ is > the same as its __repr__? > > The obvious interpretation of the documentation is that a sequence > type's __str__ would call __str__ on each sub-object, and its __repr__ > would call __repr__. How would you distinguish ['3', '2', '1'] from [3, 2, 1] in that case? Ziga -- http://mail.python.org/mailman/listinfo/python-list
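That is, because list.__str__() uses repr() on the elements, the two lists stay
distinguishable:

print [3, 2, 1]          # [3, 2, 1]
print ['3', '2', '1']    # ['3', '2', '1']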
Re: Bizarre floating-point output
Nick Maclaren wrote: > Well, it's not felt necessary to distinguish those at top level, so > why should it be when they are in a sequence? Well, this probably wasn't the best example, see the links below for a better one. > But this whole thing is getting ridiculous. The current implementation > is a bizarre interpretation of the specification, but clearly not an > incorrect one. It isn't important enough to get involved in a religious > war over - I was merely puzzled as to the odd behaviour, because I have > to teach it, and it is the sort of thing that can confuse naive users. There was a recent bug report identical to your complaints, which was closed as invalid. The rationale for closing it was that things like: print ("a, bc", "de f,", "gh), i") would be extremely confusing if the current behaviour was changed. See http://www.python.org/sf/1534769 for details. Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: representing physical units
Russ wrote:
> I know that python packages are available for representing physical
> units, but I am getting frustrated trying to find them and determine
> which is the best.
> Where can I find a good package that does this? Thanks.

Unum is a special package just for this purpose:
http://home.tiscali.be/be052320/Unum.html

ScientificPython
http://starship.python.net/~hinsen/ScientificPython/
has a similar module, but it depends on the Numeric package.

Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Req. for module style/organization
RayS wrote:
> I've begun a Python module to provide a complete interface to the
> Meade LX200 command set, and have searched for a style/development
> guide for Python Lib/site-packages type modules, but only saw guides
> for C-modules. I realize that I need to make some changes to follow
> http://www.python.org/doc/essays/styleguide.html
> better. Does anyone have an appropriate URL for me to follow for this
> task? Is one of the C-module guides appropriate?

There are two informal Python Enhancement Proposals:
Style Guide for C Code - http://www.python.org/peps/pep-0007.html
Style Guide for Python Code - http://www.python.org/peps/pep-0008.html

> I have:
> LX200/
>     __init__.py
>     LXSerial.py
>     Telescope.py
>     Focuser.py
>     LXUtils.py
>     ... etc
> Each file has one class defined.

This style is not encouraged any more because of the ambiguity with imports; see
http://mail.python.org/pipermail/web-sig/2006-February/002093.html
for details.

> All advice is appreciated,
> Ray

Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: How to set docstrings for extensions supporting PyNumberMethods?
Nick Alexander wrote: > Hello, > > I am writing a python extension (compiled C code) that defines an > extension type with PyNumberMethods. Everything works swimmingly, > except I can't deduce a clean way to set the docstring for tp_* > methods. That is, I always have > > type.__long__.__doc__ == 'x.__long__() <==> long(x)' > > which a quick glance at the Python 2.5 source shows is the default. > > I have found that I can use PyObject_GetAttr and PyWrapperDescrObject > and set the descriptor objects d_base->doc to a char pointer... but I > can't tell if this is safe. Or the right way to do it. > > If I'm on the wrong list, please let me know! > Thanks, > Nick Alexander I think that the right way is to add the methods to the tp_methods slot and use METH_COEXIST in the PyMethodDef flags field. Example: /* start of silly module */ #include "Python.h" typedef struct { PyObject_HEAD double value; } SillyNumber_Object; /* Forward declarations */ static PyTypeObject SillyNumber_Type; #define SillyNumber_Check(op) PyObject_TypeCheck(op, &SillyNumber_Type) static PyObject * new_SillyNumber(PyTypeObject *type, double value) { PyObject *self; self = type->tp_alloc(type, 0); if (self == NULL) return NULL; ((SillyNumber_Object *)self)->value = value; return self; } static PyObject * SillyNumber_new(PyTypeObject *type, PyObject *args, PyObject *kwds) { double value = 0.0; static char *kwlist[] = {"value", 0}; if (!PyArg_ParseTupleAndKeywords(args, kwds, "|d:SillyNumber", kwlist, &value)) return NULL; return new_SillyNumber(type, value); } static PyObject * SillyNumber_add(PyObject *left, PyObject *right) { double sum; if (!SillyNumber_Check(left) || !SillyNumber_Check(right)) { Py_INCREF(Py_NotImplemented); return Py_NotImplemented; } sum = (((SillyNumber_Object *)left)->value + ((SillyNumber_Object *)right)->value); return new_SillyNumber(&SillyNumber_Type, sum); } static PyObject * SillyNumber_radd(PyObject *right, PyObject *left) { return SillyNumber_add(left, right); } static PyNumberMethods SillyNumber_as_number = { SillyNumber_add,/* nb_add */ 0, /* nb_subtract */ 0, /* nb_multiply */ 0, /* nb_divide */ 0, /* nb_remainder */ 0, /* nb_divmod */ 0, /* nb_power */ 0, /* nb_negative */ 0, /* nb_positive */ 0, /* nb_absolute */ 0, /* nb_nonzero */ }; static PyMethodDef SillyNumber_methods[] = { {"__add__", SillyNumber_add, METH_O | METH_COEXIST, "Add two SillyNumbers."}, {"__radd__", SillyNumber_radd, METH_O | METH_COEXIST, "Same as __add__."}, {NULL, NULL, 0, NULL} }; static PyTypeObject SillyNumber_Type = { PyObject_HEAD_INIT(NULL) 0, /* ob_size */ "silly.SillyNumber",/* tp_name */ sizeof(SillyNumber_Object), /* tp_basicsize */ 0, /* tp_itemsize */ 0, /* tp_dealloc */ 0, /* tp_print */ 0, /* tp_getattr */ 0, /* tp_setattr */ 0, /* tp_compare */ 0, /* tp_repr */ &SillyNumber_as_number, /* tp_as_number */ 0, /* tp_as_sequence */ 0, /* tp_as_mapping */ 0, /* tp_hash */ 0, /* tp_call */ 0, /* tp_str */ 0, /* tp_getattro */ 0, /* tp_setattro */ 0, /* tp_as_buffer */ Py_TPFLAGS_DEFAULT |/* tp_flags */ Py_TPFLAGS_CHECKTYPES | /* PyNumberMethods do their own coercion */ Py_TPFLAGS_BASETYPE,/* SillyNumber_Type allows subclassing */ "Silly float numbers", /* tp_doc */ 0, /* tp_traverse */ 0, /* tp_clear */ 0, /* tp_richcompare */ 0, /* tp_weaklistoffset */ 0, /* tp_iter */ 0, /* tp_iternext */ SillyNumber_methods,/* tp_methods */ 0, /* tp_members */ 0, /* tp_getset */ 0, /* tp_base */ 0, /* tp_dict */ 0, /* tp_descr_get */ 0, /* tp_descr_set */ 0, /* tp_dictoffset */ 0, /* tp_init */ 0, /* tp_alloc */ 
SillyNumber_new,/* tp_new */ 0,
Re: Python 3000 idea: reversing the order of chained assignments
John Nagle wrote: > > That's fascinating. Is that a documented feature of the language, > or a quirk of the CPython interpreter? > Its a documented feature of the language. From the Reference Manual: "An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right." See: http://docs.python.org/ref/assignment.html Ziga -- http://mail.python.org/mailman/listinfo/python-list
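A short example of what the left-to-right rule means in practice (Node is just
an illustrative class):

class Node(object):
    next = None

node = Node()

# The right-hand side is evaluated once, then assigned to the targets
# from left to right: first 'node' is rebound to the new Node, and only
# then is 'node.next' set -- on the *new* node, which therefore ends up
# pointing at itself.
node = node.next = Node()
print node.next is node    # True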
Re: PyImport_ImportModule/embedding: surprising behaviors
David Abrahams wrote: > I'm seeing highly surprising (and different!) behaviors of > PyImport_ImportModule on Linux and Windows when used in a program with > python embedding. > > On Linux, when attempting to import a module xxx that's in the current > directory, I get > > ImportError: No module named xxx > > I can work around the problem by setting > > PYTHONPATH=. Python puts the current directory in sys.path only if it can't determine the directory of the main script. There was a bug on Windows that always added current directory to sys.path, but it was fixed in Python 2.5. This is documented in the library reference: http://docs.python.org/lib/module-sys.html#l2h-5149 > On Windows, I get: > > 'import site' failed; use -v for traceback > > I can work around the problem by setting PYTHONPATH to point to the > python library directory: > > set PYTHONPATH=c:\Python25\Lib This happens because Python calculates the initial import path by looking for an executable file named "python" along PATH. You can change this by calling Py_SetProgramName(filename) before calling Py_Initialize(). This is documented in API reference manual: http://docs.python.org/api/embedding.html That page also describes a few hooks that you can overwrite to modify the initial search path. They are described in more detail on this page: http://docs.python.org/api/initialization.html HTH, Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: operator overloading
looping wrote: > Hi, > for the fun I try operator overloading experiences and I didn't > exactly understand how it works. > > Here is my try: > >>> class myint(int): > > def __pow__(self, value): > return self.__add__(value) > > >>> a = myint(3) > >>> a ** 3 > > 6 > > OK, it works. Now I try different way to achieve the same result but > without much luck: > > >>> class myint(int): > pass > >>> myint.__pow__ = myint.__add__ > > or: > > >>> class myint(int): > > __pow__ = int.__add__ > > or: > > >>> class myint(int): > pass > >>> a.__pow__ = a.__add__ > > but for every try the result was the same:>>> a = myint(3) > >>> a ** 3 > 27 > > Why it doesn't works ? This looks like a bug in Python. It works for all the other operators: >>> class MyInt(int): ... __sub__ = int.__add__ ... __mul__ = int.__add__ ... __div__ = int.__add__ ... __truediv__ = int.__add__ ... __floordiv__ = int.__add__ ... __mod__ = int.__add__ ... __lshift__ = int.__add__ ... __rshift__ = int.__add__ ... __and__ = int.__add__ ... __xor__ = int.__add__ ... __or__ = int.__add__ ... __pow__ = int.__add__ ... >>> i = MyInt(42) >>> i + 3 45 >>> i - 3 45 >>> i * 3 45 >>> i / 3 45 >>> i // 3 45 >>> i % 3 45 >>> i << 3 45 >>> i >> 3 45 >>> i & 3 45 >>> i ^ 3 45 >>> i | 3 45 >>> i ** 3 74088 You should submit a bug report to the bug tracker: http://sourceforge.net/bugs/?group_id=5470 Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: How to better pickle an extension type
dgdev wrote: > I would like to pickle an extension type (written in pyrex). I have > it working thus far by defining three methods: > > class C: > # for pickling > __getstate__(self): > ... # make 'state_obj' > return state_obj > > __reduce__(self): > return C,(args,to,__init__),me.__getstate__() > > # for unpickling > __setstate__(self,state_obj): > self.x=state_obj.x > ... > > This gets the class pickling and unpickling. > > However, I'd like to not specify arguments for __init__ (as I do now > in __reduce__), and so not have __init__ invoked during unpickling. > > I would like to have the pickling machinery somehow create an > uninitialized object, and then call its __setstate__, where I can re- > create it from 'state_obj'. > > Is there a kosher way to do so, that is without me having to have a > special mode in the constructor for when the object is being created > by the unpickler? Why are you overwriting the __reduce__() method? The default object.__reduce__() method, inherited by all new style classes, already does what you want. If you really must overwrite it, and you don't want __init__() to get called, then you should return a reconstructor named __newobj__() as the first item of reduce tuple. Something like this: >>> def __newobj__(cls, *args): ... return cls.__new__(cls, *args) ... >>> class C(object): ... def __init__(self): ... print "I shouldn't be called at reconstruction" ... def __reduce__(self): ... try: ... getnewargs = self.__getnewargs__ ... except AttributeError: ... newargs = (self.__class__,) ... else: ... newargs = (self.__class__,) + getnewargs() ... try: ... getstate = self.__getstate__ ... except AttributeError: ... # this ignores __slots__ complications ... state = self.__dict__ ... else: ... state = getstate() ... # this ignores list and dict subclasses ... return __newobj__, newargs, state ... >>> c = C() I shouldn't be called at reconstruction >>> import pickle >>> for proto in range(3): ... assert isinstance(pickle.loads(pickle.dumps(c, proto)), C) ... >>> Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Conflicting needs for __init__ method
Mark wrote: [a lot of valid, but long concerns about types that return an object of their own type from some of their methods] I think that the best solution is to use an alternative constructor in your arithmetic methods. That way users don't have to learn about two different factories for the same type of objects. It also helps with subclassing, because users have to override only a single method if they want the results of arithmetic operations to be of their own type. For example, if your current implementation looks something like this: class Rational(object): # a long __init__ or __new__ method def __add__(self, other): # compute new numerator and denominator return Rational(numerator, denominator) # other simmilar arithmetic methods then you could use something like this instead: class Rational(object): # a long __init__ or __new__ method def __add__(self, other): # compute new numerator and denominator return self.result(numerator, denominator) # other simmilar arithmetic methods @staticmethod def result(numerator, denominator): """ we don't use a classmethod, because users should explicitly override this method if they want to change the return type of arithmetic operations. """ result = object.__new__(Rational) result.numerator = numerator result.denominator = denominator return result Hope this helps, Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Class data members in C
Nick Maclaren wrote:
> Hmm. The extensions documentation describes how to add instance
> members to a class (PyMemberDef), but I want to add a class member.
> Yes, this is constant for all instances of the class.
>
> Any pointers?

Add something like this to your PyMODINIT_FUNC after you have initialized your
type with PyType_Ready():

PyDict_SetItemString(YourType.tp_dict, "attrname", attrvalue);

Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Number methods
Nick Maclaren wrote:
> I can't find any description of these. Most are obvious, but some
> are not. Note that this is from the point of view of IMPLEMENTING
> them, not USING them. Specifically:

The Python equivalents of these methods are described in the reference manual:
http://docs.python.org/ref/numeric-types.html
More details can be found in various PEPs:
http://www.python.org/dev/peps/

> Does Python use classic division (nb_divide) and inversion (nb_invert)
> or are they entirely historical? Note that I can very easily provide
> the latter.

Python uses classic division by default. True division is used only when the
division __future__ directive is in effect. See PEP 238 for details:
http://www.python.org/dev/peps/pep-0238/

The nb_invert method is used for the implementation of the bitwise inverse
unary operator (~). I don't think that it is deprecated. See:
http://docs.python.org/lib/bitstring-ops.html
for details.

> Is there any documentation on the coercion function (nb_coerce)? It
> seems to have unusual properties.

It is used for old style Python classes and extension types that don't have
Py_TPFLAGS_CHECKTYPES in their tp_flags. See:
http://docs.python.org/ref/coercion-rules.html
and
http://www.python.org/dev/peps/pep-0208/
for details.

Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: xml.dom.minidom.parseString segmentation fault on mod_python
On Jan 26, 10:41 am, [EMAIL PROTECTED] wrote: > Python 2.4.4 > mod_python 3.2.10 + Apache 2.0 > > def index( req, **params ): > from xml.dom.minidom import parseString > doc = parseString( "whatever" ) > > => blank screen, _no_any_exception_; Apache error_log: > [Fri Jan 26 10:18:48 2007] [notice] child pid 17596 exit signal > Segmentation fault (11) > > Outside mod_python code works well. Any ideas? I would be grateful. http://www.python.org/sf/1558223 http://www.python.org/sf/1295808 http://www.python.org/sf/1075984 Try to compile all your dependencies against the same version of Expat or upgrade to python 2.5. Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Conditional expressions - PEP 308
Colin J. Williams wrote: > It would be helpful if the rules of the game were spelled out more clearly. > > The conditional expression is defined as X if C else Y. > We don't know the precedence of the "if" operator. From the little test > below, it seem to have a lower precedence than "or". The rules are specified in the Python Reference Manual: http://docs.python.org/ref/Booleans.html Ziga -- http://mail.python.org/mailman/listinfo/python-list
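A quick experiment confirming that the conditional expression binds more loosely
than `or` (Python 2.5 or later):

print 1 or 2 if 0 else 3       # 3 -- parsed as (1 or 2) if 0 else 3
print (1 or 2) if 0 else 3     # 3
print 1 or (2 if 0 else 3)     # 1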
Re: Partial 1.0 - Partial classes for Python
Thomas Heller wrote: > > Do you have a pointer to that post? > I think that he was refering to this post: http://mail.python.org/pipermail/python-list/2006-December/416241.html If you are interested in various implementations there is also this: http://mail.python.org/pipermail/python-list/2006-August/396835.html and a module from PyPy: http://mail.python.org/pipermail/python-dev/2006-July/067501.html which was moved to a new location: https://codespeak.net/viewvc/pypy/dist/pypy/tool/pairtype.py?view=markup Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: cmath, __float__ and __complex__
Mark Dickinson wrote: > Does anyone know of a good reason for the above behaviour? Would a > patch to complexobject.c that `fixes' this be of any interest to > anyone but me? Or would it likely break something else? I think this is a bug in the PyComplex_AsCComplex function. To get more feedback, submit your patch to the Python patch tracker: http://sourceforge.net/patch/?group_id=5470 Patch submission guidelines can be found here: http://www.python.org/dev/patches/ Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Bypassing __setattr__ for changing special attributes
George Sakkis wrote: > I was kinda surprised that setting __class__ or __dict__ goes through > the __setattr__ mechanism, like a normal attribute: > > class Foo(object): > def __setattr__(self, attr, value): > pass > > class Bar(object): > pass > > >>> f = Foo() > >>> f.__class__ = Bar > >>> print f.__class__ is Foo > True > > Is there a way (even hackish) to bypass this, or at least achieve > somehow the same goal (change f's class) ? > > George >>> object.__setattr__(f, '__class__', Bar) >>> f.__class__ is Bar True Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: SystemError: new style getargs format but argument is not a tuple
zefciu wrote: > Ok. Now I do it this way: > > c_real = PyFloat_AsDouble(PyTuple_GetItem(coord,0)); > c_imag = PyFloat_AsDouble(PyTuple_GetItem(coord,1)); > > And it worked... once. The problem is really funny - in the interactive > the function fails every second time. > > >>> mandelpixel((1.5, 1.5), 9, 2.2) > > args parsed > coord parsed > ii3>>> mandelpixel((1.5, 1.5), 9, 2.2) > > TypeError: bad argument type for built-in operation>>> mandelpixel((1.5, > 1.5), 9, 2.2) > > args parsed > coord parsed > ii3>>> mandelpixel((1.5, 1.5), 9, 2.2) > > TypeError: bad argument type for built-in operation > > etcaetera (the "args parsed" "coord parsed" and "i" are effect of > printfs in the code, as you see when it fails, it doesn't even manage to > parse the arguments. The direct solution to your problem is to use the "tuple unpacking" feature of PyArg_ParseTuple by using "(dd)id" as format argument. This is shown in the first example. The second example uses your approach and is a bit more cumbersome, but still works. Could you post your current version of the code? I don't understand where your problem could be. #include "Python.h" static PyObject * mandelpixel1(PyObject *self, PyObject *args) { double z_real = 0, z_imag = 0, z_real2 = 0, z_imag2 = 0; double c_real, c_imag, bailoutsquare; int iteration_number; register int i; if (!PyArg_ParseTuple(args, "(dd)id", &c_real, &c_imag, &iteration_number, &bailoutsquare)) return NULL; for (i = 1; i <= iteration_number; i++) { z_imag = 2 * z_real * z_imag + c_imag; z_real = z_real2 - z_imag2 + c_real; z_real2 = z_real * z_real; z_imag2 = z_imag * z_imag; if (z_real2 + z_imag2 > bailoutsquare) return Py_BuildValue("i", i); } return Py_BuildValue("i", 0); } static PyObject * mandelpixel2(PyObject *self, PyObject *args) { double z_real = 0, z_imag = 0, z_real2 = 0, z_imag2 = 0; double c_real, c_imag, bailoutsquare; int iteration_number; PyObject *coord; register int i; if (!PyArg_ParseTuple(args, "Oid", &coord, &iteration_number, &bailoutsquare)) return NULL; if (!PyTuple_Check(coord)) { PyErr_SetString(PyExc_TypeError, "something informative"); return NULL; } if (!PyArg_ParseTuple(coord, "dd", &c_real, &c_imag)) return NULL; for (i = 1; i <= iteration_number; i++) { z_imag = 2 * z_real * z_imag + c_imag; z_real = z_real2 - z_imag2 + c_real; z_real2 = z_real * z_real; z_imag2 = z_imag * z_imag; if (z_real2 + z_imag2 > bailoutsquare) return Py_BuildValue("i", i); } return Py_BuildValue("i", 0); } static PyMethodDef MandelcMethods[] = { {"mandelpixel1", mandelpixel1, METH_VARARGS, "first version"}, {"mandelpixel2", mandelpixel2, METH_VARARGS, "second version"}, {NULL, NULL, 0, NULL}, }; PyMODINIT_FUNC initmandelc(void) { Py_InitModule("mandelc", MandelcMethods); } Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: 2.4->2.5 current directory change?
On Feb 26, 7:44 pm, "Chris Mellon" <[EMAIL PROTECTED]> wrote: > This appears to be a change in behavior from Python 2.4 to Python 2.5, > which I can't find documented anywhere. It may be windows only, or > related to Windows behavior. > > In 2.4, the current directory (os.curdir) was on sys.path. In 2.5, it > appears to be the base directory of the running script. For example, > if you execute the file testme.py in your current working directory, > '' is on sys.path. If you execute c:\Python25\Scripts\testme.py, '' is > *not* on sys.path, and C:\Python25\Scripts is. > > That means if you run a Python script located in another directory, > modules/etc in your current working directory will not be found. This > makes .py scripts in the PYTHONHOME\Scripts file moderately useless, > because they won't find anything in the current working directory. > > I first noticed this because it breaks Trial, but I'm sure there are > other scripts affected by it. Is this desirable behavior? Is there > anything to work around it except by pushing os.curdir onto sys.path? The change was intentional and is mentioned in the NEWS file: - Patch #1232023: Stop including current directory in search path on Windows. This unifies Python's behaviour across different platforms; the docs always said that the current directory is inserted *only* if the script directory is unavailable: As initialized upon program startup, the first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of PYTHONPATH. The old behaviour was never intentional and wasn't desired, because users could break an application simply by running it from a directory that contained inappropriately named files. For details see the bug report and patch submission: http://www.python.org/sf/1526785 http://www.python.org/sf/1232023 Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: 2.4->2.5 current directory change?
Chris Mellon wrote:
> Considering that it's a backwards incompatible breaking change
> (although I understand why it was done), you'd think it deserved
> mention in the more prominent "Whats new in Python 2.5" section on the
> website, in addition to a one-liner in the NEWS file. Ah well, while
> I'm sure I'm not the only one who ran into it, it doesn't seem to be
> causing mass calamity and I know now.

I guess that most of the scripts that want curdir on the path and work on
different platforms already have to include the current directory manually.
Twisted's preamble in Trial does that too, but it is too cautious to work on
Windows (line 15 in the trial script):

if hasattr(os, "getuid") and os.getuid() != 0:
    sys.path.insert(0, os.curdir)

Maybe that can be changed to:

if not hasattr(os, "getuid") or os.getuid() != 0:
    sys.path.insert(0, os.curdir)

I'm no security expert, and I don't know if there are other operating systems
that don't have a getuid() function but do have a superuser, but this doesn't
look any less secure to me.

Ziga
-- 
http://mail.python.org/mailman/listinfo/python-list
Re: pickle problem - frexp() out of range
ahaldar wrote: > Hi: > > I have some large data structure objects in memory, and when I attempt > to pickle them, I get the following error: > > SystemError: frexp() out of range > > Are there some objects that are just too large to serialize, and if > so, is there an easy workaround without breaking up the object and > reconstructing it during deserialization? > > Here's the code I use to pickle the object: > > f = open(dir+file, "w+b") > pickle.dump(structure, f, protocol=2) # throws error > f.close() > > - abhra You are probably trying to pickle Inf or NaN. This was fixed in Python 2.5, see this revision: http://svn.python.org/view?rev=38893&view=rev and this patch: http://www.python.org/sf/1181301 Ziga -- http://mail.python.org/mailman/listinfo/python-list
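If upgrading to 2.5 is not an option, one way to confirm the diagnosis is to
hunt for non-finite floats in the structure before pickling. A rough sketch
(it only walks lists, tuples, dicts and instance __dict__ attributes, and does
no cycle detection; 'structure' is the object from the original post):

def _nonfinite(x):
    # NaN is the only float that is not equal to itself; the product below
    # overflows to infinity on IEEE 754 platforms
    inf = 1e308 * 10
    return isinstance(x, float) and (x != x or x in (inf, -inf))

def find_nonfinite(obj, path='obj'):
    """Yield paths of inf or NaN floats found inside a nested structure."""
    if _nonfinite(obj):
        yield path
    elif isinstance(obj, (list, tuple)):
        for i, item in enumerate(obj):
            for p in find_nonfinite(item, '%s[%d]' % (path, i)):
                yield p
    elif isinstance(obj, dict):
        for key, value in obj.items():
            for p in find_nonfinite(value, '%s[%r]' % (path, key)):
                yield p
    elif hasattr(obj, '__dict__'):
        for p in find_nonfinite(obj.__dict__, path + '.__dict__'):
            yield p

for bad in find_nonfinite(structure):
    print bad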
Re: Automatic reloading, metaclasses, and pickle
Andrew Felch wrote: > Hello all, > > I'm using the metaclass trick for automatic reloading of class member > functions, found > at:http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/160164 > > My problem is that if I > 1) pickle an object that inherits from "AutoReloader" > 2) unpickle the object > 3) modify one of the pickled' object's derived class methods > 4) reload the module holding the class > > ... then the changes don't affect the unpickled object. If I unpickle > the object again, of course the changes take effect. > > My friend that loves Smalltalk is laughing at me. I thought I had the > upperhand when I discovered the metaclasses but now I am not sure what > to do. I really don't want to have to unpickle again, I'm processing > video and it can take a long time. > > By the way, I used to avoid all of these problems by never making > classes, and always building complex structures of lists, > dictionaries, and tuples with global functions. It's going to take me > a while to kick those horrible habits (during my transition, I'm > deriving from list, dict, etc. hehe), perhaps a link to the metaclass > trick is in order in the tutorial's comments on reload? > > Any help that avoids having to unpickle again is appreciated! > > Thanks, > Andrew Felch This happens because unpickling doesn't recreate your object by calling its type. MetaInstanceTracker registers an instance only when it is created by calling a class. You can solve this by moving the instance registration to AutoReloader.__new__ and using pickle protocol version 2, but the best solution is to avoid both pickle (old pickles break if you change your code) and autoreloading (it's meant to be used in interactive console and entertaining ircbots, not in normal code). Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: gmpy moving to code.google.com
Alex Martelli wrote: > On Feb 27, 2007, at 2:59 AM, Daniel Nogradi wrote: > > > Hi Alex, > > > I did another test, this time with python 2.4 on suse and things are > > worse than in the previous case (which was python 2.5 on fedora 3), > > ouput of 'python gmp_test.py' follows: > > Interesting! gmpy interacts with decimal.Decimal by "monkey- > patching" that class on the fly; clearly the monkey-patching isn't > working with 2.4 on SuSE, so all the addition attempts are failing > (all 6 of them). > > So the issue is finding out why this strategy is failing there, while > succeeding on other Linux distros, Mac, and Windows. This is a bug in Python's decimal module in release 2.4.0. It was fixed in release 2.4.1: http://svn.python.org/view?rev=38708&view=rev Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Automatic reloading, metaclasses, and pickle
Andrew Felch wrote: > > Thanks Ziga. I use pickle protocol 2 and binary file types with the > command: "cPickle.dump(obj, file, 2)" > > I did your suggestion, i commented out the "__call__" function of > MetaInstanceTracker and copied the text to the __new__ function of > AutoReloader (code appended). I got a crazy recursive error message > (also appended below). In my code, I am creating a new instance, > rather than using the pickled object (it needs to work in both modes). > > Thanks very much for helping me get through this. With my development > approach, finding a solution to this problem is really important to > me. Here is a version that should work. It should work with all protocols, see InstanceTracker.__reduce_ex__. Note that all subclasses of InstanceTracker and AutoReloader should be cautious when overriding the __new__ method. They must call their base class' __new__ method, preferably by using super(), or the tracking won't work. # adapted from http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/160164 import weakref, inspect class MetaInstanceTracker(type): def __init__(cls, name, bases, ns): super(MetaInstanceTracker, cls).__init__(name, bases, ns) cls.__instance_refs__ = [] def __instances__(cls): instances = [] validrefs = [] for ref in cls.__instance_refs__: instance = ref() if instance is not None: instances.append(instance) validrefs.append(ref) cls.__instance_refs__ = validrefs return instances class InstanceTracker(object): __metaclass__ = MetaInstanceTracker def __new__(*args, **kwargs): cls = args[0] self = super(InstanceTracker, cls).__new__(*args, **kwargs) cls.__instance_refs__.append(weakref.ref(self)) return self def __reduce_ex__(self, proto): return super(InstanceTracker, self).__reduce_ex__(2) class MetaAutoReloader(MetaInstanceTracker): def __init__(cls, name, bases, ns): super(MetaAutoReloader, cls).__init__(name, bases, ns) f = inspect.currentframe().f_back for d in [f.f_locals, f.f_globals]: if name in d: old_class = d[name] for instance in old_class.__instances__(): instance.change_class(cls) cls.__instance_refs__.append(weakref.ref(instance)) for subcls in old_class.__subclasses__(): newbases = [] for base in subcls.__bases__: if base is old_class: newbases.append(cls) else: newbases.append(base) subcls.__bases__ = tuple(newbases) break class AutoReloader(InstanceTracker): __metaclass__ = MetaAutoReloader def change_class(self, new_class): self.__class__ = new_class Ziga -- http://mail.python.org/mailman/listinfo/python-list
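A rough interactive-style sketch (not from the original exchange) of how the reworked classes are meant to behave, assuming the module above has been imported into the session; the Counter class is made up for the example:

>>> class Counter(AutoReloader):
...     def bump(self):
...         print 'old behaviour'
...
>>> c = Counter()
>>> class Counter(AutoReloader):        # simulates editing the class and reloading the module
...     def bump(self):
...         print 'new behaviour'
...
>>> c.bump()                            # the existing instance was migrated to the new class
new behaviour
>>> import pickle
>>> c2 = pickle.loads(pickle.dumps(c))
>>> c2 in Counter.__instances__()       # unpickled instances are tracked as well
True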
Re: Automatic reloading, metaclasses, and pickle
Andrew Felch wrote: > I pasted the code into mine and replaced the old. It seems not to > work for either unpickled objects or new objects. I add methods to a > class that inherits from AutoReloader and reload the module, but the > new methods are not callable on the old objects. Man! It seems we're > so close, it will be huge if this ends up working. This stuff is so > over my head, I wish I could help in some additional way. > > -Andrew Did you copy and paste the entire module? I changed almost every part of the original code. I did some basic testing and it worked for me. Could you post the traceback? Note that Google Groups messed up the indentation; in MetaAutoReloader.__init__, the line starting with cls.__instance_refs__ should be at the same level as the previous line. Did you restart Python? InstanceTracker, MetaInstanceTracker and MetaAutoReloader are not auto-reloaded :). Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Automatic reloading, metaclasses, and pickle
Andrew Felch wrote: > Thanks for checking. I think I narrowed the problem down to > inheritance. I inherit from list or some other container first: > > class PointList( list, AutoReloader ): > def PrintHi1(self): > print "Hi2" > > class MyPrintingClass( AutoReloader ): > def PrintHi2(self): > print "Hi2v2" > > Automatic reloading works for MyPrintingClass but not for PointList. > Any ideas? > > -Andrew Ah yes, the problem is that list.__new__ does not cooperate with super(), so the next class in the MRO never gets a chance to register the instance. Try switching the bases, so that the __new__ method inherited through AutoReloader is found first. Ziga -- http://mail.python.org/mailman/listinfo/python-list
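In other words, something along these lines should keep the instance tracking alive (a sketch based on the classes from the earlier posts):

class PointList(AutoReloader, list):
    # AutoReloader comes first, so the cooperative __new__ from
    # InstanceTracker runs and registers the instance; list.__new__
    # is still reached through super(), so list behaviour is kept
    def PrintHi1(self):
        print "Hi1"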
Re: __reduce__(1) vs __reduce__(2)
Kirill Simonov wrote: > Could someone explain why __reduce__(2) works for files while > __reduce__(1) doesn't? I think it is a bug. Both should raise an error. __reduce__ and __reduce_ex__ are part of the pickle protocol. Files are not meant to be picklable, since they are already persistent. With protocols 0 and 1, an error is raised when you try to pickle a built-in object that doesn't have a custom __reduce__() method. However, protocol 2 changed that, and now almost all objects can be pickled, but not necessarily unpickled to their original state. Files are one such example: you can pickle them, but the unpickled object won't be useful; you will get an uninitialised file object. Basically, the result is equivalent to the following: unpickled = file.__new__(file) [snipped] > What is a correct procedure of getting state and restoring Python objects? You should never call __reduce__() with the protocol argument. It works only for objects that don't override the default __reduce__() method, and is possible only because object.__reduce_ex__ and object.__reduce__ represent the same underlying function. For example, sets override this method, and don't expect the protocol argument: >>> s = set([1,2,3]) >>> s.__reduce__(2) Traceback (most recent call last): ... TypeError: __reduce__() takes no arguments (1 given) The correct way is to try to call obj.__reduce_ex__(protocol) first, and fall back to obj.__reduce__() if that method is not available. Example: if hasattr(obj, '__reduce_ex__'): state_tuple = obj.__reduce_ex__(2) else: state_tuple = obj.__reduce__() For more details, check the pickle protocol documentation: http://docs.python.org/lib/pickle-protocol.html as well as the Extensions to the pickle protocol PEP: http://www.python.org/dev/peps/pep-0307/ > -- > xi Ziga -- http://mail.python.org/mailman/listinfo/python-list
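If the underlying goal is to make an object that holds a file picklable, the usual approach is to drop the file from the pickled state and reopen it on unpickling. A hypothetical sketch (LogWriter is made up for illustration, not from the original post):

class LogWriter(object):
    def __init__(self, path):
        self.path = path
        self.fileobj = open(path, 'a')

    def __getstate__(self):
        state = self.__dict__.copy()
        del state['fileobj']            # file objects don't survive pickling
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.fileobj = open(self.path, 'a')   # recreate the resource on unpickling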
Re: are docstrings for variables a bad idea?
jelle wrote: > Hi Michele, > > Thanks for pointing that out, cool! > > I would argue -even- that is too much programming effort. > Like method docstring, variables docstrings should be effortless to > write. I don't know what exactly do you mean with variable docstrings, but if you just want to add docstrings to instance's data attributes, you can use something like this: """ This module is useful for documenting data attributes. Example: >>> class C(object): ... foo = attr('a common attribute in examples') ... bar = attr('this one has a name for nicer errors', 'bar') ... baz = attr('and this one has a default value', default=1) ... def __init__(self, foo=None, bar=None, baz=None): ... if foo is not None: ... self.foo = foo ... if bar is not None: ... self.bar = bar ... if baz is not None: ... self.baz = baz ... >>> C.foo attr(doc='a common attribute in examples', name='', default=NoDefault) >>> C.foo.__doc__ 'a common attribute in examples' >>> C.bar.__doc__ 'this one has a name for nicer errors' >>> C.baz.__doc__ 'and this one has a default value' >>> c = C() >>> c.foo Traceback (most recent call last): ... AttributeError: 'C' object has no attribute '' >>> c.bar Traceback (most recent call last): ... AttributeError: 'C' object has no attribute 'bar' >>> c.baz 1 >>> d = C(1, 2, 3) >>> d.foo 1 >>> d.bar 2 >>> d.baz 3 """ class Marker(object): def __init__(self, representation): self.representation = representation def __repr__(self): return self.representation _NoDefault = Marker('NoDefault') class attr(object): def __init__(self, doc='', name='', default=_NoDefault): self.__doc__ = doc self.name = name self.default = default def __repr__(self): s = "attr(doc=%r, name=%r, default=%r)" return s % (self.__doc__, self.name, self.default) def __get__(self, obj, objtype): if obj is None: return self if self.default is _NoDefault: msg = "%r object has no attribute %r" raise AttributeError(msg % (objtype.__name__, self.name)) return self.default if __name__ == '__main__': import doctest doctest.testmod() -- http://mail.python.org/mailman/listinfo/python-list
Re: list*list
BBands wrote: > There must be a better way to multiply the elements of one list by > another: [snipped] > Perhaps a list comprehension or is this better addressed by NumPy? If you have a large amount of numerical code, it is definitely better to use numpy, since it is intended for exactly that purpose: >>> import numpy >>> a = numpy.array([1, 2, 3]) >>> b = numpy.array([1, 2, 3]) >>> c = a * b >>> c array([1, 4, 9]) Otherwise, you can use the built-in function map and the functions in the operator module: >>> import operator >>> a = [1, 2, 3] >>> b = [1, 2, 3] >>> c = map(operator.mul, a, b) >>> c [1, 4, 9] >>> d = map(operator.add, a, b) >>> d [2, 4, 6] > Thanks, > > jab Ziga -- http://mail.python.org/mailman/listinfo/python-list
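For completeness, the list comprehension mentioned in the question would look like this (plain Python, no extra modules):

>>> a = [1, 2, 3]
>>> b = [1, 2, 3]
>>> [x * y for x, y in zip(a, b)]
[1, 4, 9]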
Re: Finding defining class in a decorator
lcaamano wrote: > We have a tracing decorator that automatically logs enter/exits to/from > functions and methods and it also figures out by itself the function > call arguments values and the class or module the function/method is > defined on. Finding the name of the class where the method we just > entered was defined in is a bit tricky. [snipped] You might find this helpful: import sys def tracer(func): """ A decorator that prints the name of the class from which it was called. The name is determined at class creation time. This works only in CPython, since it relies on the sys._getframe() function. The assumption is that it can only be called from a class statement. The name of the class is deduced from the code object name. """ classframe = sys._getframe(1) print classframe.f_code.co_name return func if __name__ == '__main__': # this should print Test1 class Test1(object): @tracer def spam(self): pass # this should print Test2 class Test2(Test1): @tracer def spam(self): pass > -- > Luis P Caamano > Atlanta, GA, USA Hope this helps, Ziga -- http://mail.python.org/mailman/listinfo/python-list
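If the name is needed later, for example in the log messages the original poster mentions, rather than just printed at class creation time, one variation (still CPython-only, still assuming the decorator is applied directly inside a class statement) is to record it on the function object:

import sys

def tracer(func):
    # remember the name of the enclosing class statement for later use
    func.defining_class = sys._getframe(1).f_code.co_name
    return func

class Test(object):
    @tracer
    def spam(self):
        pass

print Test.spam.defining_class      # prints: Test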
Re: how to change sys.path?
Michael Yanowitz wrote: > Is there something like a .pythoninitrc which can run whenever we start > Python > that can load a file with many sys.path.append(), etc? > If not is there some way to modify the Python shell constructor and > destructor? > > Thanks in advance: > Michael yanowitz Yes, there is the user module: http://docs.python.org/lib/module-user.html which you have to explicitly import and which will look for .pythonrc.py file in user's home directory and execute it. The other option is a sitecustomize module, which should be put somewhere on the initial search path. It will be imported automatically during the interpreter initialization. See: http://docs.python.org/lib/module-site.html for details. Ziga -- http://mail.python.org/mailman/listinfo/python-list
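For example, a hypothetical sitecustomize.py along these lines (the paths are made up) is picked up automatically at interpreter start-up:

# sitecustomize.py -- placed somewhere on the default sys.path
import sys

# example paths only; substitute your own directories
for extra in ('/home/michael/pylib', '/opt/project/modules'):
    if extra not in sys.path:
        sys.path.append(extra)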
Re: Trace dynamically compiled code?
Ed Leafe wrote: > Hi, > > Thanks to the help of many on this list, I've been able to take code > that is created by the user in my app and add it to an object as an > instance method. The technique used is roughly: Just some notes about your code: > nm = "myMethod" > code = """def myMethod(self): > print "Line 1" > print "My Value is %s" % self.Value > return > """ > compCode = compile(code, "", "exec") > exec compCode Try not using bare exec statements, since they pollute the local scope. In your example you could use: compCode = compile(code, "", "exec") d = {} exec compCode in d func = d[nm] See http://docs.python.org/ref/exec.html for details. > exec "self.%s = %s.__get__(self)" % (nm, nm) You don't need dynamic execution here; you can simply use setattr and the new module: import new method = new.instancemethod(func, self) setattr(self, nm, method) and yes, I remember that I was the one who suggested you the __get__ hack. > This is working great, but now I'm wondering if there is a way to > enable pdb tracing of the code as it executes? When tracing "normal" > code, pdb will show you the name of the script being executed, the > line number and the source code for the line about to be executed. > But when stepping through code compiled dynamically as above, the > current line's source code is not available to pdb, and thus does not > display. > > Does anyone know a way to compile the dynamic code so that pdb can > 'see' the source? I suppose I could write it all out to a bunch of > temp files, but that would be terribly messy. Are there any neater > solutions? You should check py lib: http://codespeak.net/py/current/doc/ , specifically the py.code "module". Then you can modify the function from above: import inspect f = inspect.currentframe() lineno = f.f_lineno - 5 # or some other constant filename = f.f_code.co_filename import py co = py.code.Code(func) new_code = co.new(co_lineno=lineno, co_filename=filename) new_func = new.function(new_code, func.func_globals, nm, func.func_defaults, func.func_closure) > -- Ed Leafe > -- http://leafe.com > -- http://dabodev.com Ziga Seilnacht -- http://mail.python.org/mailman/listinfo/python-list
Re: apache config file parser
David Bear wrote: > I was wondering if anyone has written an apache config file parser in > python. There seem to be a number of perl mods to do this. But I don't seem > to be able to find anything in python. > > -- > David Bear > -- let me buy your intellectual property, I want to own your thoughts -- ZConfig http://www.zope.org/Members/fdrake/zconfig/ seems to support similar syntax. Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: __dict__ strangeness
Georg Brandl wrote: > Hi, > > can someone please tell me that this is correct and why: > > >>> class C(object): > ... pass > ... > >>> c = C() > >>> c.a = 1 > >>> c.__dict__ > {'a': 1} > >>> c.__dict__ = {} > >>> c.a > Traceback (most recent call last): > File "", line 1, in ? > AttributeError: 'C' object has no attribute 'a' > >>> > >>> class D(object): > ... __dict__ = {} > ... > >>> d = D() > >>> d.a = 1 > >>> d.__dict__ > {} > >>> d.__dict__ = {} > >>> d.a > 1 > > Thanks, > Georg Here is another example that might help: >>> class E(object): ... __dict__ = {'a': 1} ... >>> e = E() >>> e.__dict__ {'a': 1} >>> E.__dict__ >>> E.__dict__['a'] Traceback (most recent call last): File "", line 1, in ? KeyError: 'a' >>> E.__dict__['__dict__'] {'a': 1} Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: ** Operator
Christoph Zwerschke wrote: > Alex Martelli wrote: > > Sathyaish wrote: > > > >> I tried it on the interpreter and it looks like it is the "to the power > >> of" operator symbol/function. Can you please point me to the formal > >> definition of this operator in the docs? > > > > http://docs.python.org/ref/power.html > > I think this should be also mentioned in the Built-In Functions section > of the Library Reference. Probably most users do not read the Language > Reference (since the main menu says it's "for language lawyers" and yes, > it is not really fun to read). > > In the explanation about pow() at > http://docs.python.org/lib/built-in-funcs.html, the notation 10**2 is > suddenly used, without explaining that it is equivalent to pow(10,2). I > think this could be improved in the docs. > > -- Christoph It is: http://docs.python.org/lib/typesnumeric.html Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: user-supplied locals dict for function execution?
Lonnie Princehouse wrote: > Occaisionally, the first two lines of The Zen of Python conflict with > one another. > > An API I'm working on involves a custom namespace implementation using > dictionaries, and I want a pretty syntax for initializing the custom > namespaces. The fact that these namespaces are implemented as > dictionaries is an implementation detail, and I don't want the users to > access them directly. I find the "implicit update" syntax to be much > cleaner: > This can be easier achieved with a custom metaclass: >>> class MetaNamespace(type): ... def __new__(metaclass, name, bases, dict): ... try: ... Namespace ... except NameError: ... return type.__new__(metaclass, name, bases, dict) ... dict.pop('__module__', None) ... return dict ... >>> class Namespace(object): ... __metaclass__ = MetaNamespace ... Now whenever you want to create your dictionary you simply declare a class that inherits from Namespace: >>> class MyNamespace(Namespace): ... x = 5 ... y = 'spam' ... z = 'eggs' ... >>> print sorted(MyNamespace.items()) [('x', 5), ('y', 'spam'), ('z', 'eggs')] Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Per instance descriptors ?
bruno at modulix wrote: > Hi > > I'm currently playing with some (possibly weird...) code, and I'd have a > use for per-instance descriptors, ie (dummy code): > Now the question: is there any obvious (or non-obvious) drawback with > this approach ? Staticmethods won't work anymore: >>> class Test(object): ... @staticmethod ... def foo(): ... pass ... def __getattribute__(self, name): ... v = object.__getattribute__(self, name) ... if hasattr(v, '__get__'): ... return v.__get__(self, self.__class__) ... return v ... >>> test = Test() >>> test.foo() Traceback (most recent call last): File "", line 1, in ? TypeError: foo() takes no arguments (1 given) > TIA > -- > bruno desthuilliers > python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for > p in '[EMAIL PROTECTED]'.split('@')])" Ziga -- http://mail.python.org/mailman/listinfo/python-list
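One possible workaround is to re-bind only objects that actually live in the instance's __dict__, since anything found on the class has already gone through the normal descriptor machinery. A sketch, not tested against your real code:

class Test(object):
    @staticmethod
    def foo():
        pass

    def __getattribute__(self, name):
        v = object.__getattribute__(self, name)
        instance_dict = object.__getattribute__(self, '__dict__')
        # only per-instance values are re-bound; class attributes
        # (including staticmethods) are returned unchanged
        if name in instance_dict and hasattr(v, '__get__'):
            return v.__get__(self, type(self))
        return v

test = Test()
test.foo()    # staticmethods work again; per-instance descriptors are still bound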
Re: __slots__
David Isaac wrote: > 1. "Without a __dict__ variable, > instances cannot be assigned new variables not listed in the __slots__ > definition." > > So this seemed an interesting restriction to impose in some instances, > but I've noticed that this behavior is being called by some a side effect > the reliance on which is considered unPythonic. Why? If you want to restrict attribute asignment, you should use the __setattr__ special method, see: http://docs.python.org/ref/attribute-access.html > 2. What is a simple example where use of slots has caused "subtle" problems, > as some claim it will? The first point is true only if all bases use __slots__: >>> class A(object): ... pass ... >>> class B(A): ... __slots__ = ('spam',) ... >>> b = B() >>> b.eggs = 1 >>> b.eggs 1 > 3. What is a simple example of a Pythonic use of __slots__ that does NOT > involved the creation of **many** instances. > > Thanks, > Alan Isaac Ziga -- http://mail.python.org/mailman/listinfo/python-list
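A small sketch of the __setattr__ approach (the attribute names are just examples):

class Restricted(object):
    _allowed = ('spam', 'eggs')

    def __setattr__(self, name, value):
        if name not in self._allowed:
            raise AttributeError("can't set attribute %r" % (name,))
        super(Restricted, self).__setattr__(name, value)

r = Restricted()
r.spam = 1      # allowed
# r.ham = 2     # would raise AttributeError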
Re: Strange metaclass behaviour
Christian Eder wrote: > Hi, > > I think I have discovered a problem in context of > metaclasses and multiple inheritance in python 2.4, > which I could finally reduce to a simple example: I don't know if this is a bug; but I will try to expain what is happening; here is an example similar to yours: >>> class M_A(type): ... def __new__(meta, name, bases, dict): ... print 'metaclass:', meta.__name__, 'class:', name ... return super(M_A, meta).__new__(meta, name, bases, dict) ... >>> class M_B(M_A): ... pass ... >>> class A(object): ... __metaclass__ = M_A ... metaclass: M_A class: A >>> class B(object): ... __metaclass__ = M_B ... metaclass: M_B class: B So far everything is as expected. >>> class C(A, B): ... __metaclass__ = M_B ... metaclass: M_B class: C If we explicitly declare that our derived class inherits from the second base, which has a more derived metaclass, everything is OK. >>> class D(A, B): ... pass ... metaclass: M_A class: D metaclass: M_B class: D Now this is where it gets interesting; what happens is the following: - Since D does not have a __metaclass__ attribute, its type is determined from its bases. - Since A is the first base, its type (M_A) is called; unfortunately this is not the way metaclasses are supposed to work; the most derived metaclass should be selected. - M_A's __new__ method calls the __new__ method of the next class in MRO; that is, super(M_1, meta).__new__ is equal to type.__new__. - In type.__new__, it is determined that M_A is not the best type for D class; it should be actually M_B. - Since type.__new__ was called with wrong metaclass as the first argument, call the correct metaclass. - This calls M_B.__new__, which again calls type.__new__, but this time with M_B as the first argument, which is correct. As I said, I don't know if this is a bug or not, but you can achieve what is expected if you do the following in your __new__ method (warning, untested code): >>> from types import ClassType >>> class AnyMeta(type): ... """ ... Metaclass that follows type's behaviour in "metaclass resolution". ... ... Code is taken from Objects/typeobject.c and translated to Python. ... """ ... def __new__(meta, name, bases, dict): ... winner = meta ... for cls in bases: ... candidate = type(cls) ... if candidate is ClassType: ... continue ... if issubclass(winner, candidate): ... continue ... if issubclass(candidate, winner): ... winner = candidate ... continue ... raise TypeError("metaclass conflict: ...") ... if winner is not meta and winner.__new__ != AnyMeta.__new__: ... return winner.__new__(winner, name, bases, dict) ... # Do what you actually meant from here on ... print 'metaclass:', winner.__name__, 'class:', name ... return super(AnyMeta, winner).__new__(winner, name, bases, dict) ... >>> class OtherMeta(AnyMeta): ... pass ... >>> class A(object): ... __metaclass__ = AnyMeta ... metaclass: AnyMeta class: A >>> class B(object): ... __metaclass__ = OtherMeta ... metaclass: OtherMeta class: B >>> class C(A, B): ... pass ... metaclass: OtherMeta class: C > Does anyone have a detailed explanation here ? > Is this problem already known ? > > regards > chris I hope that above explanation helps. Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Strange metaclass behaviour
Michele Simionato wrote: There is a minor bug in your code: > def thisclass(proc, *args, **kw): >""" Example: >>>> def register(cls): print 'registered' >... >>>> class C: >...thisclass(register) >... >registered >""" ># basic idea stolen from zope.interface, which credits P.J. Eby >frame = sys._getframe(1) >assert '__module__' in frame.f_locals # > <--- here >def makecls(name, bases, dic): > try: > cls = type(name, bases, dic) > except TypeError, e: > if "can't have only classic bases" in str(e): > cls = type(name, bases + (object,), dic) > else: # other strange errors, such as __slots__ conflicts, etc > raise > del cls.__metaclass__ > proc(cls, *args, **kw) > return cls >frame.f_locals["__metaclass__"] = makecls > > Figured you would like this one ;) > > Michele Simionato See this example: >>> import sys >>> def in_class_statement1(): ... frame = sys._getframe(1) ... return '__module__' in frame.f_locals ... >>> def in_class_statement2(): ... frame = sys._getframe(1) ... return '__module__' in frame.f_locals and not \ ...'__module__' in frame.f_code.co_varnames ... >>> class A(object): ... print in_class_statement1() ... print in_class_statement2() ... True True >>> def f(): ... __module__ = 1 ... print in_class_statement1() ... print in_class_statement2() ... >>> f() True False -- http://mail.python.org/mailman/listinfo/python-list
Re: property docstrings
Darren Dale wrote: > I am trying to work with properties, using python 2.4.2. I can't get the > docstrings to work, can someone suggest what I'm doing wrong? I think the > following script should print "This is the doc string.", but instead it > prints: > > "float(x) -> floating point number > > Convert a string or number to a floating point number, if possible." > > Thanks, > Darren > a=Example() > print 'myattr docstring:\n', a.myattr.__doc__ > print 'foo docstring:\n', a.foo.__doc__ > print 'bar docstring:\n', a.bar.__doc__ change this part to: print 'myattr docstring:\n', Example.myattr.__doc__ print 'foo docstring:\n', Example.foo.__doc__ print 'bar docstring:\n', Example.bar.__doc__ What happens is that when property is accessed from an instance, it returns whatever the fget function returns, and the __doc__ attribute is then looked up on that object. To get to the actual property object (and its __doc__ attribute) you have to access it from a class. Ziga -- http://mail.python.org/mailman/listinfo/python-list
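A self-contained illustration; this Example class is only a guess at what the original one looked like:

class Example(object):
    def _get_myattr(self):
        return 3.14
    myattr = property(_get_myattr, doc="This is the doc string.")

print Example.myattr.__doc__     # the property's docstring
print Example().myattr.__doc__   # the float docstring, because fget returned a float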
Re: Comparisons and singletons
Steven Watanabe wrote: > PEP 8 says, "Comparisons to singletons like None should always be done > with 'is' or 'is not', never the equality operators." I know that "is" > is an identity operator, "==" and "!=" are the equality operators, but > I'm not sure what other singletons are being referred to here. Other built-in singletons are NotImplemented and Ellipsis, see: http://docs.python.org/ref/types.html for details. > > Also, I've seen code that does things like: > > if foo is 3: > if foo is not '': > > Are these valid uses of "is"? No. Try these examples: >>> a = 'spam' >>> b = ''.join(list(a)) >>> b 'spam' >>> a == b True >>> a is b False >>> a = 1000 >>> b = 1000 >>> a == b True >>> a is b False > > Thanks in advance. > -- > Steven. Hope this helps. Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Comparisons and singletons
David Isaac wrote: > "Ziga Seilnacht" <[EMAIL PROTECTED]> wrote in message > news:[EMAIL PROTECTED] > > >>> a = 1000 > > >>> b = 1000 > > >>> a == b > > True > > >>> a is b > > False > > Two follow up questions: > > 1. I wondered about your example, > and noticed > >>> a = 10 > >>> b = 10 > >>> a is b > True > > Why the difference? CPython keeps an internal cache of small integers (the exact range has varied between releases; recent versions cache roughly -5 through 256), so equal small integers are usually the same object while larger ones are not, but that is an implementation detail and you should not rely on it. > 2. If I really want a value True will I ever go astray with the test: > if a is True: > >>> a = True > >>> b = 1. > >>> c = 1 > >>> a is True, b is True, c is True > (True, False, False) Note that True and False only became instances of the dedicated bool type in Python 2.3; in 2.2.1 they were added as plain integers 1 and 0, so identity tests on them are unreliable on older versions. You should also finish reading PEP 8, see especially this part: - Don't compare boolean values to True or False using == Yes: if greeting: No: if greeting == True: Worse: if greeting is True: > > Thanks, > Alan Isaac Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: Why are so many built-in types inheritable?
Fabiano Sidler wrote: [snipped] > The problem with this is that the func_code attribute would contain > the code of PrintingFunction instead of func. What I wanted to do, is > to keep the original behaviour, i.e. set the variable __metaclass__ to > DebugMeta and so get debug output, without changing a function and > getting the original function's code object by the func_code > attribute, not PrintigFunction's one. That's why I *must* inherit from > . No, you don't have to: >>> import new >>> import types >>> class DebugFunction(object): ... def __init__(self, func): ... object.__setattr__(self, 'func', func) ... def __get__(self, obj, objtype): ... return new.instancemethod(self, obj, objtype) ... def __call__(self, *args, **namedargs): ... print args, namedargs ... func = object.__getattribute__(self, 'func') ... return func(*args, **namedargs) ... def __getattribute__(self, name): ... func = object.__getattribute__(self, 'func') ... return getattr(func, name) ... def __setattr__(self, name, value): ... func = object.__getattribute__(self, 'func') ... setattr(func, name, value) ... def __delattr__(self, name): ... func = object.__getattribute__(self, 'func') ... delattr(func, name) ... >>> class DebugMeta(type): ... def __new__(meta, name, bases, dict): ... for name, obj in dict.iteritems(): ... if isinstance(obj, types.FunctionType): ... dict[name] = DebugFunction(obj) ... return type.__new__(meta, name, bases, dict) ... >>> class Example(object): ... __metaclass__ = DebugMeta ... def spam(self, *args, **namedargs): ... """Spam spam spam spam. Lovely spam! Wonderful spam!""" ... pass ... >>> e = Example() >>> e.spam('eggs', anwser=42) (<__main__.spam object at ...>, 'eggs') {'anwser': 42} >>> e.spam.__doc__ 'Spam spam spam spam. Lovely spam! Wonderful spam!' >>> e.spam.im_func.func_code ", line 3> > Greetings, > F. Sidler Ziga -- http://mail.python.org/mailman/listinfo/python-list
Re: good style guides for python-style documentation ?
Fredrik Lundh wrote: > (reposted from doc-sig, which seems to be mostly dead > these days). > > over at the pytut wiki, "carndt" asked: > > Are there any guidelines about conventions concerning > punctuation, text styles and language style (e.g. how > to address the reader)? > > any suggestions from this list ? > > Documenting Python http://docs.python.org/dev/doc/style-guide.html recommends Apple Publications Style Guide: http://developer.apple.com/referencelibrary/API_Fundamentals/UserExperience-fund-date.html GNOME Documentation Style Guide is also quite useful: http://developer.gnome.org/documents/style-guide/ . Ziga -- http://mail.python.org/mailman/listinfo/python-list