On 2:59 PM, Carl Banks wrote:
On Sep 5, 1:19 pm, Spencer Pearson <speeze.pear...@gmail.com> wrote:
Hi! I'm writing a package with several files in it, and I've found
that "isinstance" doesn't work the way I expect under certain
circumstances.
Short example: here are two files.
# fileone.py
import filetwo
class AClass( object ):
    pass

if __name__ == '__main__':
    a = AClass()
    filetwo.is_aclass( a )
# filetwo.py
import fileone
def is_aclass( a ):
    print "The argument is", ("" if isinstance(a, fileone.AClass) else
        "not"), "an instance of fileone.AClass"
If you run fileone.py, it will tell you that "The argument is not an
instance of fileone.AClass", which seems strange to me, given that the
fileone module is the one that CREATES the object with its own AClass
class. And if you replace "if __name__ == '__main__'" with "def
main()", start Python, import fileone, and call fileone.main(), it
tells you that the argument IS an instance of AClass.
So, the module's name changes to __main__ when you run it on its own...
well, it looks like it puts all of the things defined in fileone in
the __main__ namespace INSTEAD of in the fileone module's namespace,
and then when filetwo imports fileone, the class is created again,
this time as fileone.AClass, and though it's identical in function to
__main__.AClass, one "is not" the other.
Correct. Python always treats the main script as a module called
__main__. If you then try to import the main script file from another
module, Python will actually import it again with whatever its usual
name is.
This is easily one of the most confusing and unfortunate aspects of
Python.
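To see the double import concretely, you can drop a couple of
diagnostic prints into the __main__ block of fileone.py above (just a
sketch for inspection; it assumes the two example files from the
original post):

if __name__ == '__main__':
    import sys
    a = AClass()
    filetwo.is_aclass( a )
    # After "import filetwo" has run, sys.modules holds BOTH copies:
    # the running script under '__main__', and the fresh copy created
    # by filetwo's "import fileone" under 'fileone'.
    print sys.modules['__main__'] is sys.modules['fileone']    # prints False
    print sys.modules['__main__'].AClass is sys.modules['fileone'].AClass    # prints False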
Is this kind of doubled-back 'isinstance' inherently sinful? I mean, I
could solve this problem by giving all of my classes "classname"
attributes or something, but maybe it's just a sign that I shouldn't
have to do this in the first place.
Even if there are better ways than isinstance, the weird behavior of
__main__ shouldn't be the reason not to use it.
My recommendation for most programmers is to treat Python files either
as scripts (which you start the Python interpreter with) or modules (which
you import from within Python); never both. Store most functionality
in modules and keep startup scripts small. If you do this, the weird
semantics of __main__ is a moot point.
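In practice that just means the file you actually run contains almost
nothing. A generic sketch (the module and script names here are made
up for illustration):

# mylib.py -- all the functionality lives here; import it from anywhere
def run():
    print "doing the real work"

# start.py -- the script you hand to the interpreter; nothing imports it
import mylib
mylib.run()

Because start.py is never imported by anything, it does not matter
that it runs under the name __main__.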
If you want to be able to run a module as a script while avoiding side
effects due to it being named __main__, the easiest thing to do is to
put something like the following boilerplate at the top of the module
(this causes the module to rename itself).
import sys

if __name__ == '__main__':
    is_main = True    # since you're overwriting __name__ you'll need this later
    __name__ = 'foo'
    sys.modules['foo'] = sys.modules['__main__']
else:
    is_main = False
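Applied to the fileone.py from the original post, that would look
something like this (a sketch: the module renames itself to its real
name 'fileone' rather than the placeholder 'foo', and the usual
__name__ test at the bottom becomes a test of is_main):

import sys
if __name__ == '__main__':
    is_main = True
    __name__ = 'fileone'
    sys.modules['fileone'] = sys.modules['__main__']
else:
    is_main = False

import filetwo

class AClass( object ):
    pass

if is_main:
    a = AClass()
    filetwo.is_aclass( a )

With this in place, filetwo's "import fileone" finds the
already-running script in sys.modules instead of importing a second
copy, so there is only one AClass and the isinstance check passes.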
All of this gets a lot more complicated when packages are involved.
Carl Banks
Perhaps a better answer would be to import __main__ from the second module.
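A minimal sketch of that idea, for what it's worth (it only works when
fileone.py really is the running script, and it is not necessarily a
recommendation):

# filetwo.py -- check against the class the running script created
def is_aclass( a ):
    import __main__    # imported lazily, at call time
    print "The argument is", ("" if isinstance(a, __main__.AClass) else
        "not"), "an instance of AClass"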
But to my way of thinking, the answer should be to avoid ever having
circular imports. This is just the most blatant of the problems that
circular imports can cause.
I don't know of any cases where circular dependencies are really
necessary, but if one decides to use them, then two things should be done:
1) do almost nothing in top-level code in any module involved in such a
circular dependency. The top level should have all of the imports, and none
of the executable code.
2) do not ever involve the startup script in the loop. If necessary,
make it two lines, importing, then calling the real mainline.
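For the example that started this thread, that two-line script could
look like the following sketch (the main() function is assumed to hold
what used to be in fileone.py's __main__ block):

# runme.py -- startup script; nothing imports it, so it stays out of the cycle
import fileone
fileone.main()

fileone and filetwo can keep importing each other, but since fileone
is now only ever imported under its real name, filetwo's isinstance
test sees the same AClass that fileone.main() instantiates.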
DaveA