On Mon, 27 Apr 2009 23:11:11 -0700, Aaron Brady wrote:

> What is the rationale for considering all instances true of a
> user-defined type? Is it strictly a practical stipulation, or is
> there something conceptually true about objects?
Seven years ago, in an attempt to convince Guido *not* to include
booleans in Python, Laura Creighton wrote a long, detailed post
explaining her opposition. At the heart of her argument is the
observation that Python didn't need booleans, at least not the
ints-in-fancy-hats booleans that we've ended up with, because Python
already made a far more useful and fundamental distinction: between
Something and Nothing.

http://groups.google.com/group/comp.lang.python/msg/2de5e1c8384c0360?hl=en

All objects are either Something or Nothing. The instances of some
classes are always Something, just as the instances of others are
always Nothing. By default, instances are Something; but if
__nonzero__ returns False, or __len__ returns 0, then they are
Nothing.

In a boolean (or truth) context, Something and Nothing behave like
True and False do in languages with real booleans:

    if obj:
        print "I am Something"
    else:
        print "I am Nothing"

To steal an idiom from Laura: Python has a float-shaped Nothing (0.0),
a list-shaped Nothing ([]), a dict-shaped Nothing ({}), an int-shaped
Nothing (0), a singleton Nothing (None), and so forth. It also has
many corresponding Somethings. All bool() does is convert Something or
Nothing into a canonical form: the subclassed ints True and False.

I'm not sure whether Guido has ever used the terms Something and
Nothing when describing Python's truth-testing model, but the
distinction is clearly there, at the heart of Python. Python didn't
even get a boolean type until version 2.3.
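If you want to see the protocol in action, here's a quick sketch (the
class names are invented purely for illustration; this is Python 2,
where truth-testing consults __nonzero__ first and falls back on
__len__):

    # Hypothetical classes showing the three cases described above.

    class Default(object):
        """No __nonzero__ or __len__: instances are always Something."""
        pass

    class Flag(object):
        """Truth delegated to __nonzero__."""
        def __init__(self, value):
            self.value = value
        def __nonzero__(self):
            return bool(self.value)

    class Bag(object):
        """No __nonzero__, so Python falls back on __len__: an empty
        Bag is a Bag-shaped Nothing."""
        def __init__(self, items=()):
            self.items = list(items)
        def __len__(self):
            return len(self.items)

    for obj in (Default(), Flag(0), Bag()):
        if obj:
            print "%s: Something" % type(obj).__name__
        else:
            print "%s: Nothing" % type(obj).__name__

Run that and Default() reports Something (the default), while Flag(0)
and Bag() both report Nothing.

-- 
Steven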