On Aug 2, 4:52 pm, Andreas Pfrengle <a.pfren...@gmail.com> wrote:
> I'm trying to define a subclass of int called int1. An int1-object
> shall behave exactly like an int-object, with the only difference that
> the displayed value shall be value + 1 (it will be used to display
> array indices starting at 1 instead of 0). Right now I have:
>
> class int1(int):
>     def __str__(self):
>         return int.__str__(self + 1)
>
> However, if I calculate with int1 and int (or other number) objects,
> the result is always coerced to an int (or other number object), e.g.:
>
> a = int1(5)
> b = 5
> print a    # "6"
> print a+b  # "10"
>
> How can I tell int1 to be the "default integer object"? Do I need to
> overload *every* mathematical operation method of int, or is there an
> easier way?
(Preface: I normally don't offer recommendations without answering the
question as asked, but once in a while it has to be done.)

I **highly** recommend against this approach. You are creating an object
that differs from a built-in, int, in a highly misleading way that only
makes sense in a very limited context, and the object's modified behavior
gives no clue that it's been modified in such a way. (That is, you can't
tell from __str__()'s return value that the object isn't a regular int.)
To make matters worse, you want to program this object to coerce other
integers, so there's a risk of these objects escaping from the context
where they make sense.

This is just a bad idea. The type is not the place to implement behavior
that makes sense only in a limited context. Instead, do the adjustment at
the point where you display the value, something like this:

    print "Item %d is %s." % (i+1, s[i])

Carl Banks
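P.S. If the 1-based display is needed in many places, a small helper
keeps the adjustment at the display site instead of in the type. A rough
sketch of what I mean (the name display_item is just illustrative):

    def display_item(seq, i):
        # The index stays 0-based internally; the +1 happens
        # only at display time.
        print "Item %d is %s." % (i+1, seq[i])

    s = ['spam', 'eggs']
    for i in range(len(s)):
        display_item(s, i)
    # prints:
    # Item 1 is spam.
    # Item 2 is eggs.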