This is semi-experimental and I'd appreciate opinions on whether it's the correct design approach or not. It seems like a good idea, but that doesn't mean it is.
I have a class 'A', which provides standard support functions and exception handling. I have 'B' and 'C', which specialise on 'A'. What I'd like to achieve is something similar to:

    @inject(B)
    def some_function(a, b):
        pass # something useful

The name 'some_function' is completely redundant -- I don't need it, and I don't actually care about the function afterwards, as long as it becomes a __call__ of a 'B' *instance*.

I've basically got a huge list of functions which need to become the callable method of an object, possibly at run-time, so I don't want to write:

    class Something(B):
        def __call__(self, etc., etc.):
            pass # do something

I've got as far as type(somename, (B,), {}) -- do I then __init__ or __new__ the object, or...?

In short, the function should become the __call__ method of an object that is already __init__'d with the function arguments -- so that when the object is called, I get the result of the function (based on the object's values).

Hope that makes sense,

Cheers,
Jon.
-- 
http://mail.python.org/mailman/listinfo/python-list
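For reference, here is a minimal sketch of the kind of decorator described above, using type() as mentioned. The bodies of 'A', 'B', and the inject() helper are assumptions for illustration; only the names and the general approach come from the post:

    # Stand-in base classes; the real 'A' provides support functions
    # and exception handling.
    class A:
        def __init__(self, *args, **kwargs):
            # Remember the arguments the instance was __init__'d with.
            self.args = args
            self.kwargs = kwargs

    class B(A):
        pass

    def inject(base):
        def decorator(func):
            # Build a subclass of 'base' on the fly; its __call__
            # delegates to the decorated function, applied to the
            # values stored at __init__ time.
            return type(func.__name__, (base,), {
                "__call__": lambda self: func(*self.args, **self.kwargs),
            })
        return decorator

    @inject(B)
    def some_function(a, b):
        return a + b

    obj = some_function(1, 2)   # a 'B'-subclass instance holding (1, 2)
    print(obj())                # calling the instance runs the function -> 3

So the decorator returns the dynamically built class, instantiating it does the __init__ with the function arguments, and calling the instance evaluates the original function on the stored values -- the function name itself is never used again.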