On Wed, 22 Dec 2010 13:53:20 -0800, Carl Banks wrote:

> On Dec 22, 8:52 am, kj <no.em...@please.post> wrote:
>> In <mailman.65.1292517591.6505.python-l...@python.org> Robert Kern
>> <robert.k...@gmail.com> writes:
>>
>> >Obfuscating the location that an exception gets raised prevents a lot
>> >of debugging...
>>
>> The Python interpreter does a lot of that "obfuscation" already, and I
>> find the resulting tracebacks more useful for it.
>>
>> An error message is only useful to a given audience if that audience
>> can use the information in the message to modify what they are doing
>> to avoid the error.
>
> So when the audience files a bug report it's not useful for them to
> include the whole traceback?
Well, given the type of error KJ has been discussing, no, it isn't
useful.

Fault: function raises documented exception when passed input that is
documented as being invalid

What steps will reproduce the problem?
1. call the function with invalid input
2. read the exception that is raised
3. note that it is the same exception as documented

What is the expected output? What do you see instead?
Expected somebody to hit me on the back of the head and tell me not to
call the function with invalid input. Instead I just got an exception.

You seem to have completely missed that there will be no bug report,
because this isn't a bug. (Or if it is a bug, the bug is elsewhere,
external to the function that raises the exception.) It is part of the
promised API. The fact that the exception is generated deep down some
chain of function calls is irrelevant.

The analogy is this: imagine a function that delegates processing of
the return result to different subroutines:

def func(arg):
    if arg > 0:
        return _inner1(arg)
    else:
        return _inner2(arg)

This is entirely irrelevant to the caller. When they receive the return
result from calling func(), they have no way of knowing where the
result came from, and wouldn't care even if they could. Return results
hide information about where the result was calculated, as they should.
Why shouldn't deliberate, explicit, documented exceptions be treated
the same way?

Tracebacks expose the implementation details of where the exception was
generated. This is the right behaviour if the exception is unexpected
-- a bug internal to func -- since you need knowledge of the
implementation of func in order to fix the unexpected exception. So far
so good: we accept that Python's behaviour under these circumstances is
correct.

But it is not the right behaviour when the exception is expected, e.g.
an exception explicitly raised in response to an invalid argument. In
that case, the traceback exposes internal details of no possible use to
the caller. What does the caller care if func() delegates (say) input
checking to a subroutine? The subroutine is an irrelevant
implementation detail. The exception is promised output of the
function, just as much as if it were a return value.

Consider the principle that exceptions should be dealt with as close as
possible to the actual source of the problem:

>>> f('good input')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
  File "<stdin>", line 2, in g
  File "<stdin>", line 2, in h
  File "<stdin>", line 2, in i
  File "<stdin>", line 2, in j
  File "<stdin>", line 2, in k   <=== error occurs here, and shown here
ValueError

But now consider the scenario where the error is not internal to f, but
external. The deeper down the stack trace you go, the further away from
the source of the error you get. The stack trace now obscures the
source of the error, rather than illuminating it:

>>> f('bad input')   <=== error occurs here
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
  File "<stdin>", line 2, in g
  File "<stdin>", line 2, in h
  File "<stdin>", line 2, in i
  File "<stdin>", line 2, in j
  File "<stdin>", line 2, in k   <=== far from the source of the error
ValueError

There's no point in inspecting function k for a bug when the problem
has nothing to do with k. The problem is that the input fails to match
the pre-conditions for f.
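For concreteness, here is a minimal sketch of the sort of call chain
that produces the second traceback above. The bodies and the particular
check are invented purely for illustration; all that matters is the
shape of the chain, with f delegating the actual input check to k,
several calls down:

def k(arg):
    # The documented check happens all the way down here.
    if arg == 'bad input':
        raise ValueError('argument does not meet the pre-conditions of f')
    return arg

def j(arg):
    return k(arg)

def i(arg):
    return j(arg)

def h(arg):
    return i(arg)

def g(arg):
    return h(arg)

def f(arg):
    # Public entry point: the caller neither knows nor cares that the
    # input check is delegated to k.
    return g(arg)

Call f('bad input') and the deepest frame in the traceback is k, even
though the only mistake anywhere is the caller's.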
From the perspective of the caller, the error has nothing to do with k,
k is a meaningless implementation detail, and the source of the error
is the mismatch between the input and what f expects. And so, by the
principle of dealing with exceptions as close as possible to the source
of the error, we should want this traceback instead:

>>> f('bad input')   <=== error occurs here
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f   <=== matches where the error occurs
ValueError

In the absence of any practical way for function f to know whether an
arbitrary exception in a subroutine is a bug or not, the least-worst
decision is Python's current behaviour: take the conservative,
risk-averse path, assume the worst, treat the exception as a bug in the
subroutine, and expose the entire stack trace.

But, I suggest, we can do better, using the usual Python strategy of
implementing sensible default behaviour while allowing objects to
customize themselves. Objects can already declare themselves to be
instances of some other class, or manipulate what names are reported by
dir(). Why shouldn't a function deliberately and explicitly take
ownership of an exception raised by a subroutine?

There should be a mechanism for Python functions to distinguish between
unexpected exceptions (commonly known as "bugs"), which should be
reported as coming from wherever they come from, and documented,
expected exceptions, which should be reported as coming from the
function regardless of how deep the function call stack really is.
(A postscript below sketches the closest approximation available
today, and why it falls short.)

-- 
Steven
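P.S. For what it's worth, a function can already take rough ownership
of a documented exception by catching it and re-raising it from its own
frame. A sketch, not a serious proposal (the names f and g and the
choice of ValueError are only for illustration):

def g(arg):
    # Subroutine that performs the documented input check.
    if arg == 'bad input':
        raise ValueError('argument does not meet the pre-conditions of f')
    return arg.upper()

def f(arg):
    try:
        return g(arg)
    except ValueError as err:
        # Re-raise from f's own frame, so the traceback ends at f
        # instead of exposing g as an implementation detail.
        # (In Python 3 the original exception would still show up as
        # chained context.)
        raise ValueError(str(err))

The obvious weakness is that f cannot tell a documented ValueError from
an accidental one raised by a bug inside g, which is exactly why a
dedicated mechanism would be better.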