I've condensed the advice from this thread into the following:

import sys
import traceback

class ExceptionList(object):
    def __init__(self, msg, errors=None, *args):
        # 'errors' is a list of already-formatted traceback strings.
        self.errortb = errors or []
        super(ExceptionList, self).__init__(msg, *args)

    def __str__(self):
        """Return the message followed by the captured tracebacks."""
        tracebacks = '\n'.join(self.errortb)
        msg = super(ExceptionList, self).__str__()
        parts = (msg, "=" * 78, tracebacks)
        return '\n'.join(parts)

def capture_traceback(limit=None):
    """Format the exception currently being handled as a string."""
    tb = traceback.format_exception(limit=limit, *sys.exc_info())
    return ''.join(tb)

class SomeException(ExceptionList, Exception):
    pass

errors = []

for c in 'hello':
    try:
        int(c)
    except ValueError:
        errors.append(capture_traceback())

if errors:
    raise SomeException('Multiple exceptions encountered, '
                        'see the tracebacks below', errors)

This is like Aahz's method, but I wanted to include the whole traceback, and his
example only captures the exception's error message.  I changed Chris's example
to capture just the formatted text of the traceback rather than the traceback
itself.
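
To make the difference concrete, here's a rough illustration of what each
approach ends up storing (illustrative only, not anyone's actual code):

try:
    int('x')
except ValueError as e:
    as_message = str(e)             # just the error message, no stack
    as_object = sys.exc_info()[2]   # the traceback object (keeps frame references alive)
    as_text = ''.join(traceback.format_exception(*sys.exc_info()))  # formatted text, as above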

As for whether this is terribly strange or not useful --

Steven said
> That's a nice trick, but I'd really hate to see it in real life code. 
> Especially when each traceback was deep rather than shallow. Imagine 
> having fifty errors, each one of which was ten or twenty levels deep!

I don't want to get buried in overlong stack traces either, so I included the
optional limit parameter, which capture_traceback passes through to the
format_exception call.
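
For example, to cap each captured traceback at a handful of stack entries (the
number is arbitrary):

errors.append(capture_traceback(limit=5))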

> I also question how useful this would be in real life. Errors can cascade 
> in practice, and so you would very likely get spurious errors that were 
> caused by the first error. You see the same thing in doctests, e.g. if 
> you do this:


I think this has a use in cases like the example above -- sequentially
processing the elements of a list, where the processing of each element is
sufficiently 'independent' of the processing of the others that the errors
won't cascade.  That seems handy for debugging, as might be common when
developing a command line tool...  Or, as in my case, when 'validating' a
sequence of elements.
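
Here's a rough sketch of that kind of use, building on the code above
(validate_record and records are made-up names, not my real code):

def validate_record(record):
    # Hypothetical check -- raise if the record is malformed.
    if 'id' not in record:
        raise ValueError('record has no id: %r' % (record,))

records = [{'id': 1}, {'name': 'no id'}, {'id': 3}]
errors = []
for record in records:
    try:
        validate_record(record)
    except ValueError:
        errors.append(capture_traceback(limit=10))

if errors:
    raise SomeException('%d of %d records failed validation'
                        % (len(errors), len(records)), errors)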

I'd be curious whether others think this use is odd or a bad idea ...

Thanks to all for the free advice!
Ben

On May 7, 2010, at 7:24 PM, Chris Rebert wrote:

>> On May 6, 2010, at 10:56 PM, Chris Rebert wrote:
>>> On Thu, May 6, 2010 at 8:50 PM, Ben Cohen <nco...@ucsd.edu> wrote:
>>>> Is there a pythonic way to collect and display multiple exceptions at the 
>>>> same time?
>>>> 
>>>> For example let's say you're trying to validate the elements of a list and 
>>>> you'd like to validate as many of the elements as possible in one run and 
>>>> still report exception's raised while validating a failed element.
>>>> 
>>>> eg -- I'd like to do something like this:
>>>> 
>>>> errors = []
>>>> for item in data:
>>>>        try:
>>>>                process(item)
>>>>        except ValidationError as e:
>>>>                errors.append(e)
>>>> raise MultipleValidationErrors(*errors)
>>>> 
>>>> where if the raised MultipleValidationErrors exception goes uncaught the 
>>>> interpreter will print a nice traceback that includes the tracebacks of 
>>>> each raised ValidationError.  But I don't know how 
>>>> MultipleValidationErrors should be written ...
> <my implementation snipped>
> 
> On Fri, May 7, 2010 at 6:06 PM, Ben Cohen <nco...@ucsd.edu> wrote:
>> Many thanks for the excellent example!!  You rock!
>> 
>> Ben
> 
> However, I do agree with Steven that this approach to error handling
> is unusual. But I assume you have your reasons for wanting to do it
> this way.
> 
> Cheers,
> Chris
> --
> Python + Bioinformatics = Win
> http://blog.rebertia.com

