Vinay Sajip <vinay_sa...@yahoo.co.uk> added the comment:

Please clarify exactly what you mean by "multiprocessing logger". Note that 
logging does not support logging to the same file from concurrent processes 
(threads *are* supported). See

http://docs.python.org/library/logging.html#logging-to-a-single-file-from-multiple-processes

for more information.
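For what it's worth, the usual workaround (and the one later added to the stdlib as QueueHandler/QueueListener in Python 3.2) is to have every process send records over a queue to a single listener that alone owns the file handler. A minimal sketch, assuming those stdlib classes are available; the logger name, file name, and worker layout are just illustrative:

```python
import logging
import logging.handlers
import multiprocessing

def worker(log_queue):
    # Each process attaches only a QueueHandler; no process writes
    # the log file directly, so records cannot interleave on disk.
    logger = logging.getLogger("worker")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.handlers.QueueHandler(log_queue))
    logger.info("hello from %s", multiprocessing.current_process().name)

def main():
    log_queue = multiprocessing.Queue()
    # A single listener in the parent process owns the file handler,
    # so only one process ever touches the file.
    listener = logging.handlers.QueueListener(
        log_queue, logging.FileHandler("combined.log"))
    listener.start()
    procs = [multiprocessing.Process(target=worker, args=(log_queue,))
             for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    listener.stop()
```

Call main() under an `if __name__ == "__main__"` guard; "combined.log" is a placeholder path.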

Also, I don't believe your fix is appropriate for the core logging module, and 
it's not clear to me why a lock failure would occur if the disk was full. It 
might be that way on Solaris (I don't have access to a Solaris box), but not in 
general. In fact, this appears from your stack trace to be a problem in some 
custom handler you are using (defined in the file cloghandler.py on your 
system).

In any event, if you believe you can recover from the error, the right thing to 
do is to subclass the file handler you are using and override its handleError 
method to attempt recovery.
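Something along these lines, say (an untested sketch: the class name and the fall-back-to-stderr recovery strategy are just for illustration, and you'd subclass whichever handler you actually use):

```python
import logging
import logging.handlers
import sys

class RecoveringFileHandler(logging.handlers.RotatingFileHandler):
    """File handler that falls back to stderr when emit() fails."""

    def handleError(self, record):
        # Called by the logging machinery when emit() raises (e.g. on
        # a full disk). Instead of the default behaviour, try to
        # salvage the message by writing it to stderr.
        try:
            sys.stderr.write(self.format(record) + "\n")
        except Exception:
            super().handleError(record)  # last resort: default behaviour
```

You would then install this handler in place of the one you use now; everything else stays the same.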

Did you post this problem on comp.lang.python? There are bound to be other 
Solaris users there who may be able to reproduce your problem and/or give you 
more advice about it.

Closing, as this is not a logging bug AFAICT.

----------
assignee:  -> vinay.sajip
resolution:  -> invalid
stage: test needed -> 
status: open -> closed

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue7664>
_______________________________________