Antoine Pitrou <pit...@free.fr> added the comment:

Here is a new patch fixing most of your comments. A couple of answers:

> I believe we can support arbitrary values here, subject to floating
> point rounding errors, by calling lock-with-timeout in a loop. I'm
> not sure whether that's a good idea, but it fits better with Python's
> arbitrary-precision ints.

I'm a bit wary of this, because we can't test it properly.

> > - task_handler.join(1e100)
> > + task_handler.join()
>
> Why is this change here? (Mostly curiosity)

Because 1e100 would raise OverflowError :)

> > + if (timeout > PY_TIMEOUT_MAX) {
>
> I believe it's possible for this comparison to return false, but for
> the conversion to PY_TIMEOUT_T to still overflow.

Ok, I've replaced it with the following, which should be ok:

    if (timeout >= (double) PY_TIMEOUT_MAX) [...]

> > + milliseconds = (microseconds + 999) / 1000;
>
> Can (microseconds + 999) overflow?

Indeed it can (I sincerely hoped that nobody would care...). I've
replaced it with what might be a more appropriate construct. Please
note that behaviour is undefined when microseconds exceeds the max
timeout, though (this is the low-level C API).

----------
Added file: http://bugs.python.org/file16609/timedlock5.patch

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue7316>
_______________________________________
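The reviewer's loop suggestion above can be sketched as follows. This is not the patch's code; the `fakelock` type, `acquire_bounded`, and the `CHUNK_MAX` cap are hypothetical stand-ins for a real timed-acquire primitive, used only to show how an arbitrarily large total timeout could be split into bounded waits:

```c
#include <assert.h>
#include <stdint.h>

#define CHUNK_MAX 1000000  /* hypothetical per-call timeout cap, in microseconds */

/* Fake lock for illustration: it "becomes available" once a preset
   number of microseconds have been spent waiting on it. */
typedef struct {
    uint64_t ready_after_us;
    uint64_t waited_us;
} fakelock;

/* Hypothetical bounded wait: waits up to `us` microseconds
   (us <= CHUNK_MAX) and returns 1 if the lock was acquired. */
static int
acquire_bounded(fakelock *l, uint64_t us)
{
    l->waited_us += us;
    return l->waited_us >= l->ready_after_us;
}

/* The reviewer's idea: support a total timeout larger than the
   C-level maximum by calling the bounded primitive in a loop,
   subtracting each chunk from the remaining budget. */
static int
acquire_with_long_timeout(fakelock *l, uint64_t total_us)
{
    while (total_us > CHUNK_MAX) {
        if (acquire_bounded(l, CHUNK_MAX))
            return 1;
        total_us -= CHUNK_MAX;
    }
    return acquire_bounded(l, total_us);
}
```

As the comment answers above, the drawback is testability: the looping path only triggers for very long timeouts, which a test suite cannot wait out.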
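The subtlety behind the `>=` fix is worth spelling out. When the integer maximum is converted to double it is rounded, possibly upward; a double equal to that rounded value passes a strict `>` check yet still overflows on conversion back to the integer type. A minimal sketch, using `INT64_MAX` as an illustrative stand-in for `PY_TIMEOUT_MAX`:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for PY_TIMEOUT_MAX (the real constant depends
   on the platform's PY_TIMEOUT_T). */
#define TIMEOUT_MAX INT64_MAX

/* Returns 1 if `timeout` can be safely converted to the integer
   timeout type.  (double) TIMEOUT_MAX rounds up to 2^63 here, a value
   that a strict `timeout > TIMEOUT_MAX` comparison would accept even
   though converting it to int64_t overflows.  Comparing with >=
   against the rounded constant rejects that boundary case. */
static int
timeout_fits(double timeout)
{
    return !(timeout >= (double) TIMEOUT_MAX);
}
```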
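The patch's replacement construct is not quoted in the message, but the standard overflow-safe way to round microseconds up to milliseconds is to split the division so no addition can wrap. A sketch with an illustrative helper name (the real code uses the patch's own variables):

```c
#include <assert.h>
#include <stdint.h>

/* Round a microsecond count up to milliseconds without overflow.
   (us + 999) / 1000 wraps around when `us` is within 999 of the
   type's maximum; dividing first and adding the rounding correction
   afterwards avoids the addition entirely. */
static uint64_t
us_to_ms_ceil(uint64_t us)
{
    return us / 1000 + (us % 1000 != 0);
}
```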
A couple of answers: > I believe we can support arbitrary values here, subject to floating > point rounding errors, by calling lock-with-timeout in a loop. I'm not > sure whether that's a good idea, but it fits better with python's > arbitrary-precision ints. I'm a bit wary of this, because we can't test it properly. > - task_handler.join(1e100) > + task_handler.join() > > Why is this change here? (Mostly curiosity) Because 1e100 would raise OverflowError :) > + if (timeout > PY_TIMEOUT_MAX) { > > I believe it's possible for this comparison to return false, but for > the conversion to PY_TIMEOUT_T to still overflow: Ok, I've replaced it with the following which should be ok: if (timeout >= (double) PY_TIMEOUT_MAX) [...] > + milliseconds = (microseconds + 999) / 1000; > > Can (microseconds+999) overflow? Indeed it can (I sincerely hoped that nobody would care...). I've replaced it with what might be a more appropriate construct. Please note that behaviour is undefined when microseconds exceeds the max timeout, though (this is the low-level C API). ---------- Added file: http://bugs.python.org/file16609/timedlock5.patch _______________________________________ Python tracker <rep...@bugs.python.org> <http://bugs.python.org/issue7316> _______________________________________ _______________________________________________ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com