> On 6 Mar 2022, at 07:19, [email protected] wrote:
>
> For reference, this request comes from running Dask[1] jobs. Dask handles
> retrying and tracking tasks across machines, but if you're dealing with a
> batch of inputs that reliably kills a worker it is really hard to debug,
> more so if it only happens ~12 hours into your job. At certain scales it is
> quite hard to log every processing event reliably, and the overhead may not
> be worth it for a 1-in-10,000,000 failure.
I think that a core dump will get you an answer now. With your example you
will have only one core dump to look at.

> On 6 Mar 2022, at 08:29, [email protected] wrote:
>
> If anyone is interested, I had a play around with this and came up with a
> pretty simple-ish implementation:
> https://github.com/orf/cpython/pull/1/files

I wonder if you should just raise a bug against Python for this and provide
your PR as the implementation.

Barry
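As a minimal sketch of the core-dump approach (not from the thread, and assuming a Unix worker where the `resource` module is available): you can raise the process's core-file limit and enable `faulthandler` so that a crash leaves both an OS core dump and a Python-level traceback, instead of logging every input. The function name `enable_crash_diagnostics` and the log path are illustrative, not anything Dask or CPython provides.

```python
# Hypothetical helper to run once at worker startup so a rare fatal
# crash leaves diagnostics behind. Unix-only (uses the resource module).
import faulthandler
import resource


def enable_crash_diagnostics(log_path="worker-crash.log"):
    # Raise the soft core-dump limit to the hard limit so the OS is
    # allowed to write a core file on SIGSEGV/SIGABRT etc.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
    # On a fatal signal, dump the Python traceback of all threads to a
    # file, since stderr may be lost when a remote worker dies.
    log = open(log_path, "w")
    faulthandler.enable(file=log, all_threads=True)
    return log


log = enable_crash_diagnostics()
print("faulthandler enabled:", faulthandler.is_enabled())
```

With this in place, the one batch out of ten million that kills the worker produces a single core dump and traceback file to inspect, rather than requiring per-event logging.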
_______________________________________________
Python-ideas mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://mail.python.org/archives/list/[email protected]/message/3FWB5HREIHJCCO23O3AZJPMLPI545FEC/
Code of Conduct: http://python.org/psf/codeofconduct/
