The problem isn’t with multiprocessing.Queue. The problem is that
socketserver.ForkingMixIn calls os._exit() after the request is handled by the
forked process. Which I did not know until I dug into the internals of
socketserver.ForkingMixIn.
This causes the Finalize() hooks inside of multiprocessing.Queue to never run,
so the queue’s buffered data is never flushed.
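To illustrate (my own POSIX-only sketch, not code from the thread): os._exit() terminates the process without running atexit handlers (which is how multiprocessing’s Finalize hooks fire) and without flushing buffered I/O, which is exactly what strands data in the queue’s feeder pipe.

```python
import os

def write_then_hard_exit():
    """Fork a child that writes through a buffered file object and then
    calls os._exit(); show that the buffered data is lost."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                  # child
        os.close(r)
        f = os.fdopen(w, "w")     # block-buffered: a pipe is not a tty
        f.write("never flushed")  # sits in the userspace buffer
        os._exit(0)               # hard exit: no flush, no atexit hooks
    os.close(w)                   # parent
    os.waitpid(pid, 0)
    with os.fdopen(r) as rf:
        return rf.read()          # EOF immediately; the write was lost

print(repr(write_then_hard_exit()))  # -> ''
```

A normal sys.exit() in the child would flush the buffer and the parent would read the string back.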
On Sun, Oct 18, 2015 at 2:46 AM, James DeVincentis
wrote:
>
> I see, looks like I’ll have to use Queue.close()
>
> Didn’t think it would be necessary since I was assuming it would be
garbage collected. Sigh. Bug, fixed.
I'm not really following what the issue is here -- it sounds like it runs
pre
I’ve managed to partially confirm this theory. I switched the HTTPServer to use
the ThreadingMixIn instead of the ForkingMixIn. This causes the queue to behave
correctly.
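For reference, the switch can look like this. A minimal sketch with a placeholder BaseHTTPRequestHandler; the real handler lives in the linked cifpy3 repo:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    # Each request runs in a thread of the same process, so a
    # multiprocessing.Queue created beforehand keeps working: there is
    # no fork and no os._exit() to kill its feeder thread.
    daemon_threads = True

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

server = ThreadedHTTPServer(("127.0.0.1", 0), Handler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
```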
The queue is created before anything is forked, then all of the processes that
support the HTTPServer are forked out. It a
I see, looks like I’ll have to use Queue.close()
Didn’t think it would be necessary since I was assuming it would be garbage
collected. Sigh. Bug, fixed.
Thanks everyone!
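For anyone hitting the same thing, a minimal sketch of the fix (my reconstruction, not the actual cifpy3 code): flush the queue explicitly before the hard exit that ForkingMixIn performs.

```python
from multiprocessing import Process, Queue
import os

def forked_handler(q):
    q.put("response")
    # Explicitly flush: close() stops the feeder thread from accepting new
    # items, join_thread() waits until everything buffered has been written
    # to the pipe. Without these, os._exit() can drop the item, because it
    # skips the Finalize hooks that would normally do this flush.
    q.close()
    q.join_thread()
    os._exit(0)  # what socketserver.ForkingMixIn does after each request

q = Queue()
p = Process(target=forked_handler, args=(q,))
p.start()
p.join()
result = q.get(timeout=5)  # "response" arrives because we flushed first
```

This sketch assumes the default fork start method on a POSIX system.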
> On Oct 18, 2015, at 3:41 AM, James DeVincentis wrote:
>
> I get why it needs to be called, but this looks like a serious
I get why it needs to be called, but this looks like a serious annoyance.
Now I need help figuring this out.
socketserver.ForkingMixIn needs to use os._exit() so that the process never
makes it past handling the request. However, if there is a thread running
inside that process that manages a multiprocessing.Queue, os._exit() kills it
before it can flush.
Seems I found the cause. os._exit() is used in ForkingMixIn for socketserver
and its child classes.
Since os._exit() doesn’t flush buffers or clean anything up (hence not running
the Finalize hooks that multiprocessing.Queue uses to make sure data gets
flushed), this breaks multiprocessing.Queue.
So, whatever is causing this is a bit deeper in the multiprocessing.Queue
class. I tried using a non-blocking multiprocessing.Queue.get() by setting the
first parameter to False and then catching the queue.Empty exception. For some
reason even though there are objects in the queue (as evidenced
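The pattern in question looks roughly like this (a sketch, not the thread’s actual code). The race is that put() only hands the object to a background feeder thread, so a non-blocking get() can miss an item that is still in flight:

```python
import queue
from multiprocessing import Queue

q = Queue()
q.put("item")

# q.get(False) can raise queue.Empty even though a put() already happened:
# the feeder thread flushes the object through the underlying pipe
# asynchronously, so the item may not be readable yet.
try:
    obj = q.get(False)            # non-blocking: may not see the item yet
except queue.Empty:
    obj = q.get(True, timeout=5)  # blocks until the feeder has flushed
```

Retrying with a timeout, as above, is the usual way to tolerate the flush delay.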
Looking into it, I seem to have found a race condition where a
multiprocessing.Queue.get() can hang waiting for an object even when there are
objects in the Queue, if the objects were placed into the queue by a forked
process that then exits quickly.
I don’t know how to track this down any
I take that back. It’s not entirely fixed.
Something else strange is going on here. More debugging needed.
> On Oct 15, 2015, at 6:36 PM, James DeVincentis wrote:
>
> I think I tracked this down and resolved it.
>
> It appears taking an object from a multiprocessing.Queue and placing it into
I think I tracked this down and resolved it.
It appears taking an object from a multiprocessing.Queue and placing it into a
queue.Queue is a no-no even if the queue.Queue isn’t shared across processes.
I have a series of workers (multiprocessing) that share a multiprocessing.Queue
across all process
On Thu, Oct 15, 2015 at 4:02 PM, James DeVincentis
wrote:
>
> Anyone have any ideas? I feel like this could be a bug with the garbage
collector across multiprocessing.
I'll second MRAB's response from yesterday: could it just be reusing space
that it has recently freed?
As a debugging measure, w
Anyone have any ideas? I feel like this could be a bug with the garbage
collector across multiprocessing.
From: James DeVincentis [mailto:ad...@hexhost.net]
Sent: Wednesday, October 14, 2015 12:41 PM
To: 'python-list@python.org'
Subject: Problem with copy.deepcopy and multiprocessing.Queue
On 2015-10-14 18:41, James DeVincentis wrote:
I’ve got a bit of a problem with copy.deepcopy and using
multiprocessing.Queue.
I have an HTTPAPI that gets exposed to add objects to a
multiprocessing.Queue. Source code here:
https://github.com/jmdevince/cifpy3/blob/master/lib/cif/api/handler.py#L28