> On Apr 6, 2020, at 12:19 PM, David Raymond wrote:
>
> Attempting reply as much for my own understanding.
>
> Are you on Mac? I think this is the pertinent bit for you:
>
> Changed in version 3.8: On macOS, the spawn start method is now the default.
> The fork start method should be considered unsafe as it can lead to crashes of
> the subprocess. See bpo-33725.
>
> -----Original Message-----
> From: David Raymond
> Sent: Monday, April 6, 2020 4:19 PM
> To: python-list@python.org
> Subject: RE: Multiprocessing queue sharing and python3.8

= mp.Pool(initializer = pool_init, initargs = (mp_comm_queue,))
...
On Tue, 28 Mar 2017 15:38:38 -0400, Terry Reedy wrote:
> On 3/28/2017 2:51 PM, Frank Miles wrote:
>> I tried running a bit of example code from the py2.7 docs
>> (16.6.1.2. Exchanging objects between processes)
>> only to have it fail. The code is simply:
>> #
>> from multiprocessing import Process, Queue
>>
>> def f(q):
>>     q.put([42, None, 'hello'])
>>
>> if __name__ == '__main__':
>>     q = Queue()
>>     p = Process(target=f, args=(q,))
>>     p.start()
>>     print q.get()    # prints "[42, None, 'hello']"
>>     p.join()
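For reference, the same docs example in Python 3 syntax (only `print` changes), which is worth running as a sanity check; under spawn the process creation must stay inside the `__main__` guard exactly as shown:

```python
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())    # prints "[42, None, 'hello']"
    p.join()
```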
On Sat, May 9, 2015 at 12:31 AM, Michael Welle wrote:
>> As a general rule, queues need to have both ends operating
>> simultaneously, otherwise you're likely to have them blocking. In
>> theory, your code should all work with ridiculously low queue sizes;
>> the only cost will be concurrency (sin...
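That point is easy to demonstrate with a deliberately tiny queue (the names are mine, and I use threads to keep the sketch short; the blocking behavior is the same idea for multiprocessing queues): the producer only gets past `maxsize` items because the consumer is draining the other end at the same time.

```python
import queue
import threading

def producer(q, n):
    for i in range(n):
        q.put(i)          # blocks whenever the queue is full
    q.put(None)           # sentinel: tell the consumer we are done

def consumer(q, out):
    while True:
        item = q.get()
        if item is None:
            break
        out.append(item)

q = queue.Queue(maxsize=4)    # ridiculously low queue size, on purpose
out = []
t = threading.Thread(target=consumer, args=(q, out))
t.start()
producer(q, 100)              # fine despite maxsize=4: both ends are live
t.join()
print(len(out))               # 100
```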
On Fri, May 8, 2015 at 8:08 PM, Michael Welle wrote:
> Hello,
>
> what's wrong with [0]? As num_tasks gets higher proc.join() seems to
> block forever. First I thought the magical frontier is around 32k tasks,
> but then it seemed to work with 40k tasks. Now I'm stuck around 7k
> tasks. I think I...
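One well-documented cause of a `join()` that blocks "forever" past some task count (see "Joining processes that use queues" in the multiprocessing programming guidelines): a child that has put items on a queue will not terminate until its feeder thread has flushed them, so joining before draining the queue deadlocks once the underlying pipe fills. The threshold depends on the OS pipe buffer, which would explain the inconsistent 7k/32k/40k limits. A sketch of the safe ordering (names are mine):

```python
from multiprocessing import Process, Queue

def fill_queue(q, n):
    for i in range(n):
        q.put(i)

if __name__ == '__main__':
    q = Queue()
    p = Process(target=fill_queue, args=(q, 50_000))
    p.start()
    # Calling p.join() here instead can block forever: the child's feeder
    # thread still has items to flush and the pipe buffer is full.
    results = [q.get() for _ in range(50_000)]   # drain first...
    p.join()                                     # ...then join
    print(len(results))  # 50000
```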
On Wed, Jan 14, 2015 at 2:16 PM, Chris Angelico wrote:
> And then you seek to run multiple workers. If my reading is correct,
> one of them (whichever one happens to get there first) will read the
> STOP marker and finish; the others will all be blocked, waiting for
> more work (which will never come).
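The usual fix for that is one STOP marker per worker, so every worker eventually reads its own sentinel rather than one worker consuming the single marker. A sketch of the pattern (hypothetical names, not the poster's actual code):

```python
from multiprocessing import Process, Queue

STOP = 'STOP'

def worker(task_q, result_q):
    # iter(get, STOP) loops until this worker reads its own sentinel.
    for t in iter(task_q.get, STOP):
        result_q.put(t * 2)

if __name__ == '__main__':
    task_q, result_q = Queue(), Queue()
    workers = [Process(target=worker, args=(task_q, result_q)) for _ in range(4)]
    for w in workers:
        w.start()
    for t in range(10):
        task_q.put(t)
    for _ in workers:          # one STOP per worker, not one in total
        task_q.put(STOP)
    results = sorted(result_q.get() for _ in range(10))   # drain, then join
    for w in workers:
        w.join()
    print(results)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```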
On Thu, Jan 15, 2015 at 8:55 AM, wrote:
> I am trying to run a series of scripts on the Amazon cloud, multiprocessing
> on the 32 cores of our AWS instance. The scripts run well, and the queuing
> seems to work BUT, although the processes run to completion, the script below
> that runs the qu...
Hi, thanks for the answer.
I thought about that, but the problem is that I found the problem in code
that *was* using the Queue between processes. This code for example fails
around 60% of the time on one of our linux machines (raising an Empty
exception):
from processing import Queue, Process
im...
On 15/09/2010 21:10, Bruno Oliveira wrote:
Hi list,
I recently found a bug in my company's code because of a strange
behavior using multiprocessing.Queue. The following code snippet:
from multiprocessing import Queue
queue = Queue()
queue.put('x')
print queue.get_nowait()
Fails with:
...
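That intermittent Empty is documented behavior rather than a bug: `put()` hands the object to a background feeder thread, so immediately after `put()` returns there can be a brief window in which `get_nowait()` still raises `queue.Empty`. A blocking get, optionally with a timeout, is the reliable form:

```python
from multiprocessing import Queue

q = Queue()
q.put('x')
# An immediate q.get_nowait() here can raise queue.Empty, because the
# feeder thread may not yet have flushed 'x' into the pipe.
print(q.get(timeout=1))  # prints: x
```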