On Sat, Jul 06, 2019 at 04:54:42PM +1000, Chris Angelico wrote:
> But if I comment out the signal.signal line, there seem to be no ill
> effects. I suspect that what you're seeing here is the multiprocessing
> module managing its own subprocesses, telling some of them to shut
> down. I added a pri
On Sat, Jul 6, 2019 at 12:13 AM José María Mateos wrote:
>
> Hi,
>
> This is a minimal proof of concept for something that has been bugging me for
> a few days:
>
> ```
> $ cat signal_multiprocessing_poc.py
>
> import random
> import multiprocessing
> import signal
> import time
>
> def signal_ha
> ```
José María Mateos writes:
> This is a minimal proof of concept for something that has been bugging me for
> a few days:
>
> So basically I have some subprocesses that don't do anything, just sleep for
> a few milliseconds, and I capture SIGTERM signals. I don't expect
> ...
> Running round
Hi,
This is a minimal proof of concept for something that has been bugging me for a
few days:
```
$ cat signal_multiprocessing_poc.py
import random
import multiprocessing
import signal
import time
def signal_handler(signum, frame):
raise Exception(f"Unexpected signal {signum}!")
def proc
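The preview is cut off here. A minimal reconstruction of this kind of proof of concept (the worker body, pool size, and loop count are guesses, not the original code) might look like:

```
import random
import multiprocessing
import signal
import time


def signal_handler(signum, frame):
    raise Exception(f"Unexpected signal {signum}!")


def proc(_):
    # Workers do nothing useful: just sleep for a few milliseconds.
    time.sleep(random.uniform(0.001, 0.01))


if __name__ == "__main__":
    signal.signal(signal.SIGTERM, signal_handler)
    for i in range(100):
        with multiprocessing.Pool(processes=4) as pool:
            pool.map(proc, range(20))
        print(f"Running round {i}")
```

Tearing each pool down sends SIGTERM to its workers, which fits Chris's observation that commenting out the signal.signal line makes the symptom disappear.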
On Sat, Nov 23, 2013 at 3:38 AM, John Ladasky wrote:
> On Thursday, November 21, 2013 8:24:05 PM UTC-8, Chris Angelico wrote:
>
>> Oh, that part's easy. Let's leave the multiprocessing module out of it
>> for the moment; imagine you spin up two completely separate instances
>> of Python. Create so
On Thursday, November 21, 2013 8:24:05 PM UTC-8, Chris Angelico wrote:
> Oh, that part's easy. Let's leave the multiprocessing module out of it
> for the moment; imagine you spin up two completely separate instances
> of Python. Create some object in one of them; now, transfer it to the
> other. H
On 22/11/2013 03:57, John Ladasky wrote:
...Richard submits his "hack" (his description) to Python 3.4 which pickles and
passes the string. When time permits, I'll try it out. Or maybe I'll wait, since Python
3.4.0 is still in alpha.
FTR beta 1 is due this Saturday 24/11/2013.
On Fri, Nov 22, 2013 at 2:57 PM, John Ladasky wrote:
> or, for that matter, why data needs to be pickled to pass it between
> processes.
Oh, that part's easy. Let's leave the multiprocessing module out of it
for the moment; imagine you spin up two completely separate instances
of Python. Create
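A small sketch of the point (the payload is made up, not anything from the thread): each process runs its own interpreter with its own memory, so an object can only cross the boundary as bytes, which is what pickling provides.

```
import multiprocessing
import pickle


def child(conn):
    # The object arrives as a byte string and has to be rebuilt here;
    # the child never sees the parent's in-memory object.
    data = pickle.loads(conn.recv_bytes())
    print("child received:", data)
    conn.close()


if __name__ == "__main__":
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=child, args=(child_conn,))
    p.start()
    parent_conn.send_bytes(pickle.dumps({"weights": [1.0, 2.5, 4.0]}))
    p.join()
```

Connection.send()/recv() do the same pickling implicitly; the explicit dumps/loads just makes the serialization step visible.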
On Thursday, November 21, 2013 2:32:08 PM UTC-8, Ethan Furman wrote:
> Check out bugs.python.org. Search for multiprocessing and tracebacks to see
> if anything is already there; if not, create a new issue.
And on Thursday, November 21, 2013 2:37:13 PM UTC-8, Terry Reedy wrote:
> 1. Use 3.3.3
On 11/21/2013 12:01 PM, John Ladasky wrote:
This is a case where you need to dig into the code (or maybe docs) a bit
File ".../evaluate.py", line 81, in evaluate
> result = pool.map(evaluate, bundles) File
"/usr/lib/python3.3/multiprocessing/pool.py", line 228, in map
> return self._map_
On 11/21/2013 01:49 PM, John Ladasky wrote:
So now, for anyone who is still reading this: is it your
opinion that the traceback that I obtained through
multiprocessing.pool._map_async().get() SHOULD have allowed
me to see what the ultimate cause of the exception was?
It would certainly be ni
Followup:
I didn't need to go as far as Chris Angelico's second suggestion. I haven't
looked at certain parts of my own code for a while, but it turns out that I
wrote it REASONABLY logically...
My evaluate() calls another function through pool.map_async() -- _evaluate(),
which actually proce
On Thursday, November 21, 2013 12:53:07 PM UTC-8, Chris Angelico wrote:
> What you could try is
Suggestion 1:
> printing out the __cause__ and __context__ of
> the exception, to see if there's anything useful in them;
Suggestion 2:
> if there's
> nothing, the next thing to try would be some
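Illustrating suggestion 1 in isolation (the worker and inputs below are made up; only the idea of inspecting __cause__ and __context__ comes from the thread):

```
import multiprocessing


def evaluate(x):
    return 1 / x          # fails for x == 0


if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=2)
    try:
        pool.map(evaluate, [1, 0, 2])
    except Exception as exc:
        # The chained exceptions often carry the detail that the surface
        # traceback hides.
        print("caught:     ", repr(exc))
        print("__cause__:  ", repr(exc.__cause__))
        print("__context__:", repr(exc.__context__))
    finally:
        pool.close()
        pool.join()
```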
On Fri, Nov 22, 2013 at 5:25 AM, John Ladasky wrote:
> On Thursday, November 21, 2013 9:24:33 AM UTC-8, Chris Angelico wrote:
>
>> Hmm. This looks like a possible need for the 'raise from' syntax.
>
> Thank you, Chris, that made me feel like a REAL Python programmer -- I just
> did some reading,
On Fri, Nov 22, 2013 at 4:01 AM, John Ladasky wrote:
> Here is the end of the traceback, starting with the last line of my code:
> "result = pool.map(evaluate, bundles)". After that, I'm into Python itself.
>
> File ".../evaluate.py", line 81, in evaluate
> result = pool.map(evaluate, bund
On Thursday, November 21, 2013 9:24:33 AM UTC-8, Chris Angelico wrote:
> Hmm. This looks like a possible need for the 'raise from' syntax.
Thank you, Chris, that made me feel like a REAL Python programmer -- I just did
some reading, and the "raise from" feature was not implemented until Python
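For reference, a generic example of the syntax being discussed (not code from the thread):

```
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        # 'raise ... from ...' records the original error as __cause__,
        # so both tracebacks are shown when the new exception escapes.
        raise RuntimeError("could not load config from " + repr(path)) from exc
```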
Hi folks,
Somewhat over a year ago, I struggled with implementing a routine using
multiprocessing.Pool and numpy. I eventually succeeded, but I remember finding
it very hard to debug. Now I have managed to provoke an error from that
routine again, and once again, I'm struggling.
Here is the
>
> I get this exception when I run the first program:
>
> Exception in thread Thread-1:
> Traceback (most recent call last):
> File "/usr/lib/python3.1/threading.py", line 516, in _bootstrap_inner
> self.run()
> File "/usr/lib/python3.1/threading.py", line 469, in run
> self._target(*s
2011/9/11 蓝色基因 :
> This is my first touch on the multiprocessing module, and I admit not
> having a deep understanding of parallel programming, forgive me if
> there's any obvious error. This is my test code:
>
> # deadlock.py
>
> import multiprocessing
>
> class MPTask:
>     def __init__(self)
This is my first touch on the multiprocessing module, and I admit not
having a deep understanding of parallel programming, forgive me if
there's any obvious error. This is my test code:
# deadlock.py

import multiprocessing

class MPTask:
    def __init__(self):
        self._tseq = ran
On Sun, 2010-01-10 at 14:45 -0500, Adam Tauno Williams wrote:
> I have a Python multiprocessing application where a master process
> starts server sub-processes and communicates with them via Pipes; that
> works very well. But one of the subprocesses, in turn, starts a
> collection of HTTPServer
I have a Python multiprocessing application where a master process
starts server sub-processes and communicates with them via Pipes; that
works very well. But one of the subprocesses, in turn, starts a
collection of HTTPServer 'workers' (almost exactly as demonstrated in
the docs). This works pe
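A bare-bones sketch of the master/Pipe arrangement described above (all names invented; it omits the HTTPServer layer where the actual problem lives):

```
import multiprocessing


def server_worker(conn):
    # Serve requests from the master until it says to stop.
    while True:
        msg = conn.recv()
        if msg == "shutdown":
            break
        conn.send(("ok", msg))
    conn.close()


if __name__ == "__main__":
    master_conn, worker_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=server_worker, args=(worker_conn,))
    p.start()
    master_conn.send("ping")
    print(master_conn.recv())    # ('ok', 'ping')
    master_conn.send("shutdown")
    p.join()
```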
your help, I'm new to the multiprocessing module and this
was very helpful!
On Jul 17, 4:26 am, Piet van Oostrum wrote:
> There is still something not clear in your description.
>
> >m> I'm using multiprocessing to spawn several subprocesses, each of which
> >m>
There is still something not clear in your description.
>m> I'm using multiprocessing to spawn several subprocesses, each of which
>m> uses a very large data structure (making it impractical to pass it via
>m> pipes / pickling). I need to allocate this structure once
I think Diez' example show this work automatically in Unix. In my case
I use Windows. I use the multiprocessing.Array to share data in shared
memory. multiprocessing.Array has a limitation that it can only
reference simple C data types, not Python objects though.
Wai Yip Tung
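A short sketch of the multiprocessing.Array approach mentioned above (the typecode and values are arbitrary): the buffer lives in shared memory, so child writes are visible to the parent, but only simple C types fit.

```
import multiprocessing


def scale(shared, factor):
    # Writes go straight into the shared-memory buffer.
    with shared.get_lock():
        for i in range(len(shared)):
            shared[i] *= factor


if __name__ == "__main__":
    # 'd' is the C double typecode; arbitrary Python objects cannot be
    # stored in an Array, which is the limitation noted above.
    shared = multiprocessing.Array('d', [1.0, 2.0, 3.0])
    p = multiprocessing.Process(target=scale, args=(shared, 10.0))
    p.start()
    p.join()
    print(shared[:])             # [10.0, 20.0, 30.0]
```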
> mheavner (m) wrote:
>m> I realize that the Queue would be the best way of doing this, however
>m> that involves transferring the huge amount of data for each call - my
>m> hope was to transfer it once and have it remain in memory for the
>m> subprocess across run() calls.
Which huge amount
I realize that the Queue would be the best way of doing this, however
that involves transferring the huge amount of data for each call - my
hope was to transfer it once and have it remain in memory for the
subprocess across run() calls.
On Jul 16, 1:18 pm, Piet van Oostrum wrote:
> > mheavner
> mheavner (m) wrote:
>m> 'The process' refers to the subprocess. I could do as you say, load
>m> the data structure each time, but the problem is that takes a
>m> considerable amount of time compared to the the actual computation
>m> with the data it contains. I'm using these processes withi
On Jul 16, 9:18 am, mheavner wrote:
> On Jul 16, 8:39 am, Piet van Oostrum wrote:
>
> > >>>>> mheavner (m) wrote:
> > >m> I'm using multiprocessing to spawn several subprocesses, each of which
> > >m> uses a very large data
On Jul 16, 8:39 am, Piet van Oostrum wrote:
> >>>>> mheavner (m) wrote:
> >m> I'm using multiprocessing to spawn several subprocesses, each of which
> >m> uses a very large data structure (making it impractical to pass it via
> >m> pipes / pick
>>>>> mheavner (m) wrote:
>m> I'm using multiprocessing to spawn several subprocesses, each of which
>m> uses a very large data structure (making it impractical to pass it via
>m> pipes / pickling). I need to allocate this structure once when the
>m>
mheavner schrieb:
I'm using multiprocessing to spawn several subprocesses, each of which
uses a very large data structure (making it impractical to pass it via
pipes / pickling). I need to allocate this structure once when the
process is created and have it remain in memory for the durati
I'm using multiprocessing to spawn several subprocesses, each of which
uses a very large data structure (making it impractical to pass it via
pipes / pickling). I need to allocate this structure once when the
process is created and have it remain in memory for the duration of
the process. Th
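One common shape for this requirement, sketched with invented names: build the structure once inside the child and feed it small work items over a Queue, so the big data never has to cross a pipe.

```
import multiprocessing


def build_big_structure():
    # Stand-in for the expensive, memory-heavy load.
    return {i: i * i for i in range(100000)}


class Worker(multiprocessing.Process):
    def __init__(self, tasks, results):
        super().__init__()
        self.tasks = tasks
        self.results = results

    def run(self):
        big = build_big_structure()      # allocated once, lives as long as run()
        while True:
            key = self.tasks.get()
            if key is None:              # sentinel: time to exit
                break
            self.results.put((key, big[key]))


if __name__ == "__main__":
    tasks = multiprocessing.Queue()
    results = multiprocessing.Queue()
    w = Worker(tasks, results)
    w.start()
    for key in (3, 7, 42):
        tasks.put(key)
    tasks.put(None)
    for _ in range(3):
        print(results.get())
    w.join()
```

On Unix the fork start method also lets children inherit a module-level structure for free, which is what the earlier note about Diez's example working automatically on Unix refers to; the Queue pattern works on Windows too.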
On Tue, 16 Jun 2009 23:20:05 +0200
Piet van Oostrum wrote:
> > Matt (M) wrote:
>
> >M> Try replacing:
> >M> cmd = [ "ls /path/to/file/"+staname+"_info.pf" ]
> >M> with:
> >M> cmd = [ "ls", "/path/to/file/"+staname+"_info.pf" ]
>
> In addition I would like to remark that -- if the o
> Matt (M) wrote:
>M> Try replacing:
>M> cmd = [ "ls /path/to/file/"+staname+"_info.pf" ]
>M> with:
>M> cmd = [ "ls", "/path/to/file/"+staname+"_info.pf" ]
In addition I would like to remark that -- if the only thing you want to
do is to start up a new command with subprocess.Popen -
Thanks Matt - that worked.
Kind regards,
- Rob
On Jun 16, 2009, at 12:47 PM, Matt wrote:
Try replacing:
cmd = [ "ls /path/to/file/"+staname+"_info.pf" ]
with:
cmd = [ "ls", "/path/to/file/"+staname+"_info.pf" ]
Basically, the first is the conceptual equivalent of executing the
following
Try replacing:
cmd = [ "ls /path/to/file/"+staname+"_info.pf" ]
with:
cmd = [ "ls", "/path/to/file/"+staname+"_info.pf" ]
Basically, the first is the conceptual equivalent of executing the
following in BASH:
'ls /path/to/file/FOO_info.pf'
The second is this:
'ls' '/path/to/file/FOO_info.pf
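A compact version of the fix (the path and station name mirror the example above):

```
import subprocess

staname = "FOO"

# Wrong: a single list element is treated as the name of the program
# itself, so this looks for an executable literally called
# "ls /path/to/file/FOO_info.pf".
# cmd = ["ls /path/to/file/" + staname + "_info.pf"]

# Right: program and argument as separate list elements.
cmd = ["ls", "/path/to/file/" + staname + "_info.pf"]
subprocess.call(cmd)
```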
Hi All,
I am new to Python, and have a very specific task to accomplish. I
have a command line shell script that takes two arguments:
create_graphs.sh -v --sta=STANAME
where STANAME is a string 4 characters long.
create_graphs creates a series of graphs using Matlab (among other 3rd
party
On 6/02/2009 4:21 PM, Volodymyr Orlenko wrote:
In the patch I submitted, I simply check if the name of the supposed
module ends with ".exe". It works fine for my case, but maybe this is
too general. Is there a chance that a Python module would end in ".exe"?
IIRC, py2exe may create executables
On 05/02/2009 9:54 PM, James Mills wrote:
On Fri, Feb 6, 2009 at 3:21 PM, Volodymyr Orlenko wrote:
[...] Maybe there's another
way to fix the forking module?
I believe the best way to fix this is to fix the underlying
issue that Mark has pointed out (monkey-patching mp won't do).
On Fri, Feb 6, 2009 at 3:21 PM, Volodymyr Orlenko wrote:
> In the patch I submitted, I simply check if the name of the supposed module
> ends with ".exe". It works fine for my case, but maybe this is too general.
> Is there a chance that a Python module would end in ".exe"? If so, maybe we
> shoul
On 05/02/2009 8:26 PM, Mark Hammond wrote:
On 6/02/2009 2:50 PM, Mark Hammond wrote:
On 6/02/2009 11:37 AM, Volodya wrote:
Hi all,
I think I've found a small bug with multiprocessing package on
Windows.
I'd actually argue its a bug in pythonservice.exe - it should set
sys.argv[] to resembl
On 6/02/2009 2:50 PM, Mark Hammond wrote:
On 6/02/2009 11:37 AM, Volodya wrote:
Hi all,
I think I've found a small bug with multiprocessing package on
Windows.
I'd actually argue its a bug in pythonservice.exe - it should set
sys.argv[] to resemble a normal python process with argv[0] being t
On 6/02/2009 11:37 AM, Volodya wrote:
Hi all,
I think I've found a small bug with multiprocessing package on
Windows.
I'd actually argue its a bug in pythonservice.exe - it should set
sys.argv[] to resemble a normal python process with argv[0] being the
script. I'll fix it...
Cheers,
Mar
Hi all,
I think I've found a small bug with multiprocessing package on
Windows. If you try to start a multiprocessing.Process from a Python-
based Windows service, the child process will fail to run. When
running the parent process as a regular Python program, everything
works as expected.
I've t
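A rough sketch of the kind of workaround implied by this thread, with heavy caveats: it assumes the script can detect that argv[0] points at an .exe (the same check Volodymyr's patch makes) and repair it before multiprocessing needs it; whether that is enough inside a real pythonservice.exe service is exactly what is being debated here.

```
import multiprocessing
import os
import sys


def child():
    print("child pid:", os.getpid())


if __name__ == "__main__":
    # Under a service, sys.argv[0] is pythonservice.exe rather than this
    # script, so multiprocessing cannot re-import the main module in the
    # child process on Windows.
    if sys.argv and sys.argv[0].lower().endswith(".exe"):
        sys.argv[0] = os.path.abspath(__file__)
        multiprocessing.set_executable(
            os.path.join(sys.exec_prefix, "python.exe"))

    p = multiprocessing.Process(target=child)
    p.start()
    p.join()
```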
On Oct 10, 10:48 pm, nhwarriors wrote:
> On Oct 10, 10:52 pm, "Aaron \"Castironpi\" Brady" wrote:
> > On Oct 10, 3:32 pm, nhwarriors wrote:
>
> > > I am attempting to use the (new in 2.6) multiprocessing package to
> > > process 2
On Oct 10, 10:52 pm, "Aaron \"Castironpi\" Brady" wrote:
> On Oct 10, 3:32 pm, nhwarriors wrote:
>
> > I am attempting to use the (new in 2.6) multiprocessing package to
> > process 2 items in a large queue of items simultaneously. I'd like to
> > be able
On Oct 10, 3:32 pm, nhwarriors wrote:
> I am attempting to use the (new in 2.6) multiprocessing package to
> process 2 items in a large queue of items simultaneously. I'd like to
> be able to print to the screen the results of each item before
> starting the next one. I'm having
On Fri, Oct 10, 2008 at 4:32 PM, nhwarriors wrote:
> I am attempting to use the (new in 2.6) multiprocessing package to
> process 2 items in a large queue of items simultaneously. I'd like to
> be able to print to the screen the results of each item before
> starting the next on
I am attempting to use the (new in 2.6) multiprocessing package to
process 2 items in a large queue of items simultaneously. I'd like to
be able to print to the screen the results of each item before
starting the next one. I'm having trouble with this so far.
Here is some (useless) example code th
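One way to get that behaviour with a Pool of two workers, sketched with placeholder work (the item list and the work function are invented for the example):

```
import multiprocessing
import time


def work(item):
    time.sleep(0.1)              # stand-in for the real per-item processing
    return item, item * item


if __name__ == "__main__":
    items = range(10)            # stand-in for the large queue of items
    pool = multiprocessing.Pool(processes=2)
    # imap_unordered yields each result as soon as its worker finishes,
    # so it can be printed while later items are still being processed.
    for item, result in pool.imap_unordered(work, items):
        print("finished", item, "->", result)
    pool.close()
    pool.join()
```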