On 21 April 2015 at 16:53, Paulo da Silva
wrote:
> On 21-04-2015 11:26, Dave Angel wrote:
>> On 04/20/2015 10:14 PM, Paulo da Silva wrote:
>>> I have a program that generates about 100 relatively complex graphics and
>>> writes them to a PDF book.
>>> It takes a while!
>>> Is there any possibility of using multiprocessing to build the graphics
>>> and then use several calls to savefig(), i.e. some kind of graphics
>>> objects?
On 21-04-2015 03:14, Paulo da Silva wrote:
> I have a program that generates about 100 relatively complex graphics and
> writes them to a PDF book.
> It takes a while!
> Is there any possibility of using multiprocessing to build the graphics
> and then use several calls to savefig(), i.e. some kind of graphics
> objects?
On 04/21/2015 07:54 PM, Dennis Lee Bieber wrote:
On Tue, 21 Apr 2015 18:12:53 +0100, Paulo da Silva
declaimed the following:
Yes. fork will do that. I have just looked at it and it is the same as
unix fork (module os). I am thinking of launching several forks that
will produce .png images an
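A minimal sketch of that fork-per-image idea (the rendering body is a placeholder; the real code would build the matplotlib figure and call savefig() there): each forked child writes one .png and exits, and the parent reaps them all.

```python
import os

def render_png(index):
    # Placeholder for the real matplotlib work: build the figure here
    # and call fig.savefig("figure_%03d.png" % index) instead.
    with open("figure_%03d.png" % index, "wb") as f:
        f.write(b"rendered figure %d" % index)

pids = []
for i in range(4):              # one child per figure (or per core)
    pid = os.fork()
    if pid == 0:                # child: render one image, then exit hard
        render_png(i)
        os._exit(0)             # skip cleanup handlers inherited from the parent
    pids.append(pid)

for pid in pids:                # parent: wait for every child to finish
    os.waitpid(pid, 0)
```

Note that os._exit() rather than sys.exit() is the usual choice in a forked child, so the child does not run atexit hooks or flush buffers it inherited from the parent.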
On Tue, 21 Apr 2015 03:14:09 +0100, Paulo da Silva wrote:
> I have a program that generates about 100 relatively complex graphics and
> writes them to a PDF book.
> It takes a while!
> Is there any possibility of using multiprocessing to build the graphics
> and then use several calls to savefig(), i.e. some kind of graphics
> objects?
On 21-04-2015 16:58, Chris Angelico wrote:
> On Wed, Apr 22, 2015 at 1:53 AM, Paulo da Silva
> wrote:
>> Yes, I have 8 cores and the graphics calculations are all
>> independent. The problem I have is whether there is any way to generate
>> independent figures in matplotlib. The logic seems to be: build the
>> graphic and save it.
On Wed, Apr 22, 2015 at 1:53 AM, Paulo da Silva
wrote:
> Yes, I have 8 cores and the graphics calculations are all
> independent. The problem I have is whether there is any way to generate
> independent figures in matplotlib. The logic seems to be: build the
> graphic and save it. I was trying to find out whether there is a way to
> build graphic objects in parallel and, at the end, have them saved by
> the controller task.
build the
graphic and save it. I was trying to find out whether there is any way to
build graphic objects that can be built in parallel and, at the end, saved
by the controller task.
Maybe using fork instead of multiprocessing will do the job, but I still
haven't looked at fork in Python. Being it po
On 04/20/2015 10:14 PM, Paulo da Silva wrote:
I have a program that generates about 100 relatively complex graphics and
writes them to a PDF book.
It takes a while!
Is there any possibility of using multiprocessing to build the graphics
and then use several calls to savefig(), i.e. some kind of graphics
objects?
I have a program that generates about 100 relatively complex graphics and
writes them to a PDF book.
It takes a while!
Is there any possibility of using multiprocessing to build the graphics
and then use several calls to savefig(), i.e. some kind of graphics
objects?
Thanks for any help/comments.
On 30/01/15 23:25, Marko Rauhamaa wrote:
Sturla Molden :
Only a handful of POSIX functions are required to be "fork safe", i.e.
callable on each side of a fork without an exec.
That is a pretty surprising statement. Forking without an exec is a
routine way to do multiprocessing.
I understand
Sturla Molden :
> Only a handful of POSIX functions are required to be "fork safe", i.e.
> callable on each side of a fork without an exec.
That is a pretty surprising statement. Forking without an exec is a
routine way to do multiprocessing.
I understand there are things to consider, but all sy
Andres Riancho wrote:
> Spawn, and I took that from the multiprocessing 3 documentation, will
> create a new process without using fork().
> This means that no memory
> is shared between the MainProcess and the spawn'ed sub-process created
> by multiprocessing.
If you memory map a segment with
Skip Montanaro wrote:
> Can you explain what you see as the difference between "spawn" and "fork"
> in this context? Are you using Windows perhaps? I don't know anything
> obviously different between the two terms on Unix systems.
spawn is fork + exec.
Only a handful of POSIX functions are required to be "fork safe", i.e.
callable on each side of a fork without an exec.
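The difference is observable from Python 3.4 on, where multiprocessing exposes both start methods ("fork" is Unix-only): a forked child inherits a copy of the parent's memory, while a spawned child is a fresh interpreter that re-imports the module.

```python
import multiprocessing as mp

FLAG = []          # mutated in the parent after import

def child(q):
    # Under "fork" the child inherits the parent's mutated FLAG;
    # under "spawn" the module is imported afresh, so FLAG is empty.
    q.put(len(FLAG))

def run(method):
    ctx = mp.get_context(method)
    q = ctx.Queue()
    p = ctx.Process(target=child, args=(q,))
    p.start()
    seen = q.get()              # read before join to avoid queue deadlocks
    p.join()
    return seen

if __name__ == "__main__":
    FLAG.append("set in parent")
    fork_sees = run("fork")     # 1: child got a copy of parent memory
    spawn_sees = run("spawn")   # 0: fresh interpreter, fresh import
    print(fork_sees, spawn_sees)
```

This is also why spawn requires targets and arguments to be picklable and the main module to be importable, while fork imposes neither constraint.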
On Wed, Jan 28, 2015 at 3:06 PM, Skip Montanaro
wrote:
>
> On Wed, Jan 28, 2015 at 7:07 AM, Andres Riancho
> wrote:
>>
>> The feature I'm especially interested in is the ability to spawn
>> processes [1] instead of forking, which is not present in the 2.7
>> version of the module.
>
>
> Can you explain what you see as the difference between "spawn" and "fork"
> in this context?
On Wed, Jan 28, 2015 at 10:06 AM, Skip Montanaro
wrote:
> On Wed, Jan 28, 2015 at 7:07 AM, Andres Riancho
> wrote:
>> The feature I'm especially interested in is the ability to spawn
>> processes [1] instead of forking, which is not present in the 2.7
>> version of the module.
>
> Can you explain what you see as the difference between "spawn" and "fork"
> in this context?
On Wed, Jan 28, 2015 at 7:07 AM, Andres Riancho
wrote:
> The feature I'm especially interested in is the ability to spawn
> processes [1] instead of forking, which is not present in the 2.7
> version of the module.
>
Can you explain what you see as the difference between "spawn" and "fork"
in this context? Are you using Windows perhaps? I don't know anything
obviously different between the two terms on Unix systems.
List,
I've been searching around for a multiprocessing module backport from
3 to 2.7.x and the closest thing I've found was celery's billiard [0]
which seems to be a work in progress.
The feature I'm especially interested in is the ability to spawn
processes [1] instead of forking, which is not present in the 2.7
version of the module.
On 10/11/2013 10:53 AM, William Ray Wing wrote:
I'm running into a problem in the multiprocessing module.
My code is running four parallel processes which are doing network access completely
independently of each other (gathering data from different remote sources). In rare
circumstances, the code blows up when one of my processes
I'm running into a problem in the multiprocessing module.
My code is running four parallel processes which are doing network access
completely independently of each other (gathering data from different remote
sources). In rare circumstances, the code blows up when one of my processes
On 11 March 2013 14:57, Abhinav M Kulkarni wrote:
> Hi Jean,
>
> Below is the code where I am creating multiple processes:
>
> if __name__ == '__main__':
>     # List all files in the games directory
>     files = list_sgf_files()
>
>     # Read board configurations
>     (intermediateBoards, fina
Thanks,
Abhinav
On 03/11/2013 04:14 AM, Jean-Michel Pichavant wrote:
- Original Message -
Dear all,
I need some advice regarding use of the multiprocessing module.
Following is the scenario:
* I am running gradient descent to estimate parameters of a pairwise
grid CRF (or a grid based graphical model). There are 106 data
points. Each data point can be analyzed in parallel
On 03/11/2013 01:57 AM, Abhinav M Kulkarni wrote:
* My laptop has quad-core Intel i5 processor, so I thought using
multiprocessing module I can parallelize my code (basically
calculate gradient in parallel on multiple cores simultaneously).
* As a result I end up creating
- Original Message -
> Dear all,
> I need some advice regarding use of the multiprocessing module.
> Following is the scenario:
> * I am running gradient descent to estimate parameters of a pairwise
> grid CRF (or a grid based graphical model). There are 106 data
>
Dear all,
I need some advice regarding use of the multiprocessing module.
Following is the scenario:
* I am running gradient descent to estimate parameters of a pairwise
grid CRF (or a grid based graphical model). There are 106 data
points. Each data point can be analyzed in parallel
hi all,
suppose I have a func F, list [args1,args2,args3,...,argsN] and want
to obtain r_i = F(args_i) in parallel mode. My difficulty is: if F
returns not None, then I should break the calculations, and I can't
work out from the multiprocessing module documentation how to do it.
Order doesn't matter.
trust me, it is almost all
>>> about the mechanism of multiprocessing module.
>>
>> [snip]
>>
>>> So the workflow is like this,
>>
>>> get() --> fork a subprocess to process the query request in
>>> async_func() -> when async_func() returns, callback_func uses the
trust me, it is almost all
> > about the mechanism of multiprocessing module.
>
> [snip]
>
> > So the workflow is like this,
>
> > get() --> fork a subprocess to process the query request in
> > async_func() -> when async_func() returns, callback_func uses the
> >
On 3/8/2011 3:34 PM, Philip Semanchuk wrote:
On Mar 8, 2011, at 3:25 PM, Sheng wrote:
This looks like a tornado problem, but trust me, it is almost all
about the mechanism of multiprocessing module.
[snip]
So the workflow is like this,
get() --> fork a subprocess to process the query request in
On Mar 8, 2011, at 3:25 PM, Sheng wrote:
> This looks like a tornado problem, but trust me, it is almost all
> about the mechanism of multiprocessing module.
[snip]
> So the workflow is like this,
>
> get() --> fork a subprocess to process the query request in
>
This looks like a tornado problem, but trust me, it is almost all
about the mechanism of multiprocessing module.
I borrowed the idea from http://gist.github.com/312676 to implement an
async db query web service using tornado.
p = multiprocessing.Pool(4)
class QueryHandler
Does it make sense to be able to substitute the pickling action in the
multiprocessing module with google protocol buffers instead? If so,
has anyone thought about how to do it? I wanted some operation more
compact/faster than pickling for IPC of data.
Also, has anyone built any wrappers for the
On Dec 15 2009, 10:56 am, makobu wrote:
> I have a function that makes two subprocess.Popen() calls on a file.
>
> I have 8 cores. I need 8 instances of that function running in
> parallel at any given time till all the files are worked on.
> Can the multiprocessing module do this? If so, what's the best method?
the files are worked on.
> Can the multiprocessing module do this? If so, what's the best method?
You don't quite explicitly say so, but it sounds like you have multiple
files. In which case, yes, it should be reasonably straightforward to
use multiprocessing; I haven't used it myself, but
I have a function that makes two subprocess.Popen() calls on a file.
I have 8 cores. I need 8 instances of that function running in
parallel at any given time till all the files are worked on.
Can the multiprocessing module do this? If so, what's the best method?
A technical overview of how the
On Fri, 10 Apr 2009 06:46:47 -0300, Deepak Rokade
wrote:
Since this application is going to be a commercial one, I want to know
at this stage if there are any known serious bugs (not limitations) in the
multiprocessing module?
Go to http://bugs.python.org/ and click on Search on the left
Hi All,
I have decided to use multiprocessing module in my application. In brief, my
application fetches files from multiple remote directories and distributes
the received files to one or more remote directories using SFTP.
Since this application is going to be a commercial one, I want to know
dmitrey wrote:
> This doesn't work for
> costlyFunction2 = lambda x: 11
> as well; and it doesn't work for imap, apply_async as well (same
> error).
> So, isn't it a bug, or it can be somehow fixed?
> Thank you in advance, D.
It's not a bug but a limitation of the pickle protocol. Pickle can't
handle lambdas, because functions are pickled by name.
dmitrey wrote:
# THIS WORKS OK
from multiprocessing import Pool
N = 400
K = 800
processes = 2
def costlyFunction2(z):
    r = 0
    for k in xrange(1, K+2):
        r += z ** (1 / k**1.5)
    return r

class ABC:
    def __init__(self): pass
    def testParallel(self):
        po = Pool(processe
# THIS WORKS OK
from multiprocessing import Pool
N = 400
K = 800
processes = 2
def costlyFunction2(z):
    r = 0
    for k in xrange(1, K+2):
        r += z ** (1 / k**1.5)
    return r

class ABC:
    def __init__(self): pass
    def testParallel(self):
        po = Pool(processes=processes)
On Feb 22, 12:52 pm, Joshua Judson Rosen wrote:
> Graham Dumpleton writes:
>
> > On Feb 21, 4:20 pm, Joshua Judson Rosen wrote:
> > > Jesse Noller writes:
>
> > > > On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
> > > > wrote:
Graham Dumpleton writes:
>
> On Feb 21, 4:20 pm, Joshua Judson Rosen wrote:
> > Jesse Noller writes:
> >
> > > On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
> > > wrote:
> > > > Why is the multiprocessing module, i.e., multiprocessing/process.py, in
On Feb 21, 4:20 pm, Joshua Judson Rosen wrote:
> Jesse Noller writes:
>
> > On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
> > wrote:
> > > Why is the multiprocessing module, i.e., multiprocessing/process.py, in
> > > _bootstrap() doing:
>
Jesse Noller writes:
>
> On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
> wrote:
> > Why is the multiprocessing module, i.e., multiprocessing/process.py, in
> > _bootstrap() doing:
> >
> > os.close(sys.stdin.fileno())
> >
> > rather than:
On Feb 19, 1:16 pm, Jesse Noller wrote:
> On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
>
>
>
> wrote:
> > Why is the multiprocessing module, i.e., multiprocessing/process.py, in
> > _bootstrap() doing:
>
> > os.close(sys.stdin.fileno())
>
On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
wrote:
> Why is the multiprocessing module, i.e., multiprocessing/process.py, in
> _bootstrap() doing:
>
> os.close(sys.stdin.fileno())
>
> rather than:
>
> sys.stdin.close()
>
> Technically it is feasible that stdin could have been replaced with
> something other than a file object, where the replacement doesn't
Why is the multiprocessing module, i.e., multiprocessing/process.py, in
_bootstrap() doing:
os.close(sys.stdin.fileno())
rather than:
sys.stdin.close()
Technically it is feasible that stdin could have been replaced with
something other than a file object, where the replacement doesn't
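The difference matters exactly when stdin has been replaced by a non-file object, which is easy to demonstrate (a sketch, not the multiprocessing code itself):

```python
import io
import os
import sys

real_stdin = sys.stdin
sys.stdin = io.StringIO("fake input\n")   # stdin replaced by a non-file object

# os.close(sys.stdin.fileno()) assumes a real OS descriptor; here there is none:
try:
    os.close(sys.stdin.fileno())
except io.UnsupportedOperation as exc:
    print("fileno() failed:", exc)

# sys.stdin.close() just closes whatever object is installed -- always safe:
sys.stdin.close()
sys.stdin = real_stdin                    # restore for the rest of the program
```

Conversely, closing the descriptor with os.close() leaves the Python file object open and pointing at a dead fd, which is its own source of confusing errors.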
On Oct 22, 5:14 pm, Philip Semanchuk <[EMAIL PROTECTED]> wrote:
> On Oct 22, 2008, at 11:37 AM, Jesse Noller wrote:
>
> > On Wed, Oct 22, 2008 at 11:06 AM, Philip Semanchuk <[EMAIL PROTECTED]
> > > wrote:
> >>> One oversight I noticed the multiprocess
On Oct 22, 2008, at 11:37 AM, Jesse Noller wrote:
On Wed, Oct 22, 2008 at 11:06 AM, Philip Semanchuk <[EMAIL PROTECTED]
> wrote:
One oversight I noticed in the multiprocessing module docs is that a
semaphore's acquire() method shouldn't have a timeout on OS X, as
sem_timedwait()
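For reference, this is the timed acquire in question. On POSIX systems with sem_timedwait() it maps to that call; the current Python docs note that on macOS, where sem_timedwait() is missing, CPython emulates the timeout with a sleeping loop instead.

```python
import multiprocessing as mp

sem = mp.Semaphore(0)                  # starts with no permits available
# acquire(timeout=...) returns False if the semaphore could not be
# taken within the deadline, instead of blocking forever.
acquired = sem.acquire(timeout=0.1)    # gives up after roughly 0.1 s
print(acquired)
```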
of FreeBSD. OpenBSD
>> support is a non-starter.
>
> Hi Jesse,
> I wasn't aware of the multiprocessing module. It looks slick! Well done.
>
The credit goes to R. Oudkerk, the original author of the pyprocessing
library - I'm simply a rabid user who managed to wrangle it
On Oct 22, 2008, at 10:11 AM, Jesse Noller wrote:
On Tue, Oct 21, 2008 at 6:45 PM, <[EMAIL PROTECTED]> wrote:
It seems that the multiprocessing module in 2.6 is broken for *BSD;
I've seen issue 3770 regarding this. I'm curious if there are more
details on this issue since t
On Wed, Oct 22, 2008 at 10:31 AM, <[EMAIL PROTECTED]> wrote:
> On Oct 22, 8:11 am, "Jesse Noller" <[EMAIL PROTECTED]> wrote:
>> On Tue, Oct 21, 2008 at 6:45 PM, <[EMAIL PROTECTED]> wrote:
>> > It seems that the multiprocessing module in 2.6 is broken for *BSD;
On Oct 22, 8:11 am, "Jesse Noller" <[EMAIL PROTECTED]> wrote:
> On Tue, Oct 21, 2008 at 6:45 PM, <[EMAIL PROTECTED]> wrote:
> > It seems that the multiprocessing module in 2.6 is broken for *BSD;
> > I've seen issue 3770 regarding this. I'm curious if there are more
On Oct 21, 8:08 pm, Philip Semanchuk <[EMAIL PROTECTED]> wrote:
> On Oct 21, 2008, at 6:45 PM, [EMAIL PROTECTED] wrote:
>
> > It seems that the multiprocessing module in 2.6 is broken for *BSD;
> > I've seen issue 3770 regarding this. I'm curious if there are more
On Tue, Oct 21, 2008 at 6:45 PM, <[EMAIL PROTECTED]> wrote:
> It seems that the multiprocessing module in 2.6 is broken for *BSD;
> I've seen issue 3770 regarding this. I'm curious if there are more
> details on this issue since the posts in 3770 were a bit unclear
On Oct 21, 2008, at 6:45 PM, [EMAIL PROTECTED] wrote:
It seems that the multiprocessing module in 2.6 is broken for *BSD;
I've seen issue 3770 regarding this. I'm curious if there are more
details on this issue since the posts in 3770 were a bit unclear. For
example, one post claimed that the problem was that sem_open isn't
It seems that the multiprocessing module in 2.6 is broken for *BSD;
I've seen issue 3770 regarding this. I'm curious if there are more
details on this issue since the posts in 3770 were a bit unclear. For
example, one post claimed that the problem was that sem_open isn't
implemented
sturlamolden wrote:
On Jun 5, 11:02 am, pataphor <[EMAIL PROTECTED]> wrote:
This is probably not very central to the main intention of your post,
but I see a terminology problem coming up here. It is possible for
python objects to share a reference to some other object. This has
nothing to do with threads or
On Jun 5, 11:02 am, pataphor <[EMAIL PROTECTED]> wrote:
> This is probably not very central to the main intention of your post,
> but I see a terminology problem coming up here. It is possible for
> python objects to share a reference to some other object. This has
> nothing to do with threads or
In article <877a5774-d3cc-49d3-bb64-5cab8505a419@m3g2000hsc.googlegroups.com>,
[EMAIL PROTECTED] says...
> I don't see pyprocessing as a drop-in replacement for the threading
> module. Multi-threading and multi-processing code tend to be
> different, unless something like mutable objects in share
Christian Heimes wrote:
> Can you provide a C implementation that compiles under VS 2008? Python
> 2.6 and 3.0 are using my new VS 2008 build system and we have dropped
> support for 9x, ME and NT4. If you can provide us with an
> implementation we *might* consider using it.
You'd have to at least
sturlamolden schrieb:
> There is a well known C++ implementation of cow-fork on Windows, which
> I have slightly modified and ported to C. But as the new WDK (Windows
> driver kit) headers are full of syntax errors, the compiler chokes on
> it. :( I am seriously considering re-implementing the whole
On Jun 4, 11:29 pm, Paul Boddie <[EMAIL PROTECTED]> wrote:
> tested the executable on Windows. COW (copy-on-write, for those still
> thinking that we're talking about dairy products) would be pretty
> desirable if it's feasible, though.
There is a well known C++ implementation of cow-fork on Wind
On 4 Jun, 20:06, sturlamolden <[EMAIL PROTECTED]> wrote:
>
> Even a non-COW fork
> would be preferred. I will strongly suggest something is done to add
> support for os.fork to Python on Windows. Either create a full COW
> fork using ZwCreateProces
I sometimes read python-dev, but never contribute. So I'll post my
rant here instead.
I completely support adding this module to the standard lib. Get it in
as soon as possible, regardless of PEP deadlines or whatever.
I don't see pyprocessing as a drop-in replacement for the threading
module. Multi-threading and multi-processing code tend to be
different, unless something like mutable objects in share