Fabien writes:
> I am developing a tool which works on individual entities (glaciers)
> and does a lot of operations on them. There are many tasks to do, one
> after the other, and each task follows the same interface: ...
If most of the resources will be spent on computation and the
communication overhead stays small, multiprocessing is a good fit.
On 06/20/2015 05:14 AM, Cameron Simpson wrote:
I would keep your core logic Pythonic, raise exceptions. But I would
wrap each task in something to catch any Exception subclass and report
back to the queue. Untested example:
def subwrapper(q, callable, *args, **kwargs):
    try:
        q.put( ('COMPLETED', callable(*args, **kwargs)) )
    except Exception as e:
        q.put( ('FAILED', e, callable, args, kwargs) )
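A driver for a wrapper like Cameron's could look as follows. This is an
untested sketch, not from the thread: the work function, the Manager
queue, and the apply_async wiring are my assumptions.

import multiprocessing

def subwrapper(q, callable, *args, **kwargs):
    try:
        q.put( ('COMPLETED', callable(*args, **kwargs)) )
    except Exception as e:
        q.put( ('FAILED', e, callable, args, kwargs) )

def work(x):
    # stand-in for a real task; fails on negative input
    if x < 0:
        raise ValueError("negative input: %r" % x)
    return x * x

if __name__ == '__main__':
    # a Manager queue can be passed to Pool workers as an argument
    mgr = multiprocessing.Manager()
    q = mgr.Queue()
    with multiprocessing.Pool() as pool:
        for x in (1, 2, -3):
            pool.apply_async(subwrapper, (q, work, x))
        pool.close()
        pool.join()
    while not q.empty():
        # each entry is ('COMPLETED', result) or ('FAILED', exc, callable, args, kwargs)
        print(q.get())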
On 06/19/2015 10:58 PM, Chris Angelico wrote:
AIUI what he's doing is all the subparts of task1 in parallel, then
all the subparts of task2:
pool.map(task1, dirs, chunksize=1)
pool.map(task2, dirs, chunksize=1)
pool.map(task3, dirs, chunksize=1)
task1 can be done on all of dirs in parallel, as the glacier
directories are independent; each later stage simply waits until the
previous map() has finished everywhere.
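For completeness, a runnable skeleton of that staging; the task bodies
and the dirs list are placeholders, not Fabien's real code.

import multiprocessing

def task1(d): print('task1', d)
def task2(d): print('task2', d)
def task3(d): print('task3', d)

if __name__ == '__main__':
    dirs = ['glacier_a', 'glacier_b', 'glacier_c']
    with multiprocessing.Pool() as pool:
        # each map() blocks until every directory is done, so task2
        # never starts before task1 has completed everywhere
        pool.map(task1, dirs, chunksize=1)
        pool.map(task2, dirs, chunksize=1)
        pool.map(task3, dirs, chunksize=1)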
On 19Jun2015 18:16, Fabien wrote:
On 06/19/2015 04:25 PM, Andres Riancho wrote:
My recommendation is that you should pass some extra arguments to the task:
* A unique task id
* A result multiprocessing.Queue
When an exception is raised you put (unique_id, exception) to the
queue. When it succeeds you put (unique_id, None).
On Sat, Jun 20, 2015 at 1:41 AM, Steven D'Aprano wrote:
> On Sat, 20 Jun 2015 12:01 am, Fabien wrote:
>
>> Folks,
>>
>> I am developing a tool which works on individual entities (glaciers) and
>> does a lot of operations on them. There are many tasks to do, one after
>> the other, and each task follows the same interface:
On 06/19/2015 04:25 PM, Andres Riancho wrote:
Fabien,
My recommendation is that you should pass some extra arguments to the task:
* A unique task id
* A result multiprocessing.Queue
When an exception is raised you put (unique_id, exception) to the
queue. When it succeeds you put (unique_id, None).
On 06/19/2015 05:41 PM, Steven D'Aprano wrote:
On Sat, 20 Jun 2015 12:01 am, Fabien wrote:
>Folks,
>
>I am developing a tool which works on individual entities (glaciers) and
>does a lot of operations on them. There are many tasks to do, one after
>the other, and each task follows the same interface:
On Sat, 20 Jun 2015 12:01 am, Fabien wrote:
> Folks,
>
> I am developing a tool which works on individual entities (glaciers) and
> does a lot of operations on them. There are many tasks to do, one after
> the other, and each task follows the same interface:
I'm afraid your description is contradictory.
- Original Message -
> From: "Oscar Benjamin"
> A simple way to approach this could be something like:
>
> #!/usr/bin/env python3
>
> import math
> import multiprocessing
>
> def sqrt(x):
>     if x < 0:
>         return 'error', x
>     else:
>         return 'success', math.sqrt(x)
>
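The preview cuts off before Oscar's driver code; one plausible way to
consume such tagged results (the inputs and the printing are my
guesses, not the original):

import math
import multiprocessing

def sqrt(x):
    if x < 0:
        return 'error', x          # report failure as data instead of raising
    else:
        return 'success', math.sqrt(x)

if __name__ == '__main__':
    with multiprocessing.Pool() as pool:
        for status, value in pool.map(sqrt, [4, 9, -1]):
            if status == 'error':
                print('failed on input:', value)
            else:
                print('result:', value)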
On 19 June 2015 at 15:01, Fabien wrote:
> Folks,
>
> I am developing a tool which works on individual entities (glaciers) and
> does a lot of operations on them. There are many tasks to do, one after
> the other, and each task follows the same interface:
>
> def task_1(path_to_glacier_dir):
>     open file1 in path_to_glacier_dir
- Original Message -
> From: "Fabien"
> To: python-list@python.org
> Sent: Friday, 19 June, 2015 4:01:02 PM
> Subject: Catching exceptions with multi-processing
>
> Folks,
>
> I am developing a tool which works on individual entities (glaciers)
> and does a lot of operations on them.
Fabien,
My recommendation is that you should pass some extra arguments to the task:
* A unique task id
* A result multiprocessing.Queue
When an exception is raised you put (unique_id, exception) to the
queue. When it succeeds you put (unique_id, None). In the main process
you consume the queue and match each result back to its task by id.
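A minimal sketch of that (unique_id, exception-or-None) protocol; the
task body, the Manager queue, and all names are illustrative
assumptions, not code from the thread:

import multiprocessing

def run_task(task_id, q, path):
    try:
        # ... open files in path, do stuff ...
        if 'bad' in path:
            raise RuntimeError('processing failed for ' + path)
        q.put((task_id, None))       # success
    except Exception as e:
        q.put((task_id, e))          # failure, keyed by task id

if __name__ == '__main__':
    mgr = multiprocessing.Manager()
    q = mgr.Queue()
    paths = ['glacier_a', 'glacier_bad', 'glacier_c']
    with multiprocessing.Pool() as pool:
        for i, p in enumerate(paths):
            pool.apply_async(run_task, (i, q, p))
        pool.close()
        pool.join()
    for _ in paths:
        task_id, exc = q.get()
        print(paths[task_id], '->', 'OK' if exc is None else exc)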
Folks,
I am developing a tool which works on individual entities (glaciers) and
does a lot of operations on them. There are many tasks to do, one after
the other, and each task follows the same interface:
def task_1(path_to_glacier_dir):
    open file1 in path_to_glacier_dir
    do stuff
    if something goes wrong:
        raise
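Read literally, that pseudocode might translate to something like the
following; the file names and the failure condition are placeholders,
not Fabien's actual tool:

import os

def task_1(path_to_glacier_dir):
    # read this glacier's input, transform it, write the result back
    with open(os.path.join(path_to_glacier_dir, 'file1')) as f:
        data = f.read()
    if not data:
        raise RuntimeError('empty input in ' + path_to_glacier_dir)
    with open(os.path.join(path_to_glacier_dir, 'file2'), 'w') as f:
        f.write(data.upper())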