On 10/11/2013 4:41 AM, Peter Cacioppi wrote:

I should add that the computational heavy lifting is done in a third party 
library. So a worker thread looks roughly like this (there is a subtle race 
condition I'm glossing over).

while jobs:
    job = jobs.pop()
    model = Model(job)       # Model is the Python interface for a lib written in C
    newJobs = model.solve()  # This will take a long time
    for newJob in newJobs:
        jobs.add(newJob)

Here jobs is a thread safe object that is shared across each worker thread. It 
holds a priority queue of jobs that can be solved in parallel.
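To make the pattern concrete, here is a minimal sketch of such a shared jobs object using the standard library's thread-safe queue.PriorityQueue. The solve() stand-in and the (priority, size) job tuples are hypothetical, and the sketch deliberately keeps the same glossed-over shutdown race as the loop above (a worker may see an empty queue moments before another worker enqueues children):

```python
import queue
import threading

jobs = queue.PriorityQueue()   # thread-safe priority queue of (priority, size)
results = []                   # finished jobs, guarded by results_lock
results_lock = threading.Lock()

def solve(job):
    """Hypothetical stand-in for Model(job).solve(): return new sub-jobs."""
    priority, n = job
    # Each job of size n > 0 spawns two smaller sub-jobs.
    return [(priority + 1, n - 1)] * 2 if n > 0 else []

def worker():
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return             # same subtle shutdown race as glossed over above
        for new_job in solve(job):
            jobs.put(new_job)
        with results_lock:
            results.append(job)

jobs.put((0, 2))               # seed job
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Despite the race, the work always completes: a worker only exits when the queue is empty at the moment it polls, and any job still in flight is held by a live worker that will enqueue its children before polling again.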

Model is a Python class that provides the API to a 3rd party library written in
C. I know model.solve() will be the bottleneck operation for all but trivial
problems.

So, my hope is that the GIL restrictions won't be problematic here. That is to 
say, I don't need **Python** code to ever run concurrently. I just need Python 
to allow a different Python worker thread to execute when all the other worker 
threads are blocking on the model.solve() task. Once the algorithm is in full 
swing, it is typical for all the worker threads to be blocking on
model.solve() at the same time.
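The GIL behaviour being relied on here can be demonstrated with a small experiment. time.sleep() releases the GIL while it blocks, just as a well-behaved C extension should around a long native call, so four fake "solves" on four threads take roughly the wall-clock time of one (fake_solve and the 0.2 s timing are illustrative, not the real Model API):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_solve(job):
    # Stand-in for model.solve(): time.sleep() releases the GIL while
    # blocking, as a C library doing its heavy work outside Python should.
    time.sleep(0.2)
    return job * 10

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(fake_solve, range(4)))
elapsed = time.perf_counter() - start
# The four 0.2 s "solves" overlap, so elapsed is ~0.2 s rather than ~0.8 s.
```

If the C library instead held the GIL for the duration of solve(), elapsed would be close to the serial 0.8 s, and threads would buy nothing.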

It's a nice algorithm for high level languages. Java worked well here; I'm
hoping Python can be nearly as fast with much more elegant and readable code.

Given that model.solve takes a 'long time' (seconds, at least), the extra time to start a process over the time to start a thread will be inconsequential. I would therefore look at the multiprocessing module.
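A minimal sketch of that suggestion, assuming the jobs can be handed to a pool as a batch (the solve() body is a hypothetical stand-in for the expensive C call):

```python
from multiprocessing import Pool

def solve(job):
    # Hypothetical stand-in for Model(job).solve(); it runs in a child
    # process, so it never contends with the parent's GIL at all.
    return job * job

def solve_all(jobs, workers=4):
    # The parent process keeps the priority-queue bookkeeping; only the
    # expensive solve() calls are farmed out to worker processes.
    with Pool(processes=workers) as pool:
        return pool.map(solve, jobs)

if __name__ == "__main__":
    print(solve_all([1, 2, 3]))  # -> [1, 4, 9]
```

One caveat: each job and its result must be picklable to cross the process boundary, which may or may not hold for the C library's Model objects.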

--
Terry Jan Reedy
