I have a portion of code I need to speed up. There are three API calls to an
external system: the first enumerates a large collection of objects, which I
then loop through, performing two additional API calls per object. The first
call is instant, but the second and third calls are very slow for each object.
Currently, once all the data has been accumulated, I write the relevant parts
into a database.

I have the ability to hold all of this in memory and dump it once it is fully
accumulated, so performing the second and third calls in parallel, in
fixed-size batches, would be great.

I took a look at coroutines and some skeleton code worked fine, but I am not
sure how to perform the acquisition in fixed groups, the way I might with
multiprocessing and a pool of workers.
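
To make that concrete, here is roughly the shape I have in mind (a rough
sketch only -- fetch_detail_a and fetch_detail_b are placeholder names
standing in for the two slow calls, asyncio.sleep just simulates their
latency, and the limit of 10 is arbitrary):

import asyncio

async def fetch_detail_a(obj):
    await asyncio.sleep(1)          # placeholder: simulate the slow second call
    return f"a-data for {obj}"

async def fetch_detail_b(obj):
    await asyncio.sleep(1)          # placeholder: simulate the slow third call
    return f"b-data for {obj}"

async def fetch_object(obj, sem):
    # The semaphore caps how many objects are in flight at once,
    # giving the fixed-size "pool of workers" behaviour I'm after.
    async with sem:
        a, b = await asyncio.gather(fetch_detail_a(obj), fetch_detail_b(obj))
        return obj, a, b

async def fetch_all(objects, limit=10):
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(fetch_object(o, sem) for o in objects))

if __name__ == "__main__":
    objects = range(100)            # stand-in for the enumerated collection
    results = asyncio.run(fetch_all(objects))
    # everything is now in memory, ready to be written to the database

(A fixed set of worker tasks reading from an asyncio.Queue would be the more
literal equivalent of a multiprocessing pool, but the semaphore seemed
simpler.)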

Anyone done something like this or have an opinion?

Thanks,
jlc
