On 04/20/2015 10:14 PM, Paulo da Silva wrote:
I have a program that generates about 100 relatively complex graphics and
writes them to a PDF book.
It takes a while!
Is there any possibility of using multiprocessing to build the graphics
and then use several calls to savefig(), i.e. some kind of graphics
objects?


To know if this is practical, we have to guess about the code you describe, and about the machine you're using.

First, if you don't have multiple cores on your machine, then it's probably not going to be any faster, and might be substantially slower. Ditto if the code is so large that multiple copies of it will cause swapping.

But if you have 4 cores, and a processor-bound algorithm, it can indeed save time to run 3 or 4 processes in parallel. You'll have to write code to parcel out the parts that can be done in parallel, and make a queue that each process can grab its next assignment from.
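Here's a minimal sketch of that pattern using multiprocessing.Pool, which handles the queue and the parceling-out for you. The task data and the render_one() body are made up for illustration; your real drawing code would go where the ax.plot() call is, and each worker saves its figure to its own file:

    import multiprocessing as mp

    import matplotlib
    matplotlib.use("Agg")       # non-GUI backend; workers only save figures,
                                # never display them
    import matplotlib.pyplot as plt

    def render_one(task):
        # Build one figure and save it; this runs inside a worker process.
        index, data = task
        fig, ax = plt.subplots()
        ax.plot(data)           # stand-in for the real, more complex drawing
        fig.savefig("page_%03d.pdf" % index)
        plt.close(fig)          # release the figure's memory in this worker
        return index

    if __name__ == "__main__":
        # Hypothetical inputs: one data series per plot.
        tasks = [(i, [i, 2 * i, 3 * i]) for i in range(100)]
        pool = mp.Pool(processes=4)     # roughly one worker per core
        for done in pool.imap_unordered(render_one, tasks):
            print("finished plot %d" % done)
        pool.close()
        pool.join()

Note that figure objects don't travel reliably between processes, so it's simpler to have each worker call savefig() itself, as above, and then stitch the per-page files into one PDF book afterwards with whatever PDF-merging tool you like.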

There are other gotchas, such as common setup code that has to run before any of the subprocesses do their work. If you discover that each of these 100 pieces has to access data from earlier pieces, then you could get bogged down in communications and coordination.
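For the common-setup case, multiprocessing.Pool has an initializer hook that runs once in each worker before it takes any assignments. A small sketch, with a made-up config dict standing in for whatever your shared setup produces:

    import multiprocessing as mp

    _setup = {}                 # per-worker globals, filled by the initializer

    def init_worker(config):
        # Runs once in each worker process, before any tasks arrive.
        _setup["config"] = config

    def work(item):
        return item * _setup["config"]["scale"]

    if __name__ == "__main__":
        config = {"scale": 10}  # hypothetical result of the common setup
        pool = mp.Pool(processes=4, initializer=init_worker,
                       initargs=(config,))
        print(pool.map(work, range(5)))     # -> [0, 10, 20, 30, 40]
        pool.close()
        pool.join()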

If the 100 plots are really quite independent, you could also consider recruiting time from multiple machines. As long as the data that needs to go between them is not too large, it can pay off big time.
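The standard library can help there too: multiprocessing.managers can serve a plain queue over the network, so helper machines pull assignments the same way local workers would. A rough sketch, where the host name, port, and authkey are all placeholders:

    # Run with "server" on the machine that owns the work list,
    # and with "worker" on each helper machine.
    import sys
    import queue
    from multiprocessing.managers import BaseManager

    class JobManager(BaseManager):
        pass

    if __name__ == "__main__":
        if sys.argv[1] == "server":
            jobs = queue.Queue()
            for i in range(100):
                jobs.put(i)                 # one entry per plot to build
            JobManager.register("get_jobs", callable=lambda: jobs)
            mgr = JobManager(address=("", 50000), authkey=b"plots")
            mgr.get_server().serve_forever()
        else:
            JobManager.register("get_jobs")
            mgr = JobManager(address=("server-host", 50000),
                             authkey=b"plots")
            mgr.connect()
            jobs = mgr.get_jobs()
            while True:
                try:
                    job = jobs.get(timeout=5)   # quit once no work remains
                except queue.Empty:
                    break
                print("would render plot %d" % job)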

--
DaveA
