On Tue, 07 Oct 2008 13:25:01 -0300, Terry Reedy <[EMAIL PROTECTED]>
wrote:
> Lawrence D'Oliveiro wrote:
>> In message <[EMAIL PROTECTED]>,
>> Gabriel Genellina wrote:
>>> Usually it's more efficient to create all the MAX_THREADS at once, and
>>> continuously feed them with tasks to be done.
>> Given that the bottleneck is most likely to be the internet connection,
>> I'd say the "premature optimization is the root of all evil" adage
>> applies here.
> There is also the bottleneck of programmer time to understand, write,
> and maintain. In this case, 'more efficient' is simpler and, to me, a
> more efficient use of programmer time. Feeding a fixed pool of worker
> threads with a Queue() is a standard design that is easy to understand
> and one the OP should learn. Re-using tested code is certainly an
> efficient use of programmer time. Managing a variable pool of workers
> that die and need to be replaced is more complex (two loops nested
> within a loop) and error-prone (though learning that alternative is
> probably not a bad idea either).
I'd like to add that debugging a program that continuously creates and
destroys threads is a real PITA.
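
For the record, a minimal sketch of the fixed-pool-plus-Queue design
Terry describes might look like the following. MAX_THREADS, the url
list and the fetch step are made-up examples, not the OP's code, and
the spelling is Python 2's (the module is Queue rather than queue):

import threading
import urllib2                  # Python 2; urllib.request in Python 3
from Queue import Queue         # Python 2; the module is 'queue' in Python 3

MAX_THREADS = 10                              # illustrative value
urls = ["http://www.python.org/"] * 3         # stand-in for the real work list

def worker(tasks):
    # Each worker loops forever, pulling one url at a time from the queue.
    while True:
        url = tasks.get()
        if url is None:          # sentinel: no more work, shut down
            break
        try:
            data = urllib2.urlopen(url).read()
            print "%s: %d bytes" % (url, len(data))
        except Exception, e:
            print "%s failed: %s" % (url, e)
        finally:
            tasks.task_done()

tasks = Queue()
pool = [threading.Thread(target=worker, args=(tasks,))
        for _ in range(MAX_THREADS)]
for t in pool:
    t.start()

for url in urls:                 # feed the fixed pool of workers
    tasks.put(url)
tasks.join()                     # block until every url has been processed

for _ in pool:                   # one sentinel per worker to stop them cleanly
    tasks.put(None)
for t in pool:
    t.join()

The point is that the threads are created exactly once and stay alive
for the whole run; shutdown is just one sentinel per worker, so there is
nothing dying and being replaced to debug.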
--
Gabriel Genellina