On Friday 22 October 2010 11:34:19 [email protected] wrote:

> In fact IIRC the additional overhead follows the square of the number
> of CPUs.  I seem to recall this was called Amdahl's Law after Gene
> Amdahl of IBM (and later his own company)

  Either that's not it, or there's more than one "Amdahl's law" --
the one I know is about diminishing returns from increasing effort
to parallelize code.  I don't know it in its pithy form, but
the gist of it is that you can only parallelize *some* of your
code, because all algorithms have a certain amount of set-up
and tear-down overhead that's typically serial.  Even if you
perfectly parallelize the parallelizable part of the code, 
so it runs N times faster, your application as a whole will
run something less than N times faster, and as N gets large,
this "serial offset" contribution will come to dominate the 
execution time, at which point additional investments in 
parallelization are probably wasted.
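
  The pithy form of that, for anyone curious, is the speedup formula
usually attributed to Amdahl: if a fraction p of the work can be
parallelized over N workers, overall speedup is 1 / ((1 - p) + p/N),
which tops out at 1 / (1 - p) no matter how big N gets.  A quick
sketch (the function name and the 0.95 figure are just illustrative,
not from anything above):

```python
# Amdahl's law: overall speedup when a fraction p of the work is
# parallelized across n workers; the (1 - p) serial part is the
# "serial offset" that eventually dominates.

def amdahl_speedup(p, n):
    """Speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the code parallelized, the cap is 1/0.05 = 20x,
# so going from 64 to 1024 workers buys very little.
for n in (1, 4, 16, 64, 1024):
    print(n, amdahl_speedup(0.95, n))
```
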

                                -- A.
-- 
Andrew Reid / [email protected]


Archive: http://lists.debian.org/[email protected]