Sam Watkins wrote:
I calculated roughly that encoding a 2-hour video could be parallelized by a
factor of perhaps 20 trillion, using pipelining and divide-and-conquer, with a
longest path length of 10,000 operations in series. Such a system running at
1 GHz could encode a single 2-hour video in 1/100,000 of a second (latency), or 2
billion hours of video per second (throughput).
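For what it's worth, the arithmetic in that quote is self-consistent - a
quick sanity check in Python (the "one finished video retired per clock
cycle once the pipeline is full" reading is my assumption, inferred from
the throughput figure):

  clock_hz = 1e9              # 1 GHz
  critical_path_ops = 10_000  # longest serial chain of operations
  video_hours = 2

  latency_s = critical_path_ops / clock_hz  # 1e-05 s = 1/100,000 s
  throughput = clock_hz * video_hours       # 2e9 hours of video per second
  print(latency_s, throughput)

But checking the arithmetic is not the same as checking the premise.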
I know you're using video/audio encoding as an example, and there are
probably datasets where this makes sense, but in this case what use is it?
You can't watch 2 hours of video per second and you can't write it to
disk fast enough to empty the pipeline. So you'll process all the video
and then sit there keeping it powered while you wait to do something
with it. I suppose you could keep filtering it.
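To put a number on the drain problem (the ~4 Mbit/s encoded output rate
below is my assumption, not anything from the quote):

  throughput_hours_per_s = 2e9   # the claimed throughput above
  output_bitrate = 4e6           # assumed ~4 Mbit/s encoded MPEG-2

  drain = throughput_hours_per_s * 3600 * output_bitrate / 8
  print(drain / 1e18)            # ~3.6 exabytes of encoded output per second

Nothing writes exabytes per second.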
On top of that, the data rate of full 10-bit uncompressed 1920x1080/60i HD
is about 932 Mbit/s, so your 1 GHz clock speed might not even be fast enough
to play it :)
You've got to feed in 2 hours of source material - roughly 820 GB per stream - how?
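For reference, here's the arithmetic behind both of those figures (assuming
4:2:0 chroma subsampling at 10 bits per sample, which is the layout that
lands near 932 Mbit/s):

  width, height = 1920, 1080
  frames_per_s = 30      # 60i = 60 fields = 30 full frames per second
  bits_per_pixel = 15    # 10-bit samples with 4:2:0 chroma subsampling

  rate = width * height * frames_per_s * bits_per_pixel
  print(rate / 1e6)               # ~933 Mbit/s
  print(rate * 7200 / 8 / 1e9)    # ~840 GB for a 2-hour stream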
Once you have your uncompressed stream, MPEG-2 encoding requires seeking
through the time dimension: keyframes every n frames and out-of-order
macroblocks mean we have to wait for n frames to be composited. For the
best quality, the data rate is left unconstrained on the first pass, and
the macroblocks are then best-fitted and re-ordered on the second pass to
match the desired output data rate - but again, this works n frames at a time.
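A toy sketch of that dependency (the GOP size, frame count and rate-control
split here are illustrative stand-ins, not MPEG-2's actual internals):

  import random

  GOP_SIZE = 15       # keyframe every n frames; illustrative value
  TOTAL_FRAMES = 60
  TARGET_BITS = 4_000_000 * TOTAL_FRAMES // 30  # assumed 4 Mbit/s at 30 fps

  # Pass 1: unconstrained - just measure how expensive each frame is.
  complexity = [random.uniform(0.5, 2.0) for _ in range(TOTAL_FRAMES)]

  # Pass 2: rate control per GOP. Nothing in a GOP can be emitted until
  # all n of its frames (and their pass-1 stats) are in hand.
  for start in range(0, TOTAL_FRAMES, GOP_SIZE):
      gop = complexity[start:start + GOP_SIZE]  # wait for n frames
      budget = TARGET_BITS * len(gop) / TOTAL_FRAMES
      bits = [budget * c / sum(gop) for c in gop]
      print(f"GOP at frame {start}: {[round(b) for b in bits]}")

However parallel the guts are, that serial n-frame window caps it.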
Amdahl is punching you in the face every time you say "see, it's easy".