Ross Ridge wrote:
>Umm... those 80 processors that Intel is talking about are more like the
>8 coprocessors in the Cell CPU. 

Michael Eager wrote:
>No, the Cell is asymmetrical (vintage 2000) architecture.

The Cell CPU as a whole is asymmetrical, but I'm only comparing the
design to the 8 identical coprocessors (of which only 7 are enabled in
the CPU used in the PlayStation 3).

>Intel & AMD have announced that they are developing large multi-core
>symmetric processors.  The timelines I've seen say that the number of
>cores on each chip will double every year or two. 

This doesn't change the fact that SMP systems don't scale well beyond
16 processors or so.  To go beyond that you need a different design.
Clustering and NUMA have been ways of solving the problem outside the
chip.  Intel's plan for solving it inside the chip involves giving each
of the 80 cores its own 32 MB of SRAM and only connecting each core to
its immediate neighbours.  This is similar to the Cell SPEs: each has
256K of local memory, and they're all connected together in a ring.
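
To make that concrete, here's a rough sketch in C of what "only talk
to your neighbour" looks like.  Ordinary pthreads and a one-slot
mailbox per core stand in for the real hardware links, and the core
count and the arithmetic are made up for the example; the point is
that no core touches a shared bus or a global data structure, it only
hands data to the core next to it.

/* Sketch only: plain pthreads standing in for cores, and a one-slot
 * mailbox per "core" standing in for a link to its neighbour. */
#include <pthread.h>
#include <stdio.h>

#define NCORES 8                      /* pretend we have 8 SPE-like cores */

struct mailbox {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    int             full;
    int             value;
};

static struct mailbox box[NCORES];    /* box[i] feeds core i */

static void send_to(int core, int value)
{
    struct mailbox *m = &box[core];
    pthread_mutex_lock(&m->lock);
    m->value = value;
    m->full  = 1;
    pthread_cond_signal(&m->ready);
    pthread_mutex_unlock(&m->lock);
}

static int receive(int core)
{
    struct mailbox *m = &box[core];
    int value;
    pthread_mutex_lock(&m->lock);
    while (!m->full)
        pthread_cond_wait(&m->ready, &m->lock);
    value = m->value;
    m->full = 0;
    pthread_mutex_unlock(&m->lock);
    return value;
}

static void *core_main(void *arg)
{
    int id = (int)(long)arg;
    int value = receive(id);          /* wait for the neighbour's data */
    value += id;                      /* "compute" on local data only  */
    if (id != NCORES - 1)
        send_to(id + 1, value);       /* pass it to the next core      */
    else
        printf("result after one trip down the chain: %d\n", value);
    return NULL;
}

int main(void)
{
    pthread_t core[NCORES];
    int i;

    for (i = 0; i < NCORES; i++) {
        pthread_mutex_init(&box[i].lock, NULL);
        pthread_cond_init(&box[i].ready, NULL);
        pthread_create(&core[i], NULL, core_main, (void *)(long)i);
    }
    send_to(0, 0);                    /* inject the initial datum      */
    for (i = 0; i < NCORES; i++)
        pthread_join(core[i], NULL);
    return 0;
}

On a chip like that, the "mailboxes" are the on-chip links between
neighbouring cores, and there's no lock contention because no two
cores ever compete for the same resource.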

> Moore's law hasn't stopped.

While Moore's Law may still be holding on, bus and memory speeds aren't
doubling every two years.  You can't design an 80 core CPU like an 4 core
CPU with 20 times as many cores.  Having 80 processors all competing over
the same bus for the same memory won't work.  Neither will "make -j80".
You need to do more than just divide up the work between different
processes or threads.  You need to divide up the program and data into
chunks that will fit into each core's local memory and orchestrate
everything so that the data propagates smoothly between cores.
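
As a very rough sketch of what that chunking means in practice (this
isn't real SPE code: the "local store" is just a static buffer, the
"DMA" is a memcpy, and the sizes and arithmetic are invented for the
example), the shape is: pull in a chunk that fits in local memory,
compute on it, push the result out, repeat.

/* Sketch only: work on data in chunks small enough to fit next to
 * the core, and stream the chunks through rather than letting every
 * core chase pointers through one shared memory. */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define LOCAL_STORE  (256 * 1024)                /* 256K, as on an SPE */
#define CHUNK_ELEMS  (LOCAL_STORE / 2 / sizeof(float))  /* in + out halves */

static float local_in[CHUNK_ELEMS];              /* stand-in for local memory */
static float local_out[CHUNK_ELEMS];

static void process_chunk(size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)                      /* the actual computation */
        local_out[i] = local_in[i] * 2.0f + 1.0f;
}

int main(void)
{
    size_t total = 1 << 22;                      /* 4M floats in "main memory" */
    float *in  = malloc(total * sizeof(float));
    float *out = malloc(total * sizeof(float));
    size_t done, n;

    if (!in || !out)
        return 1;
    for (done = 0; done < total; done++)
        in[done] = (float)done;

    for (done = 0; done < total; done += n) {
        n = total - done;
        if (n > CHUNK_ELEMS)
            n = CHUNK_ELEMS;
        memcpy(local_in, in + done, n * sizeof(float));   /* "DMA in"  */
        process_chunk(n);
        memcpy(out + done, local_out, n * sizeof(float)); /* "DMA out" */
    }

    printf("out[12345] = %f\n", out[12345]);
    free(in);
    free(out);
    return 0;
}

The hard part, which this sketch skips, is the orchestration: spreading
the chunks across 80 cores, overlapping the transfers with the
computation, and keeping the data flowing from neighbour to neighbour
instead of back and forth to main memory.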

> The number of gates per chip doubles every 18 months.

In fact, it's closer to doubling every 24 months, and Gordon Moore
never said it would double every 18 months.  Originally, in 1965, he
said that the number of components doubled every year; in 1975, after
things slowed down, he revised it to doubling every two years.

                                        Ross Ridge
