On 6/5/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> it seems that now we are moving right in this direction with GPUs
They are no good.
GPUs have no synchronisation between them, which is needed for graph
reduction.
GPUs are intrinsically parallel devices and might work very well for
par
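A minimal sketch of that synchronisation requirement (illustrative
only, not from the thread): in parallel graph reduction a shared node
must be claimed by exactly one reducer, so that the redex is evaluated
once and every consumer sees the shared result.

    import Control.Concurrent.MVar

    -- A heap node is either an unevaluated redex (a thunk) or a value.
    data Node a = Thunk (IO a) | Value a

    -- A shared reference to a node: many consumers, one evaluation.
    type Ref a = MVar (Node a)

    -- Force a node.  The MVar is the synchronisation point: whoever
    -- takes it first evaluates the thunk and writes the value back;
    -- any other reducer blocks until the update is visible.
    force :: Ref a -> IO a
    force ref = do
      node <- takeMVar ref
      case node of
        Value v  -> putMVar ref (Value v) >> return v
        Thunk io -> do
          v <- io
          putMVar ref (Value v)
          return v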
it seems that now we are moving right in this direction with GPUs
I was just thinking that GPUs might make a good target for a reduction
language like Haskell. They are hugely parallel, and they have the
commercial momentum to keep them current. It also occurred to me that
the Cell processor (
but a more efficient computational model exists. if a cpu consists of
a huge number of execution engines which synchronize their operations
only when one unit uses results produced by another, then we get a
processor with a huge level of natural parallelism, one that is
friendlier to FP programs. it seems that
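That dependency-driven style can be sketched with today's parallel
package (an illustration only; stepA and stepB are made-up stand-ins
for independent units of work): the units run independently and only
synchronise where one result is actually consumed.

    import Control.Parallel.Strategies (runEval, rpar, rseq)

    -- Hypothetical work functions standing in for independent units.
    stepA, stepB :: Int -> Int
    stepA x = x * x
    stepB y = y + 1

    -- Both units are sparked off immediately; the only synchronisation
    -- happens where their results are combined, i.e. where one part of
    -- the program uses results produced by another.
    combine :: Int -> Int
    combine x = runEval $ do
      a <- rpar (stepA x)
      b <- rpar (stepB x)
      _ <- rseq a
      _ <- rseq b
      return (a + b)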
Bulat Ziganshin wrote:
it seems that now we are moving right in this direction with GPUs
I was just thinking that GPUs might make a good target for a reduction
language like Haskell. They are hugely parallel, and they have the
commercial momentum to keep them current. It also occurred to me that
"Neil Davies" <[EMAIL PROTECTED]> writes:
> Bulat
>
> That was done to death as well in the '80s - data flow architectures
> where the execution was data-availability driven. The issue becomes
> one of getting the most out of the available silicon area. Unfortunately
> with very small amounts of comp
"Claus Reinke" <[EMAIL PROTECTED]> writes:
> > either be slower than mainstream hardware or would be
> > overtaken by it in a very short space of time.
>
> i'd like to underline the latter of these two points, and i'm
> impressed that you came to that conclusion as early as the
> eighties.
Well, S
Bulat
That was done to death as well in the '80s - data flow architectures
where the execution was data-availability driven. The issue becomes
one of getting the most out of the available silicon area. Unfortunately
with very small amounts of computation per work unit you:
a) spend a lot of time/ar
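In software terms the same granularity problem shows up as per-spark
overhead, and the usual remedy is to chunk work into larger units; a
small sketch with Control.Parallel.Strategies (the chunk size here is
made up):

    import Control.Parallel.Strategies (using, parListChunk, rseq)

    -- One spark per element spends more time on bookkeeping than on
    -- the work itself; chunking coarsens the work unit.
    sumSquares :: [Int] -> Int
    sumSquares xs = sum (map (^ 2) xs `using` parListChunk 1000 rseq)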
Hello Jon,
Friday, June 1, 2007, 11:17:07 PM, you wrote:
> (we had the possibility of funding to make something). We
> had lots of ideas, but after much arguing back and forth the
> conclusion we reached was that anything we could do would
> either be slower than mainstream hardware or would be
either be slower than mainstream hardware or would be
overtaken by it in a very short space of time.
i'd like to underline the latter of these two points, and i'm impressed
that you came to that conclusion as early as the eighties. i'm not
into hardware research myself, but while i was working
Andrew Coppin <[EMAIL PROTECTED]> writes:
> OK, so... If you were going to forget everything we humans
> know about digital computer design - the von Neumann
> architecture, the fetch/decode/execute loop, the whole
> shooting match - and design a computer *explicitly* for the
> purpose of executing
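For a concrete picture of the workload such a machine would run, here
is a toy combinator reducer in Haskell (purely illustrative; a real
design would reduce a shared graph in hardware, not a tree in
software):

    -- A tiny SKI term language: the kind of program a machine built
    -- for executing functional languages would reduce.
    data Term = S | K | I | App Term Term
      deriving (Show, Eq)

    -- Normal-order reduction to (weak) head normal form.
    reduce :: Term -> Term
    reduce (App f x) = apply (reduce f) x
      where
        apply I y                  = reduce y
        apply (App K y) _          = reduce y
        apply (App (App S f') g) y = reduce (App (App f' y) (App g y))
        apply h y                  = App h y
    reduce t = t

    -- e.g. reduce (App (App (App S K) K) I)  ==>  I
    --      (S K K behaves as the identity combinator)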