Malcolm Wallace:
(Also... Haskell on the GPU. It's been talked about for years, but
will it ever actually happen?)
A GPU is just a set of SIMD-like instructions, so the reason you will
never see Haskell on the GPU is the same as the reason you will never
see it implemented via SIMD instructions :D
Because SIMD/GPU units deal only with numbers, not pointers, you will
not see much _symbolic_ computation being offloaded to these
arithmetic units. But there are still great opportunities to improve
Haskell's speed at numerics using them. And some symbolic problems
can be encoded using integers.
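As a small illustration of that last point (my own hedged sketch, not code from either project): a symbolic predicate such as "is this character a vowel?" can be encoded as 0/1 integers, turning a search into branch-free arithmetic of exactly the flat, numeric shape that maps well onto SIMD lanes and GPU arithmetic units.

```haskell
module Main where

-- Encode a symbolic predicate as integer arithmetic: each character
-- becomes 0 or 1, and the "search" collapses to a sum.  Branch-free,
-- whole-array code of this shape is what SIMD/GPU hardware is good at.
isVowel :: Char -> Int
isVowel c = fromEnum (c `elem` "aeiou")

countVowels :: String -> Int
countVowels = sum . map isVowel

main :: IO ()
main = print (countVowels "haskell on the gpu")  -- prints 5
```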
There are at least two current (but incomplete) projects in this
area: Sean Lee at UNSW has targeted Data Parallel Haskell for an
Nvidia GPGPU, and Joel Svensson at Chalmers is developing a
Haskell-embedded language for GPU programming called Obsidian.
We have a paper about the UNSW project now. It is rather high-level,
but has some performance figures from preliminary benchmarks:
http://www.cse.unsw.edu.au/~chak/papers/LCGK09.html
BTW, this is currently independent of Data Parallel Haskell. It is a
flat data-parallel array language embedded in Haskell. The language
is restricted in such a way that we can generate GPU code (CUDA to be
precise) from it. In the longer run, we want to turn this into a
backend of Data Parallel Haskell, but that will require quite a bit
more work.
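To give a feel for what such a restricted, embedded array language looks like, here is a minimal shallow embedding of my own devising. The `Acc` type and the combinator names are illustrative assumptions, not the actual API of the UNSW project: the point is only that programs are built from whole-array collective operations (no nested parallelism, no pointer chasing), which is what makes CUDA code generation feasible.

```haskell
module Main where

-- A hypothetical shallow embedding of a *flat* data-parallel array
-- language: arrays are manipulated only through collective operations,
-- each of which could in principle be compiled to a GPU kernel.
newtype Acc a = Acc [a]

-- Embed a host array into the embedded language.
use :: [a] -> Acc a
use = Acc

-- Element-wise combination of two embedded arrays.
zipWithA :: (a -> b -> c) -> Acc a -> Acc b -> Acc c
zipWithA f (Acc xs) (Acc ys) = Acc (zipWith f xs ys)

-- Reduce an embedded array to a scalar.
foldA :: (a -> a -> a) -> a -> Acc a -> a
foldA f z (Acc xs) = foldr f z xs

-- Dot product, expressed purely with collective operations: the kind
-- of program a restricted embedded language can turn into a kernel.
dotp :: Num a => [a] -> [a] -> a
dotp xs ys = foldA (+) 0 (zipWithA (*) (use xs) (use ys))

main :: IO ()
main = print (dotp [1,2,3] [4,5,6] :: Double)  -- prints 32.0
```

In a real implementation, `Acc` would of course be a deep embedding (an AST) so that the combinator tree can be inspected and compiled to CUDA rather than run on the host.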
Manuel
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe