If you are running pure arithmetic code over primitive arrays, and not
doing allocation (so no seqs, higher-order functions, etc.), then it's
pretty straightforward to translate Clojure code to GPU code. Notice,
though, that all of those qualifications run directly counter to "normal"
Clojure practice, so yes, it's often easier said than done.
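To make that concrete, here is a rough sketch (the names are my own, and the
kernel is plain OpenCL C rather than any particular wrapper's API): a tight
loop/recur over primitive float arrays, and the same arithmetic written as
the C string you would hand to the GPU.

;; Pure arithmetic over primitive arrays -- no seqs, no higher-order
;; functions, no allocation in the inner loop.  This is the kind of
;; Clojure that maps almost one-to-one onto a GPU kernel.
(defn saxpy-cpu ^floats [^double a ^floats xs ^floats ys]
  (let [a   (float a)
        n   (alength xs)
        out (float-array n)]
    (loop [i 0]
      (if (< i n)
        (do (aset out i (float (+ (* a (aget xs i)) (aget ys i))))
            (recur (unchecked-inc i)))
        out))))

;; The same computation as an OpenCL kernel string: the loop disappears
;; and each work-item handles one index.
(def saxpy-kernel
  "__kernel void saxpy(const float a,
                       __global const float *xs,
                       __global const float *ys,
                       __global float *out) {
     int i = get_global_id(0);
     out[i] = a * xs[i] + ys[i];
   }")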

That being said, I've written some CUDA code in the past and haven't found
it all that bad if you have a good wrapper. It's kind of a catch-22,
though: the faster the code, the less like Clojure it will look, and if
you want easier porting, you'll pay a performance price. In GPU
programming, a simple re-alignment of memory loads can sometimes mean a
10x performance boost, so that performance "price" can end up wiping out
most of the point of programming a GPU in the first place.
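To illustrate what I mean by re-aligning memory loads, here is a toy sketch
(plain OpenCL C; the kernel names are made up, and the actual speedup
depends entirely on the hardware): two kernels summing over the same
row-major n x n matrix, where the only difference is whether neighbouring
work-items read neighbouring addresses.

;; Coalesced: at each step of the loop, adjacent work-items read
;; adjacent floats, so the loads combine into a few wide transactions.
(def column-sums-kernel
  "__kernel void col_sums(__global const float *m,
                          __global float *sums,
                          const int n) {
     int col = get_global_id(0);
     float acc = 0.0f;
     for (int row = 0; row < n; row++)
       acc += m[row * n + col];
     sums[col] = acc;
   }")

;; Uncoalesced: the same arithmetic, but adjacent work-items are now n
;; floats apart at every step, so each load becomes its own transaction.
(def row-sums-kernel
  "__kernel void row_sums(__global const float *m,
                          __global float *sums,
                          const int n) {
     int row = get_global_id(0);
     float acc = 0.0f;
     for (int col = 0; col < n; col++)
       acc += m[row * n + col];
     sums[row] = acc;
   }")

Same arithmetic, same result, wildly different memory traffic.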


Timothy


On Wed, Nov 21, 2012 at 9:07 AM, Jim - FooBar(); <jimpil1...@gmail.com> wrote:

>  Hi all,
>
> I just came back from a seminar at Manchester University where the speaker
> from ARM spent an hour talking about GPUs, OpenCL, etc. The thing that
> stuck in my head is that ARM is apparently trying to create a language
> (PENCIL) which at the moment is a C variant, but they are pushing for
> something more like Scala or OCaml; basically it is a 'Platform-Neutral
> Compute Intermediate Language' for compute accelerators like GPUs.
>
> The entire talk brought back to my mind some thoughts I was having a
> couple of months ago: isn't it possible to do a parallel 'fold' using GPUs?
> We already have reducers fork-join ready (Rich took care of that :)), so
> why not deploy the fork-join tree on GPUs?
>
>
> I just had a look at Zach Tellman's "calx" [1] and it seems really nice...
> Of course it is just a wrapper, so presumably it doesn't add anything new
> (you still have to write your C strings), but it hides away some horrible
> stuff in a Clojure-y way. Now, assuming that I were a decent C programmer
> (which I'm not!), could I take some of the loop/recurs in my code, or even
> a reducer, and run them over GPUs?
>
> In theory I would expect a 'yes', whereas in practice I'd expect a
> 'no'... am I right? Perhaps Zach can enlighten me? He seems to have studied
> this area...
>
> any thoughts?
>
>
>
> [1] https://github.com/ztellman/calx
>




-- 
“One of the main causes of the fall of the Roman Empire was that–lacking
zero–they had no way to indicate successful termination of their C
programs.”
(Robert Firth)

