Folks,

I am starting to explore the Spark framework and hope to contribute to it in 
the future. I was wondering if there is any documentation, or if you have any 
tips, for quickly understanding the inner workings of the code.

I am new to both Spark and Scala, and am starting by reading the *RDD*.scala 
files in the source tree.

My ultimate goal is to offload some of the computation done on a partition to 
the GPU cores available on the node. Has there been any prior attempt at, or 
design discussion about, this?
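For context, my rough mental model is purely a sketch: `mapPartitions` hands each executor a whole partition at once, which seems like the natural granularity for batching data onto a GPU. The `gpuTransform` function below is a made-up placeholder standing in for a real JNI/CUDA binding, not an existing Spark API:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object GpuOffloadSketch {
  // Hypothetical native GPU kernel invocation (e.g. via JNI or JCuda);
  // here a CPU stand-in so the sketch compiles.
  def gpuTransform(batch: Array[Double]): Array[Double] =
    batch.map(_ * 2.0)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("gpu-sketch"))
    val data = sc.parallelize(1 to 1000000).map(_.toDouble)

    // mapPartitions exposes the whole partition as one Iterator,
    // so the data can be materialized, copied to the device in one
    // transfer, processed, and copied back.
    val result = data.mapPartitions { iter =>
      val batch = iter.toArray        // materialize partition for transfer
      gpuTransform(batch).iterator    // run kernel, stream results back
    }

    println(result.sum())
    sc.stop()
  }
}
```

Is something along those lines reasonable, or is there a better hook point in the scheduler/executor layers?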

Bests,
-Monir

