I have a Spark program with a custom RDD optimised for HBase scans and
updates, backed by a small library of Scala objects that support efficient
serialisation, partitioning, etc. I would like to use R as an analysis and
visualisation front-end.

I first tried rJava (i.e. not using sparkR) and got as far as initialising
the Spark context, but then ran into problems with the HBase dependencies
(HBaseConfiguration : Unsupported major.minor version 51.0, which indicates
the HBase jars are compiled for Java 7 while rJava is loading an older JVM).
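For reference, this is roughly the shape of the rJava attempt: a minimal
sketch assuming rJava's .jinit/.jnew/.jcall API, with hypothetical
placeholder paths and app name.

  library(rJava)

  # Start a JVM with Spark and the custom HBase library on the classpath.
  # rJava uses whatever JVM R is linked against, which is where the
  # "major.minor version 51.0" error surfaces if that JVM is pre-Java 7.
  .jinit(classpath = c("/path/to/spark-assembly.jar",    # hypothetical paths
                       "/path/to/custom-hbase-lib.jar"))

  conf <- .jnew("org/apache/spark/SparkConf")
  conf <- .jcall(conf, "Lorg/apache/spark/SparkConf;", "setMaster", "local[*]")
  conf <- .jcall(conf, "Lorg/apache/spark/SparkConf;", "setAppName", "hbase-analysis")
  sc   <- .jnew("org/apache/spark/api/java/JavaSparkContext", conf)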
I have since tried sparkR, but I can't figure out how to make my custom
Scala classes available to it other than re-implementing them in R. Is
there a way to include and invoke additional Scala objects and RDDs within
a sparkR shell or job? I am after something similar to the additional jars
and init script of a normal spark-submit/spark-shell; a sketch of what I am
hoping for follows below.
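To make the question concrete, this is the kind of thing I am hoping
exists: a minimal sketch assuming SparkR 1.x, where sparkR.init() accepts a
sparkJars argument and the internal JVM bridge (SparkR:::callJStatic) can
reach classes on the driver classpath. The com.example.HBaseScans object,
its scanTable method, and the jar path are hypothetical stand-ins for my
library.

  library(SparkR)

  # Ship the custom jar with the application; sparkJars plays the role of
  # spark-submit's --jars when the context is created from R.
  sc <- sparkR.init(master = "local[*]",
                    appName = "hbase-analysis",
                    sparkJars = "/path/to/custom-hbase-lib.jar")  # hypothetical

  # Invoke a static method on a custom Scala object through SparkR's
  # (internal, undocumented) JVM bridge. Both names are hypothetical.
  rddRef <- SparkR:::callJStatic("com.example.HBaseScans", "scanTable",
                                 sc, "my_table")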

-- 
Michal Haris
Technical Architect
direct line: +44 (0) 207 749 0229
www.visualdna.com | t: +44 (0) 207 734 7033
31 Old Nichol Street
London
E2 7HR
