On Thursday 23 April 2015 12:22 PM, Akhil Das wrote:
Here's a complete Scala example:
https://github.com/bbux-proteus/spark-accumulo-examples/blob/1dace96a115f29c44325903195c8135edf828c86/src/main/scala/org/bbux/spark/AccumuloMetadataCount.scala
Thanks
Best Regards
On Thu, Apr 23, 2015 at 12:19 PM, Akhil Das
wrote:
Change your import from mapred to mapreduce, like:
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
Thanks
Best Regards
On Wed, Apr 22, 2015 at 2:42 PM, madhvi wrote:
> Hi,
>
> I am creating a spark RDD through accumulo writing like:
>
> JavaPairRDD accumuloRDD =
> sc.new
Hi, SparkContext.newAPIHadoopRDD() is for working with the new Hadoop mapreduce API.
So you should import
org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
instead of org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
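To make the distinction concrete, here is a minimal sketch of wiring the two together, assuming Accumulo 1.6-era and Spark 1.3-era APIs; the instance name, ZooKeeper host, credentials, and table name below are placeholders, not values from this thread:

```java
import org.apache.accumulo.core.client.ClientConfiguration;
// Note: the mapreduce package (new Hadoop API), not mapred (old API)
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.mapreduce.Job;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class AccumuloRDDExample {
  public static void main(String[] args) throws Exception {
    JavaSparkContext sc =
        new JavaSparkContext(new SparkConf().setAppName("accumulo-rdd"));

    // A Hadoop Job object carries the InputFormat configuration
    // (connection details below are placeholders).
    Job job = Job.getInstance();
    AccumuloInputFormat.setConnectorInfo(job, "root", new PasswordToken("secret"));
    AccumuloInputFormat.setZooKeeperInstance(job,
        new ClientConfiguration()
            .withInstance("myInstance")
            .withZkHosts("localhost:2181"));
    AccumuloInputFormat.setInputTableName(job, "myTable");
    AccumuloInputFormat.setScanAuthorizations(job, new Authorizations());

    // newAPIHadoopRDD pairs with the new-API (mapreduce) InputFormat;
    // the old-API hadoopRDD would require the mapred-package class instead.
    JavaPairRDD<Key, Value> accumuloRDD = sc.newAPIHadoopRDD(
        job.getConfiguration(), AccumuloInputFormat.class, Key.class, Value.class);

    System.out.println("count = " + accumuloRDD.count());
    sc.stop();
  }
}
```

Running this requires a live Accumulo instance plus the Spark and Accumulo client jars on the classpath, so treat it as a shape to follow rather than a drop-in program.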