Hi Ram, Thanks very much it worked.
Puneet
From: ram kumar [mailto:ramkumarro...@gmail.com]
Sent: Thursday, July 07, 2016 6:51 PM
To: Puneet Tripathi
Cc: user@spark.apache.org
Subject: Re: Spark with HBase Error - Py4JJavaError
Hi Puneet,
Have you tried appending
--jars $SPARK_HOME/lib/spark-examples-*.jar
to the execution command?
Ram
On Thu, Jul 7, 2016 at 5:19 PM, Puneet Tripathi <puneet.tripa...@dunnhumby.com> wrote:
> Guys, Please can anyone help on the issue below?
>
> Puneet
Guys, Please can anyone help on the issue below?
Puneet
From: Puneet Tripathi [mailto:puneet.tripa...@dunnhumby.com]
Sent: Thursday, July 07, 2016 12:42 PM
To: user@spark.apache.org
Subject: Spark with HBase Error - Py4JJavaError
Hi,
We are running HBase in fully distributed mode. I tried to co
In case you are still looking for help, there have been multiple discussions
on this mailing list that you can search for. Or you can simply use
https://github.com/unicredit/hbase-rdd :-)
Thanks,
Aniket
On Wed Dec 03 2014 at 16:11:47 Ted Yu wrote:
> Which HBase release are you running?
Which HBase release are you running?
If it is 0.98, take a look at:
https://issues.apache.org/jira/browse/SPARK-1297
Thanks
On Dec 2, 2014, at 10:21 PM, Jai wrote:
> I am trying to use Apache Spark with a pseudo-distributed Hadoop HBase
> cluster and I am looking for some links regarding the
You could go through these to start with
http://www.vidyasource.com/blog/Programming/Scala/Java/Data/Hadoop/Analytics/2014/01/25/lighting-a-spark-with-hbase
http://stackoverflow.com/questions/25189527/how-to-process-a-range-of-hbase-rows-using-spark
Thanks
Best Regards
On Wed, Dec 3, 2014 at 11
These two posts should be good for setting up a Spark + HBase environment and
using the results of an HBase table scan as an RDD.
settings:
http://www.abcn.net/2014/07/lighting-spark-with-hbase-full-edition.html
some samples:
http://www.abcn.net/2014/07/spark-hbase-result-keyvalue-bytearray.html
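To make the scan-as-RDD step concrete, here is a sketch in PySpark (Spark 1.x). The input-format and converter class names come from HBase and the `spark-examples` jar shipped with Spark 1.x; the helper function names, quorum address, and table name are placeholders for illustration:

```python
def hbase_scan_conf(zk_quorum, table):
    """Hadoop configuration for HBase's TableInputFormat.
    (Helper name is made up; the two keys are the standard ones.)"""
    return {
        "hbase.zookeeper.quorum": zk_quorum,
        "hbase.mapreduce.inputtable": table,
    }

def scan_table_as_rdd(sc, zk_quorum, table):
    """Read an HBase table as an RDD of (row key, result string) pairs.
    Needs the converter classes from spark-examples-*.jar on the
    classpath, e.g. via --jars."""
    return sc.newAPIHadoopRDD(
        "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
        "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "org.apache.hadoop.hbase.client.Result",
        keyConverter="org.apache.spark.examples.pythonconverters."
                     "ImmutableBytesWritableToStringConverter",
        valueConverter="org.apache.spark.examples.pythonconverters."
                       "HBaseResultToStringConverter",
        conf=hbase_scan_conf(zk_quorum, table),
    )
```

Running it of course requires a live HBase cluster and a SparkContext; the conf-building part is plain Python.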
You can download and compile Spark against your existing Hadoop version.
Here's a quick start:
https://spark.apache.org/docs/latest/cluster-overview.html#cluster-manager-types
You can also read a bit here
http://docs.sigmoidanalytics.com/index.php/Installing_Spark_andSetting_Up_Your_Cluster
Hi,
Any update on the solution? We are still facing this issue...
We were able to connect to HBase with standalone code, but are getting an issue
with the Spark integration.
Thx,
Ravi
From: nvn_r...@hotmail.com
To: u...@spark.incubator.apache.org; user@spark.apache.org
Subject: RE: Spark with HBase
+user@spark.apache.org
From: nvn_r...@hotmail.com
To: u...@spark.incubator.apache.org
Subject: Spark with HBase
Date: Sun, 29 Jun 2014 15:28:43 +0530
I am using the following versions:
spark-1.0.0-bin-hadoop2
hbase-0.96.1.1-hadoop2
When executing HBaseTest, I am facing the foll