Hi Team,
We are trying to read an HBase table from Spark using the hbase-spark connector, but our job fails during the filter pushdown in stage 0 with the error below. Kindly help us resolve this issue.
Caused by: java.lang.NoClassDefFoundError: scala/collection/immutable/StringOps
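StringOps lives in scala-library, so this error usually points at a connector jar built against a different Scala version than the Spark runtime, or at scala-library missing from the executor classpath. For reference, a minimal read-with-pushdown sketch, assuming Spark 2.x and the SHC-style data source that appears later in this thread (the table, namespace, and column names are hypothetical; on Spark 1.x, substitute sqlContext.read for spark.read):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

object PushdownSketch {
  // Hypothetical catalog: namespace, table, and columns are placeholders.
  val catalog =
    """{
      |  "table": {"namespace": "default", "name": "events"},
      |  "rowkey": "key",
      |  "columns": {
      |    "key": {"cf": "rowkey", "col": "key", "type": "string"},
      |    "ts":  {"cf": "d",      "col": "ts",  "type": "long"}
      |  }
      |}""".stripMargin

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("hbase-pushdown").getOrCreate()
    val df = spark.read
      .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
      .format("org.apache.spark.sql.execution.datasources.hbase")
      .load()
    // The connector tries to push this predicate down to HBase; a
    // NoClassDefFoundError surfacing here is typically a Scala-version
    // mismatch between the connector jar and the Spark runtime.
    df.filter(df("ts") > 0L).show()
  }
}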
Hi everybody,
I'm totally new to Spark, and there is one thing I haven't managed to find out. I have a full Ambari install with HBase, Hadoop, and Spark. My code reads and writes to HDFS via HBase, so, as I understand it, all stored data is in bytes format in HDFS. Now, I know that it's possible
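Since HBase stores everything as raw byte arrays, conversions in client code typically go through org.apache.hadoop.hbase.util.Bytes. A minimal spark-shell sketch (the values are hypothetical):

import org.apache.hadoop.hbase.util.Bytes

// HBase cells are untyped byte arrays; Bytes converts in both directions.
val rawTs: Array[Byte] = Bytes.toBytes(1485900000L)  // encode a Long
val ts: Long = Bytes.toLong(rawTs)                   // decode it back
val rawName: Array[Byte] = Bytes.toBytes("alice")    // encode a String
val name: String = Bytes.toString(rawName)           // decode it back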
> }
>
> val df = withCatalog(cat)
> df.show
>
> It gives me this error.
>
> java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
>     at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:232)
>     at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation.<init>(HBaseRelation.scala:77)
>     at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:51)
>     at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>     at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
> If you can help, I would be grateful.
>
> Cheers,
> Ben
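For context, the withCatalog helper truncated in the quote above is, in the SHC examples, defined roughly like this (a sketch; cat is the catalog JSON describing the table mapping):

import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

// Builds a DataFrame over the HBase table described by the catalog JSON.
def withCatalog(cat: String, sqlContext: SQLContext): DataFrame = {
  sqlContext.read
    .options(Map(HBaseTableCatalog.tableCatalog -> cat))
    .format("org.apache.spark.sql.execution.datasources.hbase")
    .load()
}

Note that scala.runtime.ObjectRef.create was added in Scala 2.11, so this NoSuchMethodError typically means a connector built for Scala 2.11 is running against Scala 2.10, or vice versa.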
Elek,
If I cannot use the HBase Spark module, then I’ll give it a try.
Thanks,
Ben
> On Jan 31, 2017, at 1:02 PM, Marton, Elek <mailto:h...@anzix.net> wrote:
>
> I tested this one with hbase 1.2.4:
>
> https://github.com/hortonworks-spark/shc
>
> Marton
>
> On 01/31/2017 09:17 PM, Benjamin Kim wrote:
I tested this one with hbase 1.2.4:
https://github.com/hortonworks-spark/shc
Marton
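For anyone trying Marton's suggestion from a build, an sbt stanza for shc might look like the following; the repository URL and version string here are from memory and should be checked against the shc README for your Spark/Scala combination:

// build.sbt (sketch): shc-core from the Hortonworks public repo.
resolvers += "hortonworks" at "http://repo.hortonworks.com/content/groups/public/"

libraryDependencies += "com.hortonworks" % "shc-core" % "1.1.1-2.1-s_2.11" // hypothetical version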
On 01/31/2017 09:17 PM, Benjamin Kim wrote:
Does anyone know how to backport the HBase Spark module to HBase 1.2.0? I tried
to build it from source, but I cannot get it to work.
Thanks,
Ben
Hi Ben,
This seems more like a question for community.cloudera.com. However, I believe it belongs under hbase rather than spark:
https://repository.cloudera.com/artifactory/webapp/#/artifacts/browse/tree/General/cloudera-release-repo/org/apache/hbase/hbase-spark
David Newberger
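To pull that artifact into a build, a minimal sbt stanza against the Cloudera repo might look like this (the version string is hypothetical; pick the one matching your CDH release from the repository listing above):

// build.sbt (sketch): hbase-spark from Cloudera's release repository.
resolvers += "cloudera" at "https://repository.cloudera.com/artifactory/cloudera-repos/"

libraryDependencies += "org.apache.hbase" % "hbase-spark" % "1.2.0-cdh5.7.0" // hypothetical version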
-----Original Message-----
I would like to know if anyone has tried using the hbase-spark module. I tried to follow the examples in conjunction with CDH 5.8.0, but I cannot find the HBaseTableCatalog class in the module or in any of the Spark jars. Can someone help?
Thanks,
Ben
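One quick diagnostic (my suggestion, not from the thread) is to probe from spark-shell for the class name that appears in the stack traces above:

// Throws ClassNotFoundException if no jar on the classpath provides it.
Class.forName("org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog")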
> [2] https://github.com/apache/spark/blob/master/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala
>
> From: John Trengrove [mailto:john.trengr...@servian.com.au]
> Sent: 19 May 2016 08:09
> To: philipp.meyerhoe...@thomsonreuters.com
…credentials” and the .count() on my HBase RDD works fine.

From: Ellis, Tom (Financial Markets IT) [mailto:tom.el...@lloydsbanking.com]
Sent: 19 May 2016 09:51
To: 'John Trengrove'; Meyerhoefer, Philipp (TR Technology & Ops)
Cc: user
Subject: RE: HBase / Spark Kerberos problem
Yeah…

From: John Trengrove [mailto:john.trengr...@servian.com.au]
Sent: 19 May 2016 08:09
To: philipp.meyerhoe...@thomsonreuters.com
Cc: user
Subject: Re: HBase / Spark Kerberos problem
Have you had a look at this issue?
https://issues.apache.org/jira/browse/SPARK-12279
There is a comment by Y Bodnar on how they successfully got Kerberos and
HBase working.
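Independently of that ticket, a common workaround when executors lack HBase credentials is to log in from a keytab explicitly before touching HBase. A rough sketch, with a hypothetical principal and keytab path (this is an assumption on my part, not the fix Y Bodnar describes):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation

// Log in from the keytab before creating any HBase connection.
val conf = new Configuration()
conf.set("hadoop.security.authentication", "kerberos")
UserGroupInformation.setConfiguration(conf)
UserGroupInformation.loginUserFromKeytab(
  "spark-user@EXAMPLE.COM",                   // hypothetical principal
  "/etc/security/keytabs/spark-user.keytab")  // hypothetical keytab path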
2016-05-18 18:13 GMT+10:00 :
> Hi all,
>
> I have been puzzling over a Kerberos problem for a while now and wondered
> if anyone can help.
Hi all,
I have been puzzling over a Kerberos problem for a while now and wondered if
anyone can help.
For spark-submit, I specify --keytab x --principal y, which creates my
SparkContext fine.
Connections to the ZooKeeper quorum to find the HBase master work well too.
But when it comes to a .count()
I see that the new CDH 5.7 has been released with the HBase Spark module
built-in. I was wondering if I could just download it and use the hbase-spark
jar file for CDH 5.5. Has anyone tried this yet?
Thanks,
Ben
rdd.map(record => (new ImmutableBytesWritable, {
    var maprecord = new HashMap[String, String]()
    val mapper = new ObjectMapper()
    // … parsing of `record` into maprecord is truncated in the archive …
    var ts: Long = maprecord.get("ts").toLong
    var tweetID: Long = maprecord.get("id").toLong
    val key = ts + "_" + tweetID
    val put = new Put(Bytes.toBytes(key))
    maprecord.foreach(kv => {
      // println(kv._1 + " - " + kv._2)
      put.add(Bytes.toBytes(colfamily.value), Bytes.toBytes(kv._1), Bytes.toBytes(kv._2))
    })
    put
  })).saveAsNewAPIHadoopDataset(hconf)
})

Please help me solve this, as it is urgent for me.
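For completeness, the hconf passed to saveAsNewAPIHadoopDataset is usually built from a Job configured with TableOutputFormat. A sketch under that assumption (the table name is hypothetical):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.mapreduce.Job

// Build the Hadoop configuration that TableOutputFormat needs.
val hbaseConf = HBaseConfiguration.create()
hbaseConf.set(TableOutputFormat.OUTPUT_TABLE, "tweets") // hypothetical table
val job = Job.getInstance(hbaseConf)
job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])
job.setOutputKeyClass(classOf[ImmutableBytesWritable])
job.setOutputValueClass(classOf[Put])
val hconf = job.getConfiguration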
Looks like you have an incompatible hbase-default.xml somewhere on the classpath. You can use the following code to find the location of "hbase-default.xml":
println(Thread.currentThread().getContextClassLoader().getResource("hbase-default.xml"))
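As an additional check (my suggestion, not from the original reply), you can print the HBase version reported by the jar that ships that hbase-default.xml and compare it with the cluster's version:

import org.apache.hadoop.hbase.util.VersionInfo

// If this differs from the cluster's HBase version, that jar is the culprit.
println(VersionInfo.getVersion())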
Best Regards,
Shixiong Zhu
2015-09-21 15:46 GMT+08:00 Siva :
Hi,
I am seeing a strange error while inserting data from Spark Streaming into HBase.
I can write data from Spark (without streaming) to HBase successfully, but when I use the same code to write a DStream, I see the error below.
I tried setting the parameters below, but it still didn't help.