I am not able to use the INSERT, UPDATE, and DELETE commands with HiveContext.
I am using Spark 1.6.1 and Hive 1.1.0.
Please find the error below.
scala> hc.sql("delete from trans_detail where counter=1");
16/04/12 14:58:45 INFO ParseDriver: Parsing command: delete from
trans_detail wher
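For context: Spark 1.6's SQL parser does not support Hive's DELETE and UPDATE statements (Hive ACID DML is not implemented in Spark SQL at that version), which is why the command above fails at parse time. A common workaround is to rewrite the table with only the rows you want to keep; the sketch below assumes `hc` is the HiveContext from the question, and the staging table name is made up for illustration.

```scala
// Delete-by-rewrite workaround for Spark 1.6: select the rows to keep and
// write them to a hypothetical staging table. After verifying the result you
// would swap it for the original (e.g. with ALTER TABLE ... RENAME in Hive).
val kept = hc.sql("SELECT * FROM trans_detail WHERE counter <> 1")
kept.write.saveAsTable("trans_detail_kept")
```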
This sqlContext is an instance of HiveContext; don't be confused by the
name.
> On Feb 16, 2016, at 12:51, Prabhu Joseph wrote:
>
> Hi All,
>
> On creating HiveContext in spark-shell, fails with
>
> Caused by: ERROR XSDB6: Another instance of Derby may have already booted
> the database.
>
> scala> sqlContext.isInstanceOf[org.apache.spark.sql.hive.HiveContext]
> res0: Boolean = true
Type :help for more information.
scala> sqlContext.isInstanceOf[org.apache.spark.sql.hive.HiveContext]
res0: Boolean = true
On Mon, Feb 15, 2016 at 8:51 PM, Prabhu Joseph
wrote:
> Hi All,
>
> On creating HiveContext in spark-shell, fails with
>
> Caused by: ERROR XSDB6: Another instance of Derby may have already booted
> the database /SPARK/metastore_db.
Hi All,
On creating HiveContext in spark-shell, fails with
Caused by: ERROR XSDB6: Another instance of Derby may have already booted
the database /SPARK/metastore_db.
spark-shell has already created metastore_db for its sqlContext.
Spark context available as sc.
SQL context available as sqlContext.
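In other words, when spark-shell is built with Hive support, the sqlContext it creates is already a HiveContext and holds the embedded Derby metastore lock, so constructing a second HiveContext in the same shell triggers XSDB6. A minimal sketch of reusing the existing one instead:

```scala
// In spark-shell (Spark built with -Phive), reuse the existing context rather
// than calling `new HiveContext(sc)`, which would try to boot Derby a second
// time and fail with ERROR XSDB6.
import org.apache.spark.sql.hive.HiveContext
val hc = sqlContext.asInstanceOf[HiveContext]
hc.sql("SHOW TABLES").show()
```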
On 25 January 2016 at 21:09, Deenar Toraskar <
deenar.toras...@thinkreactive.co.uk> wrote:
> No I hadn't. This is useful, but in some cases we do want to share the
> same temporary tables between jobs, so we really wanted a getOrCreate
> equivalent on HiveContext.
>
> Deenar
>
>
>
> On 25 January 2016
Have you noticed the following method of HiveContext?
* Returns a new HiveContext as new session, which will have separated SQLConf,
* UDF/UDAF, temporary tables and SessionState, but sharing the same
* CacheManager, IsolatedClientLoader and Hive client (both of execution and
* metadata) with existing session.
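Based on that doc comment, something like the following should give each job its own isolated session without re-opening the metastore (a sketch, assuming `hc` is an existing HiveContext):

```scala
// newSession() shares the Hive client and CacheManager with `hc`, so it does
// not take a second Derby lock, but it gets its own SQLConf, UDF/UDAFs and
// temporary tables.
val session = hc.newSession()
session.sql("SET spark.sql.shuffle.partitions=8")
session.sql("SELECT 1 AS one").registerTempTable("t") // visible only here
```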
Hi
I am using a shared sparkContext for all of my Spark jobs. Some of the jobs
use HiveContext, but there isn't a getOrCreate method on HiveContext which
will allow reuse of an existing HiveContext. Such a method exists on
SQLContext only (def getOrCreate(sparkContext: SparkContext): SQLContext).
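Since HiveContext has no getOrCreate, one way to approximate it is a small holder object in application code. This is illustrative only and not part of Spark's API:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

// Hypothetical helper: lazily creates one HiveContext per JVM and reuses it,
// mirroring what SQLContext.getOrCreate does for a plain SQLContext.
object HiveContextHolder {
  @volatile private var instance: HiveContext = _
  def getOrCreate(sc: SparkContext): HiveContext = {
    if (instance == null) synchronized {
      if (instance == null) instance = new HiveContext(sc)
    }
    instance
  }
}
```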
I built Spark 1.1.0 on a fresh machine and this issue is gone! Thank you all
for your help.
Thanks & Regards
Tridib
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Unable-to-use-HiveContext-in-spark-shell-tp18261p18324.html
Sent from the Apache Spark User List mailing list archive at Nabble.com
Yes. I have org.apache.hadoop.hive package in spark assembly.
Subject: RE: Unable to use HiveContext in spark-shell
I am using spark 1.1.0.
I built it using:
./make-distribution.sh -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive
-DskipTests
My ultimate goal is to execute a query on parquet file with nested structure
and cast a date string to Date. This i
error: bad symbolic reference. A signature in HiveContext.class refers to term
hive in package org.apache.hadoop which is not available.
It may be completely missing from the current classpath, or the version on
the classpath might be incompatible with the version used when compiling
HiveContext.class.
That entry seems to have slain the compiler. Shall I replay
your session? I can re-run each line except the last one.
[y/n]
What version of Spark are you using? Did you compile your Spark version
and if so, what compile options did you use?
On 11/6/14, 9:22 AM, "tridib" wrote:
>Help please!
On Thu, Nov 6, 2014 at 9:22 AM, tridib wrote:
> Help please!
Help please!
slain the compiler. Shall I replay
your session? I can re-run each line except the last one.
[y/n]
Thanks
Tridib
Hi Cheng,
I replaced Guava 15.0 with Guava 14.0.1 on my Spark classpath and the problem
was solved, so your method is correct. This proved the issue was caused by the
AWS EMR (ami-version 3.1.0) libs, which include Guava 15.0.
Many thanks and see you in the first Spark User Beijing Meetup tomorrow.
Hey Zhun,
Thanks for the detailed problem description. Please see my comments inlined
below.
On Thu, Aug 7, 2014 at 6:18 PM, Zhun Shen wrote:
Caused by: java.lang.IllegalAccessError: tried to access method
> com.google.common.collect.MapMaker.makeComputingMap(Lcom/google/common/base/Function;)L
Hi,
When I try to use HiveContext in Spark shell on AWS, I got the error
"java.lang.IllegalAccessError: tried to access method
com.google.common.collect.MapMaker.makeComputingMap(Lcom/google/common/base/Function;)Ljava/util/concurrent/ConcurrentMap".
I follow the steps below to c
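For anyone debugging a similar conflict: `MapMaker.makeComputingMap` was made inaccessible in later Guava releases, so Spark code compiled against Guava 14 fails with IllegalAccessError when a newer Guava wins on the classpath. A quick, illustrative way to see which jar a class was actually loaded from:

```scala
// Prints the jar that the currently loaded class came from, which tells you
// whether Guava 14 or the EMR-provided Guava 15 won on the classpath.
def jarOf(className: String): String =
  Class.forName(className).getProtectionDomain.getCodeSource.getLocation.toString

println(jarOf("com.google.common.collect.MapMaker"))
```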