Hi Sandeep,
Any inputs on this?
Regards
Surya
From: Garlapati, Suryanarayana (Nokia - IN/Bangalore)
Sent: Saturday, July 21, 2018 6:50 PM
To: Sandeep Katta
Cc: dev@spark.apache.org; u...@spark.apache.org
Subject: RE: Query on Spark Hive with kerberos Enabled on Kubernetes
Hi Sandeep,
Thx for the inputs on the hive metastore. Please let me know if I am doing something wrong.
Regards
Surya
From: Sandeep Katta [mailto:sandeep0102.opensou...@gmail.com]
Sent: Friday, July 20, 2018 9:59 PM
To: Garlapati, Suryanarayana (Nokia - IN/Bangalore)
Cc: dev@spark.apache.org; u...@spark.apache.org
Subject: Re: Query on Spark Hive with kerberos Enabled on Kubernetes
Can you please tell us what exception you've got, and share any logs for the same?
On Fri, 20 Jul 2018 at 8:36 PM, Garlapati, Suryanarayana (Nokia -
IN/Bangalore) wrote:
Hi All,
I am trying to use the Spark 2.2.0 Kubernetes fork
(https://github.com/apache-spark-on-k8s/spark/tree/v2.2.0-kubernetes-0.5.0)
to run Hive queries on a Kerberos-enabled cluster. Spark-submits fail for
the Hive queries, but pass when I am trying to access HDFS. Is this a
known limitation?
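For reference, a spark-submit invocation against that fork might look roughly like the sketch below. This is a config fragment only: the spark.kubernetes.kerberos.* property names are my assumption from that fork's Kerberos support and should be checked against its documentation, and the API server address, images, namespace, job class, and keytab path are all placeholders.

```shell
# Hypothetical submission against the apache-spark-on-k8s v2.2.0 fork.
# spark.kubernetes.kerberos.* names are assumptions; verify against the
# fork's docs. Images, paths, and the job class are placeholders.
bin/spark-submit \
  --deploy-mode cluster \
  --master k8s://https://<k8s-apiserver>:6443 \
  --class com.example.HiveQueryJob \
  --conf spark.kubernetes.namespace=spark \
  --conf spark.kubernetes.driver.docker.image=<registry>/spark-driver:v2.2.0-kubernetes-0.5.0 \
  --conf spark.kubernetes.executor.docker.image=<registry>/spark-executor:v2.2.0-kubernetes-0.5.0 \
  --conf spark.kubernetes.kerberos.enabled=true \
  --conf spark.kubernetes.kerberos.keytab=/etc/security/keytabs/spark.keytab \
  --conf spark.kubernetes.kerberos.principal=spark@EXAMPLE.COM \
  local:///opt/spark/examples/hive-query-job.jar
```

For Hive queries, the metastore URI and hive-site.xml also have to be visible inside the driver pod, which is one place the Kerberos handshake can diverge from plain HDFS access.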
Hi,
I noticed that spark standalone (locally, for development) no longer
supports the integrated hive metastore, as some driver classes for derby seem
to be missing from 2.2.1 onwards (2.3.0). It works just fine on 2.2.0
or previous versions to execute the following script:
spark.sql("CREA
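If the missing piece really is the Derby driver, one workaround sketch is to configure the embedded metastore database explicitly and make sure a Derby jar is on the driver classpath. The property names below are the standard Hive metastore settings for an embedded Derby database; whether this restores the 2.2.0 behaviour on 2.2.1+/2.3.0 is an assumption.

```xml
<!-- hive-site.xml on the Spark classpath (e.g. $SPARK_HOME/conf). -->
<!-- Standard Hive metastore properties for an embedded Derby database; -->
<!-- databaseName is a local path of your choosing. -->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  </property>
</configuration>
```

A derby-*.jar must also be present under jars/ (or supplied via --jars); if it is genuinely absent from your 2.2.1+ distribution, adding it back is the other half of the workaround.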
On 2 Dec 2016, at 19:09, Reynold Xin <r...@databricks.com> wrote:
The ThriftHttpCLIService.java code is actually in Spark. That pull request is
basically a no-op. Overall we are moving away from the Hive dependency by
implementing almost everything in Spark, so the need to change that repo is
getting less and less.
On Fri, Dec 2, 2016 at 10:03 AM, Marcelo Vanzin wrote:
I believe the latest one is actually in Josh's repository. Which kinda
raises a more interesting question:
Should we create a repository managed by the Spark project, using the
Apache infrastructure, to handle that fork? It seems not very optimal
to have this live in some random person's github account.
What's the process for PR review for the Hive JAR?
I ask as I've had a PR for fixing a kerberos problem outstanding for a while,
without much response
https://github.com/pwendell/hive/pull/2
I'm now looking at the one-line change it would take for the JAR to consider
Hadoop 3.x compatible at the API level.
I think this is the most up to date branch (used in Spark 1.5):
https://github.com/pwendell/hive/tree/release-1.2.1-spark
On Mon, Oct 5, 2015 at 1:03 PM, weoccc wrote:
Hi,
I would like to know where the spark hive github repository that the spark
build depends on is located. I was told it used to be at
https://github.com/pwendell/hive but it seems it is no longer there.
Thanks a lot,
Weide
Spark SQL is not the same as Hive on Spark.
Spark SQL is a query engine designed from the ground up for Spark,
without the historic baggage of Hive. It also does more than SQL now: it
is meant for structured data processing (e.g. the new DataFrame API) and
SQL. Spark SQL is mostly compatible with Hive.
I'm a little confused about Hive and Spark; can someone shed some light?
Using Spark, I can access the Hive metastore and run Hive queries. Since I
am able to do this in standalone mode, it can't be using map-reduce to run
the Hive queries, and I suppose it's building a query plan and executing it
itself.
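That understanding matches how Spark SQL works: even for tables registered in the Hive metastore, the plan is executed by Spark's own engine, not MapReduce. A quick way to see this in a local spark-shell is to print the plan; this is a sketch only, and the table name `my_table` is hypothetical:

```scala
// Launched via: spark-shell --conf spark.sql.catalogImplementation=hive
// Sketch only: assumes a Hive table named `my_table` already exists.
val df = spark.sql("SELECT count(*) FROM my_table")
df.explain(true)  // the physical plan shows Spark operators, not MapReduce jobs
```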