I am facing an exception, "Hive.get() called without a hive db setup", in the
executor. I wanted to understand: how is the Hive object initialized in the
executor threads? I only see Hive.get(hiveconf) in two places in the Spark
1.3 code.
In HiveContext.scala - I don't think this is created on the executor
In
Hello Spark developers,
I want to understand the procedure to create the org.spark-project.hive
jars. Is this documented somewhere? I am having issues with -Phive-provided
with my private hive13 jars and want to check if using Spark's procedure
helps.
Yash,
This is exactly what I wanted! Thanks a bunch.
On 27 April 2015 at 15:39, yash datta wrote:
> Hi,
>
> you can build spark-project hive from here :
>
> https://github.com/pwendell/hive/tree/0.13.1-shaded-protobuf
>
> Hope this helps.
>
>
> On Mon, Apr 27
The problem was in my hive-13 branch, so please ignore this.
On 27 April 2015 at 10:34, Manku Timma wrote:
> I am facing an exception "Hive.get() called without a hive db setup" in
> the executor. I wanted to understand how Hive object is initialized in the
> executor threads?
Looks like there is a case in TableReader.scala where Hive.get() is being
called without it first having been set via Hive.get(hiveconf). I am running in
yarn-client mode (compiled with -Phive-provided and with hive-0.13.1a).
Basically this means the broadcasted hiveconf is not getting used and the
defau
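For context, here is a minimal sketch of the initialization pattern I expected on the executor side. This assumes the Hive 0.13 API, where Hive.get(HiveConf) sets up the per-thread Hive instance that a later no-arg Hive.get() returns; the HiveConf construction here is a stand-in for deserializing the broadcasted conf:

```scala
import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.ql.metadata.Hive

// Initialize the thread-local Hive instance from the (broadcasted) conf
// before any code path calls the no-arg Hive.get().
val hiveConf = new HiveConf()  // stand-in: in practice, built from the broadcast
Hive.get(hiveConf)             // sets up the Hive object for this thread

// Later calls on the same executor thread can then safely do:
val db = Hive.get()            // returns the instance initialized above
```

If a code path (like the one in TableReader.scala above) reaches the no-arg Hive.get() on a thread where Hive.get(hiveconf) was never called, you get exactly the "called without a hive db setup" failure mode.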
sc.applicationId gives the YARN app id.
On 11 May 2015 at 08:13, Mridul Muralidharan wrote:
> We had a similar requirement, and as a stopgap, I currently use a
> suboptimal impl specific workaround - parsing it out of the
> stdout/stderr (based on log config).
> A better means to get to this is i
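For anyone finding this thread later, a minimal sketch of the sc.applicationId approach mentioned above. The app name is hypothetical, and the example id shown in the comment assumes a YARN deployment, where the id takes the usual application_&lt;timestamp&gt;_&lt;sequence&gt; form:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("appid-demo")
val sc = new SparkContext(conf)

// On YARN this is the YARN application id,
// e.g. something like "application_1430000000000_0001"
println(sc.applicationId)

sc.stop()
```

This avoids the stdout/stderr-parsing workaround entirely, since the id comes straight from the SparkContext rather than from log output.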