Unable to create temp file for insert values java.net.URISyntaxException

2015-12-24 Thread Sateesh Karuturi
Hello everyone... I am getting Exception in thread "main" org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException [Error 10293]: Unable to create temp file for insert values java.net.URISyntaxException: Relative path in absolute URI: hdfs://localhos
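This "Relative path in absolute URI" failure on INSERT ... VALUES is often traced to a Hive scratch directory that resolves to a relative path (for example, an unresolved ${system:java.io.tmpdir} in hive-site.xml). One thing worth trying, as a sketch only, is pointing the scratch directory at an absolute location for the session and retrying; the path and table name below are illustrative:

    # set an absolute scratch directory for the session, then retry the insert
    # (path and table name are placeholders, not from the original thread)
    hive -e "SET hive.exec.scratchdir=/tmp/hive;
             INSERT INTO TABLE demo_table VALUES (1, 'a');"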

Re: Executor getting killed when running Hive on Spark

2015-12-24 Thread Sofia
I am not sure which other log file to look into. I have one master and one worker, and in the previous mail I showed the Hive log and the Spark worker's log. The master log contains something like the following (extracted from an execution I just did): 15/12/24 18:19:01 INFO master.Master: Launching exe
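For readers stuck at the same point: in Spark standalone mode the per-executor stdout/stderr usually lands under the worker's work directory rather than in the master log, which only records scheduling events. A sketch, assuming a default $SPARK_HOME layout; the application and executor IDs below are illustrative:

    # executor logs in Spark standalone mode (IDs are placeholders)
    ls $SPARK_HOME/work/app-20151224181901-0000/0/
    tail -n 100 $SPARK_HOME/work/app-20151224181901-0000/0/stderr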

Re: How does this work

2015-12-24 Thread Harsh J
Hue and Beeline access your warehouse data and metadata via the HiveServer2 APIs. The HiveServer2 service runs as the 'hive' user. On Wed, Dec 23, 2015 at 9:42 PM Kumar Jayapal wrote: > Hi, > > My environment has Kerberos and Sentry for authentication and authorisation. > > we have the following
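As an illustration of the access path Harsh describes, a Kerberized Beeline session authenticates as the end user but goes through HiveServer2 (running as 'hive') rather than reading the warehouse directly. Host, port, realm, and principal below are placeholders:

    # authenticate, then connect through HiveServer2 (values are placeholders)
    kinit jayapal@EXAMPLE.COM
    beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM"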

RE: Executor getting killed when running Hive on Spark

2015-12-24 Thread Mich Talebzadeh
Hi Sofia. I don’t think version 1.5.2 of Spark can be used as the Hive engine. I tried it many times. What works is to download Spark 1.3.1 and build it as you did. You then create spark-assembly-1.3.1-hadoop2.4.0.jar (after unzipping and untarring the resulting file) and put it in $HIVE_HOME/lib
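A condensed sketch of the steps Mich describes, assuming the Spark 1.3.1 source tree is the current directory; the tarball and directory names follow make-distribution.sh's usual spark-VERSION-bin-NAME convention and are assumptions, not quoted from the thread:

    # build Spark 1.3.1 without Hive classes, then copy the assembly into Hive's lib
    ./make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.4"
    tar -xzf spark-1.3.1-bin-hadoop2-without-hive.tgz
    cp spark-1.3.1-bin-hadoop2-without-hive/lib/spark-assembly-1.3.1-hadoop2.4.0.jar $HIVE_HOME/lib/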

Re: Executor getting killed when running Hive on Spark

2015-12-24 Thread Jörn Franke
Have you checked what the issue is with the log file causing trouble? Is enough space available? Access rights (what is the user of the Spark worker)? Does the directory exist? Can you provide more details on how the table is created? Does the query work with mr or tez as the execution engine? Does a n
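A quick way to run through several of these checks (space, permissions, and a fallback engine); the log path and table name below are placeholders:

    # disk space and ownership of the worker's log directory (path is a placeholder)
    df -h /var/log/spark
    ls -ld /var/log/spark

    # does the same query succeed on MapReduce? (table name is a placeholder)
    hive -e "SET hive.execution.engine=mr; SELECT count(*) FROM some_table;"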

Executor getting killed when running Hive on Spark

2015-12-24 Thread Sofia
Hello and happy holiday to those who are already enjoying it! I am still having trouble running Hive with Spark. I downloaded Spark 1.5.2 and built it like this (my Hadoop is version 2.7.1): ./make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.4,parquet-p
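For context on the thread, wiring a locally built Spark into Hive also involves telling Hive where the Spark build lives and switching the execution engine, whichever Spark version turns out to be compatible. A minimal sketch with placeholder host and paths:

    # point Hive at the Spark build and switch engines (values are placeholders)
    export SPARK_HOME=/opt/spark-1.5.2-bin-hadoop2-without-hive
    hive -e "SET spark.master=spark://master-host:7077;
             SET hive.execution.engine=spark;
             SELECT 1;"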