[ https://issues.apache.org/jira/browse/HIVE-9659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356117#comment-14356117 ]

Hive QA commented on HIVE-9659:
-------------------------------



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12703809/HIVE-9659.4-spark.patch

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/779/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/779/console
Test logs: 
http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-779/

Messages:
{noformat}
**** This message was trimmed, see log for full details ****
As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_NULL KW_AND" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_TIMESTAMP StringLiteral" using 
multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN CharSetName CharSetLiteral" using 
multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_NULL LESSTHANOREQUALTO" using 
multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN LPAREN StringLiteral" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_NULL LESSTHAN" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_CASE KW_EXISTS" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_NULL GREATERTHANOREQUALTO" using 
multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_DATE StringLiteral" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_NULL GREATERTHAN" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_NULL BITWISEXOR" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_CASE KW_ARRAY" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_NULL KW_BETWEEN" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:68:4: 
Decision can match input such as "LPAREN KW_CASE KW_STRUCT" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:115:5: 
Decision can match input such as "KW_CLUSTER KW_BY LPAREN" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:127:5: 
Decision can match input such as "KW_PARTITION KW_BY LPAREN" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:138:5: 
Decision can match input such as "KW_DISTRIBUTE KW_BY LPAREN" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:149:5: 
Decision can match input such as "KW_SORT KW_BY LPAREN" using multiple 
alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:166:7: 
Decision can match input such as "STAR" using multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input
warning(200): IdentifiersParser.g:194:5: 
Decision can match input such as "KW_ARRAY" using multiple alternatives: 2, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:194:5: 
Decision can match input such as "KW_UNIONTYPE" using multiple alternatives: 5, 
6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:194:5: 
Decision can match input such as "KW_STRUCT" using multiple alternatives: 4, 6

As a result, alternative(s) 6 were disabled for that input
warning(200): IdentifiersParser.g:288:5: 
Decision can match input such as "KW_NULL" using multiple alternatives: 1, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:288:5: 
Decision can match input such as "KW_FALSE" using multiple alternatives: 2, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:288:5: 
Decision can match input such as "KW_TRUE" using multiple alternatives: 2, 8

As a result, alternative(s) 8 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "KW_BETWEEN KW_MAP LPAREN" using multiple 
alternatives: 8, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_LATERAL 
KW_VIEW" using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_DISTRIBUTE 
KW_BY" using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_INTO" using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_GROUP 
KW_BY" using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_SORT KW_BY" 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_CLUSTER 
KW_BY" using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_ORDER 
KW_BY" using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_MAP LPAREN" 
using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:420:5: 
Decision can match input such as "{KW_LIKE, KW_REGEXP, KW_RLIKE} KW_INSERT 
KW_OVERWRITE" using multiple alternatives: 2, 9

As a result, alternative(s) 9 were disabled for that input
warning(200): IdentifiersParser.g:545:5: 
Decision can match input such as "{AMPERSAND..BITWISEXOR, DIV..DIVIDE, 
EQUAL..EQUAL_NS, GREATERTHAN..GREATERTHANOREQUALTO, KW_AND, KW_ARRAY, 
KW_BETWEEN..KW_BOOLEAN, KW_CASE, KW_DOUBLE, KW_FLOAT, KW_IF, KW_IN, KW_INT, 
KW_LIKE, KW_MAP, KW_NOT, KW_OR, KW_REGEXP, KW_RLIKE, KW_SMALLINT, 
KW_STRING..KW_STRUCT, KW_TINYINT, KW_UNIONTYPE, KW_WHEN, 
LESSTHAN..LESSTHANOREQUALTO, MINUS..NOTEQUAL, PLUS, STAR, TILDE}" using 
multiple alternatives: 1, 3

As a result, alternative(s) 3 were disabled for that input
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-exec ---
Downloading: 
http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark_2.10-1.3-rc1/org/pentaho/pentaho-aggdesigner-algorithm/5.1.5-jhyde/pentaho-aggdesigner-algorithm-5.1.5-jhyde.pom
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Hive .............................................. SUCCESS [2.832s]
[INFO] Hive Shims Common ................................. SUCCESS [3.410s]
[INFO] Hive Shims 0.20S .................................. SUCCESS [0.838s]
[INFO] Hive Shims 0.23 ................................... SUCCESS [2.620s]
[INFO] Hive Shims Scheduler .............................. SUCCESS [0.853s]
[INFO] Hive Shims ........................................ SUCCESS [0.733s]
[INFO] Hive Common ....................................... SUCCESS [2.944s]
[INFO] Hive Serde ........................................ SUCCESS [5.015s]
[INFO] Hive Metastore .................................... SUCCESS [13.274s]
[INFO] Hive Ant Utilities ................................ SUCCESS [0.345s]
[INFO] Spark Remote Client ............................... SUCCESS [5.613s]
[INFO] Hive Query Language ............................... FAILURE [1:11.160s]
[INFO] Hive Service ...................................... SKIPPED
[INFO] Hive Accumulo Handler ............................. SKIPPED
[INFO] Hive JDBC ......................................... SKIPPED
[INFO] Hive Beeline ...................................... SKIPPED
[INFO] Hive CLI .......................................... SKIPPED
[INFO] Hive Contrib ...................................... SKIPPED
[INFO] Hive HBase Handler ................................ SKIPPED
[INFO] Hive HCatalog ..................................... SKIPPED
[INFO] Hive HCatalog Core ................................ SKIPPED
[INFO] Hive HCatalog Pig Adapter ......................... SKIPPED
[INFO] Hive HCatalog Server Extensions ................... SKIPPED
[INFO] Hive HCatalog Webhcat Java Client ................. SKIPPED
[INFO] Hive HCatalog Webhcat ............................. SKIPPED
[INFO] Hive HCatalog Streaming ........................... SKIPPED
[INFO] Hive HWI .......................................... SKIPPED
[INFO] Hive ODBC ......................................... SKIPPED
[INFO] Hive Shims Aggregator ............................. SKIPPED
[INFO] Hive TestUtils .................................... SKIPPED
[INFO] Hive Packaging .................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:50.541s
[INFO] Finished at: Tue Mar 10 22:32:16 EDT 2015
[INFO] Final Memory: 90M/878M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project hive-exec: Error resolving project artifact: Could not transfer 
artifact org.pentaho:pentaho-aggdesigner-algorithm:pom:5.1.5-jhyde from/to 
spark-1.3 
(http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark_2.10-1.3-rc1/):
 Connection to http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com refused 
for project org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.5-jhyde: 
Connection timed out -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hive-exec
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12703809 - PreCommit-HIVE-SPARK-Build

> 'Error while trying to create table container' occurs during hive query case 
> execution when hive.optimize.skewjoin set to 'true' [Spark Branch]
> -----------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-9659
>                 URL: https://issues.apache.org/jira/browse/HIVE-9659
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xin Hao
>            Assignee: Rui Li
>         Attachments: HIVE-9659.1-spark.patch, HIVE-9659.2-spark.patch, 
> HIVE-9659.3-spark.patch, HIVE-9659.4-spark.patch
>
>
> We found that 'Error while trying to create table container' occurs during 
> BigBench Q12 case execution when hive.optimize.skewjoin is set to 'true'.
> If hive.optimize.skewjoin is set to 'false', the case passes.
> How to reproduce:
> 1. set hive.optimize.skewjoin=true;
> 2. Run BigBench case Q12 and it will fail.
> Check the executor log (e.g. /usr/lib/spark/work/app-XXXX/2/stderr) and you 
> will find the error 'Error while trying to create table container' in the log, 
> and also a NullPointerException near the end of the log.
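> For reference, a minimal sketch of the configuration involved (the BigBench 
> Q12 query itself is not reproduced here; the tables and join below are 
> hypothetical stand-ins for any query that triggers a skew join on Spark):
> {noformat}
> -- Enable runtime skew join handling; this is the switch that triggers the failure.
> set hive.optimize.skewjoin=true;
> -- Keys with more rows than this threshold are treated as skewed (100000 is the default).
> set hive.skewjoin.key=100000;
> -- Run on the Spark engine, where the failure is observed.
> set hive.execution.engine=spark;
>
> -- Hypothetical join on a potentially skewed key.
> SELECT s.wcs_item_sk, COUNT(*)
> FROM web_clickstreams s JOIN item i ON s.wcs_item_sk = i.i_item_sk
> GROUP BY s.wcs_item_sk;
> {noformat}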
> (a) Detailed error message for 'Error while trying to create table container':
> {noformat}
> 15/02/12 01:29:49 ERROR SparkMapRecordHandler: Error processing row: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Error while trying to 
> create table container
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:118)
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:193)
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:219)
>       at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1051)
>       at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>       at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>       at 
> org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1055)
>       at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:486)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
>       at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>       at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
>       at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>       at org.apache.spark.scheduler.Task.run(Task.scala:56)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error while 
> trying to create table container
>       at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:158)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HashTableLoader.load(HashTableLoader.java:115)
>       ... 21 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error, not a 
> directory: 
> hdfs://bhx1:8020/tmp/hive/root/d22ef465-bff5-4edb-a822-0a9f1c25b66c/hive_2015-02-12_01-28-10_008_6897031694580088767-1/-mr-10009/HashTable-Stage-6/MapJoin-mapfile01--.hashtable
>       at 
> org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:106)
>       ... 22 more
> 15/02/12 01:29:49 INFO SparkRecordHandler: maximum memory = 40939028480
> 15/02/12 01:29:49 INFO PerfLogger: <PERFLOG method=SparkInitializeOperators 
> from=org.apache.hadoop.hive.ql.exec.spark.SparkRecordHandler>
> {noformat}
> (b) Detailed error message for the NullPointerException:
> {noformat}
> 15/02/12 01:29:50 ERROR MapJoinOperator: Unexpected exception: null
> java.lang.NullPointerException
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.setMapJoinKey(MapJoinOperator.java:227)
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:271)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>       at 
> org.apache.hadoop.hive.ql.exec.FilterOperator.processOp(FilterOperator.java:120)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>       at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>       at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>       at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:493)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
>       at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>       at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
>       at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>       at org.apache.spark.scheduler.Task.run(Task.scala:56)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> 15/02/12 01:29:50 INFO Executor: Executor is trying to kill task 144.2 in 
> stage 3.0 (TID 1500)
> 15/02/12 01:29:50 INFO MapOperator: Initializing Self MAP[1800]
> 15/02/12 01:29:50 ERROR SparkMapRecordHandler: Error processing row: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
> {"wcs_click_date_sk":37793,"wcs_click_time_sk":null,"wcs_sales_sk":null,"wcs_item_sk":51402,"wcs_web_page_sk":null,"wcs_user_sk":2541920}
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
> {"wcs_click_date_sk":37793,"wcs_click_time_sk":null,"wcs_sales_sk":null,"wcs_item_sk":51402,"wcs_web_page_sk":null,"wcs_user_sk":2541920}
>       at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:503)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
>       at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>       at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
>       at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>       at org.apache.spark.scheduler.Task.run(Task.scala:56)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: null
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:314)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>       at 
> org.apache.hadoop.hive.ql.exec.FilterOperator.processOp(FilterOperator.java:120)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>       at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>       at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>       at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:493)
>       ... 14 more
> Caused by: java.lang.NullPointerException
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.setMapJoinKey(MapJoinOperator.java:227)
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:271)
>       ... 20 more
> 15/02/12 01:29:50 INFO MapOperator: MAP[1797]: records read - 1
> 15/02/12 01:29:50 INFO Executor: Executor is trying to kill task 96.3 in 
> stage 3.0 (TID 1515)
> 15/02/12 01:29:50 INFO PerfLogger: <PERFLOG method=SparkInitializeOperators 
> from=org.apache.hadoop.hive.ql.exec.spark.SparkRecordHandler>
> 15/02/12 01:29:50 INFO MapOperator: Initialization Done 1800 MAP
> 15/02/12 01:29:50 INFO SparkRecordHandler: processing 1 rows: used memory = 
> 12023782616
> 15/02/12 01:29:50 ERROR Executor: Exception in task 16.2 in stage 3.0 (TID 
> 1488)
> java.lang.RuntimeException: Error processing row: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row 
> {"wcs_click_date_sk":37793,"wcs_click_time_sk":null,"wcs_sales_sk":null,"wcs_item_sk":51402,"wcs_web_page_sk":null,"wcs_user_sk":2541920}
>       at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:153)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:47)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:98)
>       at 
> scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41)
>       at 
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:217)
>       at 
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:65)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>       at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>       at org.apache.spark.scheduler.Task.run(Task.scala:56)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row 
> {"wcs_click_date_sk":37793,"wcs_click_time_sk":null,"wcs_sales_sk":null,"wcs_item_sk":51402,"wcs_web_page_sk":null,"wcs_user_sk":2541920}
>       at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:503)
>       at 
> org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:141)
>       ... 13 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected 
> exception: null
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:314)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>       at 
> org.apache.hadoop.hive.ql.exec.FilterOperator.processOp(FilterOperator.java:120)
>       at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
>       at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95)
>       at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
>       at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:493)
>       ... 14 more
> Caused by: java.lang.NullPointerException
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.setMapJoinKey(MapJoinOperator.java:227)
>       at 
> org.apache.hadoop.hive.ql.exec.MapJoinOperator.processOp(MapJoinOperator.java:271)
>       ... 20 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)