[ https://issues.apache.org/jira/browse/HIVE-7210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Dere updated HIVE-7210:
-----------------------------

    Attachment: HIVE-7210.1.patch

Patch to prevent getSplits() from removing cached plans that belong to other queries. Talked to Gunther; he said he can eliminate the call that clears the cached plan from getSplits() altogether, so this may not be the final fix.
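To make the failure mode concrete, here is a minimal, hypothetical sketch (the class and method names are illustrative, not Hive's actual code) of the interaction the patch targets: map-work plans cached in a structure shared by all Driver threads, where one query's split computation clears the whole cache instead of only its own entry, so a concurrent query later reads null and hits the "No plan file found" path and the downstream NPE.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the suspected race between concurrent Driver threads.
// All names here are illustrative assumptions, not Hive's real identifiers.
public class PlanCacheRace {

    // shared cache: plan path -> deserialized plan object, visible to all threads
    static final Map<String, Object> PLAN_CACHE = new ConcurrentHashMap<>();

    // query A caches its plan under its scratch-dir path
    static void cachePlan(String path, Object plan) {
        PLAN_CACHE.put(path, plan);
    }

    // buggy behavior: query B's getSplits() clears every cached plan,
    // not just the entries belonging to query B
    static void getSplitsForOtherQuery() {
        PLAN_CACHE.clear();
    }

    // query A later looks its plan up again; a null result here corresponds
    // to the "No plan file found" log line and the later NullPointerException
    static Object lookupPlan(String path) {
        return PLAN_CACHE.get(path);
    }

    public static void main(String[] args) {
        cachePlan("/tmp/queryA/map.xml", new Object());
        getSplitsForOtherQuery();                              // concurrent query interferes
        System.out.println(lookupPlan("/tmp/queryA/map.xml")); // prints null
    }
}
```

Scoping cache removal to the current query's own entries (or dropping the clear call entirely, as suggested above) would avoid this interference.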

> NPE with "No plan file found" when running Driver instances on multiple threads
> -------------------------------------------------------------------------------
>
>                 Key: HIVE-7210
>                 URL: https://issues.apache.org/jira/browse/HIVE-7210
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Jason Dere
>            Assignee: Gunther Hagleitner
>         Attachments: HIVE-7210.1.patch
>
>
> Informatica has a multithreaded application running multiple instances of 
> CLIDriver. When running concurrent queries, they sometimes hit the following 
> error:
> {noformat}
> 2014-05-30 10:24:59 <pool-10-thread-1> INFO: Hadoop_Native_Log :INFO org.apache.hadoop.hive.ql.exec.Utilities: No plan file found: hdfs://ICRHHW21NODE1:8020/tmp/hive-qamercury/hive_2014-05-30_10-24-57_346_890014621821056491-2/-mr-10002/6169987c-3263-4737-b5cb-38daab882afb/map.xml
> 2014-05-30 10:24:59 <pool-10-thread-1> INFO: Hadoop_Native_Log :INFO org.apache.hadoop.mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/qamercury/.staging/job_1401360353644_0078
> 2014-05-30 10:24:59 <pool-10-thread-1> INFO: Hadoop_Native_Log :ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed with exception 'java.lang.NullPointerException(null)'
> java.lang.NullPointerException
>     at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
>     at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:271)
>     at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
>     at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>     at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
>     at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
>     at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420)
>     at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
>     at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
>     at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>     at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1504)
>     at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1271)
>     at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1089)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:912)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
>     at com.informatica.platform.dtm.executor.hive.impl.AbstractHiveDriverBaseImpl.run(AbstractHiveDriverBaseImpl.java:86)
>     at com.informatica.platform.dtm.executor.hive.MHiveDriver.executeQuery(MHiveDriver.java:126)
>     at com.informatica.platform.dtm.executor.hive.task.impl.HiveTaskHandlerImpl.executeQuery(HiveTaskHandlerImpl.java:358)
>     at com.informatica.platform.dtm.executor.hive.task.impl.HiveTaskHandlerImpl.executeScript(HiveTaskHandlerImpl.java:247)
>     at com.informatica.platform.dtm.executor.hive.task.impl.HiveTaskHandlerImpl.executeMainScript(HiveTaskHandlerImpl.java:194)
>     at com.informatica.platform.ldtm.executor.common.workflow.taskhandler.impl.BaseTaskHandlerImpl.run(BaseTaskHandlerImpl.java:126)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)
