[ https://issues.apache.org/jira/browse/HIVE-1872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12987395#action_12987395 ]

Bharath R  commented on HIVE-1872:
----------------------------------

Based on the thread in HIVE-549, there was a comment suggesting that we clean up the running jobs if one of them has failed:

        * Shall we immediately stop all other running jobs if one of them have failed?
                + console.printError(errorMessage);
                + taskCleanup(runnable); 

That comment was implemented by calling System.exit(9), which forces the whole JVM to exit.

Rather than relying on the ShutdownHook, we could invoke that functionality directly in taskCleanup.
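
For context, here is a paraphrase of what the current cleanup amounts to (not the exact Driver code, just to make the problem concrete):

        // The error path ends in System.exit, so the whole JVM dies. In
        // HiveServer that takes every client session down with it, and only
        // a JVM ShutdownHook gets a chance to kill the running Hadoop jobs.
        private void taskCleanup() {
            System.exit(9);
        }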

My proposed solution, in taskCleanup (a rough sketch follows the list):

                 1) Send an interrupt to each running job's "TaskRunner" so that its associated task is interrupted.
                 2) If the task is a "MapRed" task, post the "kill url" recorded in the "runningJobKillURIs" hashmap in ExecDriver.
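
A minimal sketch of that cleanup is below. It assumes a collection of live TaskRunner threads (TaskRunner extends Thread) and access to ExecDriver's runningJobKillURIs map; the class and method names here are illustrative, not the actual Hive API:

        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.Map;

        public class TaskCleanupSketch {

            // Step 1: interrupt every live TaskRunner, which in turn
            // interrupts the task it is driving.
            static void interruptRunners(Iterable<Thread> runningTaskRunners) {
                for (Thread runner : runningTaskRunners) {
                    runner.interrupt();
                }
            }

            // Step 2: for "MapRed" tasks, hit each kill URL that ExecDriver
            // recorded in its runningJobKillURIs map (job id -> kill URL);
            // these are the same URLs the existing ShutdownHook uses.
            static void killRunningJobs(Map<String, String> runningJobKillURIs) {
                for (String uri : runningJobKillURIs.values()) {
                    try {
                        HttpURLConnection conn =
                            (HttpURLConnection) new URL(uri).openConnection();
                        conn.getResponseCode(); // issuing the request triggers the kill
                        conn.disconnect();
                    } catch (Exception e) {
                        // best effort: keep going so the remaining jobs are still killed
                        System.err.println("Failed to kill job via " + uri + ": " + e);
                    }
                }
            }
        }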

Comments?  

Thanks

> Hive process is exiting on executing ALTER query
> ------------------------------------------------
>
>                 Key: HIVE-1872
>                 URL: https://issues.apache.org/jira/browse/HIVE-1872
>             Project: Hive
>          Issue Type: Bug
>          Components: CLI, Server Infrastructure
>    Affects Versions: 0.6.0
>         Environment: SUSE Linux Enterprise Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (3)
> Hadoop 0.20.1
> Hive 0.6.0
>            Reporter: Bharath R 
>            Assignee: Bharath R 
>         Attachments: HIVE-1872.1.patch
>
>
> The Hive process exits after executing the queries below, in the order shown:
> 1) CREATE TABLE SAMPLETABLE(IP STRING , showtime BIGINT ) partitioned by (ds string,ipz int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\040'
> 2) ALTER TABLE SAMPLETABLE add Partition(ds='sf') location '/user/hive/warehouse' Partition(ipz=100) location '/user/hive/warehouse'
> After the second query executes, Hive throws the exception below and the process exits:
> 10:09:03 ERROR exec.DDLTask: FAILED: Error in metadata: table is partitioned but partition spec is not specified or tab: {ipz=100}
> org.apache.hadoop.hive.ql.metadata.HiveException: table is partitioned but partition spec is not specified or tab: {ipz=100}
>         at org.apache.hadoop.hive.ql.metadata.Table.isValidSpec(Table.java:341)
>         at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:902)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.addPartition(DDLTask.java:282)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:191)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:633)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:506)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:384)
>         at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:114)
>         at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.process(ThriftHive.java:378)
>         at org.apache.hadoop.hive.service.ThriftHive$Processor.process(ThriftHive.java:366)
>         at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:252)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
> The exception is thrown because the ALTER query is malformed; it should be "ALTER TABLE SAMPLETABLE add Partition(ds='sf',ipz=100) location '/user/hive/warehouse'".
> The Hive process should not exit just because a query is incorrect.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
