[ https://issues.apache.org/jira/browse/HIVE-15142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15723869#comment-15723869 ]
Wei Zheng commented on HIVE-15142:
----------------------------------

[~ekoifman] Is the backtrace you posted from Hive 1? I tried to reproduce on master, and got the following backtrace from hivemetastore.log. It looks like we're doing the right thing, which is to error out on file creation during compaction when there's not enough permission.

{code}
java.lang.Exception: java.io.IOException: Mkdirs failed to create file:/Users/wzheng/hivetmp/warehouse/acid.db/t1/_tmp_4711984b-2a04-4d7c-aa63-7065726587d4/base_0000001 (exists=false, cwd=file:/Users/wzheng)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) ~[hadoop-mapreduce-client-common-2.6.0.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) [hadoop-mapreduce-client-common-2.6.0.jar:?]
Caused by: java.io.IOException: Mkdirs failed to create file:/Users/wzheng/hivetmp/warehouse/acid.db/t1/_tmp_4711984b-2a04-4d7c-aa63-7065726587d4/base_0000001 (exists=false, cwd=file:/Users/wzheng)
	at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:442) ~[hadoop-common-2.6.0.jar:?]
	at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:428) ~[hadoop-common-2.6.0.jar:?]
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908) ~[hadoop-common-2.6.0.jar:?]
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889) ~[hadoop-common-2.6.0.jar:?]
	at org.apache.orc.impl.WriterImpl.getStream(WriterImpl.java:2468) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
	at org.apache.orc.impl.WriterImpl.flushStripe(WriterImpl.java:2485) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
	at org.apache.orc.impl.WriterImpl.close(WriterImpl.java:2787) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:313) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$1.close(OrcOutputFormat.java:315) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.close(CompactorMR.java:692) ~[hive-exec-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61) ~[hadoop-mapreduce-client-core-2.6.0.jar:?]
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450) ~[hadoop-mapreduce-client-core-2.6.0.jar:?]
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) ~[hadoop-mapreduce-client-core-2.6.0.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) ~[hadoop-mapreduce-client-common-2.6.0.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
	at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_112]
2016-12-05T16:09:37,101 INFO [hw11217.local-26] mapreduce.Job: Job job_local1901112071_0001 running in uber mode : false
2016-12-05T16:09:37,103 INFO [hw11217.local-26] mapreduce.Job: map 0% reduce 0%
2016-12-05T16:09:37,106 INFO [hw11217.local-26] mapreduce.Job: Job job_local1901112071_0001 failed with state FAILED due to: NA
2016-12-05T16:09:37,111 INFO [hw11217.local-26] mapreduce.Job: Counters: 0
2016-12-05T16:09:37,111 ERROR [hw11217.local-26] compactor.Worker: Caught exception while trying to compact id:1,dbname:acid,tableName:t1,partName:null,state:^@,type:MAJOR,properties:null,runAs:null,tooManyAborts:false,highestTxnId:0. Marking failed to avoid repeated failures, java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.launchCompactionJob(CompactorMR.java:310)
	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR.run(CompactorMR.java:272)
	at org.apache.hadoop.hive.ql.txn.compactor.Worker$1.run(Worker.java:173)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:170)
{code}
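For anyone who wants to hit the same failure mode outside of Hive, here's a minimal standalone sketch (not Hive code; the /tmp/no-write path and class name are just illustrative) that triggers the same "Mkdirs failed to create" IOException from ChecksumFileSystem when the parent directory can't be created:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirsFailureRepro {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem localFs = FileSystem.getLocal(conf);

    // Assumes /tmp/no-write already exists and is read-only (e.g. chmod 555),
    // so the parent directories of the target file cannot be created.
    Path target = new Path("file:/tmp/no-write/_tmp_compactor/base_0000001/bucket_00000");

    // ChecksumFileSystem.create() calls mkdirs() on the parent and throws
    // IOException("Mkdirs failed to create ... (exists=false, cwd=...)") when
    // that returns false, which is what CompactorMap hit in the log above.
    localFs.create(target).close();
  }
}
{code}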
> CompactorMR fails with FileNotFoundException
> --------------------------------------------
>
>                 Key: HIVE-15142
>                 URL: https://issues.apache.org/jira/browse/HIVE-15142
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 1.0.0
>            Reporter: Eugene Koifman
>            Assignee: Wei Zheng
>            Priority: Critical
>
> {noformat}
> No of maps and reduces are 0 job_1478131263487_0009
> Job commit failed: java.io.FileNotFoundException: File hdfs://....../_tmp_80e50691-c9fe-485a-8e1e-ba20331c7d97 does not exist.
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:904)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
> 	at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorOutputCommitter.commitJob(CompactorMR.java:776)
> 	at org.apache.hadoop.mapred.OutputCommitter.commitJob(OutputCommitter.java:291)
> 	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:285)
> 	at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> {noformat}
>
> This happens when the Compactor doesn't have permission to write to the table dir. Evidently we lose the exception when creating CompactorMR.TMP_LOCATION and only see that the file is not there on commit of the job.
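On that last point, a minimal sketch of the kind of up-front check that would surface the permission problem in the Worker instead of as a FileNotFoundException in commitJob(); the class, method and variable names here are hypothetical and not the actual CompactorMR code:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;

// Hypothetical helper: create the temporary compaction directory up front and
// fail loudly if the Worker's user cannot write there, instead of discovering
// the missing _tmp_* directory only when the output committer lists it.
public class TmpLocationCheck {
  public static void ensureTmpLocation(Configuration conf, Path tmpLocation) throws IOException {
    FileSystem fs = tmpLocation.getFileSystem(conf);

    // On the local FS, mkdirs() reports permission problems only through its
    // boolean return value, so check it explicitly rather than ignoring it.
    if (!fs.exists(tmpLocation) && !fs.mkdirs(tmpLocation)) {
      throw new IOException("Unable to create compaction tmp dir " + tmpLocation);
    }

    // On HDFS, verify write access explicitly; this throws
    // AccessControlException (an IOException) if the caller cannot write here.
    fs.access(tmpLocation, FsAction.WRITE);
  }
}
{code}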