[ https://issues.apache.org/jira/browse/FLINK-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162044#comment-17162044 ]

Rui Li commented on FLINK-18659:
--------------------------------

I managed to reproduce the issue and did some debugging. In Hive 2.3.4, the 
in-progress file is created as soon as Hive creates the ORC writer. In Hive 
1.1.0, the file does not appear to be created even after the first record is 
written. When a {{Bucket}} receives its first record, it creates the writer and 
writes the record with it. On the second record, the {{Bucket}} checks the 
underlying file's size to decide whether rolling is needed, and that is where 
we hit the exception, because the file has not been created yet.
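To make the suspected failure mode concrete, here is a minimal, self-contained sketch. This is not Flink or Hive code and all names in it are made up: it just models a writer that buffers its first record in memory and only materializes the file on close, so a size check between the first and second records fails the same way {{HiveBulkWriterFactory$1.getSize}} does:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;

public class LazyWriterDemo {

    // Hypothetical stand-in for a lazily-initialized ORC-style writer:
    // records are buffered in memory; the file only appears on close().
    static class LazyWriter {
        private final File target;
        private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        LazyWriter(File target) { this.target = target; }

        void write(String record) throws IOException {
            buffer.write(record.getBytes()); // no file on disk yet
        }

        void close() throws IOException {
            Files.write(target.toPath(), buffer.toByteArray()); // file created here
        }
    }

    // Models the size check the rolling policy performs: it asks the
    // file system for the file's status, which fails if the file is absent.
    static long getSize(File f) throws FileNotFoundException {
        if (!f.exists()) {
            throw new FileNotFoundException("File does not exist: " + f);
        }
        return f.length();
    }

    public static void main(String[] args) throws IOException {
        File part = new File("part-0.inprogress");
        LazyWriter writer = new LazyWriter(part);
        writer.write("record-1");      // 1st record: buffered only, no file
        try {
            getSize(part);             // rolling check before the 2nd record
        } catch (FileNotFoundException e) {
            System.out.println("rolling check failed: " + e.getMessage());
        }
        writer.close();                // now the file exists
        System.out.println("size after close: " + getSize(part));
        part.delete();                 // clean up
    }
}
{code}

Under this reading, any fix either has to force the writer to create the file eagerly or make the size check tolerate a not-yet-created file.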

> FileNotFoundException when writing Hive orc tables
> --------------------------------------------------
>
>                 Key: FLINK-18659
>                 URL: https://issues.apache.org/jira/browse/FLINK-18659
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: 1.11.1
>            Reporter: Jingsong Lee
>            Priority: Blocker
>             Fix For: 1.11.2
>
>
> Writing Hive ORC tables with Hive version 1.1 fails with:
> {code:java}
> Caused by: java.io.FileNotFoundException: File does not exist: hdfs://xxx/warehouse2/tmp_table/.part-6b51dbc2-e169-43a8-93b2-eb8d2be45054-0-0.inprogress.d77fa76c-4760-4cb6-bb5b-97d70afff000
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1218)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1210)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1210)
> 	at org.apache.flink.connectors.hive.write.HiveBulkWriterFactory$1.getSize(HiveBulkWriterFactory.java:54)
> 	at org.apache.flink.formats.hadoop.bulk.HadoopPathBasedPartFileWriter.getSize(HadoopPathBasedPartFileWriter.java:84)
> 	at org.apache.flink.table.filesystem.FileSystemTableSink$TableRollingPolicy.shouldRollOnEvent(FileSystemTableSink.java:451)
> 	at org.apache.flink.table.filesystem.FileSystemTableSink$TableRollingPolicy.shouldRollOnEvent(FileSystemTableSink.java:421)
> 	at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:193)
> 	at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:282)
> 	at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.onElement(StreamingFileSinkHelper.java:104)
> 	at org.apache.flink.table.filesystem.stream.StreamingFileWriter.processElement(StreamingFileWriter.java:118)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
> 	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
> 	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
> 	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
> {code}
> This may be due to lazy initialization in the ORC writer: the file is not 
> created until the first record arrives.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
