[ https://issues.apache.org/jira/browse/HIVE-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
binlijin updated HIVE-2680:
---------------------------
Attachment: HIVE-2680.patch
> In FileSinkOperator, if RecordWriter.write throws an IOException, we should
> call the RecordWriter's close method.
> ------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-2680
> URL: https://issues.apache.org/jira/browse/HIVE-2680
> Project: Hive
> Issue Type: Improvement
> Reporter: binlijin
> Fix For: 0.9.0
>
> Attachments: HIVE-2680.patch
>
>
> During a dynamic-partition insert, a large number of partitions means a
> large number of files is created. If the input is also large, the
> DataNode's xceiverCount easily exceeds the limit on concurrent xceivers
> (default 1024), and RecordWriter.write(recordValue) throws an exception
> such as "Could not read from stream". Because the failed writers are never
> closed, their HDFS leases only expire after the one-hour lease timeout;
> the NameNode then receives many commitBlockSynchronization requests and
> its load becomes very high. abortWriters should therefore be called when
> the write fails.
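> A minimal sketch of the intended error-handling pattern (illustrative
> class and field names, not the attached patch): if write() throws an
> IOException, close every open RecordWriter with abort=true before
> rethrowing, so partial output is discarded and the HDFS leases are
> released immediately instead of after the lease timeout.
> {code:java}
> import java.io.IOException;
>
> import org.apache.hadoop.hive.ql.exec.FileSinkOperator.RecordWriter;
> import org.apache.hadoop.hive.ql.metadata.HiveException;
> import org.apache.hadoop.io.Writable;
>
> // Illustrative sketch only; FileSinkOperator itself holds the writers
> // per dynamic partition and already has an abortWriters()-style helper.
> public class AbortOnWriteFailureSketch {
>   private final RecordWriter[] outWriters;
>
>   public AbortOnWriteFailureSketch(RecordWriter[] outWriters) {
>     this.outWriters = outWriters;
>   }
>
>   public void writeRow(int idx, Writable recordValue) throws HiveException {
>     try {
>       outWriters[idx].write(recordValue);
>     } catch (IOException e) {
>       // Abort all open writers before rethrowing so their leases are
>       // released now rather than after the one-hour lease timeout.
>       abortWriters();
>       throw new HiveException(e);
>     }
>   }
>
>   private void abortWriters() {
>     for (RecordWriter writer : outWriters) {
>       if (writer == null) {
>         continue;
>       }
>       try {
>         writer.close(true); // abort=true: discard partial output
>       } catch (IOException ignored) {
>         // best effort: keep closing the remaining writers
>       }
>     }
>   }
> }
> {code}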