[ https://issues.apache.org/jira/browse/HIVE-2680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashutosh Chauhan updated HIVE-2680:
-----------------------------------

    Affects Version/s: 0.9.0
        Fix Version/s:     (was: 0.9.0)

Unlinking from 0.9 
                
> In FileSinkOperator, if RecordWriter.write throws an IOException, we should
> call the RecordWriter's close method.
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-2680
>                 URL: https://issues.apache.org/jira/browse/HIVE-2680
>             Project: Hive
>          Issue Type: Improvement
>    Affects Versions: 0.9.0
>            Reporter: binlijin
>         Attachments: HIVE-2680.patch
>
>
> With a dynamic-partition insert, if many partitions are created, many files
> are opened at once. When the input is large, the DataNode's xceiverCount
> easily exceeds the limit on concurrent xceivers (default 1024), and
> RecordWriter.write(recordValue) throws an exception: "Could not read from
> stream". About an hour later, when the leases time out, the NameNode receives
> many commitBlockSynchronization requests and its load becomes very high, so
> abortWriters should be called as soon as the write fails (a sketch of this
> pattern follows below).
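
For context, the proposed fix amounts to the pattern sketched below in Java. This is a minimal illustration only: the RecordWriter interface and the abortWriters() helper are simplified stand-ins assumed for the example, not the actual FileSinkOperator code or the attached HIVE-2680.patch.

    import java.io.IOException;

    // Minimal sketch of the cleanup pattern described in the issue. The
    // RecordWriter interface and abortWriters() below are simplified
    // assumptions for illustration, not Hive's real FileSinkOperator API.
    class FileSinkSketch {

        interface RecordWriter {
            void write(Object value) throws IOException;
            void close(boolean abort) throws IOException;
        }

        private final RecordWriter[] writers;   // one writer per dynamic partition

        FileSinkSketch(RecordWriter[] writers) {
            this.writers = writers;
        }

        void process(Object recordValue, int bucket) throws IOException {
            try {
                writers[bucket].write(recordValue);
            } catch (IOException e) {
                // On a failed write (e.g. "Could not read from stream" when the
                // DataNode xceiver limit is exceeded), close every open writer
                // right away instead of leaving the streams to be recovered an
                // hour later through lease expiry on the NameNode.
                abortWriters();
                throw e;
            }
        }

        private void abortWriters() {
            for (RecordWriter w : writers) {
                if (w == null) {
                    continue;
                }
                try {
                    w.close(true);   // abort = true: discard the partial file
                } catch (IOException ignored) {
                    // Best-effort cleanup; keep closing the remaining writers.
                }
            }
        }
    }

Closing each writer with abort = true lets it discard its partial output immediately, rather than waiting for HDFS lease recovery and the resulting flood of commitBlockSynchronization calls on the NameNode.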

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
