[ https://issues.apache.org/jira/browse/HDFS-4418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon resolved HDFS-4418.
-------------------------------

      Resolution: Fixed
    Hadoop Flags: Reviewed

Committed to branch.
                
> HDFS-347: increase default FileInputStreamCache size
> ----------------------------------------------------
>
>                 Key: HDFS-4418
>                 URL: https://issues.apache.org/jira/browse/HDFS-4418
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, hdfs-client, performance
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>         Attachments: hdfs-4418.txt
>
>
> The FileInputStreamCache currently defaults to holding only 10 input stream 
> pairs (corresponding to 10 blocks). In many HBase workloads, the region 
> server will be issuing random reads against a local file which is 2-4GB in 
> size or even larger (hence 20+ blocks).
> Given that the memory usage for caching these input streams is low, and 
> applications like HBase tend to already increase their ulimit -n 
> substantially (e.g. up to 32,000), I think we should raise the default cache 
> size to 50 or more. In the rare case that someone has an application which 
> uses local reads with hundreds of open blocks and can't feasibly raise their 
> ulimit -n, they can lower the cache size accordingly.
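
For context, a minimal sketch of how a client might tune this cache from its
configuration. The property name used below,
dfs.client.read.shortcircuit.streams.cache.size, is the one used in released
Hadoop versions; the exact key on the HDFS-347 branch may differ, and the file
path is made up, so treat this as an illustration under those assumptions
rather than the committed API.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShortCircuitCacheTuning {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Enable short-circuit local reads (the HDFS-347 read path).
        conf.setBoolean("dfs.client.read.shortcircuit", true);
        // Hold up to 50 cached input stream pairs instead of the old default of 10.
        // Assumed key name; verify against the branch's DFSConfigKeys.
        conf.setInt("dfs.client.read.shortcircuit.streams.cache.size", 50);

        try (FileSystem fs = FileSystem.get(conf);
             FSDataInputStream in = fs.open(new Path("/hbase/some-region-file"))) {
          byte[] buf = new byte[4096];
          // Random positioned reads against a large local file benefit from the
          // larger cache, since each cached entry is one block's open stream pair.
          in.read(1024L * 1024L, buf, 0, buf.length);
        }
      }
    }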

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
