hadoop zlib compression does not fully utilize the buffer
---
Key: HADOOP-6662
URL: https://issues.apache.org/jira/browse/HADOOP-6662
Project: Hadoop Common
Issue Type: Improvement
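The title refers to Hadoop's native zlib compressor leaving part of its output buffer unused. As a hedged illustration with `java.util.zip.Deflater` (not Hadoop's actual `ZlibCompressor`), fully utilizing a fixed output buffer means looping until the compressor reports it is finished, so every pass can fill the buffer completely:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class ZlibBufferDemo {
    // Sketch with java.util.zip.Deflater: keep calling deflate() until the
    // compressor is finished, so each pass may fill the fixed output buffer
    // completely instead of leaving it partly used.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[64];            // deliberately small output buffer
        while (!deflater.finished()) {
            int n = deflater.deflate(buf);    // n may be up to buf.length
            out.write(buf, 0, n);
        }
        deflater.end();
        return out.toByteArray();
    }
}
```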
BlockDecompressorStream get EOF exception when decompressing the file
compressed from empty file
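To illustrate the reported edge case with plain `java.util.zip` (not Hadoop's `BlockDecompressorStream`), compressing an empty input and then decompressing the result should round-trip to zero bytes rather than fail with an EOF error:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class EmptyInputDemo {
    // Compress an empty input, then decompress it again. With java.util.zip
    // this yields zero bytes; the report above says Hadoop's
    // BlockDecompressorStream instead threw an EOF exception.
    static byte[] roundTrip() throws IOException {
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        // Close without writing anything: the compressed form of an empty file.
        new DeflaterOutputStream(compressed).close();

        InflaterInputStream in =
            new InflaterInputStream(new ByteArrayInputStream(compressed.toByteArray()));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```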
Key: HADOOP-6663
URL: https://issues.apache.org/jira/browse/HADOOP-66
fs.inmemory.size.mb not listed in conf. Cluster setup page gives wrong advice.
---
Key: HADOOP-6664
URL: https://issues.apache.org/jira/browse/HADOOP-6664
Project: Hadoop Common
Stack wrote:
Getting a release out is critical. Otherwise, IMO, the project is
dead but for the stiffening.
Thanks, Tom, for stepping up to play the RM role for 0.21.
Regarding Steve's call for what we can offer Tom to help along the
release, the little flea HBase can test its use case on 0.21.
[
https://issues.apache.org/jira/browse/HADOOP-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hong Tang resolved HADOOP-6662.
---
Resolution: Duplicate
> hadoop zlib compression does not fully utilize the buffer
>
[
https://issues.apache.org/jira/browse/HADOOP-6664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Chris Douglas resolved HADOOP-6664.
---
Resolution: Invalid
> fs.inmemory.size.mb not listed in conf. Cluster setup page gives wrong advice.
I am interested in a few things, all pertaining to HDFS block
locations for running map tasks. I have spent several days looking
through the Hadoop source code and have arrived at a couple of
questions that are still plaguing me.
1) When the jobtracker assigns a task to a tasktracker, it determines
[
https://issues.apache.org/jira/browse/HADOOP-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang reopened HADOOP-6662:
---
Thanks to Hong Tang. The patch is not attached to HADOOP-4196, and the issue is
still unresolved in release