Hey Daren,
Your idea has some pedigree in the Hadoop universe: it was proposed in early
2006 at https://issues.apache.org/jira/browse/HADOOP-106 and closed as
"won't fix". The suggestion there is to pad out the rest of the block for
very large records, as the complexity added to the file system for general record awareness was judged not to be worth it.
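For what it's worth, the padding trick can be done entirely on the client side. The sketch below is only an illustration of that idea, not anything from HADOOP-106 or the Hadoop codebase: BlockAlignedWriter, writeRecord and the filler buffer are made-up names, and the only real API used is FSDataOutputStream. It pads the remainder of the current block before writing a record that would otherwise straddle a block boundary.

  import java.io.IOException;
  import org.apache.hadoop.fs.FSDataOutputStream;

  /** Illustrative only: pad to the next block boundary before writing a
      record that would otherwise span two blocks. */
  public class BlockAlignedWriter {
    private static final byte[] FILLER = new byte[4096];   // zero bytes used as padding

    private final FSDataOutputStream out;
    private final long blockSize;   // e.g. taken from FileSystem#getDefaultBlockSize()

    public BlockAlignedWriter(FSDataOutputStream out, long blockSize) {
      this.out = out;
      this.blockSize = blockSize;
    }

    public void writeRecord(byte[] record) throws IOException {
      long remaining = blockSize - (out.getPos() % blockSize);
      // Only pad when the record could fit in a single block but not in
      // what is left of the current one.
      if (record.length > remaining && record.length <= blockSize) {
        for (long pad = remaining; pad > 0; ) {
          int n = (int) Math.min(pad, FILLER.length);
          out.write(FILLER, 0, n);
          pad -= n;
        }
      }
      out.write(record);
    }
  }

On the read side the format would still need some way to recognize and skip the filler (a sync marker or a reserved length value, say); that part is left out of the sketch.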
While running HADOOP "Exception in thread "main" java.lang.NullPointerException" occurs
---
Key: HADOOP-6806
URL: https://issues.apache.org/jira/browse/HADOOP-6806
Log directly from jetty to commons logging, bypassing SLF4J
---
Key: HADOOP-6807
URL: https://issues.apache.org/jira/browse/HADOOP-6807
Project: Hadoop Common
Issue Type: Improvement
Document steps to enable {File|Ganglia}Context for kerberos metrics
---
Key: HADOOP-6808
URL: https://issues.apache.org/jira/browse/HADOOP-6808
Project: Hadoop Common
Issue Type
rpc allow creating arbitrary size of objects
Key: HADOOP-6809
URL: https://issues.apache.org/jira/browse/HADOOP-6809
Project: Hadoop Common
Issue Type: Bug
Components: io
Rep
[ https://issues.apache.org/jira/browse/HADOOP-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jakob Homan resolved HADOOP-6661.
---
Resolution: Fixed
I've committed this. Thanks, Jitendra! Resolving as fixed.
> User document