Spill can fail with bad call to Random
--
Key: HADOOP-6766
URL: https://issues.apache.org/jira/browse/HADOOP-6766
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 0.20.
[ https://issues.apache.org/jira/browse/HADOOP-6074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Konstantin Shvachko resolved HADOOP-6074.
-
Resolution: Duplicate
This has been fixed by HDFS-459.
> TestDFSIO does not use
I agree, thanks for looking into it, and this sounds entirely reasonable. I
imagine a great number of the contributors are committers on some Apache
project or another (if not Hadoop itself), so we'll only need to make special
exemptions occasionally.
-Todd
On Fri, May 14, 2010 at 1:19 PM, Konstan
This is good news!
I found the SureLogic stack useful for finding bugs.
It was especially helpful in detecting synchronization issues.
It's good that the licensing issues are cleared up.
Thanks, Cos.
--Konstantin
On 5/14/2010 12:59 PM, Konstantin Boudnik wrote:
Here's an update from SureLogic on the li
Here's an update from SureLogic on the licensing of the software to the
broader contributor community.
1) For now we should be able to use the 'committers' license (the one we
already have) to share with contributors on a per-case basis (a contributor
needs to ask for it, and after a considerati
[ https://issues.apache.org/jira/browse/HADOOP-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aaron Kimball resolved HADOOP-6708.
---
Resolution: Won't Fix
After thinking more about this, I don't think this issue is going to s
Duplicate commons-net in published POM
--
Key: HADOOP-6765
URL: https://issues.apache.org/jira/browse/HADOOP-6765
Project: Hadoop Common
Issue Type: Bug
Components: build
Affects Versions: 0.
Add number of reader threads and queue length as configuration parameters in
RPC.getServer
--
Key: HADOOP-6764
URL: https://issues.apache.org/jira/browse/HADOOP-6764
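To make the request concrete, here is a hedged, illustrative sketch of what passing these knobs at RPC.getServer call sites could look like. The overload shape, the parameter names (numReaders, callQueueLen), and the configuration keys used for the pass-through are assumptions for illustration, not the signature or keys actually committed for this issue.

// Illustrative only: HADOOP-6764 asks for reader-thread count and call-queue
// length to be accepted directly by RPC.getServer. The wrapper below fakes
// such an overload by routing the extra knobs through Configuration; the
// parameter names and the two property keys are assumptions, not the real API.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.Server;

public final class RpcServerFactory {

  private RpcServerFactory() {
  }

  public static Server getServer(Object instance, String bindAddress, int port,
      int numHandlers, int numReaders, int callQueueLen, boolean verbose,
      Configuration conf) throws IOException {
    // Copy the caller's configuration so the per-server tuning stays local.
    Configuration copy = new Configuration(conf);
    copy.setInt("ipc.server.read.threadpool.size", numReaders);  // assumed key
    copy.setInt("ipc.server.handler.queue.size", callQueueLen);  // assumed key
    // Delegate to the existing 0.20-era overload, which only exposes handlers.
    return RPC.getServer(instance, bindAddress, port, numHandlers, verbose, copy);
  }
}

The point of the issue, as the title suggests, is that callers such as the NameNode could tune reader threads and queue length per server at construction time instead of relying on cluster-wide configuration keys like the ones shown above.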
[ https://issues.apache.org/jira/browse/HADOOP-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dmytro Molkov resolved HADOOP-6469.
---
Resolution: Not A Problem
I will close this one since I already solved HDFS-599 by simply us
Hi,
If I am not using HDFS, which file has the policy to determine the input
splits given to the job tracker, and how does the job tracker distribute the
tasks? Where are these policies located? Are they pluggable?
Saurabh Agarwal
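For what it's worth, in classic MapReduce the split policy lives client-side in the job's InputFormat rather than in the JobTracker, and it is pluggable via JobConf.setInputFormat. Below is a minimal sketch against the old org.apache.hadoop.mapred API; the class name LoggingTextInputFormat is made up for illustration.

// Minimal sketch (old org.apache.hadoop.mapred API): the InputFormat computes
// the splits, and each InputSplit advertises the hosts holding its data via
// getLocations(); the JobTracker's scheduler uses those hints for locality.
import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;

public class LoggingTextInputFormat extends TextInputFormat {

  @Override
  public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
    // Reuse the default block-aligned split policy from FileInputFormat.
    InputSplit[] splits = super.getSplits(job, numSplits);
    // Show the locality hints the JobTracker consults when assigning map tasks.
    for (InputSplit split : splits) {
      System.out.println(split + " -> " + Arrays.toString(split.getLocations()));
    }
    return splits;
  }
}

A job would plug this in with conf.setInputFormat(LoggingTextInputFormat.class). For a non-HDFS store the split locations come from that FileSystem's block-location reporting, so locality is only as good as what the underlying file system exposes.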
Hemanth,
Thanks!!
Saurabh Agarwal
On Fri, May 14, 2010 at 9:49 AM, Hemanth Yamijala wrote:
> Saurabh,
>
> > Let me reframe my question: I wanted to know how the job tracker decides
> > the assignment of input splits to task trackers based on the task
> > tracker's data locality. Where is this policy de
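On the cluster side, the assignment policy asked about here is the JobTracker's TaskScheduler, which is itself pluggable through configuration. A small sketch under 0.20-era assumptions follows; the FairScheduler class name and the property key should be checked against the version in use.

// Sketch: the JobTracker delegates "which task goes to which tracker" to a
// pluggable TaskScheduler. The default (JobQueueTaskScheduler) prefers
// node-local, then rack-local splits using the hosts each InputSplit reports.
import org.apache.hadoop.conf.Configuration;

public class SchedulerConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // In practice this is set in the JobTracker's mapred-site.xml; it is shown
    // here only to illustrate which property selects the assignment policy.
    conf.set("mapred.jobtracker.taskScheduler",
        "org.apache.hadoop.mapred.FairScheduler");
    System.out.println("Scheduler: " + conf.get("mapred.jobtracker.taskScheduler"));
  }
}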