ArrayIndexOutOfBoundsException is thrown by KeyFieldBasedPartitioner
Key: HADOOP-6130
URL: https://issues.apache.org/jira/browse/HADOOP-6130
Project: Hadoop Common
Issue Ty
+1
On 7/8/09 12:25 AM, "Hong Tang" wrote:
I have talked with a few folks in the community who are interested in
using TFile (HADOOP-3315) in their projects that are currently
dependent on Hadoop 0.20, and it would significantly simplify the
release process as well as their lives if we could bac
When you have 0 reduces, the map outputs themselves are moved to the output
directory for you.
It is also straightforward to open your own file and write to it directly
instead of using the output collector.
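A minimal sketch of both approaches with the 0.20-era mapred API (the mapper
class name and the side-file path are made up for illustration):

import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class MapOnlyExample {

  // Mapper that also writes a side file straight to HDFS, bypassing the
  // output collector. The class name and side-file path are hypothetical.
  public static class PassThroughMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, LongWritable, Text> {
    private FSDataOutputStream side;

    public void configure(JobConf job) {
      try {
        FileSystem fs = FileSystem.get(job);
        // one side file per task attempt so concurrent tasks don't collide
        String taskId = job.get("mapred.task.id", "side");
        side = fs.create(new Path("/tmp/side-output/" + taskId));
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }

    public void map(LongWritable key, Text value,
        OutputCollector<LongWritable, Text> out, Reporter reporter)
        throws IOException {
      side.writeBytes(value + "\n"); // direct write to our own file
      out.collect(key, value);       // normal map output
    }

    public void close() throws IOException {
      side.close();
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(MapOnlyExample.class);
    conf.setJobName("map-only");
    conf.setMapperClass(PassThroughMapper.class);
    conf.setNumReduceTasks(0); // map outputs land in the output dir as-is
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
  }
}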
On Tue, Jul 7, 2009 at 10:14 AM, Todd Lipcon wrote:
> On Tue, Jul 7, 2009 at 1:13 AM,
+1
+1
mahadev
On 7/7/09 12:18 PM, "Milind Bhandarkar" wrote:
> +1.
>
>
> On 7/7/09 11:55 AM, "Hong Tang" wrote:
>
>> I have talked with a few folks in the community who are interested in
>> using TFile (HADOOP-3315) in their projects that are currently
>> dependent on Hadoop 0.20, and it woul
+1
Zheng Shao wrote:
+1
-----Original Message-----
From: Arun C Murthy [mailto:a...@yahoo-inc.com]
Sent: Tuesday, July 07, 2009 1:30 PM
To: common-dev@hadoop.apache.org
Subject: Re: [VOTE] Back-port TFile to Hadoop 0.20
On Jul 7, 2009, at 11:55 AM, Hong Tang wrote:
I have talked with a
On Tue, Jul 7, 2009 at 1:39 PM, Dhruba Borthakur wrote:
> I think we are trying to change an existing Apache-Hadoop process. The
> current process specifically says that a released branch cannot have new
> features checked into it.
>
> This vote seems to be proposing that "If a new feature does n
I think we are trying to change an existing Apache-Hadoop process. The
current process specifically says that a released branch cannot have new
features checked into it.
This vote seems to be proposing that "If a new feature does not change any
existing code (other than build.xml), then it is ok t
+1
-----Original Message-----
From: Arun C Murthy [mailto:a...@yahoo-inc.com]
Sent: Tuesday, July 07, 2009 1:30 PM
To: common-dev@hadoop.apache.org
Subject: Re: [VOTE] Back-port TFile to Hadoop 0.20
On Jul 7, 2009, at 11:55 AM, Hong Tang wrote:
> I have talked with a few folks in the community
On Jul 7, 2009, at 11:55 AM, Hong Tang wrote:
I have talked with a few folks in the community who are interested
in using TFile (HADOOP-3315) in their projects that are currently
dependent on Hadoop 0.20, and it would significantly simplify the
release process as well as their lives if we
+1
On Jul 7, 2009, at 11:55 AM, Hong Tang wrote:
I have talked with a few folks in the community who are interested
in using TFile (HADOOP-3315) in their projects that are currently
dependent on Hadoop 0.20, and it would significantly simplify the
release process as well as their lives if
MapFile doesn't work with serializables other than Writables
Key: HADOOP-6129
URL: https://issues.apache.org/jira/browse/HADOOP-6129
Project: Hadoop Common
Issue Type: Improvement
+1.
On 7/7/09 11:55 AM, "Hong Tang" wrote:
> I have talked with a few folks in the community who are interested in
> using TFile (HADOOP-3315) in their projects that are currently
> dependent on Hadoop 0.20, and it would significantly simplify the
> release process as well as their lives if we
I have talked with a few folks in the community who are interested in
using TFile (HADOOP-3315) in their projects that are currently
dependent on Hadoop 0.20, and it would significantly simplify the
release process as well as their lives if we could back port TFile to
Hadoop 0.20 (instead o
[ https://issues.apache.org/jira/browse/HADOOP-5976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tsz Wo (Nicholas), SZE resolved HADOOP-5976.
Resolution: Fixed
Fix Version/s: 0.21.0
Release Note: Add a new
On Tue, Jul 7, 2009 at 1:13 AM, jason hadoop wrote:
>
>
> The other alternative you may try is simply to write your map outputs to
> HDFS [ie: setNumReduces(0)], and have a consumer pick up the map outputs as
> they appear. If the life of the files is short and you can withstand data
> loss, you m
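The consumer side of that pattern can be a simple poller over the job's
output directory using the standard FileSystem API. A rough sketch (the
directory layout and the process() hook are hypothetical):

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MapOutputConsumer {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path outDir = new Path(args[0]); // the map-only job's output directory
    Set<String> seen = new HashSet<String>();

    while (true) {
      FileStatus[] parts = fs.listStatus(outDir);
      if (parts != null) {
        for (FileStatus stat : parts) {
          String name = stat.getPath().getName();
          // skip already-handled files and in-progress/temporary output
          if (seen.contains(name) || name.startsWith("_")) continue;
          process(fs, stat.getPath());
          seen.add(name);
        }
      }
      Thread.sleep(5000); // poll interval, tune to taste
    }
  }

  // Hypothetical handler; replace with whatever consumes a part file.
  private static void process(FileSystem fs, Path part) throws IOException {
    System.out.println("picked up " + part);
  }
}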
Serializer and Deserializer should extend java.io.Closeable
---
Key: HADOOP-6128
URL: https://issues.apache.org/jira/browse/HADOOP-6128
Project: Hadoop Common
Issue Type: Improvement
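For reference, a sketch of what that change would look like against the
existing org.apache.hadoop.io.serializer interfaces (both shown in one file
here for brevity; in Hadoop they live in separate files):

import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Both interfaces already declare close(), so extending java.io.Closeable
// costs nothing and lets callers close them generically (try/finally
// helpers that accept any Closeable, etc.).
interface Serializer<T> extends Closeable {
  void open(OutputStream out) throws IOException;
  void serialize(T t) throws IOException;
  void close() throws IOException; // now also satisfies Closeable
}

interface Deserializer<T> extends Closeable {
  void open(InputStream in) throws IOException;
  T deserialize(T t) throws IOException;
  void close() throws IOException;
}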
If your constraints are loose enough, you could consider using the chain
mapping that became available in 0.19 and have multiple mappers for your job.
Each extra mapper only receives the output of the prior map in the chain,
and, if I remember correctly, the combiner is run at the end of the chain of
mappers.
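A rough sketch of such a chain with the org.apache.hadoop.mapred.lib
ChainMapper/ChainReducer API (the TrimMapper, UpperCaseMapper, and
PassReducer classes are made up; the chain calls are the real 0.19+ API):

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.ChainMapper;
import org.apache.hadoop.mapred.lib.ChainReducer;

public class ChainExample {

  public static class TrimMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, LongWritable, Text> {
    public void map(LongWritable key, Text value,
        OutputCollector<LongWritable, Text> out, Reporter reporter)
        throws IOException {
      out.collect(key, new Text(value.toString().trim()));
    }
  }

  // Only ever sees TrimMapper's output, not the raw job input.
  public static class UpperCaseMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, LongWritable, Text> {
    public void map(LongWritable key, Text value,
        OutputCollector<LongWritable, Text> out, Reporter reporter)
        throws IOException {
      out.collect(key, new Text(value.toString().toUpperCase()));
    }
  }

  public static class PassReducer extends MapReduceBase
      implements Reducer<LongWritable, Text, LongWritable, Text> {
    public void reduce(LongWritable key, Iterator<Text> values,
        OutputCollector<LongWritable, Text> out, Reporter reporter)
        throws IOException {
      while (values.hasNext()) {
        out.collect(key, values.next());
      }
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf job = new JobConf(ChainExample.class);
    job.setJobName("chain");
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // The chain drivers act as the job's mapper/reducer.
    job.setMapperClass(ChainMapper.class);
    job.setReducerClass(ChainReducer.class);

    // First mapper in the chain reads the job's input records...
    ChainMapper.addMapper(job, TrimMapper.class,
        LongWritable.class, Text.class, LongWritable.class, Text.class,
        true, new JobConf(false));
    // ...each later mapper receives only the previous mapper's output.
    ChainMapper.addMapper(job, UpperCaseMapper.class,
        LongWritable.class, Text.class, LongWritable.class, Text.class,
        true, new JobConf(false));

    ChainReducer.setReducer(job, PassReducer.class,
        LongWritable.class, Text.class, LongWritable.class, Text.class,
        true, new JobConf(false));

    JobClient.runJob(job);
  }
}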
To add to Todd/Ted's wise words, the Hadoop (and MapReduce) architects
didn't impose this limitation just for fun; it is core to enabling
Hadoop to be as reliable as it is. If the reducer started processing
mapper output immediately and a specific mapper failed, then the reducer
would have to