AM, jason hadoop wrote:
>
> The other alternative you may try is simply to write your map outputs to
> HDFS [ie: setNumReduces(0)], and have a consumer pick up the map outputs
> as they appear. If the life of the files is short and you can wait ...
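The pattern quoted above (a map-only job writing its outputs to HDFS, with a downstream consumer collecting the part files as they appear) can be sketched in plain Java. This is a local-filesystem stand-in for illustration only: the class name, the `part-*` glob, and the use of `java.nio.file` in place of the HDFS client are all assumptions, not Hadoop API.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Hypothetical stand-in for the consumer side: polls an output directory
// (HDFS in the real setup) and returns the part files that have not been
// seen on a previous poll.
class MapOutputPoller {
    private final Set<Path> seen = new HashSet<>();

    public List<Path> poll(Path outputDir) throws IOException {
        List<Path> fresh = new ArrayList<>();
        try (DirectoryStream<Path> ds =
                 Files.newDirectoryStream(outputDir, "part-*")) {
            for (Path p : ds) {
                if (seen.add(p)) {   // add() is false if already seen
                    fresh.add(p);
                }
            }
        }
        return fresh;
    }
}
```

In the real setup the consumer would also need some completion signal per file (for example, renaming on close), since a part file that is still being written could otherwise be picked up early.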
If your constraints are loose enough, you could consider using the chain
mapping that became available in 0.19, and have multiple mappers for your
job. Each extra mapper receives only the output of the prior map in the
chain, and if I remember correctly, the combiner is run at the end of the
chain of maps.
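The chaining behaviour described here (each mapper sees only the records emitted by the mapper before it) can be sketched abstractly. This is a conceptual illustration, not the Hadoop ChainMapper API; the class and method names are invented.

```java
import java.util.*;
import java.util.function.Function;

// Conceptual sketch of a chain of map stages: each stage consumes only
// what the previous stage emitted, like mappers in a chain-mapper job.
class MapChain<T> {
    private final List<Function<T, T>> stages = new ArrayList<>();

    public MapChain<T> add(Function<T, T> stage) {
        stages.add(stage);
        return this;
    }

    public T run(T record) {
        T out = record;
        for (Function<T, T> stage : stages) {
            out = stage.apply(out);  // output of the prior map feeds the next
        }
        return out;
    }
}
```

The design point the email makes is that intermediate stages never touch the job's raw input, so only the first mapper's input format matters.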
Since the only place that data is incompatible is in a map, I don't think
it should matter, unless someone has written out a file of TupleWritables
in a sequence file.
The serialization format is designed to be able to read the old format.
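One common way a serialization format is made able to read its old version is a leading version tag that the reader switches on. The sketch below is illustrative only and is not the actual TupleWritable layout; the class, field, and constant names are invented.

```java
import java.io.*;

// Sketch of a version-tagged record format: the writer emits a version
// byte first, so a reader built against the new code can still decode
// records written by the old code.
class VersionedValue {
    static final byte V1 = 1;  // old format: int payload
    static final byte V2 = 2;  // new format: long payload
    long value;

    public void write(DataOutput out) throws IOException {
        out.writeByte(V2);
        out.writeLong(value);
    }

    public void readFields(DataInput in) throws IOException {
        byte version = in.readByte();
        switch (version) {
            case V1: value = in.readInt(); break;   // old data still readable
            case V2: value = in.readLong(); break;
            default: throw new IOException("unknown version " + version);
        }
    }
}
```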
On Thu, Jul 2, 2009 at 3:10 PM, Owen O'Malley wrote:
I don't believe that patch breaks any compatibility; the change is
completely internal to TupleWritable.
The version in 0.18 requires larger changes, as CompositeInputReader needs
to change also.
On Wed, Jul 1, 2009 at 11:50 PM, Owen O'Malley wrote:
> On Wed, Jul 1, 2009 at 8:09 PM, ... nch-0.19.
>
> Nige
>
> On Jul 1, 2009, at 7:49 AM, jason hadoop wrote:
>
>> Can you put http://issues.apache.org/jira/browse/HADOOP-5589 in please?
>>
>> On Wed, Jul 1, 2009 at 2:44 AM, Tom White wrote:
>>
>>> I have created a candidate build ...
Can you put http://issues.apache.org/jira/browse/HADOOP-5589 in please?
On Wed, Jul 1, 2009 at 2:44 AM, Tom White wrote:
> I have created a candidate build for Hadoop 0.19.2. This fixes 42
> issues in 0.19.1
> (http://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&mode=hide&so