On Tue, 2011-01-25 at 14:16 +0100, Patrik Modesto wrote:
> The attached file contains the working version with the cloned key in
> the reduce() method. My other approach was:
> 
> > context.write(ByteBuffer.wrap(key.getBytes(), 0, key.getLength()),
> > Collections.singletonList(getMutation(key)));
> 
> Which produces junk keys.

In fact I have another problem: something is writing an empty byte[], or
similar, as a key, which puts one whole row out of whack (one row in
25 million...).

But I'm debugging along the same code.

I don't quite understand how the byte[] in
ByteBuffer.wrap(key.getBytes(), ...)
gets clobbered.
Well, your key is a mutable Text object, so I can see some possibility
depending on how Hadoop reuses these objects.
Is there something to ByteBuffer.allocate(..) I'm missing...
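
For what it's worth, here's a rough sketch of how I understand the reuse
problem, assuming a reducer keyed by Text as in your attached job
(copyKey is just a name I made up for illustration):

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import org.apache.hadoop.io.Text;

    public class KeyCopy {
        // Wraps the Text's internal array: the contents change underneath
        // the ByteBuffer once Hadoop reuses 'key' for the next record.
        static ByteBuffer wrapShared(Text key) {
            return ByteBuffer.wrap(key.getBytes(), 0, key.getLength());
        }

        // Copies only the valid bytes into a fresh array, so later reuse
        // of 'key' cannot clobber what was handed to context.write(...).
        static ByteBuffer copyKey(Text key) {
            return ByteBuffer.wrap(Arrays.copyOf(key.getBytes(), key.getLength()));
        }
    }

ByteBuffer.allocate(..) plus a put(..) would work too; the point is just
that the bytes get copied before the framework reuses the Text.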

btw.
 Is "d.timestamp = System.currentTimeMillis();" ok?
 Shouldn't this be microseconds, so that each mutation gets a distinct
timestamp? http://wiki.apache.org/cassandra/DataModel
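
Something like the following is what I mean, only a sketch (nowMicros is
a name I made up, and I'm assuming d is the Column built in your
getMutation()):

    public final class Timestamps {
        // Cassandra treats the timestamp as an opaque long, but the usual
        // convention is microseconds since the epoch.
        static long nowMicros() {
            return System.currentTimeMillis() * 1000;
        }
    }

i.e. d.timestamp = Timestamps.nowMicros(); rather than plain
currentTimeMillis().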


~mck


-- 
"As you go the way of life, you will see a great chasm. Jump. It is not
as wide as you think." Native American Initiation Rite 
| http://semb.wever.org | http://sesat.no
| http://finn.no       | Java XSS Filter

