Err... Guess you replied in Portuguese to the list :D

2012/10/29 Andre Tavares <andre...@gmail.com>

> Marcelo,
>
> the times I've had this problem, it was usually because the UUID value
> being sent to Cassandra did not correspond to an "exact" UUID value. To
> deal with that, I often used UUID.randomUUID() (to generate a valid UUID)
> and UUID.fromString("081f4500-047e-401c-8c0b-a41fefd099d7") - the latter
> to turn a String into a valid UUID.
>
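> A minimal Java sketch of that pattern (the hard-coded UUID string is just
> an example value; UUID.fromString() throws IllegalArgumentException if the
> input is not a well-formed UUID):
>
> import java.util.UUID;
>
> public class UuidExamples {
>     public static void main(String[] args) {
>         // Generate a guaranteed-valid (type 4) UUID.
>         UUID generated = UUID.randomUUID();
>
>         // Turn a String into a valid UUID, validating it along the way.
>         UUID parsed = UUID.fromString("081f4500-047e-401c-8c0b-a41fefd099d7");
>
>         System.out.println(generated + " / " + parsed);
>     }
> }
>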
> Since we have 2 keyspaces in Cassandra (dmp_input->Astyanax) and
> (dmp->PlayOrm), these frameworks may end up handling the UUID keys
> differently (in the implementation we did).
>
> So I think the solution you found is valid (sorry for not spotting the
> problem earlier, if this is indeed your case...)
>
> Regards,
>
> André
>
>
> 2012/10/29 Marcelo Elias Del Valle <mvall...@gmail.com>
>
>> Answering myself: it seems we can't have any non-type-1 UUIDs in column
>> names. I used the UTF8 comparator and saved my UUIDs as strings, and it
>> worked.
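>>
>> A minimal sketch of that workaround (assuming the column family's
>> comparator is UTF8Type; java.util.UUID converts both ways):
>>
>> UUID userId = UUID.randomUUID();
>> String columnName = userId.toString();        // stored as a UTF8 column name
>> UUID recovered = UUID.fromString(columnName); // parsed back when reading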
>>
>>
>> 2012/10/29 Marcelo Elias Del Valle <mvall...@gmail.com>
>>
>>> Hello,
>>>
>>>     I am using ColumnFamilyInputFormat the same way it's described in
>>> this example:
>>> https://github.com/apache/cassandra/blob/trunk/examples/hadoop_word_count/src/WordCount.java#L215
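>>>
>>> The setup, roughly as it appears in that example (a sketch; host, port,
>>> partitioner, and the keyspace/column family names here are illustrative):
>>>
>>> Job job = new Job(getConf(), "querycf");
>>> job.setInputFormatClass(ColumnFamilyInputFormat.class);
>>>
>>> // Tell the input format which cluster and column family to read.
>>> ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
>>> ConfigHelper.setInputInitialAddress(job.getConfiguration(), "localhost");
>>> ConfigHelper.setInputPartitioner(job.getConfiguration(),
>>>         "org.apache.cassandra.dht.RandomPartitioner");
>>> ConfigHelper.setInputColumnFamily(job.getConfiguration(), "dmp", "query_cf");
>>>
>>> // Request every column of each row, up to 1000 per slice.
>>> SlicePredicate predicate = new SlicePredicate().setSlice_range(
>>>         new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER,
>>>                        ByteBufferUtil.EMPTY_BYTE_BUFFER, false, 1000));
>>> ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);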
>>>
>>>     I have been able to process data in Cassandra successfully by using
>>> Hadoop. However, as this solution doesn't let me filter which data in
>>> Cassandra I want to process, I decided to create a query column family to
>>> list the data I want to process in Hadoop. This column family is as follows:
>>>
>>> row key: YYYYMM
>>> column name: UUID - user ID
>>> column value: timestamp - last processed date
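>>>
>>> For illustration, writing one entry into this column family might look
>>> like this with Astyanax (a sketch; "query_cf" and the initialized
>>> Keyspace handle are hypothetical):
>>>
>>> ColumnFamily<String, UUID> QUERY_CF = new ColumnFamily<String, UUID>(
>>>         "query_cf", StringSerializer.get(), UUIDSerializer.get());
>>>
>>> MutationBatch m = keyspace.prepareMutationBatch();
>>> m.withRow(QUERY_CF, "201210")                // row key: YYYYMM
>>>  .putColumn(UUID.randomUUID(),               // column name: user ID
>>>             System.currentTimeMillis());     // value: last processed date
>>> m.execute();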
>>>
>>>      The problem is, when I run Hadoop, I get the exception below. Is
>>> there any limitation on having UUIDs as column names? I am generating my
>>> user IDs with java.util.UUID.randomUUID() for now. I could change the
>>> method later, but aren't only type 1 UUIDs exactly 16 bytes long?
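>>>
>>> (For reference, a quick sketch of how a java.util.UUID maps to the 16
>>> bytes Cassandra expects - the 128-bit layout is the same for type 1 and
>>> type 4:)
>>>
>>> UUID u = UUID.randomUUID();                 // type 4
>>> ByteBuffer bytes = ByteBuffer.allocate(16); // java.nio.ByteBuffer
>>> bytes.putLong(u.getMostSignificantBits());  // first 8 bytes
>>> bytes.putLong(u.getLeastSignificantBits()); // last 8 bytes
>>> bytes.rewind();                             // always exactly 16 bytes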
>>>
>>>
>>> java.lang.RuntimeException: InvalidRequestException(why:UUIDs must be exactly 16 bytes)
>>>   at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:391)
>>>   at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:397)
>>>   at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:323)
>>>   at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>>>   at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>>>   at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:188)
>>>   at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
>>>   at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
>>>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
>>>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
>>>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
>>>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
>>> Caused by: InvalidRequestException(why:UUIDs must be exactly 16 bytes)
>>>   at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12254)
>>>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
>>>   at org.apache.cassandra.thrift.Cassandra$Client.recv_get_range_slices(Cassandra.java:683)
>>>   at org.apache.cassandra.thrift.Cassandra$Client.get_range_slices(Cassandra.java:667)
>>>   at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:356)
>>> ... 11 more
>>>
>>> Best regards,
>>> --
>>> Marcelo Elias Del Valle
>>> http://mvalle.com - @mvallebr
>>>
>>
>>
>>
>> --
>> Marcelo Elias Del Valle
>> http://mvalle.com - @mvallebr
>>
>
>


-- 
Marcelo Elias Del Valle
http://mvalle.com - @mvallebr
