On Sat, Jul 21, 2012 at 3:37 PM, Sylvain Lebresne <sylv...@datastax.com> wrote:
>> I don't know much about Cassandra internals, but from a user point of
>> view, a scan for a range of tokens is not a common use-case.
>
> All of bootstrap/move/decommission/repair rely heavily on being able
> to efficiently scan a range of tokens. Otherwise, a
> bootstrap/move/decommission/repair of a node would require that node,
> and all the nodes that share some data with it, to read all of
> their data (to extract the correct token range). This would have a
> hugely detrimental impact on the performance of those operations and
> is therefore not an option.

Why can't we keep duplicated rows keyed by their md5 hash, to be used
in such scenarios (bootstrap/move etc.)? Otherwise, the
OrderedPartitioner would be used for lookups.
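To make the proposal concrete, here is a minimal sketch (not actual Cassandra code; all names are hypothetical) of keeping each row under two orderings at once: the raw key order for OrderedPartitioner-style lookups and range queries, and the md5 token order for token-range operations like bootstrap/move/decommission/repair:

```python
import bisect
import hashlib

class DualIndexStore:
    """Hypothetical store that duplicates each row's index entry:
    once by raw key (ordered lookups) and once by md5 token
    (token-range scans). Illustrative only."""

    def __init__(self):
        self.by_key = {}    # raw key -> value, for ordered lookups
        self.by_token = []  # sorted list of (md5_token, key) pairs

    @staticmethod
    def token(key):
        # md5 of the key, interpreted as an integer token
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def put(self, key, value):
        self.by_key[key] = value
        bisect.insort(self.by_token, (self.token(key), key))

    def key_range(self, start, end):
        # OrderedPartitioner-style scan over raw keys
        return [k for k in sorted(self.by_key) if start <= k <= end]

    def token_range(self, start, end):
        # token scan, as bootstrap/move/decommission/repair need
        lo = bisect.bisect_left(self.by_token, (start, ""))
        hi = bisect.bisect_right(self.by_token, (end, chr(0x10FFFF)))
        return [k for _, k in self.by_token[lo:hi]]
```

The trade-off this sketch makes visible is the one under discussion: you get both efficient orderings, but every row's index entry is stored twice, so writes and storage pay for it.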


>
> --
> Sylvain
