Hello,
     I was curious what people have found works better for
structuring/modeling data in C*.  With my data I have two primary
keys: one 64 bit int that's 0 - 50 million (it's unlikely to ever go
higher than 70 million) and another 64 bit that's probably close to
hitting a trillion in the next year or so.  Looking at how the data
is going to behave, each row/record will be updated for the first few
months, but after that it's practically written in stone.  Still, I
was leaning toward leveled compaction since each row gets updated
anywhere from once an hour to at least once a day for the first 7 days.
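
To make that concrete, here's roughly one of the two layouts I'm
weighing (table and column names are just placeholders I made up, and
"payload" stands in for the real columns):

    CREATE TABLE records_by_small (
        small_id  bigint,   -- placeholder for the 0-50 million key
        big_id    bigint,   -- placeholder for the ~trillion-scale key
        payload   blob,     -- stand-in for the actual data columns
        PRIMARY KEY (small_id, big_id)
    ) WITH compaction = { 'class' : 'LeveledCompactionStrategy' };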

So, from anyone's experience, is it better to use a low cardinality
partition key or a high cardinality one?  Additionally, the data
organized by the low cardinality key is probably 1-6 GB (and growing),
but by the high cardinality key it would be only 1-6 MB, written
2-3x a year.
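
The other layout would flip that around, so the high cardinality key
partitions and the low cardinality one clusters (same made-up names
as above):

    CREATE TABLE records_by_big (
        big_id    bigint,   -- placeholder for the ~trillion-scale key
        small_id  bigint,   -- placeholder for the 0-50 million key
        payload   blob,     -- stand-in for the actual data columns
        PRIMARY KEY (big_id, small_id)
    ) WITH compaction = { 'class' : 'LeveledCompactionStrategy' };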


Thanks,
   Dave


new high cardinality keys in 1 year ~15,768,000,000
new low cardinality keys in 1 year = 10,000-30,000

low cardinality key set size ~1-6GB
high cardinality key set size 1-5MB
