Thank you very much!
Just read some material in the wiki, such as the pages on limitations and secondary indexes.
Adding to what you said, searching within large rows, by which I mean rows
with millions of columns, seems to behave like a plain hash lookup rather
than a btree-style search.
So model A it is!
Once again, thank you!
So I need to read what I write before hitting send. Should have been,
"If A works for YOUR use case." and "Wide rows DON'T spread across nodes
well"
On 09/29/2011 02:34 PM, Jeremiah Jordan wrote:
If A works for our use case, it is a much better option. A given row
has to be read in full to return data from it. There used to be a
limitation that a row had to fit in memory, but there is now code to
page through the data, so while that isn't a limitation any more, it
means rows that don't
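
For anyone following the thread, here is a minimal sketch of what paging
through a wide row looks like from the client side. It assumes the pycassa
Thrift client, and the keyspace, column family and row key names below are
invented for illustration; the point is only that a wide row is consumed
slice by slice rather than in a single call.

    from pycassa.pool import ConnectionPool
    from pycassa.columnfamily import ColumnFamily

    pool = ConnectionPool('MyKeyspace', ['localhost:9160'])
    cf = ColumnFamily(pool, 'WideEntries')

    # xget() is a generator that walks the columns of one row in
    # slices of buffer_size, instead of materializing the whole row.
    for name, value in cf.xget('some-wide-row-key', buffer_size=1024):
        print(name, value)  # stand-in for per-column processing
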
What would be the best approach?
A) millions of ~2 KB rows, where each row could have ~6 columns
B) hundreds of ~100 GB rows, where each row could have ~1 million columns
Considerations:
Most entries will be searched for (read+write) at least once a day but no
more than 3 times a day.
Cheap hardware a
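
To make the two options concrete, here is a rough sketch of how one entry
might be written and read back under each model. Again this assumes pycassa,
and every keyspace, column family, key and field name is made up; model B
also assumes an ASCII/UTF8 column comparator so the column-name slice shown
at the end behaves as expected.

    from pycassa.pool import ConnectionPool
    from pycassa.columnfamily import ColumnFamily

    pool = ConnectionPool('MyKeyspace', ['localhost:9160'])

    # Model A: one small row per entry, ~6 columns each.
    model_a = ColumnFamily(pool, 'EntriesA')
    model_a.insert('entry-0001', {
        'field1': 'v1', 'field2': 'v2', 'field3': 'v3',
        'field4': 'v4', 'field5': 'v5', 'field6': 'v6',
    })
    print(model_a.get('entry-0001'))  # whole row, cheap to read

    # Model B: a few huge rows; the entry id is folded into the column
    # name so one bucket row holds the columns of many entries.
    model_b = ColumnFamily(pool, 'EntriesB')
    model_b.insert('bucket-01', {
        'entry-0001:field1': 'v1',
        'entry-0001:field2': 'v2',
    })
    print(model_b.get('bucket-01',
                      column_start='entry-0001:',
                      column_finish='entry-0001:~'))  # slice one entry out of the wide row

Note that under model B the whole bucket row lives on a single replica set,
which is the "wide rows don't spread across nodes well" point made earlier
in the thread.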