We want to store a very large number of columns in a single row (up to about 
100,000,000), where each value is roughly 10 bytes, so on the order of 1 GB 
per row.

We also need to be able to get slices of columns from any point in the row.
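For reference, the access pattern we have in mind looks roughly like this. It is just a sketch, assuming byte-ordered column names: we would pack the numeric column index big-endian so that byte-wise comparison of column names matches numeric order, which is what makes "slice from any point" work as a simple range over names.

```python
import struct

def column_name(i):
    """Pack a numeric column index as a big-endian 8-byte name so that
    byte-wise (lexicographic) order matches numeric order. A range slice
    from column_name(a) to column_name(b) then behaves like a numeric
    slice over indices a..b."""
    return struct.pack(">q", i)

# Byte-wise ordering of the packed names agrees with numeric ordering.
names = [column_name(i) for i in (0, 1, 255, 256, 70_000)]
assert names == sorted(names)
```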

We haven't run into problems with smaller amounts of data so far, but can 
anyone think of a reason why this would be a bad idea, or why it would cause 
large performance problems?

If breaking up the row is something we should do, what is the maximum number of 
columns we should have?
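If splitting is the way to go, the scheme we would try is roughly the following. This is only a sketch; BUCKET_SIZE and the "mydata:<bucket>" row-key format are made-up placeholders, not anything we have settled on.

```python
BUCKET_SIZE = 1_000_000  # hypothetical cap on columns per physical row

def physical_location(logical_index):
    """Map a logical column index in the big virtual row to a
    (physical_row_key, column_index_within_that_row) pair."""
    bucket, offset = divmod(logical_index, BUCKET_SIZE)
    return f"mydata:{bucket}", offset  # "mydata" is a placeholder key prefix

def buckets_for_slice(start, end):
    """List the physical rows a slice over logical indices [start, end)
    would have to read, in order."""
    first, last = start // BUCKET_SIZE, (end - 1) // BUCKET_SIZE
    return [f"mydata:{b}" for b in range(first, last + 1)]
```

A slice that crosses a bucket boundary then becomes two (or more) row reads that we would stitch back together client-side.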

We are not too worried about a small performance decrease; adding more nodes 
to the cluster would be an option if it lets us keep the code simpler.

Thanks,

Owen Davies
