Hi
"In practice, one would want to model their data such that the 'row has
too much columns' scenario is prevented."
I am curious how one would actually prevent this: if the data is sharded at
one-day granularity, nothing stops a client from inserting an enormous number
of new columns (very often it is n
One would use batch processes (e.g. through Hadoop) or client-side
aggregation, yes. In theory it would be possible to introduce runtime
sharding across rows on the Cassandra server side, but it's not part
of its design.
In practice, one would want to model their data such that the 'row has
too many columns' scenario is prevented.
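The client-side approach can be made fully deterministic by folding a sub-day bucket into the row key, so no runtime decision is ever needed. A minimal sketch, assuming a key format and bucket size of my own invention (Cassandra itself prescribes neither):

```python
from datetime import datetime, timezone

BUCKET_SECONDS = 3600  # one row per hour instead of one per day (assumed size)

def row_key(entity_id: str, ts_ms: int) -> str:
    """Build a row key of the form '<entity>:<day>:<bucket>'.

    Every client derives the same key from the timestamp alone, so no
    coordination or server-side sharding is needed, and no row can ever
    hold more than one bucket's worth of columns.
    """
    dt = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
    bucket = (dt.hour * 3600 + dt.minute * 60 + dt.second) // BUCKET_SECONDS
    return f"{entity_id}:{dt.strftime('%Y%m%d')}:{bucket}"

# Two samples an hour apart land in different rows:
print(row_key("bob:bp", 1272037500000))  # bob:bp:20100423:15
print(row_key("bob:bp", 1272041100000))  # bob:bp:20100423:16
```

Reads for a time range then touch a known, bounded set of row keys.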
Hi Miguel,
I'd like to ask whether it is possible to have runtime sharding of rows in
Cassandra, i.e. if a row has too many new columns inserted, then create
another row (let's say, if the original time sharding is one day per row,
then we would have two rows for that day). Maybe batch processes could
On Fri, Apr 23, 2010 at 5:54 PM, Miguel Verde wrote:
> TimeUUID's time component is measured in 100-nanosecond intervals. The
> library you use might calculate it with poorer accuracy or precision, but
> from a storage/comparison standpoint in Cassandra millisecond data is easily
> captured by it.
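The two-rows-per-day idea in the question above can also be approximated on the client side: keep a shard counter per day-row and start writing to a new row once the current one passes a column threshold. A rough sketch; the threshold, key format, and in-memory counter are all assumptions, and a real client would need to persist or rediscover the shard count:

```python
# Hypothetical client-side overflow sharding: once a day-row accumulates
# MAX_COLUMNS columns, subsequent writes go to '<key>:<day>:1', ':2', ...
MAX_COLUMNS = 100_000  # assumed threshold

_column_counts: dict[str, int] = {}  # in-memory only; an assumption

def shard_for_write(entity_id: str, day: str) -> str:
    """Return the row key the next column should be written to."""
    shard = 0
    while True:
        key = f"{entity_id}:{day}:{shard}"
        if _column_counts.get(key, 0) < MAX_COLUMNS:
            _column_counts[key] = _column_counts.get(key, 0) + 1
            return key
        shard += 1  # current row is full; spill into the next one

# Reading a day back requires scanning shards 0..n until one is missing.
```

The read-side scan and the shared counter are exactly the coordination cost that a deterministic time-bucket scheme avoids.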
TimeUUID's time component is measured in 100-nanosecond intervals. The
library you use might calculate it with poorer accuracy or precision,
but from a storage/comparison standpoint in Cassandra millisecond data
is easily captured by it.
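To illustrate the precision point: a version-1 UUID timestamp counts 100-ns intervals since the Gregorian epoch (1582-10-15), so a millisecond Unix timestamp maps onto it with four orders of magnitude to spare. A small sketch of the conversion arithmetic (not any particular client library's API):

```python
import uuid

# RFC 4122 offset between the Gregorian epoch (1582-10-15) and the
# Unix epoch (1970-01-01), in 100-ns intervals.
GREGORIAN_UNIX_OFFSET = 0x01B21DD213814000  # 122192928000000000

def unix_ms_to_uuid1_time(ts_ms: int) -> int:
    """Convert a millisecond Unix timestamp to a v1 UUID time field."""
    return ts_ms * 10_000 + GREGORIAN_UNIX_OFFSET  # 1 ms = 10,000 * 100 ns

def uuid1_time_to_unix_ms(uuid_time: int) -> int:
    return (uuid_time - GREGORIAN_UNIX_OFFSET) // 10_000

# Round-trips exactly, so millisecond data loses nothing in a TimeUUID;
# the stdlib exposes the 60-bit time field of a fresh TimeUUID as .time:
now_ms = uuid1_time_to_unix_ms(uuid.uuid1().time)
```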
One typical way of dealing with the data explosion of
Hello,
I am looking to store patient physiologic data in Cassandra - it's being
collected at rates of 1 to 125 Hz. I'm thinking of storing the timestamps as
the column names and the patient/parameter combo as the row key. For example,
Bob is in the ICU and is currently having his blood pressure
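A sketch of the layout being proposed, modeled as plain Python dicts; the row-key and column-name formats here are illustrative assumptions, not an established schema:

```python
from uuid import uuid1

# Row key: patient + parameter (+ day, to keep rows bounded, as discussed
# elsewhere in the thread). Column name: a TimeUUID, so samples sort by
# time and simultaneous samples never collide. All names are assumptions.
def make_row_key(patient_id: str, parameter: str, day: str) -> str:
    return f"{patient_id}:{parameter}:{day}"

rows: dict[str, dict[object, float]] = {}

def record_sample(patient_id: str, parameter: str, day: str, value: float) -> None:
    key = make_row_key(patient_id, parameter, day)
    rows.setdefault(key, {})[uuid1()] = value  # column name -> sample value

# e.g. a 125 Hz waveform appends 125 such columns per second to one row:
record_sample("bob", "arterial_bp", "20100423", 118.0)
```

At 125 Hz, a single patient/parameter/day row accumulates 125 × 86,400 = 10.8 million columns, which is why the bucketing discussed above matters for this model.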