Things you should know:

- Thrift has a limit on the amount of data it will accept / send; you can
configure this in Cassandra (see the cassandra.yaml snippet below). 64 MB
should still work fine
- Rows should not become huge: this makes "perfect" load balancing across
your cluster impossible
- A single row must fit on disk on a single node
- The limit is 2 billion columns per row
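
For reference, in the cassandra.yaml shipped with the 1.0.x releases the
relevant Thrift settings look roughly like this (the names and defaults
below are from memory and may differ in your version, so check your own
config):

    # Frame size for Thrift (maximum field length), in MB.
    thrift_framed_transport_size_in_mb: 15
    # Max length of a Thrift message in MB, including all fields and
    # internal Thrift overhead.
    thrift_max_message_length_in_mb: 16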

You should pick a bucket size for your time ranges (e.g. second, minute,
...) that suits your needs.
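
To make the bucketing concrete, here is a minimal sketch using pycassa
(the keyspace, column family, sensor id and one-hour bucket size are all
made up for the example):

    import time
    import pycassa

    # One row per sensor per hour: the bucket start time is folded into
    # the row key, so no single row grows without bound.
    BUCKET_SECONDS = 3600

    def row_key(sensor_id, ts):
        bucket = int(ts) - (int(ts) % BUCKET_SECONDS)
        return '%s:%d' % (sensor_id, bucket)

    pool = pycassa.ConnectionPool('Sensors', ['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'SensorData')

    now = time.time()
    # Column name is the timestamp (as a string here, assuming a string
    # comparator); value is the reading.
    cf.insert(row_key('sensor42', now), {str(now): '23.5'})

A range query then slices one or more hourly rows instead of one
ever-growing row.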

As far as I'm aware, there is no such 10 MB limit in Cassandra at which a
single row starts hurting performance. That sounds more like a memory / IO
problem.

2012/2/15 Data Craftsman <database.crafts...@gmail.com>

> Hello experts,
>
> Based on this blog of Basic Time Series with Cassandra data modeling,
> http://rubyscale.com/blog/2011/03/06/basic-time-series-with-cassandra/
>
> "This (wide row column slicing) works well enough for a while, but over
> time, this row will get very large. If you are storing sensor data that
> updates hundreds of times per second, that row will quickly become gigantic
> and unusable. The answer to that is to shard the data up in some way"
>
> There is a limit on how big a row can get before update and query
> performance degrades; that limit is 10MB or less.
>
> Is this still true in the latest Cassandra version? Or in what release
> will Cassandra remove this limit?
>
> Manually sharding the wide row increases application complexity; it
> would be better if Cassandra could handle it transparently.
>
> Thanks,
> Charlie | DBA & Developer
>
> p.s. Quora link,
>
> http://www.quora.com/Cassandra-database/What-are-good-ways-to-design-data-model-in-Cassandra-for-historical-data
>
