It's really a pain to modify the data model. The problem is how to
handle a "one-to-many" relation in Cassandra: the row size limitation
makes it impossible to store them all as columns of a single row.
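One common workaround (a sketch of my own, not something proposed in this thread) is to bucket the "many" side across several plain rows, so that no single row or super column ever holds tens of thousands of columns. The key scheme below is purely illustrative; the names and bucket size are assumptions:

```python
# Hypothetical sketch: spread a large one-to-many relation across multiple
# Cassandra rows by bucketing, so each row stays small enough to read cheaply.

BUCKET_SIZE = 1000  # illustrative cap on columns per row

def bucket_row_key(parent_id, child_index, bucket_size=BUCKET_SIZE):
    """Row key holding the child at `child_index` under `parent_id`."""
    bucket = child_index // bucket_size
    return "%s:%d" % (parent_id, bucket)

def row_keys_for_slice(parent_id, start, end, bucket_size=BUCKET_SIZE):
    """All row keys needed to read children in the range [start, end)."""
    first = start // bucket_size
    last = (end - 1) // bucket_size
    return ["%s:%d" % (parent_id, b) for b in range(first, last + 1)]
```

Reading a slice of children then becomes a multiget over a handful of small rows instead of one read that deserializes an entire huge super column.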

On Fri, Jun 4, 2010 at 4:13 PM, Sylvain Lebresne <sylv...@yakaz.com> wrote:
> As written in the third point of
> http://wiki.apache.org/cassandra/CassandraLimitations,
> right now, super columns are not indexed and are deserialized fully when
> you access them. Another way to put it: you'll want to use super columns
> with only a relatively small number of columns in them.
> In your example, even if you read 1 column in 1 row, the full super
> column containing that column is read from disk. Given that, 50ms to
> read 50000 records isn't necessarily too bad.
>
> ColumnIndexSizeInKB will not help here (as super columns are not indexed
> anyway), and your only way out is to change your model so that you don't
> have super columns with so many columns.
>
> On Fri, Jun 4, 2010 at 8:26 AM, Ma Xiao <max...@robowork.com> wrote:
>> We have a super column family which may have up to 1000 super columns,
>> with 50000 columns in each super column; the read latency may go up to
>> 50ms (or even higher). That seems like a long response time; how can we
>> tune the storage config to optimize the performance? From reading the
>> wiki, <ColumnIndexSizeInKB> may help: suppose we assign it a big value
>> (e.g. 20000) so that no row ever reaches the limit and no row index is
>> ever generated. In our production scenario, we only access 1 row at a
>> time, with a slice of up to 1000 columns returned. Any suggestions?
>>
>
