The maximum size would probably be best determined by looking at the size of your
memtable:

  <!--
   ~ Flush memtable after this much data has been inserted, including
   ~ overwritten data.  There is one memtable per column family, and
   ~ this threshold is based solely on the amount of data stored, not
   ~ actual heap memory usage (there is some overhead in indexing the
   ~ columns).
  -->
  <MemtableThroughputInMB>64</MemtableThroughputInMB>

Read repair operates on a per-column basis, and every column carries the overhead
of a timestamp and a name. So, balance those three factors out and you have a
pretty good idea of what to do.
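To make the trade-off concrete, here is a minimal client-side sketch (not from the thread) of splitting a large value into fixed-size chunks so that each column stays well below the memtable flush threshold. The chunk size and the `chunk-%08d` naming scheme are illustrative assumptions, not anything the list recommends:

```python
# Illustrative chunk size; tune against MemtableThroughputInMB and heap headroom.
CHUNK_SIZE = 1 * 1024 * 1024  # 1 MB per chunk (assumed, not from the thread)

def chunk_blob(blob: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (column_name, chunk) pairs for a large value.

    Zero-padding the index keeps chunk columns in order under
    lexical (byte-wise) column-name sorting.
    """
    for i in range(0, len(blob), chunk_size):
        yield ("chunk-%08d" % (i // chunk_size), blob[i:i + chunk_size])

# Example: a value of two full chunks plus a 10-byte remainder.
chunks = list(chunk_blob(b"x" * (2 * CHUNK_SIZE + 10)))
```

Each `(column_name, chunk)` pair would then be written as its own column (or its own row, per the advice below), and the reader reassembles the chunks in column-name order.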

From: Dop Sun [mailto:su...@dopsun.com]
Sent: Thursday, April 29, 2010 7:38 AM
To: user@cassandra.apache.org
Subject: RE: What's the best maximum size for a single column?

Is there any practical number we can refer to?

For example, what is the largest size you use for a single column in your application?

From: uncle mantis [mailto:uncleman...@gmail.com]
Sent: Thursday, April 29, 2010 1:57 AM
To: user@cassandra.apache.org
Subject: Re: What's the best maximum size for a single column?

There is no column size limitation. As for performance due to the size of a
column, at the speeds Cassandra is running at, I don't believe it would make
a bit of difference whether it was 1 byte or a million bytes.

Can anyone here prove me right or wrong?

Regards,

Michael
On Wed, Apr 28, 2010 at 7:37 AM, Dop Sun 
<su...@dopsun.com<mailto:su...@dopsun.com>> wrote:
Hi,

Yesterday, I saw a lot of discussion about how to store a (big) file. It looks
like the suggestion is to store it across multiple rows (rather than across
multiple columns in a single row).

My question is:
Is there a recommended maximum column size that can help in deciding on the
segment size? Is this related to memory size or other factors?

Thanks,
Regards,
Dop
