http://wiki.apache.org/cassandra/CassandraLimitations

"Cassandra's compaction code currently deserializes an entire row (per
columnfamily) at a time. So all the data from a given columnfamily/key
pair must fit in memory. Fixing this is relatively easy since columns
are stored in-order on disk so there is really no reason you have to
deserialize row-at-a-time except that that is easier with the current
encapsulation of functionality. This will be fixed in
https://issues.apache.org/jira/browse/CASSANDRA-16"

So yes, based on the amount of data you have in there and the memory
allocated to Cassandra, there is currently a limit...hopefully gone in
0.7.

I will note another issue I run into with this: get_count will not
return within the RPC timeout once a row has a certain number of
columns in it.
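The usual workaround is to page through the row with small range slices and count client-side, so no single call has to touch every column. A minimal sketch below; `make_fetch_slice` is a hypothetical in-memory stand-in for a real Thrift `get_slice` call with a `SliceRange(start, '', reversed=False, count=page_size)` predicate, so you'd swap it for your actual client:

```python
from bisect import bisect_left

def make_fetch_slice(columns):
    """Hypothetical stand-in for get_slice on one row.

    columns: sorted list of column names. A real implementation would
    issue a Thrift get_slice against the cluster instead.
    """
    def fetch_slice(start, count):
        # Return up to `count` column names >= start, in comparator order.
        i = bisect_left(columns, start)
        return columns[i:i + count]
    return fetch_slice

def paged_count(fetch_slice, page_size=1000):
    """Count columns in a row by paging, instead of one big get_count."""
    total = 0
    start = ''
    while True:
        page = fetch_slice(start, page_size)
        if not page:
            break
        if start:
            # The slice is inclusive of `start`, which we already counted.
            page = page[1:]
            if not page:
                break
        total += len(page)
        start = page[-1]  # resume from the last column seen
    return total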

cheers,
jesse

--
jesse mcconnell
jesse.mcconn...@gmail.com



On Wed, Mar 24, 2010 at 17:05, Davis, Jeremy <jda...@gridpoint.com> wrote:
>
>
> Hello,
>
> Is there a practical limit on the number of columns I put on a key?
>
> Obviously if I tried to grab the entire row at once I would have a problem.
>
> However, if I had an open-ended row with column names of “1” to “999999999”
> etc., and I only accessed ranges, would there be a practical limit I would be
> running up against? Performance degradation over time, etc.?
>
>
>
> I’m running a little micro benchmark to see how well this works, but just
> curious if anyone has any insights.
>
>
>
> Many Thanks,
>
> -JD
