It sounds like you're trying to read entire rows at once. Past a
certain point (depending on your heap size) you won't be able to do
that; you need to "page" through them N columns at a time.
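
For example, with pycassa against a 0.8 node, paging could look
something like the sketch below. (This is just an illustration; the
keyspace, column family, row key, and page_columns names are all
placeholders, not anything from your schema.)

    # Minimal column-paging sketch, assuming pycassa and a column
    # family named 'UsedIds'; all names here are hypothetical.
    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'UsedIds')

    def page_columns(row_key, page_size=1000):
        """Yield every column of a row, page_size columns at a time."""
        start = ''
        while True:
            # Note: pycassa raises NotFoundException on an empty row.
            cols = cf.get(row_key, column_start=start,
                          column_count=page_size)
            for name, value in cols.items():
                if name == start:
                    continue  # first column repeats the last page's end
                yield name, value
            if len(cols) < page_size:
                break
            start = name  # resume from the last column seen

This keeps at most page_size columns in memory per request instead of
materializing the whole row at once. (If your pycassa version has it,
xget() wraps up the same paging pattern for you.)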

On Sat, Jun 4, 2011 at 12:27 PM, Mario Micklisch
<mario.mickli...@hpm-kommunikation.de> wrote:
> Hello there!
> I have run into a strange problem several times now, and I wonder if
> someone here has a solution for me:
>
> For some of my data I want to keep track of all the IDs I have used. To
> do that, I am putting the IDs as columns into rows.
> At first I wanted to put all IDs into one row (because the limit of 2
> billion columns seemed high enough), and then it happened:
> For some reason I was no longer able to remove or update existing rows
> or columns. Adding new rows and removing them was possible, but write
> access to the older rows no longer worked.
> Restarting the cluster didn't help either; only truncating the column
> family helped.
> Because this happened several times after creating rows with 5,000+
> columns, I decided to reduce the number of columns to a maximum of
> 1,000 per row, and everything is now working perfectly.
>
> I am working with version 0.8-rc1, and for testing purposes I am
> running only one node with the default settings.
>
> My questions are:
> 1) Since 0.8 is not yet marked as stable, is this a known problem?
> 2) Is there something in the configuration I need to change to handle a
> larger number of columns?
> 3) Can I check in cassandra-cli why updates are failing? (There were no
> error messages when trying deletes manually with the CLI.)
> 4) Is there a recommended number of columns? (I guess this only depends
> on my system's memory?)
>
> Thanks to everyone who took some time to read!
>  Mario
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
