On Mon, 2011-10-31 at 08:00 +0100, Mick Semb Wever wrote:
> After an upgrade to cassandra-1.0 any get_range_slices gives me:
> 
> java.lang.OutOfMemoryError: Java heap space
>       at 
> org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:93)
>       at 
> org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:66)
>       at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.metadata(CompressedRandomAccessReader.java:53)
>       at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:63)
>       at 
> org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:896)
>       at 
> org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:72)
>       at 
> org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:748)
>       at 
> org.apache.cassandra.db.RowIteratorFactory.getIterator(RowIteratorFactory.java:88)
>       at 
> org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1310)
>       at 
> org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:840)
>       at 
> org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:698)
> 
> 
> I set chunk_length_kb to 16 as my rows are very skinny (typically 100b)


I see now this was a bad choice.
The read pattern of these rows is always in bulk, so the chunk_length
could have been much higher to reduce memory usage (my largest
sstable is 61G).
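
To put numbers on it, here is a quick back-of-the-envelope sketch in
Java (assuming readChunkOffsets keeps roughly one 8-byte offset per
compressed chunk on the heap; the exact per-reader overhead is an
assumption on my part):

public class ChunkOffsetEstimate {
    public static void main(String[] args) {
        // Figures from this thread: a ~61G sstable with chunk_length_kb=16.
        long sstableBytes = 61L * 1024 * 1024 * 1024;
        long chunkLengthBytes = 16 * 1024;
        long chunks = sstableBytes / chunkLengthBytes;   // ~4 million chunks
        long offsetsBytes = chunks * 8;                  // one long offset per chunk (assumed)
        System.out.printf("%,d chunks -> ~%d MB of chunk offsets%n",
                chunks, offsetsBytes / (1024 * 1024));
        // With chunk_length_kb=256 the same sstable would need roughly 2 MB instead.
    }
}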

After changing the chunk_length, is there any way to rebuild just some
sstables rather than having to do a full nodetool scrub?

~mck

-- 
“An idea is a point of departure and no more. As soon as you elaborate
it, it becomes transformed by thought.” - Pablo Picasso 

| http://semb.wever.org | http://sesat.no |
| http://tech.finn.no   | Java XSS Filter |
