Cleanup would have the same effect, I think, in exchange for a minor
amount of extra CPU used.
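For reference, cleanup can be kicked off remotely as well. A minimal sketch over JMX (not from this thread; it assumes the 1.0 StorageService MBean exposes forceTableCleanup(String keyspace, String... columnFamilies), which is what nodetool cleanup appears to call -- verify the operation signature in jconsole first; the keyspace and column family names below are placeholders):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TriggerCleanup {
    public static void main(String[] args) throws Exception {
        // 7199 is the default Cassandra JMX port; adjust host/port as needed.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            ObjectName ss = new ObjectName(
                    "org.apache.cassandra.db:type=StorageService");
            // Java varargs surface over JMX as a String[] parameter.
            conn.invoke(ss, "forceTableCleanup",
                    new Object[] { "MyKeyspace", new String[] { "MyCF" } },
                    new String[] { "java.lang.String", "[Ljava.lang.String;" });
        } finally {
            jmxc.close();
        }
    }
}

The command-line equivalent should be roughly "nodetool -h <host> cleanup <keyspace> <cf>".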
On Mon, 2011-10-31 at 13:05 +0100, Mick Semb Wever wrote:
> Given a 60G sstable, even with 64kb chunk_length, to read just that one
> sstable requires close to 8G free heap memory...
Arg, that calculation was a little off...
(a long isn't exactly 8K...)
But you get my concern...
~mck
On Mon, 2011-10-31 at 09:07 +0100, Mick Semb Wever wrote:
> The read pattern of these rows is always in bulk so the chunk_length
> could have been much higher so to reduce memory usage (my largest
> sstable is 61G).
Isn't CompressionMetadata.readChunkOffsets(..) rather dangerous here?
Given a 60G sstable, even with 64kb chunk_length, to read just that one
sstable requires close to 8G free heap memory...
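For anyone reading along, the gist of the method in question, as I understand it (a simplified sketch, not the actual 1.0 source): the whole offsets array is allocated on heap up front when the sstable is opened, one long per compressed chunk.

import java.io.DataInput;
import java.io.IOException;

class CompressionMetadataSketch {
    // Paraphrase of the 1.0-era CompressionMetadata.readChunkOffsets,
    // simplified -- not the real source. The point is the shape:
    // the full array of chunk offsets stays resident per open sstable.
    static long[] readChunkOffsets(DataInput input) throws IOException {
        int chunkCount = input.readInt();
        long[] offsets = new long[chunkCount]; // the allocation that can OOM
        for (int i = 0; i < chunkCount; i++)
            offsets[i] = input.readLong();     // one absolute offset per chunk
        return offsets;
    }
}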
On Mon, 2011-10-31 at 10:08 +0100, Sylvain Lebresne wrote:
> you can
> trigger a "user defined compaction" through JMX on each of the sstables
> you want to rebuild.

May I ask how?
Everything I see from NodeProbe to StorageProxy is ks and cf based.

~mck
--
“Anyone who lives within their means suffers from a lack of imagination.”
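For the archive, one way to do it (my sketch, not from the thread): connect to the node's JMX port and invoke the CompactionManager MBean directly. This assumes the 1.0 operation is forceUserDefinedCompaction(String keyspace, String dataFiles) taking a comma-separated list of -Data.db file names -- verify the name and signature in jconsole; the keyspace and file name below are placeholders.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class UserDefinedCompaction {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            ObjectName cm = new ObjectName(
                    "org.apache.cassandra.db:type=CompactionManager");
            // keyspace, then a comma separated list of sstable -Data.db files
            conn.invoke(cm, "forceUserDefinedCompaction",
                    new Object[] { "MyKeyspace", "MyKeyspace-MyCF-hc-1234-Data.db" },
                    new String[] { "java.lang.String", "java.lang.String" });
        } finally {
            jmxc.close();
        }
    }
}

Rewriting each sstable this way picks up the new chunk_length_kb, which is why it was suggested here.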
On Mon, Oct 31, 2011 at 9:07 AM, Mick Semb Wever wrote:
> On Mon, 2011-10-31 at 08:00 +0100, Mick Semb Wever wrote:
>> After an upgrade to cassandra-1.0 any get_range_slices gives me:
>>
>> java.lang.OutOfMemoryError: Java heap space
>>         at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:93)

you can
trigger a "user defined compaction" through JMX on each of the sstables
you want to rebuild.
On Mon, 2011-10-31 at 08:00 +0100, Mick Semb Wever wrote:
> After an upgrade to cassandra-1.0 any get_range_slices gives me:
>
> java.lang.OutOfMemoryError: Java heap space
>         at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:93)
>
> I set chunk_length_kb to 16 as my rows are very skinny (typically 100b)

I see now this was a bad choice.
The read pattern of these rows is always in bulk so the chunk_length
could have been much higher so to reduce memory usage (my largest
sstable is 61G).

~mck
After an upgrade to cassandra-1.0 any get_range_slices gives me:

java.lang.OutOfMemoryError: Java heap space
        at org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:93)
        at org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:...)

I set chunk_length_kb to 16 as my rows are very skinny (typically 100b)

~mck