On Mon, Oct 31, 2011 at 2:58 PM, Sylvain Lebresne wrote:
> On Mon, Oct 31, 2011 at 1:10 PM, Mick Semb Wever wrote:
>> On Mon, 2011-10-31 at 13:05 +0100, Mick Semb Wever wrote:
>>> Given a 60G sstable, even with 64kb chunk_length, to read just that one
>>> sstable requires close to 8G free heap memory...

On Mon, Oct 31, 2011 at 1:10 PM, Mick Semb Wever wrote:

On Mon, 2011-10-31 at 13:05 +0100, Mick Semb Wever wrote:
> Given a 60G sstable, even with 64kb chunk_length, to read just that one
> sstable requires close to 8G free heap memory...

Arg, that calculation was a little off...
(a long isn't exactly 8K, it's 8 bytes, so it's more like 8M than 8G...)
But you get my concern...
~mck
On Mon, 2011-10-31 at 09:07 +0100, Mick Semb Wever wrote:
> The read pattern of these rows is always in bulk, so the chunk_length
> could have been much higher so as to reduce memory usage (my largest
> sstable is 61G).

Isn't CompressionMetadata.readChunkOffsets(..) rather dangerous here?

Given a 60G sstable, even with 64kb chunk_length, to read just that one
sstable requires close to 8G free heap memory...
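
For anyone following along, here is a minimal sketch of the pattern in
question. It assumes the -CompressionInfo component stores a chunk count
followed by one long offset per chunk; the class name and stream handling
are simplified illustrations, not the actual Cassandra source:

import java.io.DataInput;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ReadChunkOffsetsSketch
{
    // Every chunk contributes one 8-byte long to a single on-heap array,
    // so the allocation is (sstable size / chunk_length) * 8 bytes, all at once.
    static long[] readChunkOffsets(DataInput in) throws IOException
    {
        int chunkCount = in.readInt();          // assumed header: number of chunks
        long[] offsets = new long[chunkCount];  // 8 bytes of heap per chunk
        for (int i = 0; i < chunkCount; i++)
            offsets[i] = in.readLong();         // offset of chunk i in the data file
        return offsets;
    }

    public static void main(String[] args) throws IOException
    {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0])))
        {
            System.out.println(readChunkOffsets(in).length + " chunk offsets loaded");
        }
    }
}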