On 10/22/2012 08:24 PM, aaron morton wrote:
I'm not aware of how to track the memory usage for the off-heap row cache in 1.0. The memory may show up in something like JConsole. What about seeing how much OS memory is allocated to buffers and working backwards from there?
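
One rough sketch of that working-backwards idea, assuming Linux and that you know Cassandra's pid (the class name here is made up for the example): read the process's VmRSS from /proc and subtract everything you can account for on-heap. Whatever is left bounds the off-heap usage, though mmapped SSTables also count toward RSS, so treat it as an upper bound.

    import java.io.BufferedReader;
    import java.io.FileReader;

    public class OffHeapEstimate {
        public static void main(String[] args) throws Exception {
            String pid = args[0]; // Cassandra's process id
            long rssKb = -1;
            BufferedReader r = new BufferedReader(
                    new FileReader("/proc/" + pid + "/status"));
            try {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.startsWith("VmRSS:")) {
                        // e.g. "VmRSS:  8123456 kB"
                        rssKb = Long.parseLong(line.replaceAll("[^0-9]", ""));
                    }
                }
            } finally {
                r.close();
            }
            System.out.println("RSS: " + rssKb + " kB");
            System.out.println("Subtract -Xmx, permgen and thread stacks "
                    + "to get an upper bound on the off-heap cache.");
        }
    }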

Anyone else?

(One thing to be aware of is that each CF has its own row cache, so tuning must be done per CF.)

Cheers
-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 23/10/2012, at 3:35 AM, Josh <jb...@thebrighttag.com> wrote:

Hi, I'm hoping to get some help on how to tune our 1.0.x cluster w.r.t. row caching.

We're using the Netflix Priam client, so unfortunately upgrading to 1.1.x is out of the question for now... but until we find a way around that, is there any way to determine where the 'sweet spot' is between heap size, row cache size, and leaving the rest of the RAM available to the OS?

We're using the Oracle JVM with JNA so we can do off-heap row caching, but I'm not sure how to tell how much RAM it's using, so I'm not comfortable increasing it further. (Currently we have it set to 100,000 rows and we're already seeing ~85% hit rates, so we've stopped upping it for now.)
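
JMX will at least give you the per-CF entry counts and hit rates, though not bytes, which is the crux of your question. A minimal sketch, assuming the per-CF MBean pattern I believe 1.0 registers (org.apache.cassandra.db:type=Caches,keyspace=*,cache=*RowCache) and the default JMX port 7199; verify the exact names and attributes in JConsole before relying on them:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RowCacheStats {
        public static void main(String[] args) throws Exception {
            // Cassandra's default JMX port
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                // Per-CF row cache MBeans; the pattern and attribute names
                // are my assumption for 1.0 -- check them in JConsole.
                ObjectName pattern = new ObjectName(
                        "org.apache.cassandra.db:type=Caches,keyspace=*,cache=*RowCache");
                for (ObjectName name : mbs.queryNames(pattern, null)) {
                    System.out.println(name
                            + " Size=" + mbs.getAttribute(name, "Size")
                            + " RecentHitRate=" + mbs.getAttribute(name, "RecentHitRate"));
                }
            } finally {
                jmxc.close();
            }
        }
    }

nodetool cfstats should print the same per-CF hit rates if you'd rather not script it.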

Thanks for any advice,

-Josh




ByteBuffer.allocateDirect() uses memory locking, I think. mlockall() takes a flags mask of "all current" (MCL_CURRENT) and "all future" (MCL_FUTURE); most developers outside the C/C++ world won't use "all future". So buffer counts * sizes should be reliable. Even with JNA, you'd be using the same system call.
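
For reference, a minimal sketch of making that system call from Java via JNA direct mapping, which is the same route Cassandra takes at startup; the flag values are the Linux ones, and the class name is made up for the example:

    import com.sun.jna.Native;

    public class MLockExample {
        static {
            Native.register("c"); // bind the native method below to libc
        }

        // Linux values for the mlockall(2) flags mask
        private static final int MCL_CURRENT = 1; // lock pages mapped now
        private static final int MCL_FUTURE  = 2; // lock pages mapped later

        private static native int mlockall(int flags);

        public static void main(String[] args) {
            // Cassandra passes only MCL_CURRENT, i.e. "all current",
            // so memory mapped after startup is not pinned.
            int rc = mlockall(MCL_CURRENT);
            System.out.println("mlockall(MCL_CURRENT) returned " + rc);
        }
    }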

