No. The reason we're using mmap in the first place is that it's much better at "allowing the OS to do the caching."

You just have too much data for the OS to cache effectively; making Cassandra set memory aside to cache key locations can help because the key cache is much more RAM-efficient.
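In the 0.6-era storage-conf.xml, the caches are per-ColumnFamily settings and mmap behavior is controlled by DiskAccessMode. A minimal sketch; the CF name and cache sizes below are placeholders, not tuned recommendations:

    <!-- mmap only the index files; read data files with standard I/O -->
    <DiskAccessMode>mmap_index_only</DiskAccessMode>

    <!-- KeysCached holds row-key -> disk-offset entries (small each);
         RowsCached holds entire deserialized rows (much larger each).
         Values can be absolute counts or percentages like "100%". -->
    <ColumnFamily Name="Standard1"
                  CompareWith="BytesType"
                  KeysCached="1000000"
                  RowsCached="0"/>

Raising KeysCached is the cheap first step; RowsCached only pays off if your hot rows actually fit in the heap.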
On Fri, Mar 26, 2010 at 10:49 AM, Todd Burruss <bburr...@real.com> wrote:
> so just to close this out ... before mmap files, i would allow the OS to do
> the caching using its I/O cache, but now since mmap files take up a majority
> of my RAM, i need to cache more to maintain performance.
>
> is that a fair statement?
>
> ________________________________________
> From: Jonathan Ellis [jbel...@gmail.com]
> Sent: Thursday, March 25, 2010 8:01 PM
> To: user@cassandra.apache.org
> Subject: Re: memory question
>
> Cassandra mmaps your data files which show up as RES and SHR. This is normal.
>
> c0d1p1 is completely maxed out. Assuming that is your data disk and
> not your commitlog one, you need to tell Cassandra to cache more rows
> (or keys, depending).
>
> If you are maxing out your caches and still seeing this then you just
> need to add more capacity, there's no magic wand.
>
> On Mon, Mar 22, 2010 at 5:14 PM, Todd Burruss <bburr...@real.com> wrote:
>> after running my cluster for a while performance has become unacceptable,
>> 200+ ms for reads. if running well, i see reads <10ms. when i run iostat
>> the disk is being hammered by reads. seems like i/o caching isn't even
>> being used
>>
>> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>>            2.81   0.00     1.41    13.62    0.00  82.16
>>
>> Device:       rrqm/s  wrqm/s     r/s   w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm   %util
>> cciss/c0d0p1    0.00    0.00    0.00  0.00   0.00   0.00      0.00      0.00   0.00   0.00    0.00
>> cciss/c0d1p1    0.00    0.00  848.50  0.00  13.66   0.00     32.98     21.50  25.23   1.18  100.05
>>
>> i run top and i see cassandra's memory usage as follows:
>>
>>   PID  USER      PR  NI  VIRT  RES  SHR  S  %CPU  %MEM    TIME+  COMMAND
>> 31510  bburruss  19   0  359g   37g  27g  S  48.8  80.1  2137:30  java
>>
>> i set -Xmx10g so it isn't java using the memory.  is it mmap i/o?  what
>> would be causing the huge memory usage?
>> it seems reasonable that the performance is bad because the i/o cache can't
>> be used properly.
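To confirm where the resident memory in that top output is going, pmap against the Cassandra PID will list the mmapped SSTable files; a sketch, using the PID from the quoted top output and assuming the default /var/lib/cassandra/data directory:

    # per-mapping resident sizes; SSTable mappings show the data dir path
    pmap -x 31510 | grep /var/lib/cassandra/data
    # total across all mappings (last line of pmap -x output)
    pmap -x 31510 | tail -1

If most of RES is attributed to the data files, that's the mmap I/O, not the heap.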