Hi,

I have some questions regarding fs cache (cachelist and segmap).

Environment:
- Solaris 10 U4
- SPARC, 64 GB memory
- UFS, PxFS file systems

Our application writes logs to disk (4 GB per hour) and flushes some mmapped 
files from time to time (4 GB every 15 minutes), but otherwise does not do much 
disk I/O.

Once our application is started and "warm", it doesn't allocate any further 
memory.  At this point, we have 3-4 GB of free memory (vmstat) and nothing 
paged out to disk (swap -l). When I run long tests, I see free memory decrease 
to 1 GB (lotsfree, 1/64th of total memory). At that point, page scanner 
activity starts, pages are paged out (and some paged back in), and the amount 
of paged-out data (swap -l) grows to something like 2 GB over time. 
(Interestingly, one hour after I stop the load, free memory suddenly increases 
to 2.5 GB.)
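
For reference, this is how I'm collecting these numbers (a minimal sketch; the 
5-second interval is arbitrary):

    # free memory ("free" column, KB) and page scanner rate ("sr" column)
    vmstat 5
    # paged-out data: difference between the "blocks" and "free" columns
    # (both in 512-byte blocks)
    swap -l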

Since those memory requests are not coming from our application, I assume that 
those 5 GB (3 GB less free memory plus 2 GB of paged-out data) are used by the 
file system cache. I always thought the fs cache would stop growing once 
memory gets short, so it should never cause paging activity (since the cache 
list is part of the free list). Reading Solaris Internals, I just learned that 
there's not only a cache list, but also a segmap cache. As I understand it, the 
segmap cache may very well grow to up to 12% of main memory and may even cause 
application pages to be paged out, correct? So, this might be what's happening 
here. Can I somehow monitor the segmap cache (since it is kernel memory, is it 
reported as "Kernel" in ::memstat)?
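
These are the probes I've been looking at so far (the kstat module/name is 
what I see on my box; whether segmap pages are counted under "Kernel" or under 
"Page cache" in ::memstat is exactly what I'm unsure about):

    # segmap activity counters (getmap, get_reclaim, fault, pagecreate, ...)
    kstat -m unix -n segmap
    # system-wide memory breakdown by category (run as root)
    echo ::memstat | mdb -k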

My idea is now to set segmap_percent=1 to decrease the maximum size of the 
segmap cache and thereby avoid having pages paged out due to a growing fs 
cache. In a test run with this configuration, my free memory no longer falls 
below 3.5 GB and nothing is paged out -- saving me 4.5 GB of memory!
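
Concretely, this is the /etc/system setting I used for that test run (it only 
takes effect after a reboot):

    * /etc/system: cap the segmap cache at 1% of physical memory
    set segmap_percent=1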

On solarisinternals.com I found the statement:
---------------
The size of the segmap can be increased to improve performance of file system 
intensive workloads, using the /etc/system tunable segmap_percent, but this 
tunable is only recognized on Solaris/SPARC systems. Also, such tuning is no 
longer necessary for SPARC as of Solaris 10 3/05 (FCS), because the Kernel 
Physical Mapping (KPM) feature greatly reduced the cost of mapping and 
unmapping pages in segmap. Segmap is still used, and segmap_percent is still 
recognized, but it makes little difference in performance.
---------------

Since we don't do much disk I/O, I would assume we don't gain much from the 
segmap cache anyway, so I would like to configure it to 1%. File system pages 
will still be cached in the cache list as long as memory is available, right? 
With the advantage that the cache list is essentially "free" memory and will 
never cause other pages to be paged out.

I'm not sure, but as I understand it the segmap cache is still used during 
read and write operations, right? So, every time we write a file, we always 
write through the segmap cache. If this cache is small (let's say 1% = 640 
MB), we might be slowed down when writing more than 640 MB all at once. 
However, if we only wrote 64 MB every minute, pages from the segmap cache 
would migrate to the cache list and make room for more pages in the segmap 
cache, so the next time we write 64 MB, there would again be enough space in 
the segmap cache for the write operation, correct?
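
If that model is right, I should be able to watch pages recycling between 
segmap and the cache list during such a write test, along these lines (a 
sketch; I'm assuming the reclaim counters mean what I think they do):

    # print segmap counters every 5 seconds while the test writes 64 MB/min
    kstat -p -m unix -n segmap 5
    # cache list size before/after (the "Free (cachelist)" line)
    echo ::memstat | mdb -k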

Also, just to be sure: memory-mapped files are never read or written through 
the segmap cache, so shrinking that cache has no effect on memory-mapped 
files, right?

Are there any other side effects of decreasing the segmap cache to 1%? Or is 
this the recommended way to reduce unnecessary memory usage for file system 
caching?

Thanks much for your help,
Nick.
 
 