Just a follow-up - thanks very much, Prakash, for this help. I've adjusted vpm_cache_percent on some systems and the results were as expected (and NFS activity back to the server is almost half of what it used to be!)

/dale

On May 15, 2009, at 1:33 AM, prakash sangappa wrote:

Dale Ghent wrote:
On May 14, 2009, at 10:24 PM, prakash sangappa wrote:

However, a new set of interfaces called VPM was introduced
which provides transient mappings to file pages, similar to segmap. The
VPM interfaces use KPM mappings. The UFS, NFS, TMPFS and
SPECfs filesystems use the VPM interface where available.

Segmap is still available and is in limited use; there should be no need
to tweak the segmap size when VPM is available.
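
(You can confirm whether VPM is actually in effect on a given kernel by reading the vpm_enable variable with mdb. I'm going from memory on the variable name, so treat it as an assumption; a non-zero value would mean the VPM interfaces are active.)

# check whether VPM is enabled on the running kernel (assumed variable name)
echo vpm_enable/D | mdb -k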

I see. Thank you very much for this information.

If I may, I'd like to give some background on why I asked about VPM and how segmap now relates to it, so I can better understand it versus the old page cache mechanism and its accounting.

I have a bunch of NFSv3 clients that have a read-only mount back to an NFS server. While they all have the same share mounted (read-only), they each access different files on that share.

Prior to VPM-enabled kernels, I set the segmapsize on each client to 8GB (each NFS client has 16GB of RAM). I would monitor the segmap consumption through the ::memstat dcmd in mdb, inspecting the Page Cache category. On pre-VPM kernels, the Page Cache would get reasonably close to that defined segmapsize limit of 8GB, and the Free (cachelist) tended to consume the balance.
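
(For reference, the check I run is just the memstat dcmd against the live kernel; it needs root, and the categories I'm watching are Page Cache and Free (cachelist).)

# summarize physical memory usage, including Page Cache and Free (cachelist)
echo ::memstat | mdb -k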

Now with VPM on the scene, things seem unexpectedly different, which is now obvious to me through your explanation. The Page Cache category reported by ::memstat on each of the aforementioned VPM-enabled NFS clients now seems to top out, consistently across these clients, at 12% of physmem, or about 1950MB of the 16GB. The Free (cachelist) category does not grow as fast as I expect it to, with several GB of RAM still on the freelist.

The 12% number strikes me as particularly odd, since that was/is the default size of the segmap on *SPARC* (segmap_percent)... so I'm a bit perplexed, as these are x64 clients we're talking about. Is that by design or just a crazy coincidence? My trolling through the source code doesn't reveal such a defined limit on x64. Every client is like this.

I did not mention that VPM has its own cache, and the size of the VPM
cache is set to 12% by default. The idea was to size the cache according to
available memory, just like segmap_percent on SPARC systems, so that we
would not have to tune the cache size explicitly as we do with the segmap
cache size (segmapsize) on x86. The default of 12% was considered adequate,
and normally it would not be necessary to tune the cache size.
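
(As a quick sanity check: 12% of 16GB is roughly 1.9GB, which lines up with the ~1950MB Page Cache ceiling you are seeing. The current value can also be read from the running kernel, something like:)

# read the current VPM cache percentage from the live kernel
# 12% of 16GB (16384MB) is ~1966MB, matching the observed ~1950MB ceiling
echo vpm_cache_percent/D | mdb -k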

However, it appears that in your case you would need a much larger cache
size.

The VPM cache size can be tuned by setting the 'vpm_cache_percent'
tunable in /etc/system. In your case you could set it to about 50% to
have a cache of about 8GB on a 16GB system.

set vpm_cache_percent = 50

You don't need to tune segmapsize with VPM enabled.

With that, the page cache size reported by ::memstat should be closer to
what you expect.
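
(A sketch of how you could verify the change; /etc/system edits only take effect on the next reboot, and the exact numbers will of course depend on your workload:)

# after rebooting with the new /etc/system setting:
echo vpm_cache_percent/D | mdb -k    # should now report 50
echo ::memstat | mdb -k              # Page Cache should approach ~8GB as files are read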

HTH,
-Prakash.



My primary goal is to keep as many pages as possible from the numerous, large files these NFS clients access cached... this is why I set segmapsize much higher than its 64MB default. I guess my problem (or perhaps misunderstanding) is that this no longer appears to be happening on these VPM-enabled kernels, regardless of what segmapsize is set to. Perhaps I just don't understand how to find and account for pages cached via VPM?

Thanks again
/dale


So, if you are concerned about the excessive cross-call issues
associated with the traditional segmap mappings, that has been
addressed by the VPM interfaces.
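
(If you want to see the difference for yourself, the xcal column in mpstat is a reasonable proxy for cross-call traffic; this is just a generic observation command, not a VPM-specific counter.)

# watch per-CPU cross-call activity (xcal column), sampling every 5 seconds
mpstat 5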

The VPM interface is also available for x64 systems on S10
starting with update 6 (I believe).

-Prakash.


Thanks for any insight
/dale
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org
