> Another alternative to try would be setting primarycache=metadata on
> the ZFS dataset that contains the mmap files.  That way you are only
> turning off the ZFS ARC cache of the file content for that one
> dataset rather than clamping the ARC.
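
For reference, that suggestion boils down to something like the
following - the dataset name here is invented:

    # Cache only metadata (not file data) in the ARC, and only for
    # the dataset holding the mmap'ed files:
    zfs set primarycache=metadata tank/mmapdata

    # Verify:
    zfs get primarycache tank/mmapdata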

Yeah, you'd think that would be the right thing to do. 

It's not. Throughput went through the floor: it turns out the
developers were occasionally using read() (I'm sorry,
FileInputStream() - Java), and with primarycache disabled for data
blocks, every single 2-byte read became a fetch from the L2ARC.
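
Schematically, the access pattern was the first loop below; an obvious
application-side mitigation would be the second. (A minimal Java
sketch - the 2-byte record size, the file argument, and the buffer
size are all just for illustration.)

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ReadPattern {
        public static void main(String[] args) throws IOException {
            // Anti-pattern: every read() is its own syscall, so with
            // primarycache=metadata each tiny read misses the ARC and
            // has to go to L2ARC (or disk).
            try (InputStream in = new FileInputStream(args[0])) {
                byte[] rec = new byte[2];
                while (in.read(rec) != -1) {
                    // process 2 bytes
                }
            }

            // Mitigation: a BufferedInputStream turns the tiny reads
            // into large ones and serves the rest from its own buffer.
            try (InputStream in = new BufferedInputStream(
                    new FileInputStream(args[0]), 128 * 1024)) {
                byte[] rec = new byte[2];
                while (in.read(rec) != -1) {
                    // process 2 bytes
                }
            }
        }
    }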

I'd go with MAP_NORESERVE, but that option isn't available on a Java
MappedByteBuffer - most of which are opened R/O but occasionally get
opened R/W. (Yes. Please try not to cringe. It does make sense, in
context.) Fortunately, when you have 100TB to play with, a bit of disk
allocated to swap that's never used isn't much of a sacrifice.
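
The mapping itself is just the stock NIO pattern, which is exactly the
problem: FileChannel.map() gives you nowhere to hang a MAP_NORESERVE
flag. A minimal sketch - file name and region size are arbitrary:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MapModes {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile raf =
                         new RandomAccessFile(args[0], "rw");
                 FileChannel ch = raf.getChannel()) {
                long len = Math.min(ch.size(), 4096); // keep demo small
                if (len == 0) return; // demo needs a non-empty file

                // The common case: read-only mapping, no swap
                // reservation to worry about.
                MappedByteBuffer ro =
                        ch.map(FileChannel.MapMode.READ_ONLY, 0, len);

                // The occasional case: read/write mapping. There is no
                // MapMode (or anything else) through which to request
                // MAP_NORESERVE, so the OS may reserve swap for the
                // whole mapping.
                MappedByteBuffer rw =
                        ch.map(FileChannel.MapMode.READ_WRITE, 0, len);

                rw.put(0, ro.get(0)); // touch both mappings
            }
        }
    }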

So, to Phil's email - read()/write() on a ZFS-backed vnode somehow
completely bypass the page cache and depend only on the ARC? How the
heck does that happen - I thought all files were represented as vm
objects? 
