Hi Folks..
We have started to convert our Veritas clustered systems over to ZFS root to
take advantage of the extreme simplification of using Live Upgrade. Moving the
data on these systems off VxVM and VxFS is not in scope, for reasons too
numerous to go into..
One thing my customers noticed immediately was a reduction in "free" memory as
reported by 'top'. While explaining that ZFS keeps its cache in the kernel and
not on the freelist, it became apparent that memory is being used out of
proportion to the filesystems it is caching.
For example - on an M5000 (S10 U8) with 128 GB of memory, we have two 132 GB
disks in the root ZFS pool. There is no other ZFS in use on the system. We have
approximately 4 TB of VxFS filesystems with a very active Oracle database
instance. When I do the math and count up the caches, the results make me
scratch my head.
Solaris 10 Memory summary:            MB      %
-------------------------------   ------   ----
Total system memory               131072   100%
-------------------------------   ------   ----
Oracle Memory in ISM               19472    14%
Other Shared Memory                    0     0%
* Oracle Process Memory w/o ISM     8840     9%
* Other Process Memory              5464     4%
Kernel Memory                      13221    10%
ZFS ARC (Kernel/free) Memory       67312    51%
VxFS Buffer Cache (freelist)        2673     2%
Memory Free (freelist)             14090    10%
-------------------------------   ------   ----
Totals accounted for above        131072   100%
From my little table, you can see that the 2% of spinning disk that is on ZFS
is using 51% of available system memory, while the other 98% of the disk (where
the customer-important data lives) is consuming only 2% of memory. Our block
size is such that we are *not* using discovered_direct_io on the datafiles, so
we should be hitting the VxFS cache a lot..
With the background out of the way, I have some questions that I was hoping the
ZFS gurus out there could chime in on..
Is this a problem? The solarisinternals.com ZFS Best Practices Guide seems to
indicate that reducing the ARC size in the presence of another filesystem is a
good idea.
In this scenario (boot on ZFS, everything else on VxFS), what would be a
reasonable value to limit the ZFS ARC to without impacting performance?
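If I'm reading the Evil Tuning Guide right, the cap would go in /etc/system
along these lines; the value below (4 GB) is purely illustrative, not a
recommendation for this box:

   * Example only: cap the ZFS ARC at 4 GB (0x100000000 bytes).
   * Takes effect after a reboot.
   set zfs:zfs_arc_max = 0x100000000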
Is ZFS swap cached in the ARC? I can't account for enough data in the ZFS
filesystems to explain this much ARC usage unless swap is being cached as well,
which seems a bit redundant.
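If swap does turn out to be cached, one thing I'm considering (assuming our ZFS
version supports the primarycache property, and that the swap zvol is really
named rpool/swap - I'd verify both first) is restricting the swap volume to
metadata-only caching:

   # keep only metadata for the swap zvol in the ARC, not the data blocks
   zfs set primarycache=metadata rpool/swap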
With the ZFS ARC living in the kernel, does this reduce the memory that VxFS
sees as available for its own buffer cache? From the Evil Tuning Guide and
other reading at solarisinternals, ZFS is supposed to be a good citizen about
giving up memory to userland apps that need it, but is VxFS asking for memory
in a way that lets ZFS push it into a corner?
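One thing I may try is simply watching the ARC size and target while the
database and the backups are pounding on VxFS, roughly:

   # sample ARC size and target (c) every 10 seconds
   kstat -p zfs:0:arcstats:size zfs:0:arcstats:c 10

and comparing that against the free column in vmstat to see whether the ARC
actually gives ground under pressure.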
We have not seen much in the way of performance degradation reported by the
application folks, with the exception of datafile backups, which are taking
2-3x longer after the root conversion to ZFS (and the upgrade to U8, and
patching beyond that.. we have a case open for this one..). I'm just trying to
get ahead of this so we can tune our process going forward if we need to.
Thanks much for any insight you care to share!
--Kris
--
Thomas Kris Kasner
Qualcomm Inc.
5775 Morehouse Drive
San Diego, CA 92121
(858)658-4932