On Mar 16, 2007, at 1:29 PM, JS wrote:
> I've been seeing this failure to cap on a number of (Solaris 10
> update 2 and 3) machines since the script came out (ARC hogging is
> a huge problem for me, especially on Oracle). This is probably a red
> herring, but my V490 testbed seemed to actually cap on 3 separate
> tests, while my T2000 testbed doesn't even pretend to cap - kernel
> memory (as identified in Orca) sails right to the top, leaves me
> maybe 2GB free on a 32GB machine, and shoves Oracle data into swap.
> This isn't as amusing as on one Stage and one Production Oracle
> machine, which have 128GB and 96GB respectively. Sending 92GB
> core dumps in to support is an impressive gesture, taking 2-3 days
> to complete.
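[For readers hitting the same problem: a minimal sketch of capping the ARC on Solaris 10 via the /etc/system tunable. The 4 GiB figure below is only an example - size the cap to leave headroom for the Oracle SGA. This is one common approach, not necessarily the script Jeff refers to.]

```shell
# Compute the byte value for an example 4 GiB ARC cap.
arc_max=$((4 * 1024 * 1024 * 1024))

# Emit the /etc/system line. Appending it to /etc/system and
# rebooting applies the cap:
#   echo "set zfs:zfs_arc_max = <bytes>" >> /etc/system
printf 'set zfs:zfs_arc_max = %s\n' "$arc_max"
```

Running the snippet prints the line to append: `set zfs:zfs_arc_max = 4294967296`.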
hey Jeff,
For the ARC using lots of memory, is this a problem for you just at
Oracle startup, or throughout the run?
If the ARC didn't cache user data (it would still cache metadata), do
you foresee that as a win in your tests? This could be set per-dataset.
eric
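[The per-dataset control Eric describes shipped in later ZFS releases as the primarycache property, with values all, metadata, or none; it did not yet exist at the time of this thread. A sketch, assuming a hypothetical dataset name tank/oradata:]

```shell
# Cache only metadata (not file data) in the ARC for this dataset.
# "tank/oradata" is a hypothetical dataset name - substitute your own.
zfs set primarycache=metadata tank/oradata

# Verify the setting took effect.
zfs get primarycache tank/oradata
```

For Oracle datafiles, which the database already caches in its own SGA, metadata-only ARC caching avoids double-buffering the same blocks.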
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss