On Dec 28, 2009, at 1:40 PM, Brad wrote:

"This doesn't make sense to me. You've got 32 GB, why not use it?
Artificially limiting the memory use to 20 GB seems like a waste of
good money."

I'm having a hard time convincing the DBAs to increase the size of the SGA to 20 GB. Their philosophy is that, no matter what, you'll eventually have to hit disk to pick up data that's not in cache (ARC or L2ARC). The typical database server in our environment holds over 3 TB of data.
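If the SGA does grow, it can help to cap the ARC so the two caches don't compete for the same RAM. On Solaris/OpenSolaris the usual knob is the zfs_arc_max tunable in /etc/system; the 8 GB figure below is only an illustrative placeholder for a 32 GB box running a 20 GB SGA, not a recommendation:

```
* /etc/system -- illustrative sketch only: cap the ZFS ARC at 8 GB
* (0x200000000 bytes) so a 20 GB SGA and the OS have headroom.
set zfs:zfs_arc_max = 0x200000000
```

A reboot is required for /etc/system changes to take effect.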

Wow!  Where did you find DBAs who didn't want more resources? :-)
If that is the case, then you might need many more disks to keep the
(hungry) database fed.

If the performance does not improve, we may have to change the pool layout from raidz to mirrors (RAID-10).

Yes, the ideas of adding more disks and using them as mirrors
are closely aligned: a raidz vdev delivers roughly one disk's worth
of random-read IOPS, while a mirror can service reads from every
side. However, you know that more than 50% of the data in the ARC
is frequently used, which argues that a larger SGA (or ARC) should
benefit the workload.
 -- richard
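To see why mirrors help random reads, here is a back-of-the-envelope comparison of the two layouts. The per-disk figure of 150 random-read IOPS and the 12-disk pool are assumed values for illustration, not numbers from this thread:

```python
# Rough random-read IOPS comparison for a 12-disk pool: raidz vs. mirrors.
# DISK_IOPS is an assumed figure for a 7200 rpm drive.
DISK_IOPS = 150
DISKS = 12

# Each raidz vdev delivers roughly one disk's worth of random-read IOPS,
# so two 6-disk raidz vdevs give about two disks' worth in total.
raidz_vdevs = 2
raidz_read_iops = raidz_vdevs * DISK_IOPS

# A 2-way mirror can service reads from either side, so six mirror
# vdevs can read from all 12 spindles at once.
mirror_read_iops = DISKS * DISK_IOPS

print(raidz_read_iops)   # 300
print(mirror_read_iops)  # 1800
```

The same spindles buy roughly six times the random-read throughput when laid out as mirrors, which is the usual argument for RAID-10 under a read-heavy database.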

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
