This issue has been discussed a number of times in this forum.
To summarize:

ZFS (specifically, the ARC) will try to use *most* of the system's
available memory to cache file system data.  The default is to
max out at physmem minus 1GB (i.e., use all of physical memory
except for 1GB).  Under memory pressure, the ARC will give memory
back; however, there are some situations where it cannot free
memory fast enough for an application that needs it (see the
example in the HELIOS note below).  In these situations, it may
be necessary to lower the ARC's maximum memory footprint, so that
a larger amount of memory is immediately available for
applications.  This is particularly relevant in situations where
a known amount of memory will always be required by some
application (databases often fall into this category).  The
tradeoff here is that the ARC will not be able to cache as much
file system data, which could impact performance.

For example, if you know that an application will need 5GB on a
36GB machine, you could set the arc maximum to 30GB (0x780000000).
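As a sanity check on that arithmetic (taking GB to mean 2^30 bytes, as Solaris does), the hex value can be verified in any shell:

```shell
# 30 GB expressed in bytes, printed as hex -- should match the value above
printf '0x%x\n' $((30 * 1024 * 1024 * 1024))
# prints 0x780000000
```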

In ZFS on s10 prior to update 4, you can only change the ARC max
size via explicit actions with mdb(1):

# mdb -kw
> arc::print -a c_max
<address> c_max = <current-max>
> <address>/Z <new-max>
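Filled in with the 30GB example above, a session would look something like
this (the address and current value shown are hypothetical; use whatever
address arc::print actually reports on your system):

```
# mdb -kw
> arc::print -a c_max
fffffffffbc2c2e0 c_max = 0x8c0000000
> fffffffffbc2c2e0/Z 0x780000000
> $q
```

Note that a change made this way takes effect immediately but does not
persist across a reboot.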

In the current OpenSolaris Nevada bits, and in s10u4, you can use
the system variable 'zfs_arc_max' to set the maximum ARC size.  Just
set this in /etc/system.
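For the 30GB example above, the /etc/system entry would be (a sketch;
adjust the value to your machine, and note that /etc/system changes only
take effect after a reboot):

```
* Limit the ZFS ARC to 30GB (0x780000000 bytes)
set zfs:zfs_arc_max = 0x780000000
```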

-Mark

Erik Vanden Meersch wrote:

Could someone please provide comments or solution for this?

Subject: Solaris 10 ZFS problems with database applications


HELIOS TechInfo #106
====================


Tue, 20 Feb 2007

Solaris 10 ZFS problems with database applications
--------------------------------------------------

We have tested Solaris 10 release 11/06 with ZFS without any problems
using all HELIOS UB based products, including very high load tests.

However, we have learned from customers that some database solutions
(Sybase and Oracle are known cases) may slow down or even freeze the
system for up to a minute when allocating a large amount of memory.
This can result in RPC timeout messages and service interruptions for
HELIOS processes.  ZFS basically uses most memory for file caching,
and freeing this ZFS memory for the database's memory allocation can
result in serious delays.  This does not occur when using HELIOS
products only.

The HELIOS test system used 4GB of memory.
The customer production machine used 16GB of memory.


Contact your Sun representative to learn how to limit the ZFS cache
and what else to consider when using ZFS in your workflow.

Also check with your application vendor for recommendations on using
ZFS with their applications.


Best regards,

HELIOS Support

HELIOS Software GmbH
Steinriede 3
30827 Garbsen (Hannover)
Germany

Phone:          +49 5131 709320
FAX:            +49 5131 709325
http://www.helios.de

--
<http://www.sun.com/solaris>      * Erik Vanden Meersch *
Solution Architect

*Sun Microsystems, Inc.*
Phone x48835/+32-2-704 8835
Mobile 0479/95 05 98
Email [EMAIL PROTECTED]


------------------------------------------------------------------------

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
