On Oct 2, 2009, at 11:45 AM, Rob Logan wrote:
> zfs will use as much memory as is "necessary" but how is
> "necessary" calculated?
using arc_summary.pl from http://www.cuddletech.com/blog/pivot/entry.php?id=979
my tiny system shows:
Current Size: 4206 MB (arcsize)
Target Size (Adaptive): 4207 MB (c)
Min Size (Hard Limit): 894 MB (zfs_arc_min)
Max Size (Hard Limit): 7158 MB (zfs_arc_max)
so arcsize is close to the desired c, no pressure here, but it would be nice
to know how c is calculated, as it's much smaller than zfs_arc_max on a
system like yours with nothing else on it.
c is the target size of the ARC. c will change dynamically, as memory
pressure and demand change.
> When an L2ARC is attached does it get used if there is no memory
> pressure?
My guess is no, for the same reason an L2ARC takes sooooo long to fill.
arc_summary.pl from the same system shows:
You want to cache stuff closer to where it is being used. Expect the
L2ARC to contain ARC evictions.
Most Recently Used Ghost: 0% 9367837 (mru_ghost) [ Return Customer Evicted, Now Back ]
Most Frequently Used Ghost: 0% 11138758 (mfu_ghost) [ Frequent Customer Evicted, Now Back ]
so with no ghosts, this system wouldn't benefit from an L2ARC even if one
were added
In review: (audit welcome)
if arcsize = c and both are much less than zfs_arc_max,
there is no point in adding system RAM in hopes of increasing the ARC.
If you add RAM, arc_c_max will change unless you limit it by setting
zfs_arc_max. In other words, c will change dynamically between the
limits: arc_c_min <= c <= arc_c_max.
By default for 64-bit machines, arc_c_max is the greater of 3/4 of
physical memory or all but 1 GB. If zfs_arc_max is set, is less than
arc_c_max, and is greater than 64 MB, then arc_c_max is set to
zfs_arc_max. This allows you to reasonably cap arc_c_max.
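As a back-of-the-envelope illustration of that rule (this is only a sketch,
not the actual arc_init() code; the function and variable names here are
made up), in Python:

  GIB = 1 << 30
  MIB = 1 << 20

  def effective_arc_c_max(physmem_bytes, zfs_arc_max=0):
      # Default: the greater of 3/4 of physical memory or all but 1 GB.
      default_max = max(physmem_bytes * 3 // 4, physmem_bytes - GIB)
      # zfs_arc_max only takes effect as a cap if it is "reasonable":
      # greater than 64 MB and less than the default maximum.
      if 64 * MIB < zfs_arc_max < default_max:
          return zfs_arc_max
      return default_max

  # Example: 8 GB of physical memory, capped at 4 GB by the admin.
  print(effective_arc_c_max(8 * GIB, zfs_arc_max=4 * GIB) // MIB, "MB")  # 4096 MB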
Note: if you pick an unreasonable value for zfs_arc_max, you will
not be notified -- check current values with
kstat -n arcstats
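If you'd rather watch those values programmatically, a small script can
shell out to kstat and pull the interesting counters. A rough sketch
(assuming a Solaris-style kstat(1M) whose -p output lines look like
"zfs:0:arcstats:c<TAB><value>", with stat names as in the arcstats above):

  import subprocess

  def arcstats():
      out = subprocess.run(["kstat", "-p", "-n", "arcstats"],
                           capture_output=True, text=True, check=True).stdout
      stats = {}
      for line in out.splitlines():
          key, value = line.split()          # "module:inst:name:stat  value"
          stats[key.split(":")[-1]] = value
      return stats

  s = arcstats()
  for name in ("size", "c", "c_min", "c_max"):
      print(name, int(s[name]) // (1 << 20), "MB")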
if m?u_ghost is a small %, there is no point in adding an L2ARC.
Yes, to the first order. Ghosts are entries whose data has been evicted,
but whose pointer remains.
if you do add an L2ARC, one must have RAM between c and zfs_arc_max
for its pointers.
No. The pointers are part of c. Herein lies the rub. If you have a very
large L2ARC and limited RAM, then you could waste L2ARC space because the
pointers run out of space. SWAG the pointers at 200 bytes each per record.
For example, suppose you use a Seagate 2 TB disk for L2ARC:
+ disk size = 3,907,029,168 512-byte sectors, less about 4.5 MB (9,232
  sectors) for labels and reserve
+ workload uses an 8 KB fixed record size (eg Oracle OLTP database), so
  each record covers 16 sectors
+ RAM needed to support this L2ARC on this workload is approximately:
  1 GB + application space + ((3,907,029,168 - 9,232) * 200 / 16) bytes
  or at least 48 GBytes, practically speaking
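If you want to redo that arithmetic for a different device or recordsize,
here is a tiny sketch (the 200 bytes/record figure is the SWAG above, and
the numbers are just this example's, so treat the result as a rough
planning figure, not an exact requirement):

  SECTOR = 512

  def l2arc_header_ram(total_sectors, reserved_sectors, recordsize,
                       bytes_per_header=200):
      usable = (total_sectors - reserved_sectors) * SECTOR  # usable L2ARC bytes
      records = usable // recordsize                        # records it can hold
      return records * bytes_per_header                     # RAM eaten by headers

  # Example from above: 2 TB disk, 8 KB fixed recordsize.
  ram = l2arc_header_ram(3_907_029_168, 9_232, 8 * 1024)
  print(round(ram / 1e9, 1), "GB of RAM just for the L2ARC headers")  # ~48.8 GB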
Do not underestimate the amount of RAM needed to address lots of
stuff :-)
-- richard