Hello.

We have a file server running S10u8 that acts as the disk backend for a
homebrew caching ftp/http frontend "cluster". It currently holds about
4.4TB of data, which obviously doesn't fit in the machine's 8GB of RAM.

arc_summary currently says:
System Memory:
         Physical RAM:  8055 MB
         Free Memory :  1141 MB
         LotsFree:      124 MB
ARC Size:
         Current Size:             3457 MB (arcsize)
         Target Size (Adaptive):   3448 MB (c)
         Min Size (Hard Limit):    878 MB (zfs_arc_min)
         Max Size (Hard Limit):    7031 MB (zfs_arc_max)
ARC Size Breakdown:
         Most Recently Used Cache Size:          93%    3231 MB (p)
         Most Frequently Used Cache Size:         6%    217 MB (c-p)
...
        CACHE HITS BY CACHE LIST:
          Anon:                        3%        377273490              [ New Customer, First Cache Hit ]
          Most Recently Used:          9%        1005243026 (mru)       [ Return Customer ]
          Most Frequently Used:       81%        9113681221 (mfu)       [ Frequent Customer ]
          Most Recently Used Ghost:    2%        284232070 (mru_ghost)  [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:  3%        361458550 (mfu_ghost)  [ Frequent Customer Evicted, Now Back ]

And some info from echo ::arc | mdb -k:
arc_meta_used             =      2863 MB
arc_meta_limit            =      3774 MB
arc_meta_max              =      4343 MB


Now to the questions. As I've understood it, the ARC keeps track of data
recently evicted from the ARC in the "ghost" lists, for example to be
used for L2ARC (or?).
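
(The ghost hits in the arc_summary output above come from the arcstats
kstat, so the same counters can be cross-checked without the script; on
this box that's:

 $ kstat -n arcstats | grep ghost
         mru_ghost_hits                  284232070
         mfu_ghost_hits                  361458550
)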

In mdb -k:
> ARC_mfu_ghost::print
...
    arcs_lsize = [ 0x2341ca00, 0x4b61d200 ]
    arcs_size = 0x6ea39c00
...
> ARC_mru_ghost::print
    arcs_lsize = [ 0x65646400, 0xd24e00 ]
    arcs_size = 0x6636b200
> ARC_mru::print
    arcs_lsize = [ 0x2b9ae600, 0x38646e00 ]
    arcs_size = 0x758ae800
> ARC_mfu::print
    arcs_lsize = [ 0, 0x4d200 ]
    arcs_size = 0x1043a000
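
(Hex to MB for the numbers I use below, via plain shell arithmetic in
bash or any other shell that accepts hex inside $(( )):

 $ echo $((0x6ea39c00 / 1048576))    # ARC_mfu_ghost
 1770
 $ echo $((0x6636b200 / 1048576))    # ARC_mru_ghost
 1635
 $ echo $((0x758ae800 / 1048576))    # ARC_mru
 1880
 $ echo $((0x1043a000 / 1048576))    # ARC_mfu
 260
)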

Does this mean that currently, 1770MB+1635MB is "wasted" just on
statistics and 1880MB+260MB is used for actual cached data, or do these
numbers just refer to how much data they keep stats for?

So basically: what is the point of the ghost lists, and how much RAM
are they actually using?

Also, since this machine has just two purposes in life, sharing data
over NFS and taking backups of the same data, I'd like those 1141MB of
"free memory" to actually get used. Can I set zfs_arc_max (I can't find
any runtime tunable, only the /etc/system one, right?) to 8GB? If it
runs out of memory, it'll set no_grow and shrink a little, right?
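
(Concretely, what I'm considering is something along these lines in
/etc/system, value in bytes, 8GB being my guess at "let it use
everything":

 set zfs:zfs_arc_max = 0x200000000

followed by a reboot, unless someone knows a runtime knob I've missed.)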

Currently, data can use all of the ARC if it wants, but metadata is
capped at $arc_meta_limit. Since there's no chance of caching all of the
data, but a high chance of caching a large proportion of the metadata,
I'd like "reverse limits": limit the data size to 1GB or so (due to the
buffers currently being handled, setting primarycache=metadata gives
crap performance in my testing) and let metadata take as much as it'd
like. Is there a chance of getting something like this?
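
(The metadata side already seems to have a tunable; if I'm reading arc.c
right, something like

 set zfs:zfs_arc_meta_limit = 0x180000000

in /etc/system would lift arc_meta_limit to 6GB. What I can't find is a
corresponding cap for data, which is the one I'd actually want here.)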

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se