Hi. We're running OmniOS as a ZFS storage server. For some reason, the
ARC grows to a certain point and then suddenly drops. I used arcstat to
catch it in action, but I wasn't able to capture what else was going on
in the system at the time. I'll do that next.

read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size
 166   166     0   100       0       0       0       0    85G    225G
5.9K  5.9K     0   100       0       0       0       0    85G    225G
 755   715    40    94      40       0      40       0    84G    225G
 17K   17K     0   100       0       0       0       0    67G    225G
 409   395    14    96      14       0      14       0    49G    225G
 388   364    24    93      24       0      24       0    41G    225G
 37K   37K    20    99      20       6      14      30    40G    225G
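
What strikes me is that arcsz falls from 85G to 40G within a few
samples while hit% stays high. Before the next drop I want to see
whether the ARC target (c) is being pulled down by a reclaim, or
whether data is just being evicted below a steady target. Assuming I
have the kstat names right for OmniOS, something like:

# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max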

For reference, it's a 12TB pool with a 512GB SSD L2ARC and 198GB of
RAM. Nothing else is running on the system except NFS, and we are not
using dedup. Here is the output of ::memstat at one point:

# echo ::memstat | mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                   19061902             74460   38%
ZFS File Data            28237282            110301   56%
Anon                        43112               168    0%
Exec and libs                1522                 5    0%
Page cache                  13509                52    0%
Free (cachelist)             6366                24    0%
Free (freelist)           2958527             11556    6%

Total                    50322220            196571
Physical                 50322219            196571
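
One thing I'd like to rule out is metadata pressure: with this many
datasets and snapshots, I wonder whether arc_meta_used is bumping into
arc_meta_limit and forcing evictions. Assuming those stats exist under
arcstats on this OmniOS build, I'll watch:

# kstat -p zfs:0:arcstats:arc_meta_used zfs:0:arcstats:arc_meta_limit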

According to "prstat -s rss", nothing else is consuming significant
memory:

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
   592 root       33M   26M sleep   59    0   0:00:33 0.0% fmd/27
    12 root       13M   11M sleep   59    0   0:00:08 0.0% svc.configd/21
   641 root       12M   11M sleep   59    0   0:04:48 0.0% snmpd/1
    10 root       14M   10M sleep   59    0   0:00:03 0.0% svc.startd/16
   342 root       12M 9084K sleep   59    0   0:00:15 0.0% hald/5
   321 root       14M 8652K sleep   59    0   0:03:00 0.0% nscd/52

So far I can't figure out what could be causing this. The only other
thing I can think of is the batch of zfs send/receive operations that
run as backups across 10 datasets in the pool; I'm not sure how
snapshots and send/receive affect the ARC. Does anyone else have any
ideas?
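
In the meantime, I was going to leave a DTrace probe running on
arc_shrink (assuming that is still the shrink entry point in this
kernel) to timestamp each reclaim and capture the caller:

# dtrace -n 'fbt::arc_shrink:entry { printf("%Y", walltimestamp); stack(); }'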

Thanks,
Chris