Hmmm, interesting data. See comments in-line:
Robert Milkowski wrote:
Yes, server has 8GB of RAM.
Most of the time there's about 1GB of free RAM.
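As an aside, a quick way to see where the rest of that 8GB is sitting is
mdb's ::memstat dcmd (assuming it is available on this build; on this
vintage the ARC generally shows up under "Kernel" in its output):

   # summarize physical memory usage by consumer (run as root)
   echo ::memstat | mdb -k
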
bash-3.00# mdb 0
Loading modules: [ unix krtld genunix dtrace specfs ufs sd md ip sctp usba fcp
fctl qlc ssd lofs zfs random logindmux ptm cpc nfs ipc ]
> arc::print
{
anon = ARC_anon
mru = ARC_mru
mru_ghost = ARC_mru_ghost
mfu = ARC_mfu
mfu_ghost = ARC_mfu_ghost
size = 0x8b72ae00
We are referencing about 2.2GB of data from the ARC.
p = 0xfe41b00
c = 0xfe51b00
We have already shrunk our target size (c) all the way down to c_min
(about 254MB here).
So we are obviously feeling memory pressure and trying to react.
c_min = 0xfe51b00
c_max = 0x1bca36000
...
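For reference, converting the hex values above (e.g. with bash, which
accepts 0x-prefixed numbers in arithmetic) makes the gap clear; these are
just the values printed by arc::print:

   # hex -> MB for the arc size/target values shown above
   for v in 0x8b72ae00 0xfe51b00 0x1bca36000; do
       echo "$v = $(( v / 1024 / 1024 )) MB"
   done
   # 0x8b72ae00  = 2231 MB   (size: ~2.2GB referenced by the ARC)
   # 0xfe51b00   =  254 MB   (c == c_min: the target is already at its floor)
   # 0x1bca36000 = 7114 MB   (c_max)
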
>::kmastat
cache                       buf     buf      buf     memory       alloc    alloc
name                        size    in use   total   in use       succeed  fail
------------------------- ------- -------- -------- ----------- ---------- -----
...
vn_cache 240 2400324 2507745 662691840 6307891 0
This is very interesting: 2.4 million vnodes are "active".
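A quick sanity check on that line, restating the kmastat columns above:

   # 2,400,324 vnodes at 240 bytes each, vs. memory charged to vn_cache slabs
   echo "$(( 2400324 * 240 / 1024 / 1024 )) MB in vnode structs"   # ~549 MB
   echo "$(( 662691840 / 1024 / 1024 )) MB in vn_cache slabs"      # ~632 MB incl. slab overhead
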
...
zio_buf_512 512 2388292 2388330 1304346624 176134688 0
zio_buf_1024 1024 18 96 98304 17058709 0
zio_buf_1536 1536 0 30 49152 2791254 0
zio_buf_2048 2048 0 20 40960 1051435 0
zio_buf_2560 2560 0 33 90112 1716360 0
zio_buf_3072 3072 0 40 122880 1902497 0
zio_buf_3584 3584 0 225 819200 3918593 0
zio_buf_4096 4096 3 34 139264 20336550 0
zio_buf_5120 5120 0 144 737280 8932632 0
zio_buf_6144 6144 0 36 221184 5274922 0
zio_buf_7168 7168 0 16 114688 3350804 0
zio_buf_8192 8192 0 11 90112 9131264 0
zio_buf_10240 10240 0 12 122880 2268700 0
zio_buf_12288 12288 0 8 98304 3258896 0
zio_buf_14336 14336 0 60 860160 15853089 0
zio_buf_16384 16384 142762 142793 2339520512 74889652 0
zio_buf_20480 20480 0 6 122880 1299564 0
zio_buf_24576 24576 0 5 122880 1063597 0
zio_buf_28672 28672 0 6 172032 712545 0
zio_buf_32768 32768 0 4 131072 1339604 0
zio_buf_40960 40960 0 6 245760 1736172 0
zio_buf_49152 49152 0 4 196608 609853 0
zio_buf_57344 57344 0 5 286720 428139 0
zio_buf_65536 65536 520 522 34209792 8839788 0
zio_buf_73728 73728 0 5 368640 284979 0
zio_buf_81920 81920 0 5 409600 133392 0
zio_buf_90112 90112 0 6 540672 96787 0
zio_buf_98304 98304 0 4 393216 133942 0
zio_buf_106496 106496 0 5 532480 91769 0
zio_buf_114688 114688 0 5 573440 72130 0
zio_buf_122880 122880 0 5 614400 52151 0
zio_buf_131072 131072 100 107 14024704 7326248 0
dmu_buf_impl_t 328 2531066 2531232 863993856 237052643 0
dnode_t 648 2395209 2395212 1635131392 83304588 0
arc_buf_hdr_t 128 142786 390852 50823168 155745359 0
arc_buf_t 40 142786 347333 14016512 160502001 0
zil_lwb_cache 208 28 468 98304 30507668 0
zfs_znode_cache 192 2388224 2388246 465821696 83149771 0
...
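Note that the zio_buf_16384 line above accounts for nearly all of the
ARC's reported size, so the ARC itself is mostly those 16K data buffers;
a quick cross-check using the numbers already shown:

   # 16K ARC data buffers vs. the arc.size value from arc::print
   echo "$(( 142762 * 16384 )) bytes in 16K bufs"   # 2,339,012,608
   printf '%d bytes arc.size\n' 0x8b72ae00          # 2,339,548,672
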
Because of all of those vnodes, we are seeing a lot of extra memory
being used by ZFS:
- about 1.5GB for the dnodes (dnode_t)
- another 800MB for dbufs (dmu_buf_impl_t)
- plus 1.3GB for the 512-byte "bonus buffers" (zio_buf_512, not accounted for in the ARC)
- plus about 400MB for znodes (zfs_znode_cache)
That adds up to roughly another 4GB, on top of the ~0.6GB held in the
vnode cache itself (see the sketch below).
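Put another way, each cached file carries a roughly fixed per-file cost
across these caches; a back-of-the-envelope sketch using the kmastat buf
sizes above (and treating the 512-byte zio bufs as the per-file bonus
buffers, as their counts suggest):

   # per-file overhead: dnode_t + dmu_buf_impl_t + bonus buf + znode + vnode
   per_file=$(( 648 + 328 + 512 + 192 + 240 ))                        # 1920 bytes
   echo "$(( per_file * 2400000 / 1024 / 1024 )) MB for 2.4M files"   # ~4.3 GB
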
The question is who is holding these vnodes in memory... Could you do a
>::dnlc!wc
and let me know what it comes back with?
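If it is easier to capture, the same thing can be run non-interactively;
::dnlc prints one name-cache entry per line, so the line count is the
number of DNLC entries currently holding vnodes:

   # count DNLC entries from the shell instead of inside mdb
   echo ::dnlc | mdb -k | wc -l
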
-Mark