Thanks for the information, Richard!

The output of running arcstat.pl is included below. One potentially
interesting thing I see is that the prefetch miss percentage (pm%)
stays at 100% throughout this test. I would have thought that a large
sequential read test would be an easy case for prefetch prediction.
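
In case it is useful, the raw counters that arcstat.pl rolls up into
pmis/pm% can be read straight from kstat. The statistic names below
are the ones I believe the arcstats and zfetchstats kstats export on
this build, so adjust if yours differ:

  # ARC-level prefetch hit/miss counters (cumulative since boot)
  kstat -p zfs:0:arcstats:prefetch_data_hits \
           zfs:0:arcstats:prefetch_data_misses \
           zfs:0:arcstats:prefetch_metadata_hits \
           zfs:0:arcstats:prefetch_metadata_misses

  # The dmu_zfetch prefetcher keeps its own counters, which should show
  # whether the stride detector is recognizing the sequential pattern
  kstat -p zfs:0:zfetchstats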

When I first start the test and am seeing a 100 Mbytes/sec read
rate, the arcstat output is:

    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
10:29:22   685   374     54     9    2   365  100     7    2    51M  490M
10:29:23    2K   855     37     7    0   848  100     7    0   163M  490M
10:29:24    2K   774     33     6    0   768  100     6    0   264M  490M
10:29:25    3K    1K     33     8    0    1K  100     8    0   398M  490M
10:29:26    2K   774     30     6    0   768  100     6    0   413M  412M
10:29:27    2K    1K     34     8    0    1K  100     8    0   413M  412M
10:29:28    2K   774     35     6    0   768  100     6    0   413M  412M
10:29:29    2K   774     34     6    0   768  100     6    0   413M  412M
10:29:30    2K   774     36     6    0   768  100     6    0   413M  412M
10:29:31    2K   774     35     6    0   768  100     6    0   413M  412M
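
One thing I notice in this first table is that arcsz climbs until it
meets c, and c itself shrinks from 490M to 412M a few seconds in. The
raw size and targets can be watched directly with something like the
following (again assuming the usual arcstats statistic names):

  # Print current ARC size, the adaptive target (c), and the bounds it
  # can move between (c_min/c_max), once every 5 seconds
  while true; do
      kstat -p zfs:0:arcstats:size zfs:0:arcstats:c \
               zfs:0:arcstats:c_min zfs:0:arcstats:c_max
      sleep 5
  done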

Thirteen minutes later, after the test has looped about 30 times (and
is still going) and the read rate has dropped to 50 Mbytes/sec, the
arcstat output is:

    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
10:42:58    1K   523     40    11    1   512  100     4    0   397M  397M
10:42:59   761   266     34    10    1   256  100     2    0   397M  397M
10:43:00    1K   525     41    13    1   512  100     4    0   397M  397M
10:43:01    1K   464     40    15    2   449  100     4    0   397M  397M
10:43:02   932   331     35    12    1   319  100     4    1   397M  397M
10:43:03    1K   534     41    19    2   515   99     8    1   397M  397M
10:43:04   770   266     34    10    1   256  100     2    0   397M  397M
10:43:05    1K   525     41    13    1   512  100     4    0   397M  397M
10:43:06   777   267     34    11    2   256  100     3    1   397M  397M
10:43:08    1K   533     41    18    2   515   99     7    1   397M  397M
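
At this point arcsz and c are both pinned at 397M, which makes the
cache look memory-capped. A quick way to see what ceiling the ARC
thinks it has (the exact ::arc output format may differ by build) is:

  # Dump the ARC's view of its own limits; c_max is the ceiling the
  # adaptive target is allowed to grow back up to (needs root)
  echo "::arc" | mdb -k | egrep "size|c_min|c_max"

If c_max comes back near physical memory, then more RAM should at
least let the ARC grow again; if it has been clamped (for example via
zfs_arc_max in /etc/system), extra memory alone presumably would not
help.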

Given the numbers above, is it still likely that this degradation
would not occur on a faster machine with more memory?