On Mon, 13 Jul 2009, Mike Gerdts wrote:

Using cpio's -C option does not seem to change the behavior for this
bug, but I did see a performance difference in the case where I hadn't
modified the zfs caching behavior.  That is, the performance of the
tmpfs-backed vdisk more than doubled with
"cpio -o -C $((1024 * 1024)) > /dev/null".  At this point cpio was
spending roughly 13% usr and 87% sys.

Interesting. I just updated zfs-cache-test.ksh on my web site so that it uses 131072-byte blocks. I see a tiny improvement in performance from doing this, and a bit less CPU consumption, so the CPU overhead is now essentially zero. The bug remains. It seems best to use ZFS's ideal block size so that block-size issues don't confuse the results.
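
For anyone who wants to reproduce the read pass, here is a minimal sketch of the sort of cpio invocation we are both describing (the /testpool/data path is hypothetical; substitute wherever the test files were written):

    # Stream every test file through cpio and discard the archive, so only
    # the ZFS read path is exercised.  -C sets the I/O block size; 131072
    # matches the 128K ZFS recordsize discussed above.
    find /testpool/data -type f | cpio -o -C 131072 > /dev/null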

Using an ARC monitoring script called 'arcstat.pl', I see far more 'dmis' (demand miss) events when performance is poor. The ARC size is 7GB, which is less than its prescribed cap of 10GB.

Better:

    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
15:39:37   20K    1K      6    58    0    1K  100    19  100     7G   10G
15:39:38   19K    1K      5    57    0    1K  100    19  100     7G   10G
15:39:39   19K    1K      6    54    0    1K  100    18  100     7G   10G
15:39:40   17K    1K      6    51    0    1K  100    17  100     7G   10G

Worse:

    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
15:43:24    4K   280      6   280    6     0    0     4  100     9G   10G
15:43:25    4K   277      6   277    6     0    0     4  100     9G   10G
15:43:26    4K   268      6   268    6     0    0     5  100     9G   10G
15:43:27    4K   259      6   259    6     0    0     4  100     9G   10G
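
For reference, the samples above came from running arcstat.pl at a one-second interval while the test was in progress, roughly like this (the script's location will vary, since it is not bundled with the OS):

    # Print one line of ARC statistics per second until interrupted.
    ./arcstat.pl 1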

An ARC stats summary from a tool called 'arc_summary.pl' is appended to this message.
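
The figures that arc_summary.pl reports are just the kernel's ARC kstats, so the same values can be sampled directly with kstat(1M) if anyone wants to watch them move, e.g.:

    # Raw ARC counters: current size, target size (c), and MRU target (p).
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:p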

Operation is quite consistent across the full span of files. Since 'dmis' is still low when things are "good" (even after the ARC has surely cycled), I believe that prefetch is mostly working and is usually satisfying the read requests. When things go bad, 'dmis' accounts for essentially 100% of the misses. My hypothesis is that if zfs thinks the data might be in the ARC (because it has seen the file before), it disables file prefetch entirely, assuming that it can retrieve the data from its cache. Then, once it finally determines that there is no cached data after all, it issues a read request.

Even the "better" read performance is half of what I would expect from this hardware, based on prior 'iozone' test results. More prefetch would surely help.
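
One way to experiment with the hypothesis above is to check, and temporarily toggle, the global file-level prefetch tunable on the live kernel and then re-run the test. A sketch (commands I have used elsewhere, not verified against this exact build):

    # Show the current setting; 0 means file-level prefetch is enabled.
    echo "zfs_prefetch_disable/D" | mdb -k

    # Disable prefetch (0t1 is decimal 1); the change takes effect
    # immediately and does not persist across a reboot.
    echo "zfs_prefetch_disable/W0t1" | mdb -kw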

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

System Memory:
         Physical RAM:  20470 MB
         Free Memory :  2511 MB
         LotsFree:      312 MB

ZFS Tunables (/etc/system):
         * set zfs:zfs_arc_max = 0x300000000
         set zfs:zfs_arc_max = 0x280000000
         * set zfs:zfs_arc_max = 0x200000000
         set zfs:zfs_write_limit_override = 0xea600000
         * set zfs:zfs_write_limit_override = 0xa0000000
         set zfs:zfs_vdev_max_pending = 5

ARC Size:
         Current Size:             8735 MB (arcsize)
         Target Size (Adaptive):   10240 MB (c)
         Min Size (Hard Limit):    1280 MB (zfs_arc_min)
         Max Size (Hard Limit):    10240 MB (zfs_arc_max)

ARC Size Breakdown:
         Most Recently Used Cache Size:          95%    9791 MB (p)
         Most Frequently Used Cache Size:         4%    448 MB (c-p)

ARC Efficiency:
         Cache Access Total:             827767314
         Cache Hit Ratio:      96%       800123657      [Defined State for Buffer]
         Cache Miss Ratio:      3%       27643657       [Undefined State for Buffer]
         REAL Hit Ratio:       89%       743665046      [MRU/MFU Hits Only]

         Data Demand   Efficiency:    99%
         Data Prefetch Efficiency:    61%

        CACHE HITS BY CACHE LIST:
          Anon:                        5%        47497010               [ New Customer, First Cache Hit ]
          Most Recently Used:         33%        271365449 (mru)        [ Return Customer ]
          Most Frequently Used:       59%        472299597 (mfu)        [ Frequent Customer ]
          Most Recently Used Ghost:    0%        1700764 (mru_ghost)    [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:  0%        7260837 (mfu_ghost)    [ Frequent Customer Evicted, Now Back ]
        CACHE HITS BY DATA TYPE:
          Demand Data:                73%        589582518
          Prefetch Data:               2%        20424879
          Demand Metadata:            17%        139111510
          Prefetch Metadata:           6%        51004750
        CACHE MISSES BY DATA TYPE:
          Demand Data:                21%        5814459
          Prefetch Data:              46%        12788265
          Demand Metadata:            27%        7700169
          Prefetch Metadata:           4%        1340764