For what it is worth, I too have seen this behavior when load testing our ZFS
box. I used Iometer with the RealLife profile (1 worker, 1 target, 65% reads,
60% random, 8 KB transfers, 32 outstanding I/Os). Whenever a batch of writes
gets dumped to disk, reads drop close to zero: from 600-700 read IOPS down to
15-30 read IOPS.
zpool iostat data01 1

    where data01 is my pool name.

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data01      55.5G  20.4T    691      0  4.21M      0
data01      55.5G  20.4T    632      0  3.80M      0
data01      55.5G  20.4T    657      0  3.93M      0
data01      55.5G  20.4T    669      0  4.12M      0
data01      55.5G  20.4T    689      0  4.09M      0
data01      55.5G  20.4T    488  1.77K  2.94M  9.56M
data01      55.5G  20.4T     29  4.28K   176K  23.5M
data01      55.5G  20.4T     25  4.26K   165K  23.7M
data01      55.5G  20.4T     20  3.97K   133K  22.0M
data01      55.6G  20.4T    170  2.26K  1.01M  11.8M
data01      55.6G  20.4T    678      0  4.05M      0
data01      55.6G  20.4T    625      0  3.74M      0
data01      55.6G  20.4T    685      0  4.17M      0
data01      55.6G  20.4T    690      0  4.04M      0
data01      55.6G  20.4T    679      0  4.02M      0
data01      55.6G  20.4T    664      0  4.03M      0
data01      55.6G  20.4T    699      0  4.27M      0
data01      55.6G  20.4T    423  1.73K  2.66M  9.32M
data01      55.6G  20.4T     26  3.97K   151K  21.8M
data01      55.6G  20.4T     34  4.23K   223K  23.2M
data01      55.6G  20.4T     13  4.37K  87.1K  23.9M
data01      55.6G  20.4T     21  3.33K   136K  18.6M
data01      55.6G  20.4T    468    496  2.89M  1.82M
data01      55.6G  20.4T    687      0  4.13M      0
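In case anyone wants to watch the same thing live, here is a rough sketch of
an awk filter over that iostat sampling. It is untested as written and assumes
the pool is named data01 and that the operations read/write columns land in
fields 4 and 5, as in the output above (the K suffix on the write column is
expanded to a plain count):

zpool iostat data01 1 | awk '
    $1 == "data01" {
        # operations columns: field 4 = read IOPS, field 5 = write IOPS
        reads  = $4 + 0
        writes = ($5 ~ /K$/) ? $5 * 1000 : $5 + 0
        phase  = (writes > 0) ? "flushing writes" : "reads only"
        printf "%-15s  read IOPS: %4d  write IOPS: %5d\n", phase, reads, writes
    }'

Against the samples above that would tag roughly five seconds of "flushing
writes" with reads in the teens and twenties, then five to seven seconds of
"reads only" back up around 650-700.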

-Scott
