Why do the read operations show as 0 while the read bandwidth is >0?
What is iostat not accounting for? Is it metadata reads? (If so, is it
possible to determine what kind of metadata reads these are?)
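
For context, the numbers below came from watching the pools at a fixed
interval, with something like the following (the interval is arbitrary;
-v would break the stats out per device, which might show where the
stray read bandwidth is coming from):

    zpool iostat 5
    zpool iostat -v data1 5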

I plan to have 3 disks and am debating what to do with them: whether to
do a raidz (single or double parity) or just a mirror.
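
For concreteness, the layouts I'm weighing would be created roughly like
this (the device names are just placeholders):

    # single-parity raidz: ~2 disks of usable space, survives 1 failure
    zpool create data raidz  c1t0d0 c1t1d0 c1t2d0

    # double-parity raidz: ~1 disk of usable space, survives 2 failures
    zpool create data raidz2 c1t0d0 c1t1d0 c1t2d0

    # 3-way mirror: 1 disk of usable space, survives 2 failures
    zpool create data mirror c1t0d0 c1t1d0 c1t2d0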

Some of the blog entries I've been reading suggest that raidz may not be
that suitable for a lot of random reads, apparently because every read
from a raidz group has to touch all of its disks, so the group delivers
roughly the random-read IOPS of a single disk.

With the number of reads below, I don't see any reason to worry about
that. I would like to proceed with a double-parity raidz; please give me
some feedback.

Thanks,
Anil

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.67G      2     19  52.3K   198K
data2       58.2G  9.83G      3     44  60.5K   180K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     21  11.3K   151K
data2       58.2G  9.83G      0     44    158   140K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     18    436   117K
data2       58.2G  9.83G      0     44    203   149K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     20  1.49K   167K
data2       58.2G  9.83G      0     44    331   154K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     21    791   166K
data2       58.2G  9.83G      0     46    199   167K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     25    686   364K
data2       58.2G  9.83G      0     45  35.9K   152K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     19    698   129K
data2       58.2G  9.83G      0     43     81   146K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     19  1.45K   141K
data2       58.2G  9.82G      0     44     59   139K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     19    436   124K
data2       58.2G  9.82G      0     43     71   145K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     21    412   150K
data2       58.2G  9.82G      0     41    114   138K
----------  -----  -----  -----  -----  -----  -----
data1       41.6G  5.66G      0     20  1.35K   128K
data2       58.2G  9.82G      0     47    918   160K
----------  -----  -----  -----  -----  -----  -----

