Hi,

zpool create test raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 \
                  raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 \
                  raidz c0t2d0 c1t2d0 c2t2d0 c3t2d0 \
                  raidz c0t3d0 c1t3d0 c2t3d0 c3t3d0 \
                  [...]
                  raidz c0t10d0 c1t10d0 c2t10d0 c3t10d0

zfs set atime=off test
zfs set recordsize=16k test
(I know...)

Now, if I create one large file with filebench and simulate a random-read workload with one or more threads, the disks on the c2 and c3 controllers get about 80% more reads than those on c0 and c1. This happens on both snv_111b and snv_134. I would rather expect all of them to get about the same number of IOPS.
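For reference, the workload is essentially the stock filebench random-read personality; a minimal sketch along those lines (the directory, file size, and run time below are illustrative, not the exact values I used):

```
set $dir=/test
set $filesize=100g
set $iosize=16k
set $nthreads=8

define file name=bigfile,path=$dir,size=$filesize,prealloc,reuse

define process name=rand-read,instances=1
{
  thread name=rand-thread,instances=$nthreads
  {
    flowop read name=rand-read1,filename=bigfile,iosize=$iosize,random
  }
}

run 60
```

While it runs, `iostat -xn 5` is what shows the skew: the r/s column for drives on c2/c3 sits roughly 80% above the drives on c0/c1.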

Any idea why?


--
Robert Milkowski
http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss