On Tue, 17 Jul 2012, Michael Hase wrote:

>> The below is with a 2.6 GB test file, but with a 26 GB test file (just add another zero to 'count' and wait longer) I see an initial read rate of 618 MB/s and a re-read rate of 8.2 GB/s. The raw disk can transfer 150 MB/s.

> To work around these caching effects, just use a file > 2 times the size of RAM; iostat then shows the numbers really coming from disk. I always test like this. A re-read rate of 8.2 GB/s is really just memory bandwidth, but quite impressive ;-)

Yes, in the past I have done benchmarking with a file size 2X the size of memory. This does not necessarily erase all caching effects, because the ARC is smart enough not to toss everything.
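
For reference, a minimal sketch of that kind of test (the pool path, RAM size, and counts here are illustrative, not my actual setup):

  # create a test file roughly 2X the size of RAM (assuming 16 GB RAM here);
  # note that /dev/zero data will compress away if the dataset has compression on
  dd if=/dev/zero of=/mypool/testfile bs=1024k count=32768
  # first read pass: data comes mostly from disk
  dd if=/mypool/testfile of=/dev/null bs=1024k
  # second read pass: data is largely served from the ARC
  dd if=/mypool/testfile of=/dev/null bs=1024k

Watching 'iostat -xn 5' in another window shows how much of each pass actually hits the disks.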

At the moment I have an iozone benchmark run going from 8 GB up to 256 GB file sizes. I see that it has started the 256 GB size now. It may be a while. Maybe a day.
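
The invocation is along these lines (the exact flags and file path are a guess at the command, not a transcript):

  iozone -a -i 0 -i 1 -n 8g -g 256g -f /mypool/iozone.tmp

Here -n and -g bound the minimum and maximum file sizes for automatic mode, and -i 0 / -i 1 select the write and read tests.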

> In the range of > 600 MB/s other issues may show up (PCIe bus contention, HBA contention, CPU load). And performance at this level could be just good enough, not requiring any further tuning. Could you recheck with only 4 disks (2 mirror pairs)? If you get just some 350 MB/s, it could be the same problem as with my boxes. All SATA disks?

Unfortunately, I already put my pool into use and cannot conveniently destroy it now.
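
(For reference, the 4-disk layout suggested above would be created with something like 'zpool create testpool mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0', with hypothetical device names.)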

The disks I am using are SAS (7200 RPM, 1 TB), but they return per-disk data rates similar to those of the SATA disks I use for the boot pool.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/