On Mon, 31 Aug 2009, en...@businessgrade.com wrote:

Hi. I've been doing some simple read/write tests using filebench on a mirrored pool, scaling the pool up from 4 to 8 to 12 disks between tests. Aggregate ZFS write performance scales very well across 4, 8, and 12 disks, which may be because I'm using an SSD as a log device. But I'm seeing individual disk throughput drop by as much as 14 MB/s per disk between the 4-disk and 12-disk configurations. Across the entire pool that means I've lost 168 MB/s of raw throughput just by adding two mirror sets. I'm curious to know if there are any dials I can turn to improve this. System details are below:
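(For the arithmetic behind that claim: a per-disk loss multiplied across all spindles gives the aggregate loss. A minimal sketch, using the numbers reported above:)

```shell
# Per-disk write throughput dropped ~14 MB/s going from the 4-disk
# to the 12-disk pool; summed over all 12 disks that is the raw
# aggregate throughput lost.
disks=12
per_disk_loss=14   # MB/s
echo "aggregate loss: $((disks * per_disk_loss)) MB/s"
```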

Sun is currently working on several prefetch bugs (complete loss of prefetch and insufficient prefetch) which have been identified. Perhaps you were not on this list in July, when a huge amount of discussion traffic was dominated by the thread "Why is Solaris 10 ZFS performance so terrible?" (http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/029340.html). It turned out that the subject was over-specific, since current OpenSolaris suffers from the same issues, as proven by test results run by many people on a wide variety of hardware.

Eventually Rich Morris posted a preliminary analysis of the performance problem at http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/030169.html

Hopefully Sun will get the prefetch algorithm and timing perfected so that we may enjoy the full benefit of our hardware.
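(As for dials to turn in the meantime: one tunable that exists on Solaris 10 / OpenSolaris is zfs_prefetch_disable, which some people on the earlier thread used to rule prefetch in or out as the culprit. A sketch of the /etc/system entry, as a diagnostic experiment rather than a recommended setting:)

```
* /etc/system fragment (takes effect after reboot); disables
* file-level prefetch so you can compare throughput with and
* without it while the prefetch bugs remain open.
set zfs:zfs_prefetch_disable = 1
```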

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
