On Mon, 31 May 2010, Sandon Van Ness wrote:
> I think I have come to the conclusion that the problem here is CPU, due to the fact that it's only doing this with parity RAID. If it were I/O-bound I would expect the same behavior either way; if anything the I/O is heavier on non-parity RAID because it is no longer CPU-bottlenecked (a dd write test gives me near 700 megabytes/sec vs. 450 with parity raidz2).
The "parity RAID" certainly does impose more computational overhead, but not because of the parity calcuation. You should put that out of your mind right away. With raidz, each 128K block is chopped into smaller chunks which are written across the disks in the vdev. This is less efficient (in many ways, but least of which is "parity") than writing 128K blocks to each disk in turn. You are creating a blast of smaller I/Os to the various disks which may seem like more CPU but could be related to PCI-E access, interrupts, or a controller bottleneck.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/