On Sun, 28 Jun 2009, Bob Friesenhahn wrote:
Today I experimented with doubling this value to 688128 and was happy to see a large increase in sequential read performance from my ZFS pool, which is based on six mirror vdevs. Sequential read performance jumped from 552787 KB/s to 799626 KB/s. It seems that the default driver buffer size interferes with ZFS's ability to double the read performance by balancing reads across the mirror devices. Now the read performance is almost 2X the write performance.
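
(The message above doesn't name the tunable being doubled, so the following is only a hypothetical sketch for anyone wanting to try a similar experiment. On OpenSolaris, driver transfer-size limits of this sort are usually raised in /etc/system; "maxphys" is used here purely as an example name, so substitute whatever parameter your HBA driver actually honors.)

    * /etc/system entry (hypothetical parameter name; check your
    * driver's documentation), takes effect after a reboot:
    set maxphys=688128

    # Verify the live value from a root shell with mdb:
    echo 'maxphys/D' | mdb -k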

Grumble.  This may be a bit of a red herring.

When testing with a 16GB file, the reads were definitely faster. With 32GB and 64GB test files the read performance is the same as before. Now I am thinking that the improved performance with the 16GB file is due to the test being executed on a freshly booted system rather than on one that has been running for a week. Perhaps some useful caching is still going on with the 16GB file.
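
A quick way to check that theory, assuming a stock OpenSolaris ZFS with the usual arcstats kstats, is to watch the ARC before and after the read pass; if the 16GB file is being served partly from memory, the hit counter will jump:

    # Current ARC size in bytes
    kstat -p zfs:0:arcstats:size

    # Snapshot hits/misses before and after the benchmark run;
    # a big jump in hits means the reads came from cache
    kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses

Exporting and re-importing the pool between runs should also drop its cached data from the ARC, which would put the week-old system on the same footing as the fresh boot.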

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/