Hello Eric,

Monday, August 7, 2006, 5:53:38 PM, you wrote:

ES> Cool stuff, Robert.  It'd be interesting to see some RAID-Z (single- and
ES> double-parity) benchmarks as well, but understandably this takes time
ES> ;-)

I intend to test RAID-Z. Not sure there'll be enough time for RAID-Z2.


ES> The first thing to note is that the current Nevada bits have a number of
ES> performance fixes not in S10u2, so there's going to be a natural bias
ES> when comparing ZFS to ZFS between these systems.

Yeah, I know. That's why I also put UFS on the HW RAID config, to see
whether ZFS underperforms on U2.


ES> Second, you may be able to get more performance from the ZFS filesystem
ES> on the HW lun by tweaking the max # of pending requests.  One thing
ES> we've found is that ZFS currently has a hardcoded limit of how many
ES> outstanding requests to send to the underlying vdev (35).  This works
ES> well for most single devices, but large arrays can actually handle more,
ES> and we end up leaving some performance on the floor.  Currently the only
ES> way to tweak this variable is through 'mdb -kw'.  Try something like:

Strange, though - I did try values of 1, 60, and 256, and I get
basically the same results from the varmail tests.
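For anyone following along, the tweak Eric refers to looks roughly like
the commands below. This is a sketch, not Eric's exact instructions
(those were trimmed from the quote above); it assumes the tunable is
named zfs_vdev_max_pending, which matches contemporary OpenSolaris
builds - verify the symbol exists in your kernel before writing to it.

```shell
# Check the current per-vdev limit on outstanding requests
# (prints the value as a decimal integer; requires root).
echo "zfs_vdev_max_pending/D" | mdb -k

# Raise the limit to 60 - mdb's 0t prefix means decimal.
# Assumed symbol name; confirm it exists on your build first.
echo "zfs_vdev_max_pending/W 0t60" | mdb -kw
```

Note this changes the running kernel only; the value reverts to the
compiled-in default (35) on reboot.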


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss