Harley Gorrell writes:
 > On Fri, 22 Sep 2006, [EMAIL PROTECTED] wrote:
 > > Are you just trying to measure ZFS's read performance here?
 > 
 >     That is what I started looking at.  We scrounged around
 > and found a set of 300GB drives to replace the old ones we
 > started with.  Comparing these new drives to the old ones:
 > 
 > Old 36GB drives:
 > 
 > | # time mkfile -v 1g zeros-1g
 > | zeros-1g 1073741824 bytes
 > | 
 > | real    2m31.991s
 > | user    0m0.007s
 > | sys     0m0.923s
 > 
 > Newer 300GB drives:
 > 
 > | # time mkfile -v 1g zeros-1g
 > | zeros-1g 1073741824 bytes
 > | 
 > | real    0m8.425s
 > | user    0m0.010s
 > | sys     0m1.809s
 > 
 >     At this point I am pretty happy.
 > 

It looks like, on the second run, you had a lot more free memory, so
mkfile completed at close to memcpy speed: the data went into the cache
and was flushed to disk afterwards.
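
For a rough sense of scale, here are the effective rates implied by the
two runs (back-of-envelope, from the byte count and wall-clock times
quoted above):

        1073741824 bytes / 151.991 s  ~   7 MB/s   (old 36GB drives)
        1073741824 bytes /   8.425 s  ~ 127 MB/s   (new 300GB drives)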

Something is awry on the first pass, though. Running

        zpool iostat 1

can shed some light on this. In the second case, the I/O will keep going
after mkfile completes; for the first one, there may have been an
interaction with I/O loads that had not yet finished?
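
To make that concrete, a minimal way to try it (the mount point below is
a placeholder; substitute the path of your ZFS filesystem):

        zpool iostat 1                        # terminal 1: pool I/O once per second
        time mkfile -v 1g /tank/zeros-1g      # terminal 2: rerun the write

If the fast run is being absorbed by memory first, the write column in
terminal 1 should stay busy for a while after mkfile has already returned.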

-r


 >     I am wondering if there is something other than capacity
 > and seek time which has changed between the drives.  Would a
 > different scsi command set or features have this dramatic a
 > difference?
 > 
 > thanks!,
 > harley.
