Ahmed Kamal writes:
 > Hi,
 > 
 > I have been doing some basic performance tests, and I am seeing a big hit
 > when I run UFS over a zvol instead of directly using ZFS. Any hints or
 > explanations are very welcome. Here's the scenario: the machine has 30G RAM
 > and two IDE disks attached. The disks have 2 fdisk partitions (c4d0p2,
 > c3d0p2) that are mirrored and form a zpool. When using filebench with 20G
 > files, writing directly on the ZFS filesystem, I get the following results:
 > 
 > RandomWrite-8k:  0.8M/s
 > SingleStreamWriteDirect1m: 50M/s
 > MultiStreamWrite1m:      51M/s
 > MultiStreamWriteDirect1m: 50M/s
 > 
 > Pretty consistent and lovely. The 50M/s rate sounds reasonable, though the
 > random 0.8M/s seems a bit too low. All in all, things look OK to me here.
 > 
 > The second step is to create a 100G zvol, format it with UFS, then benchmark
 > it under the same conditions. Note that this zvol lives on the exact same
 > zpool used previously. I get the following:
 > 
 > RandomWrite-8k:  0.9M/s
 > SingleStreamWriteDirect1m: 5.8M/s   (??)
 > MultiStreamWrite1m:      33M/s
 > MultiStreamWriteDirect1m: 11M/s
 > 

The straight zvol case might have unfairly benefited from

        6770534 - zvols do not observe O_SYNC semantic

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6770534

A fix is committed for the next ONNV build (106).

The UFS-over-zvol case would not have, since the strategy entry points
to zvol are not affected by the bug.

-r


 > Obviously, there's a major hit. Can someone please shed some light on why
 > this is happening? If more info is required, I'd be happy to test some more.
 > This is all running on the OpenSolaris 2008.11 release.
 > 
 > Note: I know ZFS auto-disables disk caches when running on partitions (is
 > that slices, or fdisk partitions?!). Could this be causing what I'm seeing?
 > 
 > Thanks for the help
 > Regards

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
