You may want to try disabling the disk write cache on the single disk.
Also, for the RAID array, disable 'host cache flush' if such an option
exists.  That solved the problem for me.
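For reference, the knobs I had in mind (a rough sketch for
Solaris/OpenSolaris; please double-check against your disk and array
documentation before touching them) are the per-disk write cache in
format's expert mode, and the global zfs_nocacheflush tunable that
stops ZFS from issuing cache-flush requests to the array:

    # Per-disk write cache on a SCSI/SAS disk, via format's expert mode:
    format -e        # select the disk, then: cache -> write_cache -> disable

    # Stop ZFS from sending cache flushes (only reasonable if the array
    # has battery/NVRAM-backed cache); add to /etc/system and reboot:
    set zfs:zfs_nocacheflush = 1

Note that zfs_nocacheflush is system-wide, so treat it as an experiment
rather than a permanent fix.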

Let me know.


Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
>
> This causes the sync to happen much faster but, as you say, it is
> suboptimal.  Haven't had the time to go through the bug report, but
> CR 6429205 ("each zpool needs to monitor its throughput and throttle
> heavy writers") will probably help.

I hope that this feature is implemented soon, and works well. :-)

I tested with my application writing to a UFS filesystem on a single 
15K RPM SAS disk and saw that it writes at about 50MB/second, without 
the bursty behavior of ZFS.  When writing to a ZFS filesystem on a 
RAID array, 'zpool iostat' reports an average (over 10 seconds) write 
rate of 54MB/second.  Given that the throughput is not much higher on 
the RAID array, I assume that the bottleneck is in my application.
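
For the record, the 10-second averages above come from running zpool 
iostat with an interval, roughly like this ('tank' is just a 
placeholder for the actual pool name):

    zpool iostat tank 10       # pool-wide bandwidth, sampled every 10 seconds
    zpool iostat -v tank 10    # the same, broken down per vdev and disk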

>> Are the 'zpool iostat' statistics accurate?
>
> Yes. You could also look at regular iostat
> and correlate it.

Iostat shows that my RAID array disks are loafing, writing only 
9MB/second each at 82 writes/second per disk.
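
For correlation I am looking at something like:

    iostat -xn 10      # per-device r/s, w/s, kr/s, kw/s and %b every 10s

which also puts the average write size at roughly 9MB/s divided by 82 
writes/s, i.e. about 110KB per write, so the disks are nowhere near 
saturated.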

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


       