[please don't top-post, please remove CC's, please trim quotes.  it's
 really tedious to clean up your post to make it readable.]

Marc Nicholas <geekyth...@gmail.com> writes:
> Brent Jones <br...@servuhome.net> wrote:
>> Marc Nicholas <geekyth...@gmail.com> wrote:
>>> Kjetil Torgrim Homme <kjeti...@linpro.no> wrote:
>>>> his problem is "lazy" ZFS, notice how it gathers up data for 15
>>>> seconds before flushing the data to disk.  tweaking the flush
>>>> interval down might help.
>>>
>>> How does lowering the flush interval help? If he can't ingress data
>>> fast enough, faster flushing is a Bad Thing(tm).

if network traffic is blocked during the flush, you can experience
back-off at both the TCP and iSCSI levels.
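
to experiment with a shorter flush interval, here is a minimal sketch
on OpenSolaris.  note that the tunable name varies by build (older
bits call it txg_time, newer ones zfs_txg_timeout), so check what your
kernel actually exports first:

  # read the current value (seconds, decimal)
  echo "zfs_txg_timeout/D" | mdb -k

  # drop it to 5 seconds on the running kernel (illustrative value)
  echo "zfs_txg_timeout/W 0t5" | mdb -kw

  # or persistently, in /etc/system:
  set zfs:zfs_txg_timeout = 5

this trades larger, less frequent flushes for smaller, more frequent
ones, so measure against your own workload.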

>>>> what are the other values?  ie., number of ops and actual amount of
>>>> data read/written.

this remained unanswered.

>> ZIL performance issues? Is writecache enabled on the LUNs?
> This is a Windows box, not a DB that flushes every write.

have you checked whether the iSCSI traffic is synchronous?  I don't
use Windows, but other reports on the list have indicated that at least
the NTFS format operation *is* synchronous.  use zilstat to see.
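
a quick sketch of that, assuming you have Richard Elling's zilstat
script (it's a DTrace-based script, not part of the base install):

  # one-second samples of ZIL activity
  ./zilstat 1

if the byte and op counters stay non-zero while the Windows box is
writing, the traffic is synchronous and the ZIL is in the write path.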

> The drives are capable of over 2000 IOPS (albeit with high latency as
> its NCQ that gets you there) which would mean, even with sync flushes,
> 8-9MB/sec.

2000 IOPS is the aggregate, but the disks are set up as *one* RAID-Z2!
a single raidz2 vdev delivers roughly the small-I/O performance of one
disk, since every block is striped across all the drives in the vdev.
NCQ doesn't help much either, since the write operations issued by ZFS
are already ordered correctly.
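
as a rough illustration (assuming ~150 IOPS per 7200 rpm spindle and
the ~4 KB writes implied by your 2000 IOPS = 8-9 MB/sec figure): for
small synchronous I/O the vdev behaves like roughly one disk, so the
realistic ceiling is on the order of 150 ops/s * 4 KB, i.e. well under
1 MB/sec, nowhere near 8-9 MB/sec.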

the OP may also want to try tweaking metaslab_df_free_pct, this helped
linear write performance on our Linux clients a lot:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6869229
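
for completeness, a sketch of how you would inspect and change that
tunable; I'm deliberately not recommending a specific value here, the
discussion in the bug report covers that:

  # read the current value (decimal)
  echo "metaslab_df_free_pct/D" | mdb -k

  # change it on the running kernel, e.g. to 4 (illustrative only)
  echo "metaslab_df_free_pct/W 0t4" | mdb -kw

  # or persistently via /etc/system:
  set zfs:metaslab_df_free_pct = 4

as with any allocator tunable, benchmark before and after.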

-- 
Kjetil T. Homme
Redpill Linpro AS - Changing the game
