On Tue, Jan 27, 2009 at 5:47 PM, Richard Elling
<richard.ell...@gmail.com> wrote:
> comment far below...
>
> Brent Jones wrote:
>>
>> I found some insight into this behavior in a Sun blog post by Roch
>> Bourbonnais: http://blogs.sun.com/roch/date/20080514
>>
>> An excerpt from the section describing what I seem to be hitting:
>>
>> "The new code keeps track of the amount of data accepted in a TXG and
>> the time it takes to sync. It dynamically adjusts that amount so that
>> each TXG sync takes about 5 seconds (txg_time variable). It also
>> clamps the limit to no more than 1/8th of physical memory. "
>>
>> So when I fill up that transaction group, that is when I see the
>> 4-5 second "I/O burst" of several hundred megabytes per second. He
>> also documents that the flush can, and does, issue delays to the
>> writing threads, which is why I'm seeing those momentary drops in
>> throughput and the sluggish system performance while that write
>> buffer is flushed to disk.
>>
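As a sanity check on my own reading of Roch's post, the adjustment
seems to boil down to something like the toy C model below. Every name
and number in it is my own invention, not the actual ZFS code -- it is
just the shape of the feedback loop as I understand it.

/*
 * Toy model of the txg write throttle described in the blog post.
 * Invented names and numbers; NOT the real ZFS implementation.
 */
#include <stdio.h>
#include <stdint.h>

#define TXG_TARGET_SECS 5                 /* cf. txg_time / zfs_txg_synctime */
#define PHYSMEM_BYTES   (32ULL << 30)     /* pretend the box has 32 GB */

static uint64_t write_limit = 512ULL << 20;   /* bytes a txg may accept */

/*
 * After a txg syncs, scale the limit so the next sync should take about
 * TXG_TARGET_SECS, and clamp it to 1/8th of physical memory.  Writers
 * that fill the open txg past write_limit would be delayed.
 */
static void
adjust_write_limit(uint64_t bytes_synced, double sync_secs)
{
        uint64_t clamp = PHYSMEM_BYTES / 8;

        write_limit = (uint64_t)(bytes_synced * TXG_TARGET_SECS / sync_secs);
        if (write_limit > clamp)
                write_limit = clamp;
}

int
main(void)
{
        /* Three made-up txg syncs: fast, slow, and roughly on target. */
        struct { uint64_t mb; double secs; } txg[] = {
                {  512, 2.0 },
                { 1280, 6.5 },
                {  980, 5.1 },
        };
        int i;

        for (i = 0; i < 3; i++) {
                adjust_write_limit(txg[i].mb << 20, txg[i].secs);
                printf("txg %d: %4llu MB in %.1fs -> next limit %llu MB\n",
                    i, (unsigned long long)txg[i].mb, txg[i].secs,
                    (unsigned long long)(write_limit >> 20));
        }
        return (0);
}

In other words, a fast sync grows what the next txg will accept, a slow
sync shrinks it, and writers get throttled once the open txg has taken
its quota -- which matches the burst/stall pattern I'm seeing.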
>
> Yes, this tends to be more efficient. You can tune it by setting
> zfs_txg_synctime, which defaults to 5 seconds. It is rare that we've
> seen tuning it be a win, which is why we don't mention it in the
> Evil Tuning Guide.
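(If I do end up experimenting with zfs_txg_synctime, I assume it is set
the same way as the other zfs tunables, i.e. a line in /etc/system such
as

    set zfs:zfs_txg_synctime = 10

followed by a reboot -- the value 10 is just an example, and the syntax
is only my guess from the usual module:variable convention, so correct
me if I have the name or format wrong.)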
>
>> Wish there were a better way to handle that, but at the speeds I'm
>> writing (and I'll be getting a 10GigE link soon), I don't see a more
>> graceful way of handling that much buffered data.
>>
>
> I think your workload might change dramatically when you get a
> faster pipe, so unless you really feel compelled, I wouldn't
> suggest changing it.
> -- richard
>
>> Loving these X4540s so far, though...
>>
>>
>
>

Are there any additional tunables, such as opening a new txg buffer
before the previous one has finished flushing, or otherwise allowing
writes to continue without the tick delay? My workload will be pretty
consistent; these units are going to serve a few roles at once:
- large-scale backups
- CIFS shares for Windows app servers
- NFS server for Unix app servers

GigE quickly became the bottleneck, and I imagine 10GigE will add
further stress to those write buffers.

-- 
Brent Jones
br...@servuhome.net