Working with a small txg_time means we are hit by the pool
sync overhead more often. This is why the per-second
throughput has smaller peak values.
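You can watch this directly with per-second samples, e.g. (the pool name
"tank" is just an example):

    zpool iostat tank 1

and compare the peaks against a run with the default txg_time.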
With txg_time = 5, we have another problem: depending on the
timing of the pool sync, some txgs can end up with too little
data in them. I observed more predictable throughput when I use
an I/O generator that can do throttling (xdd or vdbench).
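For reference, a throttled sequential-write run in vdbench looks roughly
like this (the device path, transfer size and iorate are placeholders I
made up, not a recommendation):

    sd=sd1,lun=/dev/rdsk/c1t0d0s0
    wd=wd1,sd=sd1,xfersize=128k,rdpct=0,seekpct=0
    rd=run1,wd=wd1,iorate=2000,elapsed=60,interval=1

The iorate= parameter is what caps the request rate.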
On 3/11/07, Jesse DeFer <[EMAIL PROTECTED]> wrote:
OK, I tried it with txg_time set to 1 and am seeing less predictable results.
The first time I ran the test it completed in 27 seconds (vs 24s for ufs or 42s
with txg_time=5). Further tests ranged from 27s to 43s, with about half of
them taking longer than 40s.
zpool iostat doesn't show the large no-write gaps anymore.
Jesse, you can change txg_time with mdb:
echo "txg_time/W0t1" | mdb -kw
-r
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> [EMAIL PROTECTED] wrote on 03/05/2007 03:56:28 AM:
>
> > one question,
> > is there a way to stop the default txg push behaviour (push at regular
> > timestep -- default is 5sec) but instead push them "on the fly"... I
> > would imagine this is better in the case of an application doing bi