Re: [zfs-discuss] Re: ZFS stalling problem

2007-03-12 Thread Roch - PAE
Working with a small txg_time means we are hit by the pool sync overhead more often. This is why the per-second throughput has smaller peak values. With txg_time = 5, we have another problem, which is that depending on the timing of the pool sync, some txg can end up with too little data in them an
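One way to see how the sync cadence changes with txg_time is to timestamp the pool syncs directly. The sketch below assumes the fbt provider exposes spa_sync on this build; the probe names are an assumption, not something taken from the thread:

  # print the duration of each pool sync (fbt::spa_sync probe names are an assumption)
  dtrace -n '
    fbt::spa_sync:entry { self->ts = timestamp; }
    fbt::spa_sync:return /self->ts/ {
      printf("pool sync took %d ms\n", (timestamp - self->ts) / 1000000);
      self->ts = 0;
    }'

Comparing the gaps between sync start times under txg_time=1 and txg_time=5 should make the overhead-versus-batching tradeoff described above visible.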

Re: [zfs-discuss] Re: ZFS stalling problem

2007-03-11 Thread Selim Daoud
I observed more predictable throughput if I use an IO generator that can do throttling (xdd or vdbench). On 3/11/07, Jesse DeFer <[EMAIL PROTECTED]> wrote: OK, I tried it with txg_time set to 1 and am seeing less predictable results. The first time I ran the test it completed in 27 seconds
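For what it's worth, a minimal vdbench setup that caps the write rate could look like the following; the device path, 128k transfer size, and 5000 IOPS limit are illustrative placeholders, not values from this thread:

  # write a small parameter file describing a throttled sequential-write workload
  cat > /tmp/throttle.parm <<'EOF'
  sd=sd1,lun=/dev/rdsk/c1t0d0s0,threads=8
  wd=wd1,sd=sd1,xfersize=128k,rdpct=0,seekpct=0
  rd=rd1,wd=wd1,iorate=5000,elapsed=60,interval=1
  EOF
  # run it; iorate caps the workload at 5000 IOPS, reporting once a second
  ./vdbench -f /tmp/throttle.parm

Holding the offered load below the pool's sync capacity is what makes the resulting throughput curve flatter than an unthrottled dd-style test.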

[zfs-discuss] Re: ZFS stalling problem

2007-03-11 Thread Jesse DeFer
OK, I tried it with txg_time set to 1 and am seeing less predictable results. The first time I ran the test it completed in 27 seconds (vs. 24s for UFS or 42s with txg_time=5). Further tests ran from 27s to 43s, with about half of them taking longer than 40s. zpool iostat doesn't show the large no-write
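While a test like this runs, a one-second zpool iostat interval makes any no-write stalls easy to spot (the pool name tank is a placeholder):

  # report read/write ops and bandwidth for the pool every second;
  # stalls show up as intervals where the write columns drop to zero
  zpool iostat tank 1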

Re: [zfs-discuss] Re: ZFS stalling problem

2007-03-06 Thread Roch - PAE
Jesse, You can change txg_time with mdb: echo "txg_time/W0t1" | mdb -kw
-r
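For context, the W0t1 syntax writes the decimal value 1 (0t marks decimal) into the 32-bit variable; the current setting can be checked first with a read-only invocation:

  # display the current value of txg_time as a decimal (read-only)
  echo "txg_time/D" | mdb -k
  # write decimal 1 into txg_time on the running kernel (-w enables writes)
  echo "txg_time/W0t1" | mdb -kw

The change patches the running kernel only and does not persist across a reboot.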

[zfs-discuss] Re: ZFS stalling problem

2007-03-06 Thread Jesse DeFer
> [EMAIL PROTECTED] wrote on 03/05/2007 03:56:28 AM: > one question, is there a way to stop the default txg push behaviour (push at a regular timestep, default is 5 sec) but instead push them "on the fly"... I would imagine this is better in the case of an application doing bi