> [EMAIL PROTECTED] wrote on
> 03/05/2007 03:56:28 AM:
> 
> > one question,
> > is there a way to stop the default txg push behaviour (push at a
> > regular timestep -- default is 5 sec) and instead push "on the
> > fly"?  I would imagine this is better in the case of an application
> > doing big sequential writes (video streaming...)
> >
> > s.
> 
> 
> I do not believe you would want to do that under any workload -- txgs
> allow for optimized writes.  I am wondering if this stall behavior
> (is it really stalling, or just a visual stat issue) is more related
> to txg maxsize (calculated from memory/arc size) than to txg_time.
> Adjusting txg_time may cloud the real issue if it is due to a
> bottleneck while evacing a txg, or if the txg maxsize is
> miscalculated so that people hit a state where a txg is _almost_ at
> maxsize in 5 seconds (the txg_time default) and blocks the next txg
> while evacing -- in which case the core issue is the txg evac /
> maxsize.
> 
> Any thoughts?
> 
> -Wade
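Wade's fill-or-timeout hypothesis is easy to model: if the write rate nearly fills the txg maxsize within txg_time, every push lands just before the timer as a full txg, and the next txg can block on the evac. A toy sketch of that trigger logic -- not the actual ZFS code; TXG_MAXSIZE and the write rates below are made-up numbers for illustration:

```python
# Toy model of the txg trigger discussed in this thread -- NOT real
# ZFS code.  A txg is pushed when it fills (TXG_MAXSIZE) or when
# txg_time expires, whichever comes first.

TXG_TIME = 5.0       # seconds, the default push interval
TXG_MAXSIZE = 256    # MB of dirty data forcing an early push (hypothetical)

def txg_pushes(write_rate_mb_s, duration_s):
    """Return (push_time_s, dirty_mb) for each txg push in the interval."""
    pushes = []
    t = 0.0
    while t < duration_s:
        # Time for this txg to fill at the given write rate.
        t_full = TXG_MAXSIZE / write_rate_mb_s if write_rate_mb_s else float("inf")
        dt = min(TXG_TIME, t_full)   # whichever trigger fires first
        t += dt
        pushes.append((round(t, 2),
                       round(min(TXG_MAXSIZE, write_rate_mb_s * dt), 1)))
    return pushes

# At ~55 MB/s (roughly the rates in the iostat below), a 256 MB txg
# fills in ~4.65 s -- just under the 5 s timer, the "almost hitting
# maxsize in 5 seconds" case Wade describes.
print(txg_pushes(55, 15))   # size trigger fires first, every ~4.65 s
print(txg_pushes(10, 12))   # slow writer: timer fires first, every 5 s
```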

Wall time for my two tests is 24s for UFS and 42s for ZFS, so it doesn't appear 
to be a stat visualization problem.  I am currently attempting to change 
txg_size, but am having trouble setting up a build environment.
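(If the build environment stays troublesome: on a live kernel these tunables can usually be poked with mdb instead of rebuilding. A sketch only -- the variable name txg_time comes from this thread, but verify the symbol and its width on your build before writing to it:)

```shell
# Inspect the current value (32-bit decimal):
echo 'txg_time/D' | mdb -k

# Set txg_time to 1 second on the running kernel (requires root;
# -w enables writes, 0t1 is decimal 1):
echo 'txg_time/W 0t1' | mdb -kw
```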

Jesse

> >
> > On 3/5/07, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> > > Jesse,
> > >
> > > This isn't a stall -- it's just the natural rhythm of pushing out
> > > transaction groups.  ZFS collects work (transactions) until
> > > either the transaction group is full (measured in terms of how
> > > much memory the system has), or five seconds elapse -- whichever
> > > comes first.
> > >
> > > Your data would seem to suggest that the read side isn't
> > > delivering data as fast as ZFS can write it.  However, it's
> > > possible that there's some sort of 'breathing' effect that's
> > > hurting performance.  One simple experiment you could try: patch
> > > txg_time to 1.  That will cause ZFS to push transaction groups
> > > every second instead of the default of every 5 seconds.  If this
> > > helps (or if it doesn't), please let us know.
> > >
> > > Thanks,
> > >
> > > Jeff
> > >
> > > Jesse DeFer wrote:
> > > > Hello,
> > > >
> > > > I am having problems with ZFS stalling when writing; any help
> > > > in troubleshooting would be appreciated.  Every 5 seconds or so
> > > > the write bandwidth drops to zero, then picks up a few seconds
> > > > later (see the zpool iostat at the bottom of this message).  I
> > > > am running SXDE, snv_55b.
> > > >
> > > > My test consists of copying a 1 GB file (with cp) between two
> > > > drives, one 80GB PATA, one 500GB SATA.  The first drive is the
> > > > system drive (UFS), the second is for data.  I have configured
> > > > the data drive with UFS and it does not exhibit the stalling
> > > > problem, and it runs in almost half the time.  I have tried
> > > > many different ZFS settings as well: atime=off,
> > > > compression=off, checksums=off, zil_disable=1, all to no
> > > > effect.  CPU jumps to about 25% system time during the stalls,
> > > > and hovers around 5% when data is being transferred.
> > > >
> > > > # zpool iostat 1
> > > >                capacity     operations    bandwidth
> > > > pool         used  avail   read  write   read  write
> > > > ----------  -----  -----  -----  -----  -----  -----
> > > > tank         183M   464G      0     17  1.12K  1.93M
> > > > tank         183M   464G      0    457      0  57.2M
> > > > tank         183M   464G      0    445      0  55.7M
> > > > tank         183M   464G      0    405      0  50.7M
> > > > tank         366M   464G      0    226      0  4.97M
> > > > tank         366M   464G      0      0      0      0
> > > > tank         366M   464G      0      0      0      0
> > > > tank         366M   464G      0      0      0      0
> > > > tank         366M   464G      0    200      0  25.0M
> > > > tank         366M   464G      0    431      0  54.0M
> > > > tank         366M   464G      0    445      0  55.7M
> > > > tank         366M   464G      0    423      0  53.0M
> > > > tank         574M   463G      0    270      0  18.1M
> > > > tank         574M   463G      0      0      0      0
> > > > tank         574M   463G      0      0      0      0
> > > > tank         574M   463G      0      0      0      0
> > > > tank         574M   463G      0    164      0  20.5M
> > > > tank         574M   463G      0    504      0  63.1M
> > > > tank         574M   463G      0    405      0  50.7M
> > > > tank         753M   463G      0    404      0  42.6M
> > > > tank         753M   463G      0      0      0      0
> > > > tank         753M   463G      0      0      0      0
> > > > tank         753M   463G      0      0      0      0
> > > > tank         753M   463G      0    343      0  42.9M
> > > > tank         753M   463G      0    476      0  59.5M
> > > > tank         753M   463G      0    465      0  50.4M
> > > > tank         907M   463G      0     68      0   390K
> > > > tank         907M   463G      0      0      0      0
> > > > tank         907M   463G      0     11      0  1.40M
> > > > tank         907M   463G      0    451      0  56.4M
> > > > tank         907M   463G      0    492      0  61.5M
> > > > tank        1.01G   463G      0    139      0  7.94M
> > > > tank        1.01G   463G      0      0      0      0
> > > >
> > > > Thanks,
> > > > Jesse DeFer
> > > >
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
