On 2/15/08, Roch Bourbonnais <[EMAIL PROTECTED]> wrote:
>
>  On 15 Feb 08, at 11:38, Philip Beevers wrote:
>
[...]
>  > Obviously this isn't good behaviour, but it's particularly unfortunate
>  > given that this checkpoint is stuff that I don't want to retain in any
>  > kind of cache anyway - in fact, preferably I wouldn't pollute the ARC
>  > with it in the first place. But it seems directio(3C) doesn't work
>  > with
>  > ZFS (unsurprisingly as I guess this is implemented in segmap), and
>  > madvise(..., MADV_DONTNEED) doesn't drop data from the ARC (again, I
>  > guess, as it's working on segmap/segvn).
>  >
>  > Of course, limiting the ARC size to something fairly small makes it
>  > behave much better. But this isn't really the answer.
>  >
>  > I also tried using O_DSYNC, which stops the pathological behaviour but
>  > makes things pretty slow - I only get a maximum of about 20MBytes/sec,
>  > which is obviously much less than the hardware can sustain.
>  >
>  > It sounds like we could do with different write throttling behaviour
>  > to
>  > head this sort of thing off. Of course, the ideal would be to have
>  > some
>  > way of telling ZFS not to bother keeping pages in the ARC.
>  >
>  > The latter appears to be bug 6429855. But the underlying behaviour
>  > doesn't really seem desirable; are there plans afoot to do any work on
>  > ZFS write throttling to address this kind of thing?
>  >
>
>
> Throttling is being addressed.
>
>         http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6429205
>
>
>  BTW, the new code will adjust write speed to disk speed very quickly.
>  You will not see those ultra-fast initial checkpoints. Is this a
>  concern?
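
For reference, a rough, untested sketch of the three workarounds Philip
describes above (directio(3C), madvise(MADV_DONTNEED) and O_DSYNC), as I
understand them. The function names and the checkpoint buffer are made up
for illustration, error handling is abbreviated, and as noted above only
the O_DSYNC path actually changes how ZFS behaves:

/*
 * Rough sketch only.  write_checkpoint() and its arguments are
 * hypothetical; error handling is abbreviated.
 */
#include <sys/types.h>
#include <sys/fcntl.h>		/* directio(3C), DIRECTIO_ON */
#include <sys/mman.h>		/* madvise(3C), MADV_DONTNEED */
#include <fcntl.h>		/* open(2), O_DSYNC */
#include <unistd.h>		/* write(2), close(2) */

static int
write_checkpoint(const char *path, const char *buf, size_t len)
{
	/*
	 * O_DSYNC makes each write(2) synchronous.  It stops the burst
	 * of dirty data building up in the ARC, but throughput drops
	 * well below what the disks can stream.
	 */
	int fd = open(path, O_WRONLY | O_CREAT | O_DSYNC, 0644);
	if (fd < 0)
		return (-1);

	/*
	 * directio(3C) asks for unbuffered I/O.  It works on UFS but is
	 * effectively a no-op on ZFS, so the checkpoint data still lands
	 * in the ARC.
	 */
	(void) directio(fd, DIRECTIO_ON);

	ssize_t written = write(fd, buf, len);
	(void) close(fd);
	return (written == (ssize_t)len ? 0 : -1);
}

/*
 * For a checkpoint written through a mapping, madvise(3C) with
 * MADV_DONTNEED is the analogous "don't keep this cached" hint -- but
 * it operates on segmap/segvn pages and does not evict anything from
 * the ARC.
 */
static void
checkpoint_drop_hint(caddr_t addr, size_t len)
{
	(void) madvise(addr, len, MADV_DONTNEED);
}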

I'll wait for more details on how you address this.
Maybe a blog post, like this one:
http://blogs.technet.com/markrussinovich/archive/2008/02/04/2826167.aspx

"Inside Vista SP1 File Copy Improvements" :-

"One of the biggest problems with the engine's implementation is
that for copies involving lots of data, the Cache Manager
write-behind thread on the target system often can't keep up with
the rate at which data is written and cached in memory.
That causes the data to fill up memory, possibly forcing other
useful code and data out, and eventually, the target system's
memory to become a tunnel through which all the copied data
flows at a rate limited by the disk."

Sound familiar? ;-)

Tao
