Didier Kryn <k...@in2p3.fr> wrote:

>    Down to zero?

Depends on what the system is doing!
I've just checked several of my systems: one showed 12k when I logged in and 
dropped to 0. OK, that's a router, so it doesn't do much disk I/O - just a bit 
of logging.
Another (my mail server, amongst other things) showed 312k when I logged in, 
and while I've been watching it has been as low as 16k and as high as 920k. 
Oh, and just before hitting send, I saw this one down to 0 as well.

Even my MythTV server (which is currently recording and commflagging two 
programs) is varying between 1/2M and 9M dirty - oh, it's just dropped to 136k 
dirty. OK, this is a slightly special case because MythTV fsyncs the recording 
streams about once/second - but the commflagging and database updates that go 
on constantly aren't fsynced.
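
As an aside, that once-a-second fsync pattern looks roughly like this in C. 
This is just a sketch of the technique (hypothetical file name, not MythTV's 
actual code): buffered writes land in the cache, and a periodic fsync() bounds 
how much dirty data can accumulate for the file.

    /* Sketch: buffered writes with an fsync() roughly once per second,
     * so dirty data for this file never builds up for long.
     * Hypothetical output file - not MythTV's actual code. */
    #include <fcntl.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("stream.ts", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;

        char buf[188 * 64];               /* a chunk of stream data */
        memset(buf, 0, sizeof buf);
        time_t last = time(NULL);

        for (int i = 0; i < 10000; i++) {
            if (write(fd, buf, sizeof buf) < 0)
                break;
            time_t now = time(NULL);
            if (now != last) {            /* about once per second */
                fsync(fd);
                last = now;
            }
        }
        fsync(fd);
        close(fd);
        return 0;
    }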

So yes, observation confirms the theory that if there is a period of no writes 
then the dirty pages will get written out to disk.
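
For anyone who wants to watch this themselves: the obvious place to look is 
the "Dirty:" line in /proc/meminfo. A minimal C sketch (assuming a Linux 
/proc, error handling kept to a minimum) that polls it once a second:

    /* Sketch: print the "Dirty:" line from /proc/meminfo once a second.
     * Interrupt with Ctrl-C. Assumes a Linux /proc filesystem. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];
        for (;;) {
            FILE *f = fopen("/proc/meminfo", "r");
            if (!f)
                return 1;
            while (fgets(line, sizeof line, f))
                if (strncmp(line, "Dirty:", 6) == 0)
                    fputs(line, stdout);
            fclose(f);
            sleep(1);
        }
    }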

>    Who "have to wait" ? Apps don't have to: they get the data from cache and 
> write to cache. Maybe the disk-write policy depends on the IO scheduler as 
> the read policy does, but this layer is completely isolated from the 
> applications.

Actually, apps will wait if the underlying system simply "doesn't return" from 
a read or write call for a while. A system I used to administer had one 
particular process (an inefficient reporting tool) which, when run, would push 
the system to 99% to 100% I/O wait - and simultaneously cause our phones to 
ring with user complaints as their processes effectively stopped dead.

If you rely on new writes to "push out" old dirty data, then there will come a 
time when the underlying system makes the application wait while it frees up 
some cache space - in effect, write performance becomes near enough identical 
to having no cache at all. One of the points of combining a write cache with a 
process that flushes it out is to keep some "ready and waiting" space for 
intermittent writes. That way, many workloads never have to wait for the disk, 
because their writes go straight into cache.
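
You can demonstrate that throttling with a few lines of C: time each write() 
while dirtying data faster than the disk can absorb it. While there is cache 
space, writes return almost instantly; once the kernel's dirty limits 
(vm.dirty_ratio and friends) are reached, some calls stall for as long as it 
takes to flush pages out. A rough sketch, using a hypothetical scratch file - 
run it somewhere you don't mind writing a few GB:

    /* Sketch: time individual 1MB write() calls. Early calls complete
     * in cache; once the dirty thresholds are hit, some block on the
     * disk and the per-call latency jumps. Hypothetical scratch file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("scratch.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;

        static char buf[1 << 20];          /* 1 MB per write */
        memset(buf, 0xab, sizeof buf);

        for (int i = 0; i < 4096; i++) {   /* up to 4 GB dirtied */
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            if (write(fd, buf, sizeof buf) < 0)
                break;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                      + (t1.tv_nsec - t0.tv_nsec) / 1e6;
            if (ms > 10.0)                 /* report only the stalls */
                printf("write %d took %.1f ms\n", i, ms);
        }
        close(fd);
        unlink("scratch.dat");
        return 0;
    }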

>    Data was lost and filesystems *were* corrupted, at every such crash, 
> until the advent of journalled filesystems. I started installing Reiserfs 
> many years ago to deal with this crash problem.

Indeed, but the more dirty data there is, the higher the risk and the bigger 
the effect - there's a difference between losing a few kB and losing hundreds 
of MB of data. You also have to consider the effect of optimisation techniques 
(whether in the OS, the disk driver, or the drive itself) re-ordering writes 
to maximise performance.
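
This is also why applications that really care about a write surviving a 
crash can't leave it sitting in the cache: they have to fsync() the file 
themselves and, if the file is new, fsync() the containing directory as well, 
so the name is durable too. A minimal sketch (hypothetical paths and helper 
name):

    /* Sketch: write data and make it crash-durable - fsync() the file,
     * then fsync() the directory so the new name survives too.
     * Hypothetical helper and paths. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    int save_durably(const char *path, const char *dirpath,
                     const void *data, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        close(fd);

        int dfd = open(dirpath, O_RDONLY | O_DIRECTORY);
        if (dfd < 0)
            return -1;
        int rc = fsync(dfd);               /* make the entry durable */
        close(dfd);
        return rc;
    }

    int main(void)
    {
        const char msg[] = "important data\n";
        return save_durably("data.txt", ".", msg, sizeof msg - 1) ? 1 : 0;
    }
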
Back to the original subject: once upon a time it was simply a case of "wait 
till the light goes out and then eject the floppy", because the systems didn't 
have a write cache - I'm thinking of "desktop" systems like Mac OS, DOS, early 
Windows, etc. Similarly, the effects of a crash were often minimal because 
updates were written straight to disk - I still know people who think nothing 
of pulling the plug to shut down a system, simply because "we've always done 
it that way".