Date: Tue, 14 Jun 2011 16:03:55 +0000 (UTC)
From: Eduardo Horvath <e...@netbsd.org>
Message-ID: <pine.neb.4.64.1106141556120.22...@mail.netbsd.org>
  | I'd suggest trying to keep an incoming list that's sorted, and an
  | operating list that's being processed and periodically flush the incoming
  | list to the operating list.  That way you can minimize the expensive
  | seeks and keep the code complexity down.
  |
  | Of course I haven't tested this so it may not work so well in practice.

Sorry, I'm a long way behind in my list reading at the minute, so this
reply is to a discussion that is a couple of months old...

That kind of scheme was used in a University of Sydney I/O scheduler
for 6th edition unix back in the 70's (done by Tim Long I think, but
I'm not certain of that any more ... poor memory).

It works very well, and the "periodically flush" is easily done when
the operating list empties.  That is, there are two sets (lists, trees,
it doesn't matter for this) of requests: one the low-level driver is
working on emptying, the other the upper-level parts of the driver are
filling.  When the first empties, the roles are switched - the upper
level starts refilling the now-empty set, and the lower part starts
clearing everything that arrived since the last switch.

If the I/O rate is low, this achieves almost nothing, but then nothing
else does either, and no-one really cares.  If the I/O rate is high,
this can allow good sort performance (back then, with SMD drives, it
was possible to plan the I/O knowing just how the drive was rotating
and what sector would be available next...) while preventing starvation
(though as the I/O load goes up, processes doing little I/O may get
degraded performance, at least they won't stop completely).

Further, the implementation is trivial, and it also helps avoid lock
contention between the upper and lower parts of the driver - they very
rarely want to access the same set of buffers (or whatever bufs get
replaced by, if that ever happens).
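
In outline it is something like the following.  This is just a sketch,
in userland C with a pthread mutex standing in for whatever locking the
kernel really uses; all the names (ioq, ioq_put(), ioq_get(), r_blkno)
are invented here, none of this is the actual Sydney code.

    #include <stdlib.h>
    #include <pthread.h>

    struct req {
            struct req      *r_next;
            long             r_blkno;       /* sort key: block number */
    };

    struct ioq {
            pthread_mutex_t  q_lock;        /* protects q_incoming only */
            struct req      *q_incoming;    /* upper half fills, kept sorted */
            struct req      *q_operating;   /* lower half drains, unlocked */
    };

    void
    ioq_init(struct ioq *q)
    {
            pthread_mutex_init(&q->q_lock, NULL);
            q->q_incoming = q->q_operating = NULL;
    }

    /* Upper half: add a request to the incoming set, sorted by block. */
    void
    ioq_put(struct ioq *q, struct req *r)
    {
            struct req **pp;

            pthread_mutex_lock(&q->q_lock);
            for (pp = &q->q_incoming; *pp != NULL; pp = &(*pp)->r_next)
                    if ((*pp)->r_blkno > r->r_blkno)
                            break;
            r->r_next = *pp;
            *pp = r;
            pthread_mutex_unlock(&q->q_lock);
    }

    /*
     * Lower half: fetch the next request to service.  The lock is
     * taken only when the operating set runs dry and the roles are
     * switched, which is why the two halves so rarely contend.
     */
    struct req *
    ioq_get(struct ioq *q)
    {
            struct req *r;

            if (q->q_operating == NULL) {
                    pthread_mutex_lock(&q->q_lock);
                    q->q_operating = q->q_incoming; /* switch the sets */
                    q->q_incoming = NULL;
                    pthread_mutex_unlock(&q->q_lock);
            }
            if ((r = q->q_operating) != NULL)
                    q->q_operating = r->r_next;
            return r;       /* NULL means both sets were empty */
    }

This assumes a single lower-half consumer calling ioq_get(); with more
than one, the operating set would need its own protection, and the
starvation guarantee gets murkier.

kre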