Matthew Dillon wrote:
> Your idea of 'sequential' access cache restriction only
> works if there is just one process doing the accessing.
Not necessarily. I suspect that there is a strong tendency to access particular files in particular ways. E.g., in your example of a download server, those files are always read sequentially. You can make similar assertions about a lot of files: manpages, gzip files, C source code files, etc., are "always" read sequentially.

If a file's access history were stored as a "hint" associated with the file, then it would be possible to make better up-front decisions about how to allocate cache space. The ideal would be to store such hints on disk (maybe as an extended attribute?), but it might also be useful to cache them in memory somewhere. That would allow the cache-management code to make much earlier decisions about how to handle a file. For example, if a process started to read a 10GB file that has historically been accessed sequentially, you could immediately decide to enable read-ahead for performance, but also mark those pages to be released as soon as they were read by the process.

FWIW, a web search for "randomized caching" yields some interesting reading. Apparently, there are a few randomized cache-management algorithms for which the mathematics work out reasonably well, despite Terry's protestations to the contrary. ;-) I haven't yet found any papers describing experiences with real implementations, though.

If only I had the time to spend poring over FreeBSD's cache-management code to see how these ideas might actually be implemented... <sigh>

Tim Kientzle

To Unsubscribe: send mail to [EMAIL PROTECTED] with "unsubscribe freebsd-hackers" in the body of the message