On Sun, 26 Jan 2003, Sean Hamilton wrote:
>
> In my case I have a webserver serving up a few dozen files of about 10 MB
> each. While yes it is true that I could purchase more memory, and I could
> purchase more drives and stripe them, I am more interested in the fact that
> this server is constantly grinding away because it has found a weakness in
> the caching algorithm.
>
> After further thought, I propose something much simpler: when the kernel is
> hinted that access will be sequential, it should stop caching when there is
> little cache space available, instead of throwing away old blocks, or be
> much more hesitant to throw away old blocks. Consider that in almost all
> cases where access is sequential, as reading continues, the chances of the
> read being aborted increase: i.e., users downloading files, directory tree
> traversal, etc. Since the likelihood of reading the first byte is very
> high, the next one less so, the next less yet, and so on, it seems to make
> sense to tune the caching algorithm to accommodate this.
Your case seems to be a highly specific one, and would require very specific
tuning. And then one will be able to find some other "unwanted behaviour"
once you tune your system for a particular behaviour.

-ASR
http://www.crosswinds.net/~rajekar
MERYL STREEP is my obstetrician!
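For reference, here is a minimal userland sketch of the behaviour being
discussed, assuming a system that provides posix_fadvise(2). It is not the
kernel-side change Sean is proposing, and the file name and chunk size are
made up for illustration; the idea is simply to hint sequential access up
front and then ask the kernel to drop the pages already served, so one long
sequential read stops competing with the rest of the cache.

/*
 * Userland sketch: stream a large file sequentially and tell the VM not
 * to keep the pages we have already sent. Assumes posix_fadvise(2) is
 * implemented; "bigfile.iso" and the 1 MB chunk size are illustrative.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK	(1 << 20)	/* 1 MB per read */

int
main(void)
{
	static char buf[CHUNK];
	off_t done = 0;
	ssize_t n;
	int fd;

	fd = open("bigfile.iso", O_RDONLY);
	if (fd == -1) {
		perror("open");
		return (1);
	}

	/* Hint that we will read the file front to back. */
	(void)posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		/* ... write buf to the network here ... */

		done += n;
		/*
		 * Drop the pages we are finished with so this one large
		 * sequential read does not push everything else out of
		 * the cache.
		 */
		(void)posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
	}

	close(fd);
	return (0);
}

Whether the kernel honours these hints is implementation-defined; the point
is only to illustrate the "stop caching what you have already streamed past"
policy from the quoted proposal.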