For those thinking of playing with predictive caching
(likely an area of considerable student endeavour/interest
these days at both filesystem and "web" level):

---
Matthew Dillon:
 > So there is no 'perfect' caching algorithm.  There
 > are simply too many variables even in a well defined
 > environment for even the best system heuristics to
 > cover optimally.
---
David Schultz:
 > If that proves to be infeasible, I'm sure there are
 > ways to approximate the same thing.  The hard parts,
 > I think, would be teaching the VM system to use the
 > new information, and gathering statistics from which
 > you form your hints.

---

Right. It's easy if you know the complete future of
the total system state, which of course you never
will.  Someone interested in this might try to apply
the latest in machine learning techniques, classifiers,
etc., to the online problem. Variants of this are
receiving lots of attention in areas such as gene
sequence prediction. I dunno, but it seems like a lot
of the math ends up pretty similar to economics, and
we all know how well those models work. Kind of funny,
running an economic simulation in your kernel... but
it's actually becoming possible at some level, at least
for research systems on modern machines.  There was a
time when you would be fired for putting floating-point
in an OS.


----
 http://csl.cse.ucsc.edu/acme.shtml :

"Many cache replacement policies have been invented
and some perform better than others under certain
workload and network-topological conditions. It is
impossible and sub-optimal to manually choose cache
replacement policies for workloads and topologies that
are under continuous change. We use machine learning
algorithms to automatically select the best current
policy or mixtures of policies from a policy (a.k.a
expert) pool to provide an "adaptive caching" service."
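The expert-pool idea in that abstract can be sketched with a
multiplicative-weights update: run several replacement policies in
simulation, penalize each expert's weight whenever it would have
missed, and serve each request using whichever policy currently has
the highest weight. Below is a toy sketch, not ACME's actual
algorithm; the LRU/FIFO expert pool and the beta decay factor are
arbitrary choices for illustration:

```python
from collections import OrderedDict, deque

class LRUCache:
    """Least-recently-used replacement, simulated over a key trace."""
    def __init__(self, size):
        self.size = size
        self.d = OrderedDict()
    def access(self, key):
        # Return True on a hit; the key is always resident afterwards.
        hit = key in self.d
        if hit:
            self.d.move_to_end(key)        # refresh recency
        else:
            if len(self.d) >= self.size:
                self.d.popitem(last=False) # evict least recently used
            self.d[key] = None
        return hit

class FIFOCache:
    """First-in-first-out replacement, simulated over a key trace."""
    def __init__(self, size):
        self.size = size
        self.q = deque()
        self.resident = set()
    def access(self, key):
        hit = key in self.resident
        if not hit:
            if len(self.q) >= self.size:
                self.resident.discard(self.q.popleft())  # evict oldest
            self.q.append(key)
            self.resident.add(key)
        return hit

def adaptive(trace, size, beta=0.8):
    """Serve `trace` with whichever expert policy has the top weight.

    Every expert simulates the full trace; experts that miss have
    their weight multiplied by `beta` (multiplicative weights).
    Returns (hits scored by the adaptive policy, final weights).
    """
    experts = [LRUCache(size), FIFOCache(size)]
    weights = [1.0] * len(experts)
    hits = 0
    for key in trace:
        chosen = max(range(len(experts)), key=lambda i: weights[i])
        results = [e.access(key) for e in experts]
        if results[chosen]:
            hits += 1
        for i, hit in enumerate(results):
            if not hit:
                weights[i] *= beta      # penalize experts that missed
        total = sum(weights)
        weights = [w / total for w in weights]  # keep weights normalized
    return hits, weights
```

On a trace that favors recency, the LRU expert's weight climbs and the
adaptive policy tracks it; on a scan-heavy trace the weights shift the
other way. The real problem, as the quote notes, is doing this cheaply
and online in a kernel.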


 - bruce

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message
