On Thu, Sep 18, 2008 at 6:51 AM, erik quanstrom <[EMAIL PROTECTED]> wrote:
>> On Fri, Sep 12, 2008 at 7:47 PM, erik quanstrom <[EMAIL PROTECTED]> wrote:
>> also...
>
> the fundamental problem is that it becomes very difficult to
> implement fileservers which don't serve up regular files.
> you might make permanent changes to something stored on
> a disk with readahead.
My experience is that there are a few distinct scenarios here: dealing with synthetic file systems, dealing with regular files, and dealing with both. Latency can affect all three; my understanding is that Op was actually developed to deal with the latency of walking the deep hierarchies of the Octopus synthetic file systems.

There are likely a number of optimizations possible when dealing with regular files, but we currently give few (if any) hints in the protocol as to what kind of optimizations are valid on a particular fid. With things like batched walk/open it's even more difficult, since you may cross mount points which invalidate the kind of optimization you think you can do. Of course, if these were handled in a single protocol, one approach would be to return an error when an invalid optimization is attempted, allowing clients to fall back to a safer set of operations. I do tend to agree with Uriel that extensions such as Op may be better done in a complementary op-code space to make this sort of negotiation possible. Unfortunately that can add quite a bit of complexity to both clients and servers, so it's not clear to me that it's a definite win.

If you know you are dealing exclusively with regular files, I would suggest starting with something like cfs(4) and playing with different potential optimizations there: read-ahead, loose caches, directory caches, temporal caches, etc. Most of these techniques are things you'd never want to use with a synthetic file service, but they should provide a route to most of the optimizations you might want in a wide-area file system, particularly if you have exclusive access and aren't worried about coherency. If you are worried about coherency, you probably don't want to be doing any of these optimizations.
There have been some conversations about how to approach coherent caching, and I think some folks have started working on it, but nothing is available yet.

        -eric