On Thu, Apr 23, 2009 at 01:07:58PM +0800, sqweek wrote:
> Ken Thompson wrote:
> | Now for the question. 9P could probably be speeded up, for large
> | reads and writes, by issuing many smaller reads in parallel rather
> | than serially. Another try would be to allow the client of the
> | filesystem to issue asynchronous requests and at some point
> | synchronize. Because 9P is really implementing a filesystem, it
> | will be very hard to get any more parallelism with multiple outstanding
> | requests.
> 
>  I followed it up with a more focused question (awkwardly worded to
> fit within the draconian 250 character limit), but no response yet:
> "Hi Ken, thanks for your 9p/HTTP response. I guess my real question
> is: can we achieve such parallelism transparently, given that most
> code calls read() with 4k/8k blocks. The syscall implies
> synchronisation... do we need new primitives? h8 chr limit"

Not to beat a (potentially) dead horse (even further) to death, but if we
had some way of knowing that files were actually data (i.e. not ctl files;
cf. QTDECENT) we could do more prefetching in a proxy -- e.g. cfs could be
modified to read entire files into its cache (presumably it would have to
Tstat afterwards to ensure that it has a stable snapshot of the file).
Adding cache journal callbacks would further allow us to avoid the RTT of
Tstat on open and would bring us a bit closer to a fully coherent store.
Achieving actual coherency could be an interesting future direction (IMHO).

--nwf;
