On Thu, Sep 18, 2008 at 8:34 AM, sqweek <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 18, 2008 at 7:51 PM, erik quanstrom <[EMAIL PROTECTED]> wrote:
>
>  How can multiple threads possibly help with latency caused by
> operations that are forced to be serial by the protocol? In fact, how can
> multithreading help with latency in general? It will help
> *throughput*, certainly, but unless you have a thread dedicated to
> predicting the future, I can't see where the benefit will come from.
>

I'm not saying I know it would be a good idea, but you could implement
speculative-9P, which issued the equivalent of the batched requests
without waiting for the responses to the prior requests.  Since you
are on an in-order pipe, the file server would get the requests in
order, and if they all succeed you've reduced some latency.
Of course any failure will cause issues, potentially bad ones (a
bunch of walks followed by a create would end badly if one of the
walks failed and the create put the file someplace undesirable).
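To make that concrete, here's a rough sketch in Go of the pipelining
pattern: write the whole batch before reading any replies, then drain
the replies and treat the batch as failed if any of them came back as
an error.  The Request/Reply types, the conn interface and the
loopback transport here are hypothetical stand-ins, not a real 9P
library.

// Sketch of "speculative 9P": send every T-message down the in-order
// pipe before waiting for any R-message, then collect the replies.
package main

import "fmt"

type Request struct {
	Op  string // e.g. "Twalk", "Topen", "Tcreate"
	Tag uint16
}

type Reply struct {
	Tag uint16
	Err error // non-nil models an Rerror for that tag
}

// conn models an in-order transport: the server sees requests in the
// order they were written.
type conn interface {
	Send(Request) error
	Recv() (Reply, error)
}

// pipeline sends every request before waiting for any reply.  If any
// reply is an error, the caller has to treat the whole batch as
// failed and clean up, because the later requests were issued on the
// assumption that the earlier ones would succeed.
func pipeline(c conn, reqs []Request) error {
	for _, r := range reqs {
		if err := c.Send(r); err != nil {
			return err
		}
	}
	var firstErr error
	for range reqs {
		rep, err := c.Recv()
		if err != nil {
			return err
		}
		if rep.Err != nil && firstErr == nil {
			// Keep draining so the connection stays in sync,
			// but remember the first failure.
			firstErr = fmt.Errorf("tag %d failed: %w", rep.Tag, rep.Err)
		}
	}
	return firstErr
}

// loopback is a toy conn that answers every request with success.
type loopback struct{ q []Request }

func (l *loopback) Send(r Request) error { l.q = append(l.q, r); return nil }
func (l *loopback) Recv() (Reply, error) {
	r := l.q[0]
	l.q = l.q[1:]
	return Reply{Tag: r.Tag}, nil
}

func main() {
	reqs := []Request{{"Twalk", 1}, {"Twalk", 2}, {"Tcreate", 3}}
	fmt.Println(pipeline(&loopback{}, reqs))
}

The point is that the round trips collapse into one, at the cost of
having to throw away the whole batch when anything in it fails.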

I think this is tending towards what was discussed at the first IWP9
-- IIRC the idea was to use the same tid for all the operations
(instead of a new tid per operation as would normally be done in
multi-threaded 9P).  An error or short walk on any of the ops
associated with that tid would abort the rest of the transaction.  I
can't remember if we needed some op to represent transaction
boundaries in order to recover failed tids -- anyone's memory better
than mine?
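Again just a sketch of what I remember, not what was actually
specified: continuing with the hypothetical Request/Reply types from
the sketch above, the server side of the shared-tid idea might look
something like this -- once one op under the tid fails, the remaining
ops are answered with an abort instead of being executed.  The exec
callback stands in for whatever actually performs the operation.

// handleTxn processes ops that share one tid.  After the first
// failure (including a short walk), the rest of the transaction is
// not executed; those ops are answered with an abort error.
func handleTxn(tid uint16, ops []Request, exec func(Request) error) []Reply {
	replies := make([]Reply, 0, len(ops))
	var failed error
	for _, op := range ops {
		if failed != nil {
			// An earlier op under this tid already failed; report
			// the abort rather than executing this op.
			replies = append(replies, Reply{Tag: tid, Err: fmt.Errorf("aborted: %w", failed)})
			continue
		}
		if err := exec(op); err != nil {
			failed = err
			replies = append(replies, Reply{Tag: tid, Err: err})
			continue
		}
		replies = append(replies, Reply{Tag: tid})
	}
	return replies
}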

The problem again here is the added complexity for the client and
server, which have to track more state associated with these
transactions....

               -eric
