If the mechanism can't be applied to every problem (even the wrong ones), then it still doesn't solve the issue we started with in the first place: file I/O over high-latency links.
2010/10/15 <cinap_len...@gmx.de>:
> if it doesnt help, you apply the mechanism to the wrong problem :) or
> the mechanism is not as useful as i thought... thanks ron for your
> comment! i was just hoping to get some responses from the osprey
> dudes as they had it on their slides :)
>
> --
> cinap
>
>
> ---------- Forwarded message ----------
> From: Latchesar Ionkov <lu...@ionkov.net>
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
> Date: Fri, 15 Oct 2010 10:43:34 -0600
> Subject: Re: [9fans] πp
> And how is fork going to help when the forked processes need to
> exchange the data over the same high-latency link?
>
> 2010/10/15 <cinap_len...@gmx.de>:
>> fork!
>>
>> --
>> cinap
>>
>>
>> ---------- Forwarded message ----------
>> From: Latchesar Ionkov <lu...@ionkov.net>
>> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
>> Date: Fri, 15 Oct 2010 10:31:47 -0600
>> Subject: Re: [9fans] πp
>> What if the data your process needs is located on more than one
>> server? Play ping-pong?
>>
>> Thanks,
>> Lucho
>>
>> 2010/10/15 <cinap_len...@gmx.de>:
>>> i wonder if making 9p work better over high latency connections is
>>> even the right answer to the problem. the real problem is that the
>>> data your program wants to work on is miles away from you and
>>> transferring it all will suck. would it not be cool to have a way to
>>> teleport/migrate your process to a cpu server close to the data?
>>>
>>> i know, this is a crazy blue sky idea that has lots of problems of its
>>> own... but it popped up again when i read the "bring the computation
>>> to the data" point from the osprey talk.
>>>
>>> --
>>> cinap
>>>
>>>
>>> ---------- Forwarded message ----------
>>> From: Francisco J Ballesteros <n...@lsub.org>
>>> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
>>> Date: Fri, 15 Oct 2010 16:59:02 +0200
>>> Subject: Re: [9fans] πp
>>> It's not just that you can stream requests or not.
>>> If you have caches in the path to the server, you'd like to batch
>>> together (or stream, or whatever you'd like to call it) requests, so
>>> that if a client is reading a file and a single rpc suffices, the
>>> cache, in the worst case, knows that it has to issue a single rpc to
>>> the server.
>>>
>>> Somehow, you need to group requests to retain the idea that a bunch of
>>> requests have some meaning as a whole.
>>>
>>> 2010/10/15 David Leimbach <leim...@gmail.com>:
>>>>
>>>> 2010/10/14 Latchesar Ionkov <lu...@ionkov.net>
>>>>>
>>>>> It can't be dealt with in the current protocol. The protocol doesn't
>>>>> guarantee that Topen will be executed once Twalk is done, so you can
>>>>> get Rerrors even if Twalk succeeds.
>>>>>
>>>> It can be dealt with if the scheduling of the pipeline is done
>>>> properly. You just have to eliminate the dependencies.
>>>> I can imagine having a few concurrent queues of "requests" in a
>>>> client that contain items with dependencies, and running those queues
>>>> in a pipelined way against a 9P server.
>>>>
>>>>> 2010/10/13 Venkatesh Srinivas <m...@acm.jhu.edu>:
>>>>> >> 2) you can't pipeline requests if the result of one request
>>>>> >> depends on the result of a previous one. for instance: walk to a
>>>>> >> file, open it, read it, close it. if the first operation fails,
>>>>> >> then subsequent operations will be invalid.
>>>>> >
>>>>> > Given careful allocation of FIDs by a client, that can be dealt
>>>>> > with - operations on an invalid FID just get Rerrors.
>>>>> >
>>>>> > -- vs
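The pipelining idea in the quoted exchange (send the dependent Twalk/Topen/Tread chain as one batch, and let a failed fid turn every later operation on it into an Rerror) can be sketched roughly as follows. This is a toy model in Python, not the real 9P wire format: the request tuples, the `walk_fails` flag, and the `serve` function are all invented for illustration.

```python
# Toy model of pipelined, dependent requests against a server that
# invalidates a fid when its walk fails. Hypothetical types only;
# not the actual 9P message encoding.

def serve(reqs, walk_fails):
    """Process a pipelined batch of (tag, op, fid) requests in order.

    Returns a list of (tag, err) pairs; err == "" means success.
    Once a fid's Twalk fails, the fid is marked invalid and every
    later request on it gets an Rerror instead of undefined behaviour.
    """
    invalid = set()
    resps = []
    for tag, op, fid in reqs:
        if fid in invalid:
            resps.append((tag, "fid invalid"))
        elif op == "Twalk" and walk_fails:
            invalid.add(fid)
            resps.append((tag, "walk failed"))
        else:
            resps.append((tag, ""))
    return resps

# The client sends the whole dependent chain in one batch, paying a
# single round trip over the high-latency link instead of one per step.
batch = [(1, "Twalk", 7), (2, "Topen", 7), (3, "Tread", 7)]
print(serve(batch, walk_fails=True))
```

With this convention the client risks nothing by pipelining: if the walk fails it simply collects Rerrors for the rest of the chain in the same round trip, which is the point made above about careful FID allocation by the client.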