I was using a slightly weird configuration, partly because it's the
hardware I had available, and partly because I thought it might better
represent a typical Internet connection. On one side of the Linux
bridge was a 10 Mbit hub; on the other side, a 100 Mbit switch.

The average latency was 500 us RTT.

Results may vary greatly when you bump up to gbit, but since the
Internet isn't gbit (and I can't afford gbit), I used much slower
hardware.

The testing setup and the results are all in the document; I
generalized a bit in my earlier email: streaming 9P did have an edge
over regular 9P at 500 us RTT, but only a very slight one. I think at
that point the bottleneck was bandwidth rather than latency, and 9P
was still able to get RPCs over as quickly as streams.
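
To make the bandwidth-versus-latency point concrete, here is a rough
back-of-envelope model. It's only an illustration with assumed
numbers, not the measurements from the document: it assumes 8 KB read
payloads, one outstanding RPC at a time for plain 9P, and it ignores
header and TCP overhead.

# Sequential 9P pays one RTT per message; streaming pays roughly one
# RTT total. Both pay the same serialization time on the link.
def transfer_time(size_bytes, rtt_s, link_bps, msg_bytes=8192, streaming=False):
    msgs = size_bytes / msg_bytes
    serialization = size_bytes * 8 / link_bps
    return serialization + (rtt_s if streaming else msgs * rtt_s)

mb = 1 << 20
# 10 Mbit link at 500 us RTT: bandwidth dominates, streaming barely wins.
print(transfer_time(mb, 0.0005, 10e6))                  # ~0.90 s
print(transfer_time(mb, 0.0005, 10e6, streaming=True))  # ~0.84 s
# Same link at 5 ms RTT: latency dominates, streaming wins clearly.
print(transfer_time(mb, 0.005, 10e6))                   # ~1.48 s
print(transfer_time(mb, 0.005, 10e6, streaming=True))   # ~0.84 s

At 500 us the two only differ by a few percent; push the RTT into the
milliseconds and the sequential RPCs fall behind, which is where
streaming starts to pay off.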


John

On Mon, Jan 10, 2011 at 10:48 AM, hiro <23h...@googlemail.com> wrote:
> What bandwidth? With a gbit link I could notice a difference, but that's
> probably the fault of the Linux v9fs modules I used (half usec RTT).
>
> On 1/10/11, Francisco J Ballesteros <n...@lsub.org> wrote:
>>>
>>> Right, my results were that you get pretty much exactly the same
>>> performance when you're working over a LAN whether you choose streams
>>> or regular 9P. Streaming only really starts to help when you're up
>>> into the multiple-millisecond RTT range.
>>
>> This is weird. Didn't read the thesis yet, sorry, but do you know the
>> reason?
>> I think that when I measured Op I found that even on LANs, using get
>> instead of multiple RPCs was measurable. Of course users would not
>> notice unless latency gets higher or many RPCs add their times.
>> thanks
>>
>>
>
>
