Re: [9fans] fs performance

2011-01-11 Thread hiro
Sorry, I wanted to say half a ms. I also see 100µs on another pc.

Re: [9fans] fs performance

2011-01-10 Thread John Floren
I was using a slightly weird configuration, partly because it's the hardware I had available, and partly because I thought it might more adequately represent a typical internet connection. On one side of the Linux bridge was a 10 Mbit hub, on the other side, a 100 Mbit switch. The average laten…

Re: [9fans] fs performance

2011-01-10 Thread erik quanstrom
On Mon Jan 10 13:50:09 EST 2011, 23h...@googlemail.com wrote: > What bandwidth? With a gbit I could notice a difference. But probably > the fault of the linux v9fs modules I used (half usec RTT). > could you perhaps have intended 0.5ms, not µs? here's mellanox bragging about 4µs latency for 10gb…

Re: [9fans] fs performance

2011-01-10 Thread hiro
What bandwidth? With a gbit I could notice a difference. But probably the fault of the linux v9fs modules I used (half usec RTT). On 1/10/11, Francisco J Ballesteros wrote: >> >> Right, my results were that you get pretty much exactly the same >> performance when you're working over a LAN whether…

Re: [9fans] fs performance

2011-01-10 Thread Francisco J Ballesteros
> > Right, my results were that you get pretty much exactly the same > performance when you're working over a LAN whether you choose streams > or regular 9P. Streaming only really starts to help when you're up > into the multiple-millisecond RTT range. This is weird. Didn't read the thesis yet, so…
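The multiple-millisecond threshold quoted in this exchange falls out of simple arithmetic: with a single 9P request in flight, a sequential reader gets at most one msize payload per round trip, so RTT rather than bandwidth caps throughput. A back-of-envelope sketch, where the 8 KiB payload and the RTT values are assumptions for illustration, not anyone's measurements:

```python
# Upper bound on stop-and-wait 9P sequential reads: at most one
# msize payload per round trip.  MSIZE and the RTTs are assumed.

def max_throughput(msize_bytes, rtt_ms):
    """Bytes per second with exactly one outstanding request."""
    return msize_bytes * 1000 / rtt_ms

MSIZE = 8192                          # assumed 8 KiB payload

lan = max_throughput(MSIZE, 0.5)      # 0.5 ms RTT: ~16 MB/s
wan = max_throughput(MSIZE, 10)       # 10 ms RTT:  ~0.8 MB/s
print(lan, wan)
```

At half a millisecond the bound (~16 MB/s) already exceeds a 10 Mbit link, which is consistent with streaming buying nothing on a LAN; at wide-area RTTs the bound collapses well below link bandwidth.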

Re: [9fans] fs performance

2011-01-10 Thread Charles Forsyth
I think it's fair to say that the IO path for fossil is > considerably slower than the IO path for kernel-based file systems in > Linux: slower as in multiples of 10, not multiples. There's a fair > amount of copying, allocation, and bouncing in and out of the kernel, for common applications you'd…

Re: [9fans] fs performance

2011-01-10 Thread David Leimbach
On Sunday, January 9, 2011, ron minnich wrote: > On Sun, Jan 9, 2011 at 1:38 PM, Bakul Shah wrote: > >> I didn't say plan9 "suffers". Merely that one has to look at >> other aspects as well (implying putting in Tstream may not >> make a huge difference). > > well, what we do know from one set of…

Re: [9fans] fs performance

2011-01-09 Thread erik quanstrom
> Peak Local file access bandwidth is typically 50 to 100 MBps > x number of disks; over the localnet it is about 80MBps. On > my internet connection I barely get 1MBps download (& 0.2MBps > upload) speeds. interesting observation: when i first set up the diskless fileserver at coraid, we had a mi…

Re: [9fans] fs performance

2011-01-09 Thread Bakul Shah
On Sun, 09 Jan 2011 22:58:22 GMT Charles Forsyth wrote: > it's curious that people are still worrying about "local" file systems > when so much of most people's data increasingly is miles > away on Google, S3, S3 via Drop Box, etc, which model is closer if anything to the original plan 9 mod…

Re: [9fans] fs performance

2011-01-09 Thread Charles Forsyth
>The way I move files to/from Dropbox and these other services is via >streams, btw :-) yes, and some streams are better than others, but i suspect (based on observed behaviour and wireshark) that there are non-trivial delays and thus latency visible within the stream. it isn't a nice stream of re…

Re: [9fans] fs performance

2011-01-09 Thread ron minnich
On Sun, Jan 9, 2011 at 2:58 PM, Charles Forsyth wrote: > it's curious that people are still worrying about "local" file systems > when so much of most people's data increasingly is miles > away on Google, S3, S3 via Drop Box, etc, which model is closer if anything to the original plan 9 model…

Re: [9fans] fs performance

2011-01-09 Thread Charles Forsyth
it's curious that people are still worrying about "local" file systems when so much of most people's data increasingly is miles away on Google, S3, S3 via Drop Box, etc, which model is closer if anything to the original plan 9 model of dedicated file servers than the unix/linux model of "the whole…

Re: [9fans] fs performance

2011-01-09 Thread erik quanstrom
> John did do some measurement of file system times via the trace device > we wrote. I think it's fair to say that the IO path for fossil is > considerably slower than the IO path for kernel-based file systems in > Linux: slower as in multiples of 10, not multiples. There's a fair > amount of copyi…

Re: [9fans] fs performance

2011-01-09 Thread erik quanstrom
> > i also think that your examples don't translate well into the > > plan 9 world. we trade performance for keeping ramfs out of > > the kernel, etc. (620mb/s on my much slower machine, btw.) > > This is for dd /dev/null? What do you get for > various block sizes? that's for dd -if /dev/zero -…
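The quoted question about block sizes can be explored with an ordinary user-space read loop. A rough sketch below times reads from /dev/zero at a few block sizes; it is Unix-style Python, so it will not reproduce the Plan 9 `dd -if /dev/zero` numbers, and the paths, sizes, and loop structure are illustrative assumptions:

```python
# Time sequential reads of /dev/zero at several block sizes,
# in the spirit of the list's dd test.  Results depend entirely
# on the machine; nothing here is a Plan 9 measurement.
import time

def read_throughput(path, bs, total):
    """Read `total` bytes from `path` in `bs`-byte chunks;
    return achieved bytes per second."""
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:   # unbuffered reads
        remaining = total
        while remaining > 0:
            chunk = f.read(min(bs, remaining))
            if not chunk:
                break
            remaining -= len(chunk)
    elapsed = time.perf_counter() - start
    return (total - remaining) / elapsed

if __name__ == "__main__":
    for bs in (512, 4096, 65536):
        rate = read_throughput("/dev/zero", bs, 64 * 1024 * 1024)
        print(f"bs={bs:6d}  {rate / 1e6:8.1f} MB/s")
```

Larger block sizes amortize the per-syscall cost, which is the same effect the thread is chasing at the 9P message level.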

Re: [9fans] fs performance

2011-01-09 Thread erik quanstrom
> - other local optimizations (does plan9 pay marshalling, > unmarshalling cost for node local communication?) not unless it hits the mount driver. since a user level fs is a 9p server, it is clear that io must go through the mnt driver; kernel fileservers or pipes need not. > - pushing per…

Re: [9fans] fs performance

2011-01-09 Thread ron minnich
On Sun, Jan 9, 2011 at 1:38 PM, Bakul Shah wrote: > I didn't say plan9 "suffers". Merely that one has to look at > other aspects as well (implying putting in Tstream may not > make a huge difference). well, what we do know from one set of measurements is that it makes a measurable difference whe…

Re: [9fans] fs performance

2011-01-09 Thread ron minnich
On Sun, Jan 9, 2011 at 12:47 PM, erik quanstrom wrote: > however, i think we could do even better by modifying devmnt > to keep more than 1 outstanding message per channel, as a mount > option.  each 9p connection can stream without the overhead of > separate connections. > > this is the strategy…
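The devmnt suggestion quoted here (more than one outstanding message per channel) can be modelled in a few lines: with a window of w requests in flight, n requests complete in roughly ceil(n/w) round trips instead of n. The window, request count, and RTT below are made-up illustrative numbers, not a model of the actual driver:

```python
# Toy model of pipelined 9P messages: with window w, n fixed-size
# requests over a link with round-trip time rtt take roughly
# ceil(n / w) * rtt instead of n * rtt (bandwidth limits ignored).
import math

def transfer_time(n_requests, window, rtt):
    """Approximate wall-clock seconds for n_requests."""
    return math.ceil(n_requests / window) * rtt

rtt = 0.010            # assumed 10 ms wide-area round trip
n = 1000               # e.g. 1000 Treads for an 8 MB file at 8 KiB msize

serial = transfer_time(n, 1, rtt)     # one message in flight: ~10 s
windowed = transfer_time(n, 16, rtt)  # sixteen in flight: ~0.63 s
print(serial, windowed)
```

The win scales with RTT, which again explains why the same change is invisible on a LAN and large over the wide area.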

Re: [9fans] fs performance

2011-01-09 Thread Bakul Shah
On Sun, 09 Jan 2011 16:14:21 EST erik quanstrom wrote: > > The point of mentioning FreeBSD numbers is to show what is > > possible. To really improve plan9 fs performance one would > > have to look at things like syscall overhead, number of data > > copies made, number of syscalls and context swi…

Re: [9fans] fs performance

2011-01-09 Thread Bakul Shah
On Sun, 09 Jan 2011 12:25:41 PST ron minnich wrote: > On Sun, Jan 9, 2011 at 11:54 AM, Bakul Shah wrote: > > None of these > > use any streaming (though there *is* readahead at the FS > > level). > > yes, all the systems that perform well do so via aggressive readahead > -- which, from one po…

Re: [9fans] fs performance

2011-01-09 Thread erik quanstrom
> The point of mentioning FreeBSD numbers is to show what is > possible. To really improve plan9 fs performance one would > have to look at things like syscall overhead, number of data > copies made, number of syscalls and context switches etc. and > tune each component. i don't see any evidence t…

Re: [9fans] fs performance

2011-01-09 Thread erik quanstrom
> If you think about it, a single 9p connection is a multiplexed stream > for managing file I/O requests. What john's work did is to create an > individual stream for each file. And, as Andrey's results and John's > results show, it can be a win. The existence of readahead supports the > idea that…

Re: [9fans] fs performance

2011-01-09 Thread ron minnich
On Sun, Jan 9, 2011 at 11:54 AM, Bakul Shah wrote: >None of these > use any streaming (though there *is* readahead at the FS > level). yes, all the systems that perform well do so via aggressive readahead -- which, from one point of view, is one way of creating a stream from a discrete set of req…
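The idea that readahead "creates a stream from a discrete set of requests" can be sketched as a toy cache: detect a sequential pattern and fetch the next block before it is asked for. Everything below (block size, cache policy, class name) is a hypothetical illustration, not how any real kernel or fs implements it:

```python
# Minimal readahead model: sequential access turns demand misses
# into prefetch hits, as if the server were streaming the file.
BLOCK = 8192  # assumed block size

class ReadaheadFile:
    def __init__(self, blocks):
        self.blocks = blocks      # backing store: list of blocks
        self.cache = {}           # block index -> data
        self.last = None          # last block index served

    def _fetch(self, i):
        if 0 <= i < len(self.blocks):
            self.cache[i] = self.blocks[i]

    def read_block(self, i):
        if i not in self.cache:
            self._fetch(i)        # demand fetch (a "miss")
        data = self.cache.get(i)
        # Sequential pattern detected: prefetch the next block so
        # the following read is already satisfied.
        if self.last is not None and i == self.last + 1:
            self._fetch(i + 1)
        self.last = i
        return data

f = ReadaheadFile([b"a" * BLOCK, b"b" * BLOCK, b"c" * BLOCK])
f.read_block(0)
f.read_block(1)        # sequential: block 2 is prefetched
assert 2 in f.cache    # a hit before it is ever requested
```

From this angle, John's per-file streams and aggressive readahead are two implementations of the same insight: keep the pipe full instead of waiting a round trip per block.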

Re: [9fans] fs performance

2011-01-09 Thread Bakul Shah
On Sun, 09 Jan 2011 09:29:04 PST ron minnich wrote: > > Those are interesting numbers. Actually, however, changing a program > to use the stream stuff is trivial. I would expect the streaming to be > a real loser in a site with 10GE but we can try it. As John has > pointed out the streaming only…

Re: [9fans] fs performance

2011-01-09 Thread erik quanstrom
On Sun Jan 9 12:41:37 EST 2011, rminn...@gmail.com wrote: > simple question: what's it take to set up a kenfs + coraid combo? Or > is there a howto somewhere on your site? I'd like to give this a go. since i've done this a number of times, it's getting easier. i've added some features to the fs a…

Re: [9fans] fs performance

2011-01-09 Thread John Floren
On Sun, Jan 9, 2011 at 9:29 AM, ron minnich wrote: [snipped] > As John has > pointed out the streaming only makes sense where the inherent network > latency is pretty high (10s of milliseconds), i.e. the wide area. > > ron > > Right, my results were that you get pretty much exactly the same perfo…

Re: [9fans] fs performance

2011-01-09 Thread ron minnich
simple question: what's it take to set up a kenfs + coraid combo? Or is there a howto somewhere on your site? I'd like to give this a go. Those are interesting numbers. Actually, however, changing a program to use the stream stuff is trivial. I would expect the streaming to be a real loser in a si…

[9fans] fs performance

2011-01-09 Thread erik quanstrom
the new auth server, which uses the fs as its root rather than a stand-alone fs, happens to be faster than our now-old cpu server, so i did a quick build test with a kernel including the massive-fw myricom driver. suspecting that latency kills even on 10gbe, i tried a second build with NPROC=24.