[9fans] critique of sockets API

2009-06-09 Thread Bhanu Nagendra Pisupati

Interesting read:
http://cacm.acm.org/magazines/2009/6/28495-whither-sockets/fulltext

If I am right, the filesystem-based networking interface offered by Plan 9 
has the three limitations discussed here:
* performance overhead: an app requesting data from a socket typically needs 
to perform 2 system calls (select/read or alt/read) 
* lack of a "kernel up-call API" that would allow the kernel to inform an 
app each time network data is available
* hard to implement "multihoming" with support for multiple network 
interfaces


Any thoughts/comments?

Thanks,
-Bhanu



Re: [9fans] critique of sockets API

2009-06-09 Thread Bhanu Nagendra Pisupati
First off, I really am a big fan of filesystem interfaces as used in Plan 
9 - after all, my PhD work was based on the model :)
My objective here is to debate and understand how the points made in the 
paper relate to the Plan 9 networking model.


* performance overhead: an app requesting data from a socket typically 
needs to perform 2 system calls (select/read or alt/read)


alt (which is not required) is not a system call.  only a read or write is
required.


Well, select() or alt may or may not be required, depending on whether 
you want your thread to block until the read waiting 
for data from the network completes. You may argue that since threads are 
"cheap" in Plan 9 you can afford to have a thread wait on the read 
operation, but that to me is a different question...
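For concreteness, here is a minimal sketch of the Plan 9 idiom under discussion, assuming a hypothetical TCP service reachable through a dial string given on the command line: a dedicated proc blocks in read(2) and forwards buffers over a channel, so the main thread can alt between network data and other events (a timer here) without select()-style system calls. The default port, buffer sizes and stack sizes are arbitrary.

#include <u.h>
#include <libc.h>
#include <thread.h>

enum { Bufsize = 8192 };

typedef struct Msg Msg;
struct Msg {
	long	n;
	char	buf[Bufsize];
};

Channel *datachan;	/* of Msg* */
Channel *tickchan;	/* of ulong */

/* runs in its own proc: blocks in read(2), forwards data on a channel */
void
readerproc(void *arg)
{
	int fd;
	Msg *m;

	fd = (int)(uintptr)arg;
	for(;;){
		m = malloc(sizeof *m);
		if(m == nil)
			sysfatal("malloc: %r");
		m->n = read(fd, m->buf, sizeof m->buf);
		if(m->n <= 0){
			free(m);
			break;
		}
		sendp(datachan, m);
	}
	sendp(datachan, nil);	/* EOF marker */
}

/* some other event source, just to give alt a second alternative */
void
timerproc(void*)
{
	for(;;){
		sleep(1000);
		sendul(tickchan, 1);
	}
}

void
threadmain(int argc, char *argv[])
{
	int fd;
	ulong tick;
	Msg *m;
	Alt alts[] = {
	/*	c	v	op */
		{nil,	nil,	CHANRCV},	/* [0] network data */
		{nil,	nil,	CHANRCV},	/* [1] periodic tick */
		{nil,	nil,	CHANEND},
	};

	if(argc != 2)
		sysfatal("usage: %s dialstring", argv[0]);
	fd = dial(netmkaddr(argv[1], "tcp", "12345"), nil, nil, nil);
	if(fd < 0)
		sysfatal("dial: %r");

	datachan = chancreate(sizeof(Msg*), 16);
	tickchan = chancreate(sizeof(ulong), 1);
	alts[0].c = datachan;	alts[0].v = &m;
	alts[1].c = tickchan;	alts[1].v = &tick;

	proccreate(readerproc, (void*)(uintptr)fd, 8192);
	proccreate(timerproc, nil, 8192);

	for(;;){
		/* alt is a library operation over channels, not a system call */
		switch(alt(alts)){
		case 0:
			if(m == nil)
				threadexits(nil);	/* connection closed */
			/* process m->buf[0..m->n) here */
			free(m);
			break;
		case 1:
			/* periodic housekeeping */
			break;
		}
	}
}

Note that the only system call per message is the read done in readerproc; the alt in the main thread is pure libthread.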



* lack of a "kernel up-call API" that would allow the kernel to inform an
app each time network data is available


plan 9 uses synchronous i/o, so this statement doesn't make sense
in plan 9.  you can use threads to do i/o asynchronously w.r.t. your
application, but the i/o itself is still synchronous w.r.t. the kernel.


Whether the I/O is synchronous or not, there is still this 
read()->process()->read()... alternating sequence of operations, in which 
the application has to explicitly go and fetch data from the network with a 
read operation. To borrow text from the paper:


The API does not provide the programmer a way in which to say, "Whenever 
there is data for me, call me to process it directly."
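Plan 9 has no kernel up-call, but the usual user-space idiom comes close: a dedicated proc sits in a blocking read and invokes a handler as soon as the kernel has data, so the rest of the application never issues the read itself. A minimal sketch follows; the helper onnetdata() and the echo service are made up for illustration, not part of any library.

#include <u.h>
#include <libc.h>
#include <thread.h>

typedef void Handler(uchar *data, long n);

typedef struct Updater Updater;
struct Updater {
	int	fd;
	Handler	*handler;
};

/* "whenever there is data for me, call me": block in read(2),
 * then up-call the handler with whatever arrived. */
static void
upcallproc(void *arg)
{
	Updater *u;
	uchar buf[8192];
	long n;

	u = arg;
	while((n = read(u->fd, buf, sizeof buf)) > 0)
		u->handler(buf, n);
	free(u);
}

/* hypothetical helper: register a handler for a connection */
void
onnetdata(int fd, Handler *h)
{
	Updater *u;

	u = malloc(sizeof *u);
	if(u == nil)
		sysfatal("malloc: %r");
	u->fd = fd;
	u->handler = h;
	proccreate(upcallproc, u, 32*1024);
}

static void
printhandler(uchar *data, long n)
{
	write(1, data, n);
}

void
threadmain(int argc, char *argv[])
{
	int fd;

	if(argc != 2)
		sysfatal("usage: %s dialstring", argv[0]);
	fd = dial(netmkaddr(argv[1], "tcp", "echo"), nil, nil, nil);
	if(fd < 0)
		sysfatal("dial: %r");
	onnetdata(fd, printhandler);
	for(;;)
		sleep(1000);	/* the rest of the program; up-calls arrive concurrently */
}

Since the handler runs in the reader's proc, anything it shares with the rest of the program needs the usual locking or a channel; that is the price of faking an up-call in user space.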





* hard to implement "multihoming" with support for multiple network
interfaces


i have no idea how this relates to the use of a fs in implementing the
network stack.  why would using a filesystem (or not) make any difference
in the ability to multihome?

by the way, plan 9 beats the pants off anything else i've used for multiple
network / interface support.  its support for multiple ip stacks is quite
neat.


The question was meant to ask how easy it is to programmatically use the 
filesystem interface on a multihomed network. But I agree that Plan 9's 
support for multiple network interfaces is way superior.
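As a sketch of what that looks like in practice: once a second IP stack has been configured and bound somewhere such as /net.alt, the network directory can simply be named in the dial string, so choosing an interface or stack is a matter of choosing a path. The host name, service and mount point below are placeholders.

#include <u.h>
#include <libc.h>

/*
 * Dial the same service over two IP stacks.  This assumes a second
 * stack has already been set up and bound at /net.alt (e.g. by
 * binding a second instance of the IP device, '#I1', there and
 * configuring it with ip/ipconfig); see the system documentation.
 */
void
main(int argc, char *argv[])
{
	int fd0, fd1;

	USED(argc);
	USED(argv);

	/* default stack, /net */
	fd0 = dial("tcp!example.com!9fs", nil, nil, nil);
	if(fd0 < 0)
		fprint(2, "dial via /net failed: %r\n");

	/* alternate stack: name its network directory in the address */
	fd1 = dial("/net.alt/tcp!example.com!9fs", nil, nil, nil);
	if(fd1 < 0)
		fprint(2, "dial via /net.alt failed: %r\n");

	/* fd0 and fd1 now go out through different stacks/interfaces */
	exits(nil);
}

The same path-based selection should work for announce/listen on the serving side.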




Re: [9fans] critique of sockets API

2009-06-10 Thread Bhanu Nagendra Pisupati



Did you do this on Plan 9 or bring some of the filesystem sanity to another OS?


Actually, my work was based on synthetic 9P filesystems 
implemented using Npfs (see 
http://docs.google.com/Doc?id=dcb7zf48_503q8j84)


These filesystems resided in resource-constrained embedded devices which 
could not run Plan 9 in its entirety.





Re: [9fans] critique of sockets API

2009-06-10 Thread Bhanu Nagendra Pisupati



perhaps you think this is dodging the question, but the canonical
plan 9 approach to this is to note that it's easy (trivial) to have n
reader threads and m worker threads s.t. i/o threads don't block


I'll agree. With multithreading, the network read/write operations can be 
pipelined to minimize the overhead.
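Something like the following sketch, say: one blocking-read proc per connection feeding a shared work queue consumed by a small pool of worker procs, so no consumer ever blocks on the network. The connection addresses, default service and pool size are made up for the example.

#include <u.h>
#include <libc.h>
#include <thread.h>

enum {
	Nworkers	= 2,
	Bufsize		= 8192,
};

typedef struct Work Work;
struct Work {
	long	n;
	uchar	buf[Bufsize];
};

Channel *workq;		/* of Work* */

/* one proc per connection: only ever blocks in read(2) */
static void
reader(void *arg)
{
	int fd;
	Work *w;

	fd = (int)(uintptr)arg;
	for(;;){
		w = malloc(sizeof *w);
		if(w == nil)
			sysfatal("malloc: %r");
		w->n = read(fd, w->buf, sizeof w->buf);
		if(w->n <= 0){
			free(w);
			return;
		}
		sendp(workq, w);
	}
}

/* workers never touch the network; they just drain the queue */
static void
worker(void*)
{
	Work *w;

	while((w = recvp(workq)) != nil){
		/* process w->buf[0..w->n); placeholder */
		free(w);
	}
}

void
threadmain(int argc, char *argv[])
{
	int i, fd;

	workq = chancreate(sizeof(Work*), 16);

	for(i = 0; i < Nworkers; i++)
		proccreate(worker, nil, 8192);

	/* one connection per command-line argument; service name is arbitrary */
	for(i = 1; i < argc; i++){
		fd = dial(netmkaddr(argv[i], "tcp", "9fs"), nil, nil, nil);
		if(fd < 0)
			sysfatal("dial %s: %r", argv[i]);
		proccreate(reader, (void*)(uintptr)fd, 8192);
	}

	for(;;)
		sleep(1000);
}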




Re: [9fans] critique of sockets API

2009-06-12 Thread Bhanu Nagendra Pisupati

do you have a comparison to other similar projects like
1. styx on a brick
http://www.vitanuova.com/inferno/rcx_paper.html
2. 9p for embedded devices
http://iwp9.inf.uth.gr/iwp9_proceedings08.pdf

and/or other information on your work?  in particular,
what hardware are you using?


The "Styx on a Brick" paper was a definition inspiration for my work. I 
have acknowledged as much in my publications.


I completed my PhD in '07, before the "embedFS" filesystem paper from IWP9 
came out last year, and I have not done a formal comparison per se. But 
reading through the text, I do see similarities in some of our ideas, 
such as having one outstanding message at a time, the choice of read/write 
message size, using 9P (rather than 9P2000), etc.


The IWP9 paper does not give size numbers, but the basic 9P filesystem in 
my implementation was about 15KB in size when implemented on a 32-bit RISC 
processor named Nios (an embedded processor from Altera).


Most of my work was done using the Nios embedded FPGA environment from 
Altera. For more information:

http://www.cs.indiana.edu/cgi-bin/techreports/TRNNN.cgi?trnum=TR647
http://www.cs.indiana.edu/pub/techreports/TR647.html/embed.html



Re: [9fans] JTAG

2010-11-01 Thread Bhanu Nagendra Pisupati
I am trying to understand the end objective of 
the JTAG work discussed in one of the threads last week (sorry, I'm behind on my mail!).
One response said: "The hope is that it would help debug 
usb/bt device issues on kw.", but beyond this I could not make out the use 
case for this work from the thread.


Is the idea to use JTAG as a communication pipe on which to export virtual 
filesystems from within the device? Can somebody please elaborate?


-Bhanu

On Tue, Oct 26, 2010 at 10:49 AM, Jeff Sickel wrote:

At the latest IWP9 I caught wind of interest in getting a JTAG file 
system added into Plan 9. There were more details than just the file 
system and USB connectivity that are still a little foggy. At first I 
didn't show too much enthusiasm, but things have changed in a few short 
weeks!


What's the status of the effort?

Does a shipment of ice cream need to be arranged for the developers?

-jas




Re: [9fans] JTAG

2010-11-02 Thread Bhanu Nagendra Pisupati

I am not sure this fits into a /proc kind of interface, because
JTAG lets you access the bare hardware. Nemo has just pointed out to me
that a process is not the same as a running kernel, and maybe the
abstraction does not fit that well.


Often, cross-debugging of embedded systems does not take place on a 
per-process basis. A breakpoint, for instance, is set at the address within 
flash where the relevant code resides, and gets hit whenever code at that 
address is executed (irrespective of the executing process). This assumes 
that the code executes in place within flash - things get a bit more 
complicated when the code is copied to RAM and then executed.


Therefore all you need to do to facilitate cross-debugging is to provide 
some means to access registers/memory, control execution, set breakpoints 
and so on. You don't typically need to enable this on a per-process basis.



Could one (is this the plan) generate a /proc-like virtual file system
for jtag so acid will then work over jtag?


As part of our research work, we had some success exploring a
similar sort of idea to facilitate cross-debugging embedded code.
The model we used is as follows:

   debugger  <-- RS232 link -->  9P virtual filesystem  <-- JTAG -->  ARM7
   (host side)                   (------------ embedded side ------------)

A 9P virtual filesystem (implemented on the embedded side) encapsulates the 
JTAG-based debug interface of an ARM7 device. The host-side debugger 
mounts this filesystem and uses it to perform typical debugging 
tasks, such as accessing register/memory values, controlling execution and 
so on, without having to deal with any JTAG messiness.
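To make the host side concrete, here is an illustrative sketch of what a debugger could do once the embedded server is mounted, say at /mnt/jtag. The file names (regs, ctl) and the textual control commands are invented for the example; they are not the actual layout from the tech report.

#include <u.h>
#include <libc.h>

void
main(void)
{
	int fd;
	char regs[512];
	long n;

	/* read the target's register state as text */
	fd = open("/mnt/jtag/regs", OREAD);
	if(fd < 0)
		sysfatal("open regs: %r");
	n = read(fd, regs, sizeof regs - 1);
	if(n < 0)
		sysfatal("read regs: %r");
	regs[n] = 0;
	print("%s", regs);
	close(fd);

	/* set a breakpoint and resume by writing textual commands to a ctl file */
	fd = open("/mnt/jtag/ctl", OWRITE);
	if(fd < 0)
		sysfatal("open ctl: %r");
	if(fprint(fd, "break 0x%x\n", 0x1000) < 0)
		sysfatal("ctl write: %r");
	if(fprint(fd, "start\n") < 0)
		sysfatal("ctl write: %r");
	close(fd);

	exits(nil);
}

All the JTAG bit-banging stays behind the 9P server on the embedded side; the debugger only ever sees plain file operations over the serial link.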


For anybody who's curious to learn more:
http://www.cs.indiana.edu/pub/techreports/TR647.html/embed.html