[EMAIL PROTECTED] (Neal H. Walfield) writes:

> The client would make a container containing the pages and give the
> server access to the container.  Then the server would map the pages
> in the container, build a packet and send it down the network.  Upon
> return, the client could reject the server's access to the container
> or fill it with another pay load and continue.
I'm out on somewhat thin ice here, but I've heard the following explanation of network card programming: the network card and the driver share two circular buffers containing data and some control information. The card uses DMA to read data from the send queue and to write to the receive queue. One generally wants a circular queue with more than one element, so that the card can immediately go on to the next packet to transmit, without having to wait for the operating system to process interrupts and then tell it to continue.

My (possibly naive) view of how to do this on L4 would be to share some memory with the driver, preferably memory of a type that can be used directly for DMA. The I/O client would write data at the tail of the circular queue (or wait until space becomes available), and then make an RPC to the driver. The driver would fill in the (hardware-dependent) control information and make sure the packet is scheduled for transmission by the network card. While the packet is being queued and transmitted, the client goes on, trying to enqueue more packets or doing other work. Being non-blocking is important: you don't want a bunch of threads just to keep more than one packet in the queue at a time. When transmission of a packet finishes, an RPC is sent back to the client, telling it that one more slot in the queue is available for reuse.

To make this scheme safe, one would either want to put the control information in a separate area not shared with the client, or lock the appropriate memory area while it is in the care of the server and hardware. Locking the memory area seems to be the solution that fits best with your model of memory management. That implies, I guess, that each entry in the queue must be on its own page.
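To make the queue discipline concrete, here is a minimal sketch of such a shared transmit ring, assuming a single producer (the I/O client) and a single consumer (the driver). All names (tx_ring, tx_enqueue, tx_complete) are made up for illustration; a real L4 driver would place the ring in DMA-capable memory, keep the control information out of the client-shared area, and replace the plain function calls with RPCs:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 8u        /* power of two, so unsigned wraparound is safe */
#define SLOT_BYTES 1514      /* maximum Ethernet frame */

struct tx_slot {
    uint16_t len;
    uint8_t  data[SLOT_BYTES];
};

struct tx_ring {
    struct tx_slot slots[RING_SLOTS];
    unsigned head;           /* next slot the client fills */
    unsigned tail;           /* next slot the card has finished with */
};

/* Non-blocking enqueue: copy a packet into the next free slot, or
 * return -1 if the ring is full so the caller can retry later
 * instead of blocking. */
int tx_enqueue(struct tx_ring *r, const void *pkt, uint16_t len)
{
    if (r->head - r->tail == RING_SLOTS)
        return -1;                            /* queue full */
    struct tx_slot *s = &r->slots[r->head % RING_SLOTS];
    s->len = len;
    memcpy(s->data, pkt, len);
    r->head++;                                /* publish slot to the driver */
    return 0;                                 /* here: RPC to kick the driver */
}

/* Driver side: called when the card reports a completed transmission;
 * frees one slot for reuse (here: the completion RPC back to the client). */
int tx_complete(struct tx_ring *r)
{
    if (r->head == r->tail)
        return -1;                            /* nothing in flight */
    r->tail++;
    return 0;
}
```

With a ring like this the client keeps several packets in flight from a single thread: it enqueues until tx_enqueue reports the ring is full, does other work, and resumes when a completion frees a slot.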
Alternatively, the client could do some local buffering, handing a bunch of packets to the driver at a time, when either (i) it has collected a full page of data, or (ii) the driver's queue is getting empty.

To me, these issues seem a little hairy. Library functions to take care of the details would be cool, in particular if this kind of communication is useful in more contexts than just network card programming.

Regards,
/Niels

_______________________________________________
Bug-hurd mailing list
[EMAIL PROTECTED]
http://mail.gnu.org/mailman/listinfo/bug-hurd