At the risk of being told to launch myself towards a body of water...
So, sort of linking this with the data about saturating a GbE link in both directions on a single TCP connection, and how that required binding netperf to a CPU other than the one taking interrupts... If channels are taken to their limit, and all the non-hard-IRQ processing of a packet happens in the user's context, what are the implications for an application churning away doing application things while TCP is feeding it data? Or for an application that is processing more than one TCP connection in a given thread?
It almost feels like the channel concept wants a "thread per connection" model?

rick jones