On Tue, Oct 09, 2007 at 05:04:35PM -0700, David Miller wrote:
> We have to keep in mind, however, that the sw queue right now is 1000
> packets.  I heavily discourage any driver author to try and use any
> single TX queue of that size.

Why would you discourage them? If 1000 is ok for a software queue, why
would it not be ok for a hardware queue?

> Which means that just dropping on back pressure might not work so
> well.
>
> Or it might be perfect and signal TCP to backoff, who knows! :-)

1000 packets is a lot. I don't have hard data, but my gut feeling is
that less would also do.

And if the hw queues are not enough, a better scheme might be to just
manage this in the sockets, in sendmsg: e.g. provide a wait queue that
drivers can wake up, and let senders block on it until more queue
space is available.
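Roughly what I have in mind, as a completely untested sketch (all the
names here are made up; the wakeup would come from the driver's TX
completion handler):

#include <linux/wait.h>
#include <asm/atomic.h>

struct my_dev {				/* hypothetical driver state */
	wait_queue_head_t tx_wait;	/* init with init_waitqueue_head() */
	atomic_t tx_free;		/* free hw TX descriptors */
};

/* Send path: sleep until the hw TX queue has room again. */
static int my_wait_for_tx_room(struct my_dev *dev)
{
	return wait_event_interruptible(dev->tx_wait,
					atomic_read(&dev->tx_free) > 0);
}

/* TX completion interrupt: reclaim descriptors, wake blocked senders. */
static void my_tx_complete(struct my_dev *dev, int reclaimed)
{
	atomic_add(reclaimed, &dev->tx_free);
	wake_up_interruptible(&dev->tx_wait);
}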
> The idea is that the network stack, as in the pure hw queue scheme,
> unconditionally always submits new packets to the driver.  Therefore
> even if the hw TX queue is full, the driver can still queue to an
> internal sw queue with some limit (say 1000 for ethernet, as is used
> now).
>
> When the hw TX queue gains space, the driver self-batches packets
> from the sw queue to the hw queue.

I don't really see the advantage of that scheme over the qdisc. It is
certainly not simpler, probably more code, and it would likely not
require fewer locks either (e.g. a currently lockless driver would
need a new lock for its sw queue). Also it is unclear to me that it
would really be any faster.

-Andi
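P.S.: To make sure we are talking about the same thing, here is my
reading of your self-batching scheme as a rough sketch. All names
(my_dev, hw_queue_full, hw_queue_skb) are hypothetical, and note that
struct sk_buff_head carries its own spinlock, which is exactly the new
lock I am worried about:

#include <linux/skbuff.h>

#define MY_SW_QUEUE_LIMIT 1000		/* "say 1000 for ethernet" */

struct my_dev {
	/* sk_buff_head has a built-in spinlock: the new lock a
	 * currently lockless driver would take in its TX path. */
	struct sk_buff_head sw_queue;	/* init with skb_queue_head_init() */
};

static bool hw_queue_full(struct my_dev *dev);			/* hypothetical */
static int hw_queue_skb(struct my_dev *dev, struct sk_buff *skb); /* hypothetical */

/* xmit path: never push back on the stack; overflow into the sw queue. */
static int my_xmit(struct my_dev *dev, struct sk_buff *skb)
{
	if (hw_queue_full(dev)) {
		if (skb_queue_len(&dev->sw_queue) >= MY_SW_QUEUE_LIMIT) {
			kfree_skb(skb);		/* drop on overflow */
			return 0;
		}
		skb_queue_tail(&dev->sw_queue, skb);
		return 0;
	}
	return hw_queue_skb(dev, skb);
}

/* TX completion: self-batch packets from the sw queue into the hw queue. */
static void my_tx_refill(struct my_dev *dev)
{
	struct sk_buff *skb;

	while (!hw_queue_full(dev) &&
	       (skb = skb_dequeue(&dev->sw_queue)) != NULL)
		hw_queue_skb(dev, skb);
}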