On Wed, 2006-25-01 at 10:24 +0100, Stefan Rompf wrote:
> On Wednesday 25 January 2006 07:55, James Ketrenos wrote:
> 


> > Jamal indicated we should just return NETDEV_TX_BUSY and the stack would
> > take care of rescheduling...
> 
> well so even Jamal can be wrong sometimes ;-)
> 

I am never wrong ;->

> But after all, it isn't Jamal's fault, it is your development model. To me, 
> you Intel people seem to be sitting in your ivory tower, programming stuff, 
> dropping new releases onto the ipw and 802.11 web sites every couple of 
> weeks. Then one of you has the job of splitting the work into patches and 
> submitting it to the kernel. I must admit that you at least submit it, unlike 
> some other driver developers.
> 


Sorry, I haven't followed what led to this discussion.

If you return NETDEV_TX_BUSY when the queue is full, the core will
reschedule for you. However, you need something to asynchronously wake
up the queue (such as a transmit-complete interrupt). From the
description I have seen, it sounds like there is nothing asynchronous -
is that why you resort to _synchronous_ polling? Yes, that will chew
tons of CPU if you use low driver timeouts, or introduce high latency if
you use a high timeout. The timeout is really a last resort and is only
supposed to catch bugs.

Generally the rules are:
- the driver detects that the queue is full and calls netif_stop_queue()
- calls into the driver from the top layer always get back NETDEV_TX_BUSY
- the EOT interrupt happens and you clean up the tx hardware path to make
more space, so the core layer can send more packets. Subsequent packets
get queued to the hardware.

For the issue of multi-queueing and QoS - the solution from the
devicescape folks, with a master device stacked on top of virtual
devices (one per hardware queue), sounds like a really good interim
solution. The core code would require a lot more changes to make this
work well.

cheers,
jamal


-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
