Hi Patrick,

Patrick McHardy <[EMAIL PROTECTED]> wrote on 07/20/2007 11:46:36 PM:
> Krishna Kumar wrote:
> > +static inline int get_skb(struct net_device *dev, struct Qdisc *q,
> > +			    struct sk_buff_head *blist,
> > +			    struct sk_buff **skbp)
> > +{
> > +	if (likely(!blist) || (!skb_queue_len(blist) && qdisc_qlen(q) <= 1)) {
> > +		return likely((*skbp = dev_dequeue_skb(dev, q)) != NULL);
> > +	} else {
> > +		int max = dev->tx_queue_len - skb_queue_len(blist);
>
> I'm assuming the driver will simply leave excess packets in the
> blist for the next run.

Yes, and the next run will be scheduled even if no more xmits are called,
because qdisc_restart()'s call to the driver returns either:

BUSY: the driver failed to send everything; net_tx_action will handle
      this later (the case you mentioned).
OK:   qlen is > 0, so we return 1 and __qdisc_run() will retry (at which
      point the blist length will have become zero, since the driver
      processed everything on the blist).

> The check for tx_queue_len is wrong though,
> its only a default which can be overriden and some qdiscs don't
> care for it at all.

I think it should not matter whether qdiscs use this or not, or even if it
is modified (unless it is set to zero, in which case this breaks). The
intention behind this check is to ensure that no more than tx_queue_len
skbs are in all queues put together (q->qdisc + dev->skb_blist); otherwise
the blist could grow too large and defeat the purpose of tx_queue_len. Is
that a good justification?

> > -void __qdisc_run(struct net_device *dev)
> > +void __qdisc_run(struct net_device *dev, struct sk_buff_head *blist)
>
> And the patches should really be restructured so this change is
> in the same patch changing the header and the caller, for example.

Ah, OK.

Thanks,

- KK
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html