On Wednesday 25 January 2006 07:18, Zhu Yi wrote:
> > This is what leads to the high ksoftirqd usage I reported October 2005
> > into the ipw bugzilla
> > (http://www.bughost.org/bugzilla/show_bug.cgi?id=825).
>
> Sorry, I'm not aware of this bug since I'm not on the cc list.
Hmm, frankly the ipw bugzilla seems quite write-only to me sometimes. May I
also remind you of bug 892, the failure in ipw_best_network() that prevents
ipw2200 from connecting to some networks? After I provided the data you asked
for (and a patch!), this bug has sat in "reopened" state without any action
for nearly a week. At least some feedback, accepting or rejecting the patch,
would be nice.

> > --- ipw2200.c.orig	2005-10-20 23:35:24.000000000 +0200
> > +++ ipw2200.c	2005-10-29 18:03:30.000000000 +0200
> > @@ -3564,9 +3564,7 @@ static void ipw_queue_init(struct ipw_pr
> >  	if (q->low_mark < 4)
> >  		q->low_mark = 4;
> >
> > -	q->high_mark = q->n_bd / 8;
> > -	if (q->high_mark < 2)
> > -		q->high_mark = 2;
> > +	q->high_mark = 2;
>
> I believe this is for your own testing. Right?

After all, yes, this part is not needed for the fix.

> >  	q->first_empty = q->last_used = 0;
> >  	q->reg_r = read;
> > @@ -10412,8 +10410,10 @@ static int ipw_net_is_queue_full(struct
> >  	struct clx2_tx_queue *txq = &priv->txq[0];
> >  #endif	/* CONFIG_IPW_QOS */
> >
> > -	if (ipw_queue_space(&txq->q) < txq->q.high_mark)
> > +	if (ipw_queue_space(&txq->q) < txq->q.high_mark) {
> > +		if (!netif_queue_stopped(dev)) netif_stop_queue(dev);
> >  		return 1;
> > +	}
>
> The function is_queue_full() returns whether the queue is full or not, it
> should not make any decision to stop the queue. IMHO, the decision should
> be made from the network schedule layer. For example, if a device
> continually returns NETDEV_TX_BUSY exceeding a certain rate, the netdev
> scheduler should put it to "sleep" instead of busy polling.

Well, even though it isn't busy polling anymore, it is still polling. How
should the network layer estimate the right time? Maybe it takes 1 ms until
a slot is available, maybe much longer. That is why network drivers normally
call netif_stop_queue() when they detect a full queue during transmit, and
netif_wake_queue() as soon as one or more slots are free again.
> BTW, where did you restart the queue after it is empty again?

There is still a netif_wake_queue() around in ipw_queue_tx_reclaim() from
the time the driver did it right.

> Your comment is valid and the high cpu usage of ksoftirqd should be fixed.
> But this patch should still be merged since it fixed a different problem.

The entire concept of is_queue_full() seems broken to me; I'm about to
comment on it in my answer to James.

Stefan
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html