Jesse Brandeburg wrote:
So we've recently put a bit of code in our e1000 driver to decrease the qlen based on the speed of the link.

On the surface it seems like a great idea: the driver knows when the link speed changes, and a 1000-packet-deep queue (the default in most kernels now) on top of a 100Mb/s link (or 10Mb/s in the worst case for us) makes for a *lot* of latency if many packets are queued up in the qdisc.
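
For reference, a quick back-of-envelope sketch of the worst-case queueing delay at the default qlen, assuming full-size 1500-byte frames (a simplification; real traffic mixes frame sizes and this ignores on-wire overhead):

/* Worst-case qdisc delay: qlen full-size frames draining at line rate.
 * Assumes 1500-byte frames; mixed packet sizes and overhead ignored. */
#include <stdio.h>

int main(void)
{
	const double frame_bits = 1500.0 * 8;		/* bits per full-size frame */
	const double qlen = 1000.0;			/* default txqueuelen */
	const double rates[] = { 1e9, 100e6, 10e6 };	/* link speed in bit/s */

	for (int i = 0; i < 3; i++)
		printf("%4.0f Mb/s link: up to %.0f ms queued in the qdisc\n",
		       rates[i] / 1e6, qlen * frame_bits / rates[i] * 1e3);
	return 0;
}

which works out to roughly 12 ms at 1Gb/s, 120 ms at 100Mb/s, and 1.2 s at 10Mb/s.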

The problem we've seen is that setting this shorter queue causes a large spike in CPU utilization when transmitting UDP:

100Mb/s link (throughput in Mb/s, CPU utilization in %):
txqueuelen: 1000   Throughput: 92.44   CPU:  5.00
txqueuelen:  100   Throughput: 93.80   CPU: 61.59
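
If anyone wants to reproduce this without the driver change, something like the untested sketch below should force the same short queue from userspace via the SIOCSIFTXQLEN ioctl; the "eth0" name and the value 100 are just placeholders:

/* Hypothetical reproduction helper: set txqueuelen from userspace via
 * SIOCSIFTXQLEN instead of having the driver shrink it on link change.
 * "eth0" and the value 100 are placeholders; needs root. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>
#include <net/if.h>

int main(void)
{
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
	ifr.ifr_qlen = 100;			/* new txqueuelen */

	if (fd < 0 || ioctl(fd, SIOCSIFTXQLEN, &ifr) < 0) {
		perror("SIOCSIFTXQLEN");
		return 1;
	}

	close(fd);
	return 0;
}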

Is this expected? Any comments?

Triggering intra-stack flow control, perhaps? Possibly 10x more often than before, if the queue is now 1/10th of what it was?

Out of curiosity, how does the UDP socket's SO_SNDBUF compare to the queue depth?
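
A quick way to check would be something like the sketch below, which reads back the socket's effective SO_SNDBUF and compares it with a 100-frame queue in bytes; the 1500-byte frame size is only an assumption for the comparison, and on Linux the reported value includes the kernel's bookkeeping doubling:

/* Sketch: read a UDP socket's effective SO_SNDBUF and compare it, in
 * bytes, against a 100-frame qdisc of assumed 1500-byte frames. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	int sndbuf = 0;
	socklen_t len = sizeof(sndbuf);

	if (fd < 0 || getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) < 0) {
		perror("SO_SNDBUF");
		return 1;
	}

	printf("SO_SNDBUF:           %d bytes\n", sndbuf);
	printf("qdisc (100 * 1500B): %d bytes\n", 100 * 1500);

	close(fd);
	return 0;
}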

rick jones