On Wed, 7 Oct 2009, Eugene Grosbein wrote:

On Tue, Oct 06, 2009 at 08:28:35PM +0500, rihad wrote:

I don't think net.inet.ip.intr_queue_maxlen is relevant to this problem, as net.inet.ip.intr_queue_drops is normally zero or very close to it at all times.

When net.isr.direct is 1, this queue is used very seldom. If you change it to 0, it will be used extensively.
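For anyone following along, these knobs can be inspected with sysctl(8); the names are the ones discussed in this thread, and the value 4096 below is only an illustrative choice, not a recommendation:

```shell
# Inspect the IP input queue limit, its drop counter, and the dispatch mode.
sysctl net.inet.ip.intr_queue_maxlen
sysctl net.inet.ip.intr_queue_drops
sysctl net.isr.direct

# If drops climb while net.isr.direct=0, the queue can be enlarged at
# runtime (pick a value appropriate to your traffic, e.g. 4096):
sysctl net.inet.ip.intr_queue_maxlen=4096
```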

Just to clarify this more specifically:

With net.isr.direct set to 0, the netisr will always be used when processing inbound IP packets.

With net.isr.direct set to 1, the netisr will only be used for special cases, such as loopback traffic, IPsec decapsulation, and other cases where there's a risk of recursion.

In the default 8.0 configuration, we use one netisr thread; however, you can configure multiple threads at boot time. This is not the default currently because we're still researching load distribution schemes, and on current high-performance systems the hardware already tends to handle that fairly well (i.e., most modern 10 Gbps cards).
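For reference, a sketch of requesting extra netisr threads at boot; net.isr.maxthreads is the loader tunable as I recall it in 8.0, but check netisr(9) on your system before relying on it:

```shell
# /boot/loader.conf -- request additional netisr threads at boot.
# This is a loader tunable, so it cannot be changed at runtime.
net.isr.maxthreads=4

# After reboot, confirm the setting took effect:
sysctl net.isr.maxthreads
```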

Also, ipfw/dummynet locking is fairly coarse-grained, so adding parallelism won't necessarily help currently.

Robert
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
