The driver already handles the pinning; you shouldn't need to mess with it.
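
(If you do want to experiment with manual placement anyway, as Eugene
describes below, cpuset(1) can bind an irq to a core; the same can be done
programmatically. Here is a minimal, untested sketch using
cpuset_setaffinity(2); the irq and cpu numbers are only placeholders.)

/*
 * Roughly what "cpuset -l <cpu> -x <irq>" does from the shell: bind one
 * interrupt (e.g. irq256, the "irq256: igb0" thread) to a single core.
 */
#include <sys/param.h>
#include <sys/cpuset.h>

#include <err.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
	int irq = (argc > 1) ? atoi(argv[1]) : 256;	/* e.g. "irq256: igb0" */
	int cpu = (argc > 2) ? atoi(argv[2]) : 0;	/* target core */
	cpuset_t mask;

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);

	/* CPU_WHICH_IRQ binds the interrupt (and its ithread) to the mask. */
	if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_IRQ, irq,
	    sizeof(mask), &mask) != 0)
		err(1, "cpuset_setaffinity");
	return (0);
}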

MSI-X interrupts start at 256. The igb driver uses one vector per queue, which
is a TX/RX pair. The driver creates as many queues as there are cores, up to a
maximum of 8.
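
Roughly, the sizing works out like the sketch below (simplified and made up
for illustration, not the actual driver code; igb_pick_queue_count() and
IGB_MAX_QUEUES are invented names):

#include <stdio.h>

#define	IGB_MAX_QUEUES	8		/* the cap described above */

static int
igb_pick_queue_count(int ncpus, int tunable)	/* tunable = hw.igb.num_queues, 0 = auto */
{
	int queues;

	queues = (tunable > 0) ? tunable : ncpus;
	if (queues > IGB_MAX_QUEUES)
		queues = IGB_MAX_QUEUES;
	return (queues);
}

int
main(void)
{
	int queues = igb_pick_queue_count(4, 0);

	/*
	 * One MSI-X vector per TX/RX queue pair, plus one more for link-state
	 * events, which would line up with the num_queues+1 "irq" threads
	 * Eugene sees in top.
	 */
	printf("queues=%d, MSI-X vectors=%d\n", queues, queues + 1);
	return (0);
}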

Jack


On Thu, Nov 11, 2010 at 10:05 AM, Eugene Perevyazko <j...@dnepro.net> wrote:

> On Thu, Nov 11, 2010 at 12:49:52PM +0200, Eugene Perevyazko wrote:
> > On Thu, Nov 11, 2010 at 01:47:02AM +0100, Ivan Voras wrote:
> > > On 11/10/10 12:04, Eugene Perevyazko wrote:
> > >
> > > > Tried 2 queues and 1 queue per iface, neither hitting the CPU limit.
> > >
> > > Are you sure you are not hitting the CPU limit on individual cores? Have
> > > you tried running "top -H -S"?
> > >
> > Sure, even with 1 queue per iface the load is 40-60% on the busy core;
> > with 2 queues it was much lower.
> > Now I've got the module for the mb with 2 more ports, going to see if it
> > helps.
> The IO module has em interfaces on it, and somehow I've already got 2 panics
> after moving one of the vlans to it.
>
> In the meantime, can someone explain to me what is processed by the threads
> marked like "irq256: igb0" and "igb0 que"? Maybe understanding this will let
> me pin those threads to cores more optimally.
> There are (hw.igb.num_queues+1) "irq" threads and (hw.igb.num_queues) "que"
> threads. For now I just pin them sequentially to the even cores (the odd
> ones are HT).
>
> Now I use hw.igb.num_queues=2, and with traffic limited to 1200 Mbit/s the
> busiest core is still 60% idle...
>
>
>
> --
> Eugene Perevyazko
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
