Ah, OK, let me play around with it a bit; perhaps I'll make it definable. Of course, if there is no positive benefit from using it, it would seem silly to leave it around :)
Will look at your patch changes and that issue tomorrow. Thanks for your efforts!

Jack

On Thu, Apr 8, 2010 at 4:07 PM, Pyun YongHyeon <pyu...@gmail.com> wrote:
> On Thu, Apr 08, 2010 at 02:06:09PM -0700, Jack Vogel wrote:
> > Only one device supported by em does multiqueue right now, and that is
> > Hartwell, 82574.
> >
>
> Thanks for the info.
>
> Mike, here is the updated patch. UDP bulk TX transfer performance has now
> recovered a lot (about 890 Mbps), but it still shows bad numbers
> compared to other controllers. For example, bce(4) shows about
> 958 Mbps for the same load.
> During the testing I found a strong indication of a packet reordering
> issue in the drbr interface. If I forcibly change it to use a single TX
> queue, em(4) gets 950 Mbps as it used to.
>
> Jack, as we talked about a possible drbr issue with igb(4), UDP
> transfer seems to suffer from a packet reordering issue here. Can we
> make em(4)/igb(4) use a single TX queue until we solve the drbr
> interface issue? Given that only one em(4) controller supports
> multiqueue, dropping multiqueue support for em(4) does not look bad
> to me.
>
> > Jack
> >
> > On Thu, Apr 8, 2010 at 2:05 PM, Mike Tancsa <m...@sentex.net> wrote:
> >
> > > At 04:56 PM 4/8/2010, Pyun YongHyeon wrote:
> > >
> > >> On Thu, Apr 08, 2010 at 02:31:18PM -0400, Mike Tancsa wrote:
> > >> > At 02:17 PM 4/8/2010, Pyun YongHyeon wrote:
> > >> >
> > >> > > Try this patch. It should fix the issue. It seems Jack forgot to
> > >> > > strip the CRC bytes, as old em(4) didn't strip them, probably to
> > >> > > work around a silicon bug in old em(4) controllers.
> > >> >
> > >> > Thanks! The attached patch does indeed fix the dhclient issue.
> > >> >
> > >> > > It seems there are also TX issues here. The system load is too high,
> > >> > > and sometimes the system is not responsive while TX is in progress.
> > >> > > Because I initiated TCP bulk transfers, TSO should have reduced the
> > >> > > CPU load a lot, but it didn't, so I guess it could also be related to
> > >> > > the watchdog timeouts you've seen. I'll see what can be done.
> > >> >
> > >> > Thanks for looking into that as well!!
> > >> >
> > >> >     ---Mike
> > >> >
> > >>
> > >> Mike,
> > >>
> > >> Here is the patch I'm working on. This patch fixes the high system
> > >> load, and the system is very responsive as before. But it seems there
> > >> is still some TX issue here. Bulk UDP performance is very poor
> > >> (< 700 Mbps), and I have no idea what causes this at the moment.
> > >>
> > >> BTW, I'm having trouble reproducing the watchdog timeouts. I'm not sure
> > >> whether the latest fix from Jack cured it. By chance, does your
> > >> controller support multiple TX/RX queues? You can check whether em(4)
> > >> uses multiple queues with "vmstat -i". If em(4) uses multiple queues,
> > >> you may see multiple irq lines for em0.
> > >>
> > >
> > > Hi,
> > >     I will give it a try later tonight!  This one does not seem to.
> > >
> > > 0(ich10)# vmstat -i
> > > interrupt                          total       rate
> > > irq16: uhci0+                         30          0
> > > irq18: ehci0 uhci5                158419         17
> > > irq19: fwohci0++                      86          0
> > > irq21: uhci1                          17          0
> > > irq23: uhci3 ehci1                     2          0
> > > cpu0: timer                     18570305       1994
> > > irq256: igb0                          80          0
> > > irq257: igb0                         255          0
> > > irq258: igb0                          66          0
> > > irq259: igb0                          32          0
> > > irq260: igb0                           2          0
> > > irq261: igb1                        2679          0
> > > irq262: igb1                         998          0
> > > irq263: igb1                        2468          0
> > > irq264: igb1                        6361          0
> > > irq265: igb1                           2          0
> > > irq266: em0                        33910          3
> > > irq267: ahci1                      15317          1
> > > cpu1: timer                     18557074       1993
> > > cpu3: timer                     18557168       1993
> > > cpu2: timer                     18557108       1993
> > > Total                           74462379       7998
> > > 0(ich10)#
> >

_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
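For reference on the CRC issue discussed above: the fix amounts to trimming the 4-byte Ethernet FCS from each received frame whenever the controller is not configured to strip it in hardware. The snippet below is only a minimal sketch of that idea using the stock FreeBSD mbuf API; the function name and call site are illustrative, not the actual em(4) patch.

/*
 * Minimal sketch (not the committed patch): trim the trailing 4 CRC
 * bytes from a received frame before handing it to the network stack.
 * A negative length passed to m_adj() removes bytes from the tail of
 * the mbuf chain.
 */
#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/ethernet.h>       /* ETHER_CRC_LEN */

static void
em_rx_strip_crc(struct mbuf *m)  /* hypothetical helper name */
{
        m_adj(m, -ETHER_CRC_LEN);
}

The usual alternative is to have the MAC strip the FCS itself via the "strip Ethernet CRC" bit in the receive control register (E1000_RCTL_SECRC in the e1000 headers), which appears to be what the thread means by the CRC bytes not being stripped in the new code path.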
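On the drbr reordering point: packets of a single flow stay in order across multiple TX rings only if the ring is chosen consistently per flow. The sketch below shows one way a driver of that era could do this with the FreeBSD 8.x mbuf flowid field; it is an assumption about the approach, not what em(4)/igb(4) actually shipped, and forcing a single queue (as proposed above) avoids the problem entirely.

/*
 * Illustrative sketch only: select the TX queue from the mbuf's flow ID
 * so every packet of one flow lands on the same drbr ring and cannot be
 * reordered against the rest of the flow.  Falling back to queue 0 is
 * effectively the single-queue behaviour suggested as an interim fix.
 */
#include <sys/param.h>
#include <sys/mbuf.h>

static int
em_select_txq(struct mbuf *m, int num_queues)  /* hypothetical helper */
{
        if (num_queues > 1 && (m->m_flags & M_FLOWID))
                return (m->m_pkthdr.flowid % num_queues);
        return (0);
}

Jack's idea of making multiqueue "definable" would presumably come down to something like a loader tunable (a hypothetical hw.em.num_queues) that keeps the driver at one queue by default until the drbr issue is resolved.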