Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Adrian Chadd
Hi, Can you please do a pciconf -lv on the previous and current hardware? I wonder if it's something to do with a feature that is chipset dependent. (And please, disable flow-director on ixgbe on 10.1. Pretty please.) -a
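A minimal sketch of how that comparison could be pulled on each box (assuming the 10G ports attach as ix0..ix3 and the previous box uses igb; adjust the patterns as needed):

    # Dump vendor/device/revision info for the NICs so the two machines can be diffed.
    pciconf -lv | grep -A4 '^ix'     # new box, X540-AT2 / ixgbe
    pciconf -lv | grep -A4 '^igb'    # previous box, I350 / igb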

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Maxim Sobolev
OK, so following Luigi's suggestion we've re-enabled AIM, set max_interrupt_rate to 8000 (matching igb), and reduced the number of queues to 6. We'll have the next peak in about 14 hours; I'll try to capture and record the history of the per-queue interrupt rate. It still remains somewhat puzzling why somethin…
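For reference, a sketch of where those knobs live, assuming the stock ixgbe tunable/sysctl names in 10.x (worth double-checking with sysctl -d on the box):

    # /boot/loader.conf -- picked up at boot
    hw.ix.max_interrupt_rate=8000   # cap the per-queue interrupt rate, matching igb
    hw.ix.num_queues=6              # reduce the queue count from the default

    # at runtime, re-enable adaptive interrupt moderation (AIM) per port
    sysctl dev.ix.0.enable_aim=1
    sysctl dev.ix.1.enable_aim=1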

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Maxim Sobolev
Thanks, Barney, for the totally useless response and the attempted insult! And yes, we are hitting the CPU limit on 12-core E5-2620 v2 systems running the I350, so yes, we do know a little bit about how to distribute our application, at least with igb. For some reason this does not work with ixgbe, and we are trying…

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Barney Cordoba via freebsd-net
Also, using a slow-ass CPU like the Atom is completely absurd; first, no one would ever use them. You have to test under 60% CPU usage, because as you get to higher CPU usage levels the lock contention increases exponentially. You're increasing lock contention by having more queues; s…

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Barney Cordoba via freebsd-net
Wow, this is really important! If this is a college project, I give you a D. Maybe a D-, because it's almost useless information. You ignore the most important aspect of "performance". Efficiency is arguably the most important aspect of performance. 1M pps at 20% CPU usage is much better "perform…

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Maxim Sobolev
Olivier, keep in mind that we are not "kernel forwarding" packets, but "app forwarding", i.e. the packet goes the full way net->kernel->recvfrom->app->sendto->kernel->net, which is why we have much lower PPS limits and which is why I think we are actually benefiting from the extra queues. Single-thread…
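For anyone trying to reproduce the problem without the production application, a rough stand-in for that net->kernel->recvfrom->app->sendto->kernel->net round trip is a plain userland UDP relay; the socat invocation below (socat from ports, hypothetical addresses and ports, not what we actually run) exercises the same two socket-API crossings per packet:

    # Userland UDP relay: each packet is received through the socket API and
    # re-sent through the socket API, like the real app's recvfrom/sendto loop.
    socat UDP4-LISTEN:5060,fork UDP4:192.0.2.10:5060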

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Olivier Cochard-Labbé
On Tue, Aug 11, 2015 at 11:18 PM, Maxim Sobolev wrote: > Hi folks, Hi, > We've been trying to migrate some of our high-PPS systems to new hardware that > has four X540-AT2 10G NICs and observed that interrupt time goes through the > roof after we cross around 200K PPS in and 200K out (two ports…

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Maxim Sobolev
Here it is; the distribution looks pretty normal to me.
dev.ix.0.queue0.tx_packets: 846233384
dev.ix.0.queue0.rx_packets: 856092418
dev.ix.0.queue1.tx_packets: 980356163
dev.ix.0.queue1.rx_packets: 922935329
dev.ix.0.queue2.tx_packets: 970700307
dev.ix.0.queue2.rx_packets: 907776311
dev.ix.0.queu…
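For completeness, the sort of one-liner used to eyeball that spread, assuming the per-queue sysctl names shown above:

    # Print each rx queue's share of the total received packets on ix0
    sysctl dev.ix.0 | awk -F'[:.]' '/rx_packets/ { q[$4] = $NF; total += $NF }
        END { for (i in q) printf "%s: %.1f%% of rx\n", i, 100 * q[i] / total }'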

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Maxim Sobolev
Thanks, we will try; however, I don't think it's going to make a huge difference, because we run almost 2x the PPS on otherwise identical (as far as the FreeBSD version/lagg code goes) hardware with I350/igb(9) NICs and an inferior CPU. That kind of suggests that whatever the problem is, it is below lagg. -Max

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread hiren panchasara
On 08/11/15 at 03:16P, hiren panchasara wrote: > There were some lagg/hashing-related changes recently, so let us know if > that is hurting you. Ah, my bad. Said changes would not be in 10.1. You may want to give 10.2 a try. (rc3 is out now.) Cheers, Hiren

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread hiren panchasara
On 08/11/15 at 03:01P, Adrian Chadd wrote: > hi, > Are you able to graph per-queue interrupt rates? > It looks like the traffic is distributed differently (the first two > queues are taking interrupts). Yeah, also check out "# sysctl dev.ix | grep packets" > Does 10.1 have the flow direct…

Re: Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Adrian Chadd
Hi, Are you able to graph per-queue interrupt rates? It looks like the traffic is distributed differently (the first two queues are taking interrupts). Does 10.1 have the flow director code disabled? I remember there was some... interesting behaviour with ixgbe where it'd look at traffic and set…
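One low-tech way to get those per-queue interrupt numbers, assuming the MSI-X vectors show up in vmstat -i as ix0:que N (as they normally do):

    # Sample the per-queue interrupt counters every 10 seconds;
    # the difference between samples divided by 10 is the rate.
    while :; do
        date
        vmstat -i | grep 'ix.:que'
        sleep 10
    done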

Poor high-PPS performance of the 10G ixgbe(9) NIC/driver in FreeBSD 10.1

2015-08-11 Thread Maxim Sobolev
Hi folks, We've been trying to migrate some of our high-PPS systems to new hardware that has four X540-AT2 10G NICs and observed that interrupt time goes through the roof after we cross around 200K PPS in and 200K out (two ports in LACP). The previous hardware was stable up to about 350K PPS in and 350K…
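For context, the interrupt-time observation above comes from watching the usual tools while the boxes approach peak load; roughly (assuming ix0/ix1 are the LACP members aggregated into lagg0):

    top -SH                  # shows the ix queue interrupt threads and overall %interrupt time
    vmstat -i | grep ix      # cumulative interrupt counts and average rates per queue
    netstat -w 1 -I lagg0    # packets per second in/out on the LACP bundle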

Re: Fw: D3300: LAG LACP timeout tunable through IOCTL

2015-08-11 Thread hiren panchasara
On 08/11/15 at 08:04P, Lakshmi Narasimhan Sundararajan wrote: > Hi FreeBSD Team, > We have been working on an LACP timeout tunable to help expedite link failure > propagation through an ioctl interface. > The changes have been tested and review is in progress. > The Phabricator link is…

Fw: D3300: LAG LACP timeout tunable through IOCTL

2015-08-11 Thread Lakshmi Narasimhan Sundararajan
Hi FreeBSD Team, We have been working on an LACP timeout tunable to help expedite link failure propagation through an ioctl interface. The changes have been tested and review is in progress. The Phabricator link is https://reviews.freebsd.org/D3300 We would kindly appreciate it if the changes can…
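Not part of the patch itself, but for context, a sketch of how such a knob is typically driven from userland once it is plumbed through lagg's ioctl-backed ifconfig options; the lacp_fast_timeout spelling below is an assumption borrowed from later FreeBSD releases, not something stated in this message:

    # Build a LACP lagg and request the fast LACP timeout (1s hellos, ~3s expiry)
    # instead of the default slow one (30s hellos, ~90s expiry), so link failures
    # propagate sooner.
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1 lacp_fast_timeout
    # revert to the standard slow timeout
    ifconfig lagg0 -lacp_fast_timeout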