Maxim Sobolev <...@sippysoft.com> writes:
>
> Yes, we've confirmed it's IXGBE_FDIR. It's good that it comes disabled in
> 10.2.
>
> Thanks everyone for constructive input!
>
> -Max
On 08/18/15 at 11:03P, Adrian Chadd wrote:
> you're welcome.
>
> Someone should really add a release errata to 10.1 or something.
Yes, I strongly feel the same. Adding gjb@ here to see how that can be
done.
Cheers,
Hiren
>
> -a
On 08/18/15 at 06:25P, Glen Barber wrote:
> On Tue, Aug 18, 2015 at 11:18:33AM -0700, hiren panchasara wrote:
> > On 08/18/15 at 11:03P, Adrian Chadd wrote:
> > > you're welcome.
> > >
> > > Someone should really add a release errata to 10.1 or something.
> >
> > Yes, I strongly feel the same. Adding gjb@ here to see how that can be done.
On Tue, Aug 18, 2015 at 11:18:33AM -0700, hiren panchasara wrote:
> On 08/18/15 at 11:03P, Adrian Chadd wrote:
> > you're welcome.
> >
> > Someone should really add a release errata to 10.1 or something.
>
> Yes, I strongly feel the same. Adding gjb@ here to see how that can be
> done.
>
Please
you're welcome.
Someone should really add a release errata to 10.1 or something.
-a
On 18 August 2015 at 10:59, Maxim Sobolev wrote:
> Yes, we've confirmed it's IXGBE_FDIR. It's good that it comes disabled in 10.2.
>
> Thanks everyone for constructive input!
>
> -Max
Yes, we've confirmed it's IXGBE_FDIR. It's good that it comes disabled in 10.2.
Thanks everyone for constructive input!
-Max
I think we are getting better performance today with IXGBE_FDIR
switched off. It's not 100% decisive though, since we've only pushed it to a
little bit below 200Kpps. We'll push more traffic tomorrow and see how it
goes.
-Maxim
On Fri, Aug 14, 2015 at 10:29 AM, Maxim Sobolev wrote:
> Hi guys,
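For reference, one quick way to watch the in/out packet rate while pushing
traffic (the interface name here is just an example; -w prints one line per
interval, so the ipkts/opkts columns are effectively pps):

netstat -w 1 -I ix0 -d    # per-second packets, errors and drops, in and out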
I am laughing so hard that I had to open some windows to get more oxygen!
On Friday, August 14, 2015 1:30 PM, Maxim Sobolev wrote:
Hi guys, unfortunately no, neither reduction of the number of queues from 8
to 6 nor pinning the interrupt rate at 2 per queue has made any
difference.
P.S. Just for comparison, here are today's stats from the system
mentioned here with the low-end I210 chip (4 hardware queues), running
happily at some 240Kpps. The system and software are identical otherwise and
the igb(4) settings are the default ones:
http://sobomax.sippysoft.com/ScreenShot39
Hi guys, unfortunately no, neither reduction of the number of queues from 8
to 6 nor pinning the interrupt rate at 2 per queue has made any
difference. The card still goes kaboom at about 200Kpps no matter what. In
fact I've gone a bit further, and after the first spike went on and pushed the
interrupt
Thanks, we'll try that as well. We have not had as much traffic in the past
2 days, so we were running at about 140Kpps, well below the level that used
to cause issues before. I'll try to redistribute traffic tomorrow so that
we get it tested.
-Max
On Wed, Aug 12, 2015 at 11:47 PM, Adrian Chadd wrote:
Hi,
Try this:
* I'd disable AIM and hard-set interrupts to something sensible;
* I'd edit sys/conf/files and sys/dev/ixgbe/Makefile on 10.1 and
remove the '-DIXGBE_FDIR' bit that enables flow director (rough sketch
below) - the software setup for flow director is buggy, and it causes
things to get wildly unhappy.
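For anyone trying this, a rough sketch of what that edit can look like on a
10.1 source tree. The exact files carrying the flag may differ from the paths
named above, so grep for it first; treat this as an illustration, not the
authoritative recipe:

grep -rn 'IXGBE_FDIR' /usr/src/sys/conf/files /usr/src/sys/modules
# drop the -DIXGBE_FDIR occurrences, keeping a backup copy:
sed -i .bak 's/ *-DIXGBE_FDIR//g' /usr/src/sys/conf/files
# then rebuild and reinstall the kernel (or at least the ixgbe module).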
Here we go (ix2 and ix3 are not used):
ix0@pci0:3:0:0: class=0x02 card=0x152815d9 chip=0x15288086 rev=0x01
hdr=0x00
vendor = 'Intel Corporation'
device = 'Ethernet Controller 10-Gigabit X540-AT2'
class = network
subclass = ethernet
ix1@pci0:3:0:1: class=0x02
Right, and for the ixgbe hardware?
-a
On 12 August 2015 at 08:05, Maxim Sobolev wrote:
> igb0@pci0:7:0:0:class=0x02 card=0x153315d9 chip=0x15338086
> rev=0x03 hdr=0x00
> vendor = 'Intel Corporation'
> device = 'I210 Gigabit Network Connection'
> class = network
igb0@pci0:7:0:0:class=0x02 card=0x153315d9 chip=0x15338086
rev=0x03 hdr=0x00
vendor = 'Intel Corporation'
device = 'I210 Gigabit Network Connection'
class = network
subclass = ethernet
igb1@pci0:8:0:0:class=0x02 card=0x153315d9 chip=0x15338086
Ok, so my current settings are:
hw.ix.max_interrupt_rate: 2
dev.ix.0.queue0.interrupt_rate: 2
dev.ix.0.queue1.interrupt_rate: 2
dev.ix.0.queue2.interrupt_rate: 2
dev.ix.0.queue3.interrupt_rate: 2
dev.ix.0.queue4.interrupt_rate: 2
dev.ix.0.queue5.interrupt_rate: 2
dev.ix
As I was telling Maxim, you should disable AIM because it only matches
the max interrupt rate to the average packet size, which is the last thing
you want.
Setting the interrupt rate with sysctl (one per queue) gives you precise
control over the max rate (and hence the extra latency). 20k interrupts
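A minimal sketch of doing that at runtime, reusing the per-queue OIDs already
shown in this thread. The enable_aim knob name is an assumption (check
"sysctl -a | grep -i aim" first), and 8000 is just an example rate:

sysctl dev.ix.0.enable_aim=0            # assumed OID name; verify it exists
for q in 0 1 2 3 4 5; do                # one sysctl per queue
    sysctl dev.ix.0.queue${q}.interrupt_rate=8000
done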
I ran into the same problem with almost the same hardware (Intel X520)
on 10-STABLE. HT/SMT is disabled and cards are configured with 8 queues,
with the same sysctl tunings as sobomax@ did. I am not using lagg, no
FLOWTABLE.
I experimented with pmcstat (RESOURCE_STALLS) a while ago and here [1]
[2
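For anyone wanting to repeat that kind of measurement, a rough sketch; it
needs hwpmc(4), and the event name is CPU-family dependent (RESOURCE_STALLS.ANY
here is only an assumption; pmccontrol -L lists what your CPU actually exposes):

kldload hwpmc
pmccontrol -L | grep -i stall             # find the exact resource-stall event name
pmcstat -T -S RESOURCE_STALLS.ANY -w 5    # top-like sampling view, 5-second refresh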
On 12.08.2015 at 02:28, Maxim Sobolev wrote:
> Olivier, keep in mind that we are not "kernel forwarding" packets, but "app
> forwarding", i.e. the packet goes the full way
> net->kernel->recvfrom->app->sendto->kernel->net, which is why we have much
> lower PPS limits and which is why I think we are actually benefiting from
> the extra queues.
Hi,
Can you please do a pciconf -lv on the previous and current hardware?
I wonder if it's something to do with a feature that is chipset
dependent.
(And please, disable flow-director on ixgbe on 10.1. Pretty please.)
-a
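In case it saves a step, a one-liner that pulls out just the NIC entries from
that output (the ix/igb device-name prefixes are assumed from this thread):

pciconf -lv | grep -A 4 -E '^(ix|igb)[0-9]'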
OK, so following Luigi's suggestion we've re-enabled AIM, set
max_interrupt_rate to 8000 (matching igb), and reduced the number of queues to
6. We'll have the next peak in about 14 hours; I'll try to capture and record
the history of the per-queue interrupt rates. It still remains somewhat
puzzling why something
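For completeness, a sketch of what that combination looks like as
/boot/loader.conf tunables. hw.ix.max_interrupt_rate is quoted earlier in the
thread and hw.ix.num_queues is the stock queue-count tunable; the AIM knob
name is an assumption (AIM is on by default anyway):

hw.ix.max_interrupt_rate="8000"
hw.ix.num_queues="6"
# hw.ix.enable_aim="1"    # assumed name; verify with sysctl before relying on it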
Thanks, Barney, for the totally useless response and the attempted insult! And
yes, we are hitting the CPU limit on 12-core E5-2620 v2 systems running the
I350, so yes, we do know a little bit about how to distribute our application,
at least with igb. For some reason this does not work with ixgbe and we are
trying
Also, using a slow-ass CPU like the Atom is completely absurd; first, no-one
would ever use them.
You have to test under 60% CPU usage, because as you get to higher
CPU usage levels the lock contention increases exponentially. You're increasing
lock contention by having more queues; s
Wow, this is really important! If this is a college project, I give you a D.
Maybe a D- because it's almost useless information.
You ignore the most important aspect of "performance". Efficiency is arguably
the most important aspect of performance.
1M pps at 20% CPU usage is much better "performance"
Olivier, keep in mind that we are not "kernel forwarding" packets, but "app
forwarding", i.e. the packet goes full way
net->kernel->recvfrom->app->sendto->kernel->net, which is why we have much
lower PPS limits and which is why I think we are actually benefiting from
the extra queues. Single-thread
On Tue, Aug 11, 2015 at 11:18 PM, Maxim Sobolev wrote:
> Hi folks,
>
> We've been trying to migrate some of our high-PPS systems to new hardware that
> has four X540-AT2 10G NICs and observed that interrupt time goes through the
> roof after we cross around 200K PPS in and 200K out (two ports
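A couple of stock ways to see where that interrupt time is going while the
traffic is up (the ix0:que N thread/IRQ naming is typical for this driver but
may differ on a given box):

top -SH            # -S includes kernel threads, -H lists them individually
vmstat -i          # per-IRQ counters and rates; look for the ix0:que lines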
Here it is, the distribution looks pretty normal to me.
dev.ix.0.queue0.tx_packets: 846233384
dev.ix.0.queue0.rx_packets: 856092418
dev.ix.0.queue1.tx_packets: 980356163
dev.ix.0.queue1.rx_packets: 922935329
dev.ix.0.queue2.tx_packets: 970700307
dev.ix.0.queue2.rx_packets: 907776311
dev.ix.0.queu
Thanks, we will try, however I don't think it's going to make a huge
difference because we run almost 2x the PPS on an otherwise identical system (as
far as the FreeBSD version/lagg code goes) with I350/igb(4) and inferior CPU
hardware. That kinda suggests that whatever the problem is, it is below lagg.
-Max
On 08/11/15 at 03:16P, hiren panchasara wrote:
>
> There were some lagg/hashing related changes recently so let us know if
> that is hurting you.
Ah, my bad. Said changes would not be in 10.1. You may want to give 10.2
a try. (rc3 is out now.)
Cheers,
Hiren
On 08/11/15 at 03:01P, Adrian Chadd wrote:
> hi,
>
> Are you able to graph per-queue interrupt rates?
>
> It looks like the traffic is distributed differently (the first two
> queues are taking interrupts).
Yeah, also check out "# sysctl dev.ix | grep packets"
>
> Does 10.1 have the flow director code disabled?
hi,
Are you able to graph per-queue interrupt rates?
It looks like the traffic is distributed differently (the first two
queues are taking interrupts).
Does 10.1 have the flow director code disabled? I remember there was
some .. interesting behaviour with ixgbe where it'd look at traffic
and set
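If it helps, a crude way to record the per-queue rates over time for graphing
later (OID and interface names as they appear in this thread; adjust the
interval and device as needed):

while true; do
    date
    vmstat -i | grep 'ix0:que'               # cumulative per-queue IRQ counts and rates
    sysctl dev.ix.0 | grep interrupt_rate    # current per-queue interrupt_rate settings
    sleep 10
done > ix0_queue_rates.log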