As a workaround: use an Ethernet card that supports SR-IOV, then add all the
VFs to a bridge and bind each VF to an mpd5 instance and a CPU core. I have
heard that the PPP protocol supports a round-robin algorithm, but I have not
tested it.
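For illustration only, such a setup might look roughly like this. Everything
here is a sketch: the ix0 PF, its ixv* VF devices, the VF count and the
mpd.conf labels vf0..vf3 are placeholders, and the exact flags should be
checked against iovctl(8), cpuset(1) and mpd5(8):

  # /etc/iovctl-ix0.conf -- ask the PF driver to create four VFs
  PF {
          device : "ix0";
          num_vfs : 4;
  }

  # Create the VFs, then bridge them together:
  iovctl -C -f /etc/iovctl-ix0.conf
  ifconfig bridge0 create
  ifconfig bridge0 addm ixv0 addm ixv1 addm ixv2 addm ixv3 up

  # One mpd5 instance per VF, each pinned to its own CPU core
  # (vf0..vf3 would be per-VF configuration labels in mpd.conf):
  cpuset -l 0 mpd5 -b -p /var/run/mpd5-0.pid vf0
  cpuset -l 1 mpd5 -b -p /var/run/mpd5-1.pid vf1
  cpuset -l 2 mpd5 -b -p /var/run/mpd5-2.pid vf2
  cpuset -l 3 mpd5 -b -p /var/run/mpd5-3.pid vf3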
Simon
20180728
On 2018/7/28 02:02, bugzilla-nore...@freebsd.org wrote:
>
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203856
--- Comment #31 from Eugene Grosbein ---
(In reply to Kurt Jaeger from comment #30)
This patch does not apply in any sense: it won't apply textually, and it was an
(incomplete) attempt to solve another problem in the first place: it tried to add a
--- Comment #30 from Kurt Jaeger ---
(In reply to anoteros from comment #1)
This link to a patch seems valid:
http://static.ipfw.ru/patches/igb_flowid.diff
Did anyone test with it? Does it still apply?
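A quick way to find out without touching the tree is a dry run; the -p strip
level and tree location below are guesses and may need adjusting:

  cd /usr/src
  fetch http://static.ipfw.ru/patches/igb_flowid.diff
  patch -C -p0 < igb_flowid.diff   # -C: check whether it applies; modify nothing

(Per comment #31 above, the answer turns out to be no: it no longer applies
textually.)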
--- Comment #29 from Vladimir ---
(In reply to Eugene Grosbein from comment #28)
No, just wanted to confirm.
--- Comment #28 from Eugene Grosbein ---
(In reply to Vladimir from comment #27)
Yes. Have you missed comment #11, which describes possible solutions, including this one?
--- Comment #27 from Vladimir ---
Do you mean something like net.isr.dispatch=deferred?
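For reference, that deferred-netisr workaround generally amounts to settings
like these; the thread count is illustrative, not a tuned recommendation:

  # /boot/loader.conf -- tunables, read at boot
  net.isr.maxthreads=4     # e.g. one netisr thread per core
  net.isr.bindthreads=1    # pin each netisr thread to a CPU

  # /etc/sysctl.conf -- queue packets to the netisr threads instead of
  # running the whole stack in the NIC interrupt context
  net.isr.dispatch=deferred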
--- Comment #26 from Eugene Grosbein ---
(In reply to ricsip from comment #24)
Please use our mailing lists or web forums for general support questions or
discussion, and leave Bugzilla for bug reports. Again, the problem has nothing
to do
--- Comment #25 from Jan Bramkamp ---
Does Linux use RSS to achieve this performance or does it drain the NIC queue
in a single interrupt and load balance the rest? Did you try the netisr
workaround?
--- Comment #24 from ricsip ---
(In reply to Eugene Grosbein from comment #23)
Hi Eugene,
As I was not satisfied with the outcome here, I installed an IPFire Linux
distribution on my APU2 to see what performance it could achieve versus the low
p
--- Comment #23 from Eugene Grosbein ---
(In reply to ricsip from comment #22)
This greatly depends on local conditions. There is exactly the same problem for
a PPTP-connected client using GRE encapsulation (not plain TCP) when all traffic
is de
--- Comment #22 from ricsip ---
(In reply to Eugene Grosbein from comment #20)
Eugene: thanks for the clarification. Forgive my ignorance; I was not aware
that the feature "multiple TX/RX queues with RSS support" for any NIC on
the mar
--- Comment #20 from Eugene Grosbein ---
(In reply to ricsip from comment #19)
Again, it is not the igb(4) driver that is "broken"; the corresponding network
cards have no hardware support for distributing PPPoE traffic per-queue. This is
alrea
--- Comment #19 from ricsip ---
(In reply to Eugene Grosbein from comment #16)
Gents, if igb (and any other NIC on the planet) is so fundamentally broken
for multi-queue + PPPoE, at least state this clearly in the driver's "Known
i
--- Comment #18 from Vladimir ---
I think most people expect igb hardware to work the same way under FreeBSD
as it does under other OSes such as Linux or Windows; this also led me to
think that it is a driver proble
--- Comment #17 from Vladimir ---
Thanks, Eugene.
--- Comment #16 from Eugene Grosbein ---
(In reply to Vladimir from comment #14)
I guess you can read Russian; please take a look at my post
https://dadv.livejournal.com/139170.html, it may make things clearer for you.
--- Comment #15 from Eugene Grosbein ---
(In reply to Vladimir from comment #14)
The problem is the hardware not supporting PPPoE per-queue load
distribution, not the driver. Why would anyone think that igb-supported NICs
are capabl
--- Comment #14 from Vladimir ---
If the problem is isolated to the igb driver only, could it be an igb driver
problem, no?
Why, then, was there a patch for igb that is currently missing?
https://wiki.freebsd.org/NetworkPerformanceTuning (Traffic flow
--- Comment #13 from Eugene Grosbein ---
(In reply to Jan Bramkamp from comment #12)
I'm not sure which lock you are talking about.
Anyway, the problem has nothing to do with the igb driver, as NICs supported by
the igb(4) driver have no hardware sup
--- Comment #12 from Jan Bramkamp ---
It may work as intended, but that doesn't mean it works well: if I remember
correctly, there is a single lock in each PPPoE instance as well, so you
just moved the bottleneck a little bit up t
Eugene Grosbein changed:
    What       | Removed | Added
    Resolution | ---     | Works As Intended
    St
--- Comment #11 from Eugene Grosbein ---
There seems to be a common misunderstanding of how hardware receive queues work
in igb(4) chipsets.
First, one should read Intel's datasheet for the NIC. For an 82576-based NIC,
for example, this is
https:/
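A quick way to observe this on a live system is to watch the per-queue
interrupt counters while PPPoE traffic is flowing: when the card cannot hash
PPPoE frames across queues, a single "que" line accumulates nearly all of the
rate. Device and queue naming vary by driver and release:

  # Per-queue interrupt counts/rates for an igb(4) NIC; rerun under
  # load and compare how much each queue's counter grows.
  vmstat -i | grep 'igb0:que'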
--- Comment #10 from ricsip ---
(In reply to Jan Bramkamp from comment #9)
Your reply is greatly appreciated. My intention was to see whether a stone
dropped into the lake might create some waves, and progress might happen.
TBH I was unaware o
--- Comment #9 from Jan Bramkamp ---
(In reply to ricsip from comment #7)
The problem exists and can be fixed, but would require non-trivial changes. I
suspect that the problem isn't specific to the Intel NIC driver and that you'll
encount
Mark Linimon changed:
    What     | Removed | Added
    Keywords |         | IntelNetworking
    Assignee |