Hi Matus,

I've not yet had the chance to check what that looks like with multiple clients.


I did check the rest of what you wrote, and that made everything a bit clearer!


I have it set up so that one physical interface handles all internal traffic, and 
another physical interface handles all external traffic.


I've set up each interface with four RX queues. The internal interface's queues 
are on the first four workers (0-3), while the external interface's queues are on 
the last four workers (4-7).
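For reference, this placement can be expressed roughly as below. The interface names and core numbers here are placeholders for illustration, not copied from my actual config:

```
# startup.conf sketch: 8 worker threads, 4 RX queues per NIC
cpu {
  main-core 0
  corelist-workers 1-8
}
dpdk {
  dev default {
    num-rx-queues 4
  }
}

# then pin each RX queue to a worker from the VPP CLI, e.g.:
# vpp# set interface rx-placement GigabitEthernet0/0/0 queue 0 worker 0
# vpp# set interface rx-placement GigabitEthernet0/0/0 queue 1 worker 1
```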


However, placing the NAT workers on the last four workers produced fewer 
issues. I confirmed with "show run" that they're on the latter workers. We see 
some performance issues if the NAT workers are on different workers than the 
ones handling inside traffic, and extreme issues if the NAT workers are on 
workers that have no RX queues at all, for example worker 8 and above.


We see the best performance when the NAT workers run on the same workers as 
those handling the inside interface's RX queues, which explains why the 
issues go away when I limit the NAT workers to 0-3.
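For anyone following along, the NAT worker pinning I'm referring to is done through the NAT plugin CLI (the core list shown is just my case; the exact command set may vary by VPP version):

```
vpp# set nat workers 0-3
vpp# show nat workers
```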


John

________________________________
From: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 
<matfa...@cisco.com>
Sent: Friday, December 21, 2018 7:02 AM
To: John Biscevic; vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] NAT workers above 4 completely tanks performance

Is the worker distribution the same in the case of multiple clients? (You can 
see this with the same "show run" exercise; take a look at the number of 
interface and nat44-in2out calls for each core.) Maybe you should try to play 
with the interface RX queue placement (you can see it in the "show interface 
rx-placement" output) or the RSS options of the card/interface.
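Translating those checks into commands (the interface name is a placeholder):

```
vpp# show run                      # compare per-core call counts for the interface and nat44-in2out nodes
vpp# show interface rx-placement   # which worker polls which RX queue
vpp# set interface rx-placement GigabitEthernet0/0/0 queue 2 worker 5
```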

Matus


From: John Biscevic <john.bisce...@bahnhof.net>
Sent: Friday, December 21, 2018 6:34 AM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 
<matfa...@cisco.com>; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] NAT workers above 4 completely tanks performance


Hi Matus,



Thanks!



Any suggestions on what can be done to alleviate the issue?

The above test was done with a single client, but the same symptoms show up 
when throwing far more flows at it, around 5.5 million sessions, from thousands 
of L3 sources.



John



________________________________
From: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 
<matfa...@cisco.com<mailto:matfa...@cisco.com>>
Sent: Friday, December 21, 2018 6:21 AM
To: John Biscevic; vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: RE: [vpp-dev] NAT workers above 4 completely tanks performance

Hi,

in your case most of the NAT translations are done on one core. With 4 cores 
you are lucky: flows arrive at the same core where the translations are 
processed (no worker handoff). With 10 cores there is a worker handoff between 
two workers, which is the reason for the performance drop. Basically, your 
flows are not symmetrically distributed between the cores.

Matus


From: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io> 
<vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>> On Behalf Of JB
Sent: Friday, December 21, 2018 4:25 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] NAT workers above 4 completely tanks performance

Hi Damjan,

Absolutely.

I ran one case with the default number of NAT workers (10), which shows poor 
performance, and another case with fewer NAT workers (4), which shows 
great performance. They're in two separate files, both attached.

John