Re: [vpp-dev] do not SNAT if forwarding enabled

2019-11-14 Thread carlito nueno
Hi all,

Anyone get this working? When I enable nat44 forwarding, all NAT
translations stop working.

example - 110.21.22.12 is the IP address of my wan0.

I have:
set interface nat44 in loop0 out wan0
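For reference, a minimal sketch of the configuration being described (interface names from this thread; this assumes the standard `nat44 forwarding enable` CLI toggle is what is meant by "enable nat44 forwarding"):

```
set interface nat44 in loop0 out wan0
nat44 forwarding enable
```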

Without forwarding:
vpp# sh nat44 sessions
NAT44 sessions:
 thread 0 vpp_main: 2240 sessions 
  10.1.3.138: 1540 dynamic translations, 0 static translations
  10.1.3.135: 36 dynamic translations, 0 static translations
  10.1.3.125: 524 dynamic translations, 0 static translations
  10.1.1.2: 108 dynamic translations, 0 static translations
  10.1.3.174: 5 dynamic translations, 0 static translations
  10.1.3.169: 15 dynamic translations, 0 static translations
  10.1.3.62: 10 dynamic translations, 0 static translations
  10.1.2.203: 2 dynamic translations, 0 static translations

With forwarding:
vpp# sh nat44 sessions
NAT44 sessions:
 thread 0 vpp_main: 19 sessions 
  110.21.22.12: 19 dynamic translations, 0 static translations

Thanks

On Mon, Apr 15, 2019 at 1:29 AM Shahid Khan 
wrote:

> Hi Ole,
>
> Any findings on it? Are you able to reproduce it?
>
> -Shahid.
>
>
>
> On Thu, Apr 11, 2019 at 1:32 PM Shahid Khan via Lists.Fd.Io
>  wrote:
>
>> There is another physical port bridged to loop1 which is on the
>> 192.168.15.0/24 network. The packets coming inside the GRE tunnel are for
>> the 192.168.15.0/24 network.
>>
>> Also, I just want to understand why SNAT is blocked when forwarding is
>> enabled? Someone might have a requirement to SNAT first and then forward.
>>
>> When I comment out the code as below, SNAT and GRE both work, but I don't
>> know how it will impact the rest of the code/functionality.
>>
>> static inline int
>> snat_not_translate (snat_main_t * sm, vlib_node_runtime_t * node,
>>                     u32 sw_if_index0, ip4_header_t * ip0, u32 proto0,
>>                     u32 rx_fib_index0, u32 thread_index)
>> {
>>   udp_header_t *udp0 = ip4_next_header (ip0);
>>   snat_session_key_t key0, sm0;
>>   clib_bihash_kv_8_8_t kv0, value0;
>>
>>   key0.addr = ip0->dst_address;
>>   key0.port = udp0->dst_port;
>>   key0.protocol = proto0;
>>   key0.fib_index = sm->outside_fib_index;
>>   kv0.key = key0.as_u64;
>>
>>   /* NAT packet aimed at external address if */
>>   /* has active sessions */
>>   if (clib_bihash_search_8_8 (&sm->per_thread_data[thread_index].out2in,
>>                               &kv0, &value0))
>>     {
>>       /* or is static mappings */
>>       if (!snat_static_mapping_match (sm, key0, &sm0, 1, 0, 0, 0, 0, 0))
>>         return 0;
>>     }
>>   else
>>     return 0;
>>
>>   /* commented out: with forwarding enabled, this early return bypasses
>>      SNAT for locally initiated sessions */
>>   /*
>>   if (sm->forwarding_enabled)
>>     return 1;
>>   */
>>
>>   return snat_not_translate_fast (sm, node, sw_if_index0, ip0, proto0,
>>                                   rx_fib_index0);
>> }
>>
>>
>>
>> -Shahid.
>>
>>
>>
>>
>> On Thu, Apr 11, 2019 at 12:44 PM Ole Troan  wrote:
>>
>>> Shahid,
>>>
>>> Right, so the GRE packets shouldn’t go through the NAT at all.
>>> Is the GRE tunnel itself marked as inside?
>>>
>>> I should have thought this was supported with
>>> https://jira.fd.io/browse/VPP-447
>>> Let me see if I can reproduce.
>>>
>>> Best regards,
>>> Ole
>>>
>>> > On 10 Apr 2019, at 12:55, Shahid Khan 
>>> wrote:
>>> >
>>> > Hi Ole,
>>> >
>>> > We have a bridge (loop0) with a private IP, say 192.168.100.2/24.
>>> > A TAP is also connected to this bridge, and the other end of the TAP is
>>> > on the host side.
>>> >
>>> > We have one physical interface connected to another bridge (loop1) with
>>> > an outside network IP of say 192.168.10.1/24, and a GRE tunnel is
>>> > created having source 192.168.10.1.
>>> >
>>> > The host has a requirement to initiate sessions (tcp/udp) to the outside
>>> > network, so we have applied NAT as below:
>>> >
>>> > nat44 add interface address loop1
>>> > set interface nat44 in loop0 out loop1
>>> >
>>> > With this, the host can initiate sessions with the outside network and
>>> > SNAT works fine.
>>> >
>>> > But GRE does not work. We looked into traces and found that packets
>>> > coming to the GRE tunnel are getting dropped, with the trace showing
>>> > "unknown protocol".
>>> >
>>> > If we enable forwarding then GRE packets get forwarded to the
>>> > destination, but now the host is not able to initiate sessions to the
>>> > outside network because SNAT stops working.
>>> >
>>> > -Shahid.
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > On Wed, Apr 10, 2019 at 2:33 PM Ole Troan 
>>> wrote:
>>> > Hi Shahid,
>>> >
>>> > What are you trying to achieve?
>>> > https://wiki.fd.io/view/VPP/NAT#Enable_or_disable_forwarding
>>> >
>>> > You do not typically enable the “forwarding” feature.
>>> >
>>> > Cheers,
>>> > Ole
>>> >
>>> > > On 8 Apr 2019, at 07:52, Shahid Khan 
>>> wrote:
>>> > >
> > Can someone look into the below query?
>>> > >
>>> > > -Shahid.
>>> > >
>>> > > On Wed, Apr 3, 2019 at 12:56 PM Shahid Khan via Lists.Fd.Io
>>>  wrote:
>>> > > Hi,
>>> > >
> > Can someone help us with the below query?
>>> > >
>>> > > -Shahid.
>>> > >
>>> > > On Mon, Apr 1, 2019 at 11:45 AM Shahid Khan via Lists.Fd.Io
>>>  wrote:
>>> > >
>>

Re: [vpp-dev] fd to poll in VPP-API-CLIENT with no RX thread

2019-11-14 Thread Dave Barach via Lists.Fd.Io
Please take a look at https://gerrit.fd.io/r/c/vpp/+/23427. 

From: vpp-dev@lists.fd.io  On Behalf Of Satya Murthy
Sent: Wednesday, November 13, 2019 8:24 PM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] fd to poll in VPP-API-CLIENT with no RX thread

Thanks a lot Dave for offering the help.

The change would be of great help to us.
Please let us know once you have a patch, so that we can selectively take that
patch and import it.

-- 
Thanks & Regards,
Murthy 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14594): https://lists.fd.io/g/vpp-dev/message/14594
Mute This Topic: https://lists.fd.io/mt/55586364/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Coverity run FAILED as of 2019-11-14 14:00:17 UTC

2019-11-14 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues is 5
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects


Re: [vpp-dev] Multiple patch validation failures: yamllint AWOL

2019-11-14 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
> Makefile by testing for JENKINS_URL?

Inputs to the program make should be understandable by a human.
When I want to predict locally how the checkstyle job would vote,
I prefer something like:
  make checkstyle-all
rather than:
  JENKINS_URL=jenkins.fd.io make checkstyle

Or did you mean a different variable?
Here [3] is a typical list.

Vratko.

[3] 
https://jenkins.fd.io/view/vpp/job/vpp-checkstyle-verify-master/12455/injectedEnvVars/
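The Makefile gating idea being discussed could look roughly like this (a sketch with illustrative target names and messages, not the actual vpp Makefile; JENKINS_URL is one of the variables Jenkins exports to builds, as in [3]):

```
# Hypothetical sketch: branch on JENKINS_URL to pick CI vs. local behavior.
checkstyle:
ifdef JENKINS_URL
	@echo "JENKINS_URL set: running CI variant of checkstyle"
else
	@echo "JENKINS_URL unset: running local variant of checkstyle"
endif
```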

From: vpp-dev@lists.fd.io  On Behalf Of Paul Vinciguerra
Sent: Wednesday, November 13, 2019 4:28 PM
To: Dave Wallace 
Cc: vpp-dev 
Subject: Re: [vpp-dev] Multiple patch validation failures: yamllint AWOL

How do people feel about managing the CI where possible via the Makefile by 
testing for JENKINS_URL?

On Wed, Nov 13, 2019 at 10:11 AM Dave Wallace  wrote:
Status update:

Florin fixed 'make checkstyle' with [1]  (Thanks Florin! :)

Ed Kern has added 'make install-dep' to the checkstyle verify build executors.

The revert of my ci-management patch [2] will be abandoned.

Florin's patch will be reverted once the new build executors are available and 
verified that yamllint is available.

Sorry for the bog-up.

Thanks,
-daw-

[1] https://gerrit.fd.io/r/c/vpp/+/23391
[2] https://gerrit.fd.io/r/c/ci-management/+/23410

On 11/13/2019 8:58 AM, Dave Wallace via Lists.Fd.Io wrote:
Dave,

My bad.  I had Vanessa merge a patch to ci-management [0] which I tested 
locally, but forgot that the build executors don't run 'make install-dep'.

Unfortunately the ci-management verify jobs don't actually run any of the CI 
jobs to ensure that the system still works with the patch being submitted...

I'm in the process of reverting the patch until the build executors can be 
upgraded.

Thanks,
-daw-
[0] https://gerrit.fd.io/r/c/ci-management/+/23364
On 11/13/2019 7:35 AM, Dave Barach via Lists.Fd.Io wrote:
I wonder where “yamllint” went? Multiple patches affected...

07:28:23 clang-format version 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
07:28:24 HEAD detached at dce8b380a
07:28:24 nothing to commit, working tree clean
07:28:24 ***
07:28:24 * VPP CHECKSTYLE SUCCESSFULLY COMPLETED
07:28:24 ***
07:28:24 yamllint /w/workspace/vpp-checkstyle-verify-master/src
07:28:24 make: yamllint: Command not found
07:28:24 Makefile:561: recipe for target 'checkstyle' failed
07:28:24 make: *** [checkstyle] Error 127
07:28:24 Build step 'Execute shell' marked build as failure

FWIW... Dave




Re: [vpp-dev] fd to poll in VPP-API-CLIENT with no RX thread

2019-11-14 Thread Satya Murthy
Thank you Dave.

Will import this patch and try.

--
Thanks & Regards,
Murthy


Re: [vpp-dev] [csit-dev] make: yamllint: Command not found

2019-11-14 Thread Dave Wallace

Yulong,

Ed Kern updated the job executor yesterday morning.  I have issued a 
recheck on your patch which should now pass checkstyle.


Sorry for the inconvenience.
-daw-

On 11/14/2019 1:44 AM, Pei, Yulong wrote:


Hello vpp-dev & csit-dev,

https://jenkins.fd.io/job/vpp-checkstyle-verify-master/12407/console

16:56:10 ***
16:56:10 * VPP CHECKSTYLE SUCCESSFULLY COMPLETED
16:56:10 ***
16:56:10 yamllint /w/workspace/vpp-checkstyle-verify-master/src
16:56:10 make: yamllint: Command not found
16:56:10 Makefile:561: recipe for target 'checkstyle' failed
16:56:10 make: *** [checkstyle] Error 127
16:56:10 Build step 'Execute shell' marked build as failure

It seems that the server scheduled by Jenkins lacks the command
`yamllint`, which caused the vpp checkstyle failure.

Who can help install `yamllint` on the servers to fix the issue?

Best Regards

Yulong Pei






[vpp-dev] nat44-out2in no translation with multiple tenants #nat #nsh

2019-11-14 Thread Cipher Chen
Hi all,

I'm testing multiple tenants using nat44-snat, and it turns out the tenants
might get mixed.

Assuming two tenants 200 (fib index 1) and 100 (fib index 2):

vpp# show ip fib table 100
ipv4-VRF:100, fib_index:2, flow hash:[src dst sport dport proto ] 
locks:[src:CLI:7, src:plugin-low:1, src:adjacency:3, ]

Creating an NSH decapsulation TEP (nsp: 14000, nsi: 255):

[VPP] create nsh map nsp 14000 nsi 255 mapped-nsp 14000 mapped-nsi 255 
nsh_action pop encap-none 2 0
[VPP] set interface ip table nsh_tunnel6 100
[VPP] set interface nat44 out nsh_tunnel6

This nsh_tunnel6's sw_if_index is 18:

vpp# show interface nsh_tunnel6
Name           Idx   State   MTU (L3/IP4/IP6/MPLS)   Counter   Count
nsh_tunnel6    18    up      0/0/0/0                 drops     59
                                                     ip4       59
vpp#

But it turns out that NAT44 out2in goes into the wrong VRF:

00:00:29:028228: vxlan4-gpe-input
VXLAN-GPE: tunnel 0 next 5 error 0
00:00:29:028229: nsh-input

nsh ver 0 ttl 3 len 6 (24 bytes) md_type 1 next_protocol 3
service path 14000 service index 255
c1 0 c2 0 c3 0 c4 0

00:00:29:028230: ethernet-input
IP4: ec:f4:bb:c4:ae:80 -> 52:54:00:00:02:00
00:00:29:028230: ip4-input
ICMP: 10.255.1.200 -> 10.255.1.211
tos 0x00, ttl 64, length 84, checksum 0x5c1a
fragment id 0x04f7
ICMP echo_reply checksum 0x31cb
# HERE: it complains that no session is found when this packet is decapsulated
# from nsh_tunnel6.
00:00:29:028231: nat44-out2in
NAT44_OUT2IN: sw_if_index 18, next index 0, session index -1
00:00:29:028233: error-drop
rx:nsh_tunnel6
00:00:29:028234: drop
nat44-out2in: no translation

Also, the nat44 session detail looks weird: i2o works in vrf 100 (fib 2),
which is expected, but out2in works in vrf 200 (fib 1).

vpp# show nat44 sessions detail
NAT44 sessions:
 thread 0 vpp_main: 1 sessions 
192.168.1.1: 1 dynamic translations, 0 static translations
i2o 192.168.1.1 proto icmp port 24403 fib 2
o2i 10.255.1.211 proto icmp port 16253 fib 1
index 0
last heard 90.61
total pkts 85, total bytes 7140
dynamic translation

vpp#

To the best of my knowledge, nsh_tunnel6 was set into vrf 100 and decapsulates
into fib 2, so out2in should also go into fib 2 here. I cannot figure out what
causes o2i to go into fib 1 instead.

Any suggestion would be helpful. Thanks.


Re: [vpp-dev] Is there a limit when assigning corelist-workers in vpp?

2019-11-14 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi Chuan,

I took a deeper look at your conf and realized that the dpdk 'workers' stanza
does not refer to core numbers but to worker ids.
So, when you say "cpu { corelist-workers 4,6,8,10,12,14,16,18,20,22 }" you
define 10 workers with ids from 0 to 9 and pin them to specific cores (4, 6, etc.).
Then, with "dpdk { ... dev ... { workers 4,6,8,10,12 } ..." you refer to the
*worker ids*, numbered from 0 to 9, not to cores.
So the correct dpdk stanzas should be "workers 0,1,2,3,4" and "workers
5,6,7,8,9" instead (note: you can also use "workers 0-4" and "workers 5-9").
What is confusing is that dpdk will happily parse nonexistent worker ids, and
then, when assigning queues to workers, it will respect the conf for the known
ones (4, 6, 8) and do round-robin assignment for the unknown ones, ignoring
the manual assignments already set.
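Put together, a corrected startup.conf sketch for the setup in this thread could look like this (device addresses and names taken from the conf quoted below, with their leading digits elided as in the original; the worker-id ranges are the correction being described):

```
cpu {
  main-core 2
  corelist-workers 4,6,8,10,12,14,16,18,20,22
}

dpdk {
  dev :1a:00.0 {
    name eth0
    workers 0-4    # worker ids 0..4, i.e. the threads pinned to cores 4,6,8,10,12
  }
  dev :19:00.1 {
    name eth1
    workers 5-9    # worker ids 5..9, i.e. the threads pinned to cores 14,16,18,20,22
  }
}
```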

Best
Ben

> -Original Message-
> From: Chuan Han 
> Sent: lundi 11 novembre 2019 23:23
> To: Benoit Ganne (bganne) 
> Cc: Dave Barach (dbarach) ; vpp-dev  d...@lists.fd.io>
> Subject: Re: [vpp-dev] Is there a limit when assigning corelist-workers in
> vpp?
> 
> If I do not manually assign cores to nic, and let vpp assign cores, vpp
> only assign one core per nic queue. If I also assign number of queues to
> 10, all cores are used. Otherwise, only core 4 and 6 are assigned to each
> nic, which has only one queue.
> 
> Probably, there is some smart logic adaptively assigning cores to nics?
> Anyway, a single core per nic is good enough to us for now.
> 
> Without specifying number of rx queues per nic, only core 4 and 6 are
> assigned.
> 
> vpp# sh threads
> ID  Name       Type      LWP     Sched Policy (Priority)   lcore   Core   Socket State
> 0   vpp_main             16886   other (0)                 2       0      0
> 1   vpp_wk_0   workers   16888   other (0)                 4       1      0
> 2   vpp_wk_1   workers   16889   other (0)                 6       4      0
> 3   vpp_wk_2   workers   16890   other (0)                 8       2      0
> 4   vpp_wk_3   workers   16891   other (0)                 10      3      0
> 5   vpp_wk_4   workers   16892   other (0)                 12      8      0
> 6   vpp_wk_5   workers   16893   other (0)                 14      13     0
> 7   vpp_wk_6   workers   16894   other (0)                 16      9      0
> 8   vpp_wk_7   workers   16895   other (0)                 18      12     0
> 9   vpp_wk_8   workers   16896   other (0)                 20      10     0
> 10  vpp_wk_9   workers   16897   other (0)                 22      11     0
> vpp# sh interface rx-placement
> Thread 1 (vpp_wk_0):
>   node dpdk-input:
> eth1 queue 0 (polling)
> Thread 2 (vpp_wk_1):
>   node dpdk-input:
> eth0 queue 0 (polling)
> vpp#
> 
> 
> cpu {
>   main-core 2
>   corelist-workers 4,6,8,10,12,14,16,18,20,22
> }
> 
> dpdk {
>   socket-mem 2048,0
>   log-level debug
>   no-tx-checksum-offload
>   dev default{
> num-tx-desc 512
> num-rx-desc 512
>   }
>   dev :1a:00.0 {
> name eth0
>   }
>   dev :19:00.1 {
> name eth1
>   }
>   # Use aesni mb lib.
>   vdev crypto_aesni_mb0,socket_id=0
>   no-multi-seg
> }
> 
> 
> When specifying number of queues per nic, all cores are used.
> 
> vpp# sh int rx-placement
> Thread 1 (vpp_wk_0):
>   node dpdk-input:
> eth1 queue 0 (polling)
> eth0 queue 0 (polling)
> Thread 2 (vpp_wk_1):
>   node dpdk-input:
> eth1 queue 1 (polling)
> eth0 queue 1 (polling)
> Thread 3 (vpp_wk_2):
>   node dpdk-input:
> eth1 queue 2 (polling)
> eth0 queue 2 (polling)
> Thread 4 (vpp_wk_3):
>   node dpdk-input:
> eth1 queue 3 (polling)
> eth0 queue 3 (polling)
> Thread 5 (vpp_wk_4):
>   node dpdk-input:
> eth1 queue 4 (polling)
> eth0 queue 4 (polling)
> Thread 6 (vpp_wk_5):
>   node dpdk-input:
> eth1 queue 5 (polling)
> eth0 queue 5 (polling)
> Thread 7 (vpp_wk_6):
>   node dpdk-input:
> eth1 queue 6 (polling)
> eth0 queue 6 (polling)
> Thread 8 (vpp_wk_7):
>   node dpdk-input:
> eth1 queue 7 (polling)
> eth0 queue 7 (polling)
> Thread 9 (vpp_wk_8):
>   node dpdk-input:
> eth1 queue 8 (polling)
> eth0 queue 8 (polling)
> Thread 10 (vpp_wk_9):
>   node dpdk-input:
> eth1 queue 9 (polling)
> eth0 queue 9 (polling)
> vpp#  sh threads
> ID  Name       Type      LWP     Sched Policy (Priority)   lcore   Core   Socket State
> 0   vpp_main             16905   other (0)                 2       0      0
> 1   vpp_wk_0   workers   16907   other (0)                 4       1      0
> 2   vpp_wk_1   workers   16908   other (0)                 6       4      0
> 3   vpp_wk_2   workers   16909   other (0)                 8       2      0
> 4   vpp_wk_3   workers   16910   other (0)                 10      3      0
> 5   vpp_wk_4   workers   16911   other

Re: [vpp-dev] Is there a limit when assigning corelist-workers in vpp?

2019-11-14 Thread Chuan Han via Lists.Fd.Io
Ah... I see. That explains everything!

Thanks for catching this.

It would be more helpful to have VPP print more meaningful logs, or to have
better documentation.

On Thu, Nov 14, 2019 at 7:17 AM Benoit Ganne (bganne) 
wrote:

> Hi Chuan,
>
> I took a deeper look at your conf and actually realized that the dpdk
> 'workers' stanza does not refer to core number but to worker id.
> So, when you say "cpu { corelist-workers 4,6,8,10,12,14,16,18,20,22 }" you
> define 10 workers with id from 0 to 9 and pin them to specific cores (4,6
> etc.).
> Then, with "dpdk { ... dev ... { workers 4,6,8,10,12 } ..." you refer to
> the *workers id*, numbered from 0 to 9, not to cores.
> So the correct dpdk stanzas should be "workers 0,1,2,3,4" and "workers
> 5,6,7,8,9" instead (note: you can also use "workers 0-4" and "workers 5-9").
> What is confusing is the dpdk will happily parse inexistent workers id,
> and then when assigning queues to workers, it will respect the conf for
> known ones (4,6,8) and do round-robin assignment for the unknown ones
> ignoring already set manual assignment.
>
> Best
> Ben
>
> > -Original Message-
> > From: Chuan Han 
> > Sent: lundi 11 novembre 2019 23:23
> > To: Benoit Ganne (bganne) 
> > Cc: Dave Barach (dbarach) ; vpp-dev  > d...@lists.fd.io>
> > Subject: Re: [vpp-dev] Is there a limit when assigning corelist-workers
> in
> > vpp?
> >
> > If I do not manually assign cores to nic, and let vpp assign cores, vpp
> > only assign one core per nic queue. If I also assign number of queues to
> > 10, all cores are used. Otherwise, only core 4 and 6 are assigned to each
> > nic, which has only one queue.
> >
> > Probably, there is some smart logic adaptively assigning cores to nics?
> > Anyway, a single core per nic is good enough to us for now.
> >
> > Without specifying number of rx queues per nic, only core 4 and 6 are
> > assigned.
> >
> > vpp# sh threads
> > ID NameTypeLWP Sched Policy (Priority)
> > lcore  Core   Socket State
> > 0  vpp_main16886   other (0)2
> > 0  0
> > 1  vpp_wk_0workers 16888   other (0)4
> > 1  0
> > 2  vpp_wk_1workers 16889   other (0)6
> > 4  0
> > 3  vpp_wk_2workers 16890   other (0)8
> > 2  0
> > 4  vpp_wk_3workers 16891   other (0)
> 10
> > 3  0
> > 5  vpp_wk_4workers 16892   other (0)
> 12
> > 8  0
> > 6  vpp_wk_5workers 16893   other (0)
> 14
> > 13 0
> > 7  vpp_wk_6workers 16894   other (0)
> 16
> > 9  0
> > 8  vpp_wk_7workers 16895   other (0)
> 18
> > 12 0
> > 9  vpp_wk_8workers 16896   other (0)
> 20
> > 10 0
> > 10 vpp_wk_9workers 16897   other (0)
> 22
> > 11 0
> > vpp# sh interface rx-placement
> > Thread 1 (vpp_wk_0):
> >   node dpdk-input:
> > eth1 queue 0 (polling)
> > Thread 2 (vpp_wk_1):
> >   node dpdk-input:
> > eth0 queue 0 (polling)
> > vpp#
> >
> >
> > cpu {
> >   main-core 2
> >   corelist-workers 4,6,8,10,12,14,16,18,20,22
> > }
> >
> > dpdk {
> >   socket-mem 2048,0
> >   log-level debug
> >   no-tx-checksum-offload
> >   dev default{
> > num-tx-desc 512
> > num-rx-desc 512
> >   }
> >   dev :1a:00.0 {
> > name eth0
> >   }
> >   dev :19:00.1 {
> > name eth1
> >   }
> >   # Use aesni mb lib.
> >   vdev crypto_aesni_mb0,socket_id=0
> >   no-multi-seg
> > }
> >
> >
> > When specifying number of queues per nic, all cores are used.
> >
> > vpp# sh int rx-placement
> > Thread 1 (vpp_wk_0):
> >   node dpdk-input:
> > eth1 queue 0 (polling)
> > eth0 queue 0 (polling)
> > Thread 2 (vpp_wk_1):
> >   node dpdk-input:
> > eth1 queue 1 (polling)
> > eth0 queue 1 (polling)
> > Thread 3 (vpp_wk_2):
> >   node dpdk-input:
> > eth1 queue 2 (polling)
> > eth0 queue 2 (polling)
> > Thread 4 (vpp_wk_3):
> >   node dpdk-input:
> > eth1 queue 3 (polling)
> > eth0 queue 3 (polling)
> > Thread 5 (vpp_wk_4):
> >   node dpdk-input:
> > eth1 queue 4 (polling)
> > eth0 queue 4 (polling)
> > Thread 6 (vpp_wk_5):
> >   node dpdk-input:
> > eth1 queue 5 (polling)
> > eth0 queue 5 (polling)
> > Thread 7 (vpp_wk_6):
> >   node dpdk-input:
> > eth1 queue 6 (polling)
> > eth0 queue 6 (polling)
> > Thread 8 (vpp_wk_7):
> >   node dpdk-input:
> > eth1 queue 7 (polling)
> > eth0 queue 7 (polling)
> > Thread 9 (vpp_wk_8):
> >   node dpdk-input:
> > eth1 queue 8 (polling)
> > eth0 queue 8 (polling)
> > Thread 10 (vpp_wk_9):
> >   node dpdk-input:
> > eth1 queue 9 (polling)
> > eth0 queue 9 (polling)
> > vpp#  sh threads
> > ID NameTypeLWP Sched Policy (Priority)
> > lcore  Core   Socket State
> > 0  vpp_main16905   other (0)2
> > 0