Re: [vpp-dev] Mellanox ConnectX-4 Lx cards binding with DPDK, but not recognized by VPP #vpp

2019-01-10 Thread Stephen Hemminger
On Wed, 09 Jan 2019 00:18:13 -0800
"Nixon Raj via Lists.Fd.Io"  wrote:

> *Setup:*
> 
> - Platform: GNU/Linux
> - Kernel: 4.4.0-131-generic
> - Processor: x86_64
> - OS: Ubuntu 16.04
> 
> *MLNX_OFED driver version:* 4.1-1.0.2.0
> 
> Followed this guide:
> https://community.mellanox.com/s/article/how-to-build-vpp-fd-io--160--development-environment-with-mellanox-dpdk-pmd-for-connectx-4-and-connectx-5
> 
> The installation succeeded and the ports are bound to DPDK with vfio-pci, but
> they are not recognized by VPP:
> # vppctl sh pci
> address        Sock VID:PID     Link Speed   Driver     Product Name   Vital Product Data
> 0000:02:00.0        15b3:1015   8.0 GT/s x4  vfio-pci
> 0000:02:00.1        15b3:1015   8.0 GT/s x4  vfio-pci
> 0000:03:00.0        15b3:1015   8.0 GT/s x4  vfio-pci
> 0000:03:00.1        15b3:1015   8.0 GT/s x4  vfio-pci
> 0000:04:00.0        8086:1539   2.5 GT/s x1  igb
> 0000:05:00.0        8086:1539   2.5 GT/s x1  igb
> 0000:06:00.0        8086:1539   2.5 GT/s x1  igb
> 0000:07:00.0        8086:1539   2.5 GT/s x1  igb
> 0000:08:00.0        8086:1539   2.5 GT/s x1  igb
> 
> # vppctl sh int
> 
>           Name               Idx       State          Counter          Count
>           local0               0        down

Mellanox NICs don't use vfio-pci (the DPDK PMD drives them through InfiniBand
verbs, with the kernel mlx5 driver still bound), so check the hardware table.
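
For anyone hitting this, a minimal recovery sketch, assuming VPP was built with
the mlx5 PMD per the linked article (the PCI addresses are taken from the
output above; the dpdk-devbind.py location varies by install):

  # rebind the ConnectX-4 Lx ports to the kernel driver; the mlx5 PMD
  # talks to them via verbs, so mlx5_core must stay bound, not vfio-pci
  dpdk-devbind.py -b mlx5_core 0000:02:00.0 0000:02:00.1
  dpdk-devbind.py -b mlx5_core 0000:03:00.0 0000:03:00.1

  # then list the ports explicitly in the dpdk stanza of VPP's startup.conf
  dpdk {
    dev 0000:02:00.0
    dev 0000:02:00.1
    dev 0000:03:00.0
    dev 0000:03:00.1
  }

After restarting VPP, the ports should appear in "show hardware-interfaces".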




[vpp-dev] Enable all ARM tests

2019-01-10 Thread Juraj Linkeš
Hi folks,

All of the remaining ARM failures have been fixed, so now we need to re-enable
the tests that were disabled in ARM CI:
https://gerrit.fd.io/r/#/c/16581/
https://gerrit.fd.io/r/#/c/16569/

Could someone please merge these? They're simple changes and verify shows that 
the errors were, indeed, fixed.

Thanks,
Juraj


Re: [vpp-dev] :: GRE tunnel dropping MPLS packets

2019-01-10 Thread omer.majeed
Hi Neale, 

It turned out the route for the GRE tunnel's destination IP had also been
added as an MPLS route. Since MPLS in VPP doesn't work for L2 forwarding,
the GRE tunnel was dropping the packets.
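
For the archives, a sketch of the fix (interface names are placeholders;
only the overlay prefixes should carry labels):

  # wrong: a labelled route to the tunnel endpoint itself
  vpp# ip route add 192.168.17.6/32 via <uplink> out-labels 25

  # right: the tunnel endpoint keeps a plain IP route, so the GRE
  # encap can resolve without label imposition
  vpp# ip route add 192.168.17.6/32 via <uplink>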

Thanks. 

Best Regards, 

Omer

On 2019-01-08 17:12, Neale Ranns via Lists.Fd.Io wrote:

> Hi Omer, 
> 
> Your config looks OK. I would start debugging with a packet trace. 
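> 
> For example, something like (a sketch; 50 is an arbitrary packet count):
> 
> vpp# clear trace
> vpp# trace add dpdk-input 50
> ... generate traffic ...
> vpp# show trace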
> 
> /neale 
> 
> From:  on behalf of Omer Majeed 
> Date: Monday, 7 January 2019 at 20:47
> To: "vpp-dev@lists.fd.io" 
> Subject: [vpp-dev] :: GRE tunnel dropping MPLS packets 
> 
> Hi, 
> 
> I'm running VPP on a CentOS 7 machine (machine A), and an application on
> another CentOS 7 machine (machine B). 
> 
> I've set up a GRE tunnel between the two machines. 
> 
> vpp# show gre tunnel
> [0] instance 0 src 192.168.17.10 dst 192.168.17.6 fib-idx 0 sw-if-idx 8 
> payload L3 
> 
> I enabled MPLS on the gre0 interface. 
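> 
> That is, with the standard CLI (reconstructed here):
> 
> vpp# set interface mpls gre0 enable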
> 
> I added outgoing MPLS routes in VPP for the IPs on machine B. 
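> 
> Roughly with commands like these (reconstructed; labels 25 and 30 match
> the fib dump below):
> 
> vpp# ip route add 192.168.100.4/32 table 2 via gre0 out-labels 25
> vpp# ip route add 192.168.100.5/32 table 2 via gre0 out-labels 30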
> 
> vpp# show ip fib table 2 
> 
> 192.168.100.4/32
> unicast-ip4-chain
> [@0]: dpo-load-balance: [proto:ip4 index:47 buckets:1 uRPF:49 to:[0:0]]
> [0] [@10]: mpls-label[2]:[25:64:0:eos]
> [@1]: mpls via 0.0.0.0 gre0: mtu:9000 
> 4500fe2f196ec0a8110ac0a811068847
> stacked-on:
> [@3]: ipv4 via 192.168.17.6 loop9000: mtu:9000 ac1f6b20498fdead00280800
> 192.168.100.5/32
> unicast-ip4-chain
> [@0]: dpo-load-balance: [proto:ip4 index:46 buckets:1 uRPF:47 to:[0:0]]
> [0] [@10]: mpls-label[0]:[30:64:0:eos]
> [@1]: mpls via 0.0.0.0 gre0: mtu:9000 
> 4500fe2f196ec0a8110ac0a811068847
> stacked-on:
> [@3]: ipv4 via 192.168.17.6 loop9000: mtu:9000 ac1f6b20498fdead00280800 
> 
> For the reverse traffic I've added the MPLS routes given below. 
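> 
> These were added with something like (reconstructed from the fib dump
> below; syntax per VPP's "mpls local-label" CLI):
> 
> vpp# mpls local-label add 18 eos via ip4-lookup-in-table 2
> vpp# mpls local-label add 19 eos via ip4-lookup-in-table 2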
> 
> vpp# show mpls fib table 0 
> 
> 18:eos/21 fib:0 index:29 locks:2
> src:API refs:1 entry-flags:uRPF-exempt, src-flags:added,contributing,active,
> path-list:[35] locks:20 flags:shared, uPRF-list:31 len:0 itfs:[]
> path:[35] pl-index:35 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
> [@0]: dst-address,unicast lookup in ipv4-VRF:2
> 
> forwarding:   mpls-eos-chain
> [@0]: dpo-load-balance: [proto:mpls index:32 buckets:1 uRPF:32 to:[0:0]]
> [0] [@6]: mpls-disposition:[0]:[ip4, pipe]
> [@7]: dst-address,unicast lookup in ipv4-VRF:2
> 19:eos/21 fib:0 index:38 locks:2
> src:API refs:1 entry-flags:uRPF-exempt, src-flags:added,contributing,active,
> path-list:[35] locks:20 flags:shared, uPRF-list:31 len:0 itfs:[]
> path:[35] pl-index:35 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
> [@0]: dst-address,unicast lookup in ipv4-VRF:2
> 
> forwarding:   mpls-eos-chain
> [@0]: dpo-load-balance: [proto:mpls index:41 buckets:1 uRPF:41 to:[0:0]]
> [0] [@6]: mpls-disposition:[9]:[ip4, pipe]
> [@7]: dst-address,unicast lookup in ipv4-VRF:2 
> 
> When I ping from machine B to an IP in VPP VRF 2 (on machine A) through
> the GRE tunnel, the packets arrive on the tunnel but are dropped. 
> 
> vpp# show int gre0 
> 
> Name                Idx    State   Counter       Count
> gre0                 8      up     rx packets       66
>                                    rx bytes       6996
>                                    drops            66
>                                    (nil)            66
> 
> Is there anything else that needs to be done to get MPLS over GRE working? 
> 
> Any suggestions on how to debug the issue? 
> 
> Thanks a lot. 
> 
> Best Regards, 
> 
> Omer 
